The answer from @Rodrigodebonasartor works perfectly, and the key takeaway is that the algorithm compensates for the corners of the image that end up "hidden" after the rotation, so that the image is not truncated in those regions.
The figure in Rodrigo’s own answer helps a lot in understanding this, but I’ll try to explain it in a little more detail. I will use the following (square) Lenna image as an example:
Rotation is an operation that simply maps each pixel of the image through a given angle (see this Wikipedia page for more information). Applying a 45º rotation without any translation adjustment causes the upper left corner (coordinate (0, 0)) and the lower left corner (coordinate (0, height)) to be positioned outside the original image area, producing the following result:
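For reference, here is a minimal sketch of a rotation applied without any translation compensation, which is what produces that clipped result (the file names are just hypothetical placeholders):

import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class RotationWithoutAdjustment {
    public static void main(String[] args) throws Exception {
        BufferedImage src = ImageIO.read(new File("lenna.png")); // hypothetical input file
        AffineTransform tx = new AffineTransform();
        // 45º rotation around the image center, with no compensating translation
        tx.rotate(Math.toRadians(45), src.getWidth() / 2.0, src.getHeight() / 2.0);
        BufferedImage dst = new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR).filter(src, null);
        // The corners mapped to negative coordinates fall outside the destination and are lost
        ImageIO.write(dst, "png", new File("lenna-45-clipped.png"));
    }
}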
A code like...
Point2D ponto = tx.transform(new Point2D.Double(rotateImage.getWidth(), 0.0), null);
...simply maps the original point (in the example above, (width, 0)) according to the transformation described by tx (which could contain a series of concatenated transformations; the result would be the final mapping of the point after all of them). Note that in the code of the question this is equivalent to computing the rotation of the point (x' = x * cos(angle) - y * sin(angle), y' = x * sin(angle) + y * cos(angle)), since the transformation tx includes only a rotation.
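As a quick sanity check, the sketch below (an illustration I am adding, using a pure rotation about the origin) shows that tx.transform and those rotation equations produce the same mapping:

import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class RotationEquivalence {
    public static void main(String[] args) {
        double angle = Math.toRadians(45);
        double x = 256, y = 0; // sample corner, assuming a 256x256 image

        // Mapping through the AffineTransform (pure rotation about the origin)
        AffineTransform tx = new AffineTransform();
        tx.rotate(angle);
        Point2D mapped = tx.transform(new Point2D.Double(x, y), null);

        // The same mapping computed directly with the rotation equations
        double xr = x * Math.cos(angle) - y * Math.sin(angle);
        double yr = x * Math.sin(angle) + y * Math.cos(angle);

        System.out.println(mapped);          // ~ (181.02, 181.02)
        System.out.println(xr + ", " + yr);  // ~ 181.02, 181.02
    }
}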
With an angle of 45º, the rotation of each corner of the image produces the following results:
Point2D.Double(0.0, 0.0) => Point2D.Double[127.99999999999999, -53.01933598375615]
Point2D.Double(rotateImage.getWidth(), 0.0) => Point2D.Double[309.01933598375615, 128.0]
Point2D.Double(0.0, rotateImage.getHeight()) => Point2D.Double[-53.019335983756164, 128.00000000000003]
Point2D.Double(rotateImage.getWidth(), rotateImage.getHeight()) => Point2D.Double[128.0, 309.01933598375615]
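These values can be reproduced with a small loop like the one below (a sketch, assuming the example image is 256x256, which matches the numbers above):

import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class CornerMapping {
    public static void main(String[] args) {
        int width = 256, height = 256; // assumed size of the example image
        AffineTransform tx = new AffineTransform();
        tx.rotate(Math.toRadians(45), width / 2.0, height / 2.0);

        Point2D[] corners = {
            new Point2D.Double(0.0, 0.0),
            new Point2D.Double(width, 0.0),
            new Point2D.Double(0.0, height),
            new Point2D.Double(width, height)
        };
        for (Point2D corner : corners) {
            // Prints each original corner and its position after the rotation
            System.out.println(corner + " => " + tx.transform(corner, null));
        }
    }
}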
Since in Java the coordinate system has its origin (coordinate (0, 0)) in the upper left corner, it can be observed that the "hidden" corners (that is, those outside the original image area because they lie to the left of or above the origin) are mapped to negative coordinates. This applies to both the X axis and the Y axis.
Thus, what the algorithm proposed by Rodrigo does is simply check which corners are "hidden" (according to the range of the given angle) and compute the translation difference per axis needed to "move" the hidden corners back to the visible limits of the image (that is, next to the origin (0, 0)). It’s a simple but clever insight, and it works.
However, to make the code a little more streamlined (and potentially easier to understand), one can simply compute the rotation of the four corners of the image and pick, among the results, the most negative values on the X and Y axes.
Here’s an example:
// Required imports: java.awt.geom.AffineTransform, java.awt.geom.Point2D,
// java.awt.image.AffineTransformOp, java.awt.image.BufferedImage
public static BufferedImage rotateImage(BufferedImage rotateImage, double angle) {
    AffineTransform tx = new AffineTransform();
    tx.rotate(Math.toRadians(angle), rotateImage.getWidth() / 2.0, rotateImage.getHeight() / 2.0);

    // Rotate the coordinates of the image corners
    Point2D[] aCorners = new Point2D[4];
    aCorners[0] = tx.transform(new Point2D.Double(0.0, 0.0), null);
    aCorners[1] = tx.transform(new Point2D.Double(rotateImage.getWidth(), 0.0), null);
    aCorners[2] = tx.transform(new Point2D.Double(0.0, rotateImage.getHeight()), null);
    aCorners[3] = tx.transform(new Point2D.Double(rotateImage.getWidth(), rotateImage.getHeight()), null);

    // Get the translation value for each axis that has a "hidden" corner
    double dTransX = 0;
    double dTransY = 0;
    for (int i = 0; i < 4; i++) {
        if (aCorners[i].getX() < 0 && aCorners[i].getX() < dTransX)
            dTransX = aCorners[i].getX();
        if (aCorners[i].getY() < 0 && aCorners[i].getY() < dTransY)
            dTransY = aCorners[i].getY();
    }

    // Apply the translation to avoid clipping the image
    AffineTransform translationTransform = new AffineTransform();
    translationTransform.translate(-dTransX, -dTransY);
    tx.preConcatenate(translationTransform);

    return new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR).filter(rotateImage, null);
}
As in Rodrigo’s answer, this algorithm also correctly adjusts the image to avoid clipping:
In the previous example image, it is easy to see that the required translation is the same on both the X and Y axes (approximately 53 pixels), but that is due to the angle used (45º) and the fact that the image is square. Since the proposed algorithm checks the amounts individually per axis, it also works correctly for other cases, such as the one below (a 260º rotation of a rectangular image; note that there are two negative values on the Y axis, but the algorithm picks the most negative one, approximately -104):
Point2D.Double(0.0, 0.0) => Point2D.Double[123.93876331951267, 328.9969705899713]
Point2D.Double(rotateImage.getWidth(), 0.0) => Point2D.Double[54.47949225274054, -64.92613061491193]
Point2D.Double(0.0, rotateImage.getHeight()) => Point2D.Double[345.52050774725944, 289.926130614912]
Point2D.Double(rotateImage.getWidth(), rotateImage.getHeight()) => Point2D.Double[276.06123668048735, -103.99697058997123]
Translation required: in X = [0.0], in Y = [-103.99697058997123]
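For completeness, here is a usage sketch (the file names are hypothetical) that loads a rectangular image, rotates it by 260º with the rotateImage method above (assumed to be in the same class), and saves the result:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class RotateExample {
    public static void main(String[] args) throws Exception {
        BufferedImage src = ImageIO.read(new File("input.png")); // hypothetical rectangular image
        BufferedImage dst = rotateImage(src, 260);               // method defined above
        ImageIO.write(dst, "png", new File("rotated-260.png"));
    }
}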
Okay, I like the answer. Unfortunately I can’t accept both answers, so I’m going to think a little to decide which one to accept.
– Victor Stafusa
Thanks. I only offered an additional explanation. The original idea of the algorithm is from @Rodrigodebonasartor. :)
– Luiz Vieira