To rotate an image without knowing the angle, you need to estimate the inclination (in the documentation, read the topic "Parameter Estimation"). This estimation works by comparing notable points between an "origin" (the "ideal" image, in which the CNH is positioned as expected) and a "destination" (the real image you have, with the CNH all crooked).
That is, there is no miracle: even without knowing the inclination, you still need to know where those points are in the image being processed.
Consider the example I prepared below. The image is 930 x 659 pixels and the CNH is rotated at an arbitrary angle:
The following code makes the "adjustment":
import matplotlib.pyplot as plt
import numpy as np
from skimage import io
from skimage import transform as tf
from skimage.util import img_as_ubyte

# "Origin" points: where the CNH corners sit in the ideal image
src = np.array((
    (155, 110),
    (774, 110),
    (155, 548),
    (774, 548)
))

# "Destination" points: where those same corners are in the real image
dst = np.array((
    (94, 248),
    (664, 7),
    (266, 651),
    (835, 410)
))

# Estimate the projective transformation that maps src onto dst
tform3 = tf.ProjectiveTransform()
tform3.estimate(src, dst)

# Warp the real image back onto the ideal layout
# (scipy.misc.imread/imsave were removed from SciPy, so skimage.io is used)
cnh = io.imread("cnh2.jpg")
cnh_ajustada = tf.warp(cnh, tform3, output_shape=(930, 1000))
io.imsave('cnh2-ajustada.jpg', img_as_ubyte(cnh_ajustada))

# Show the original image (with the dst points) next to the adjusted one
# (with the src points)
_, ax = plt.subplots(1, 2)
ax[0].imshow(cnh)
ax[0].plot(dst[:, 0], dst[:, 1], '.r')
ax[1].imshow(cnh_ajustada)
ax[1].plot(src[:, 0], src[:, 1], '.r')
plt.show()
And it produces the following result:
Note the red dots in this last image (the one produced by plt.show()). On the left side (the original image), the plotted dots are the ones manually defined in the variable dst, because these points are the "destination" of the estimation (the image you have, as explained above). On the right side (the final, adjusted image), the plotted dots are the ones manually defined in the variable src, because these are the "origin" of the estimation (the idealized image, also as explained earlier).
I marked the points manually because it makes the process easier to illustrate, but in your case you will need to identify them automatically somehow. The values of src are easy: you can keep them fixed! (Note in the example that this parameter matches exactly the size of the final image.) Thus, the estimated transformation will include not only rotation and translation but also scaling, if necessary (i.e., it will adjust the size if the CNH in the original image happens to be larger or smaller). The values of dst will require some additional effort.
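For illustration, here is a minimal sketch of how src could be kept fixed, derived from the desired output layout (the width, height, and margin values are assumptions reverse-engineered from the example above):

# Minimal sketch: keep src fixed, derived from the desired output layout.
# The dimensions and margins below are assumptions taken from the example.
import numpy as np

width, height = 929, 658        # extent of the ideal image (assumed)
margin_x, margin_y = 155, 110   # where the CNH corners should sit (assumed)

src = np.array((
    (margin_x, margin_y),                   # top-left
    (width - margin_x, margin_y),           # top-right
    (margin_x, height - margin_y),          # bottom-left
    (width - margin_x, height - margin_y)   # bottom-right
))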
If your image is "well behaved" (that is, the background is uniform and white, etc.), it should be possible to identify such points by looking for the corners of the segmented image (read about "Corner Detection" in the documentation). This works because the object of interest in your problem domain is fixed (i.e., it is always a rectangular document). :)
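As a rough illustration, a corner-detection sketch with scikit-image might look like the code below. The min_distance and num_peaks values are assumptions that would need tuning, and on a real photo the Harris detector will also respond to text and other details, so some filtering and ordering of the candidates is still needed:

# Sketch of automatic corner detection, assuming a "well behaved" image
# (uniform white background). Parameter values here are illustrative.
from skimage import io, color
from skimage.feature import corner_harris, corner_peaks

gray = color.rgb2gray(io.imread("cnh2.jpg"))

# The Harris response is high near corner-like structures; corner_peaks
# keeps only local maxima at least min_distance pixels apart.
response = corner_harris(gray)
coords = corner_peaks(response, min_distance=20, num_peaks=4)

# corner_peaks returns (row, col) pairs, while the transform expects
# (x, y) = (col, row); also remember to sort the corners so that they
# match the order used in src before calling tform3.estimate(src, dst).
dst = coords[:, ::-1]
print(dst)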
Another alternative, much simpler, is to interactively ask the user to mark where the corners are in the captured image. Many software systems do this, since it requires only 4 clicks from the user.
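With matplotlib, for instance, a minimal sketch of this interactive approach could use plt.ginput (assuming an interactive backend):

# Minimal sketch: let the user click the four corners of the CNH.
# ginput blocks until the requested number of clicks and returns (x, y)
# tuples; timeout=0 disables the default timeout.
import numpy as np
import matplotlib.pyplot as plt
from skimage import io

cnh = io.imread("cnh2.jpg")
plt.imshow(cnh)
plt.title("Click the 4 corners (same order as src)")
dst = np.array(plt.ginput(4, timeout=0))
plt.close()
print(dst)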
Pretty cool! It's likely that, for the type of images in this particular question, corner detection works and it's possible to automate the whole process, as the OP requested.
– jsbueno