I'm developing an algorithm that rotates images in 10-degree steps. For this I identify the center of my region of interest rather than the center of the image, because the regions of interest are close to the corners; this lets me rotate each image about that center while keeping the original image dimensions. The problem is that to find the center I convert the input image to grayscale, so I can't get the post-rotation image back in its original colors.
The code is below:
import cv2
import numpy as np

POS_ROT_STEP = 18
IMG = 'IMG006'

img = cv2.imread(IMG + '.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, contours, hierarchy = cv2.findContours(gray, 2, 1)
cnt = contours
for i in range(len(cnt)):
    (x, y), radius = cv2.minEnclosingCircle(cnt[i])
    center = (int(x), int(y))
    print('Circle: ' + str(i) + ' - Center: ' + str(center) + ' - Radius: ' + str(radius))
    # rotating and cropping the image to produce synthetic samples
    for j in range(1, POS_ROT_STEP):
        (h, w) = img.shape[:2]
        rotated = cv2.getRotationMatrix2D(center, -(360 / POS_ROT_STEP) * j, 1.0)
        # shift the transform so the ROI center lands at the image center
        rotated[0, 2] += (w / 2) - x
        rotated[1, 2] += (h / 2) - y
        output_aux = cv2.warpAffine(gray, rotated, (w, h))
        backtorgb = cv2.cvtColor(output_aux, cv2.COLOR_GRAY2RGB)
        cv2.imwrite('~/rotate/' + IMG + '-' + str(j) + '.png', backtorgb)
I used copy() and it worked, thanks.
– Carlos Diego
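Since the grayscale image is only needed to *find* the ROI center, the warp itself can be applied to the original color image, so no GRAY2RGB conversion is needed afterwards. A minimal sketch of that idea, with a hypothetical helper name (`rotate_about_point`) and a synthetic test image standing in for the real input:

    import cv2
    import numpy as np

    def rotate_about_point(img, center, angle_deg):
        """Rotate an image about an arbitrary point, keeping the original
        dimensions and shifting that point to the middle of the frame."""
        h, w = img.shape[:2]
        M = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
        # Shift the transform so the ROI center lands at the image center.
        M[0, 2] += (w / 2) - center[0]
        M[1, 2] += (h / 2) - center[1]
        # Warp the original color image, not the grayscale copy,
        # so the output keeps its colors.
        return cv2.warpAffine(img, M, (w, h))

    # Hypothetical usage: a 100x100 image with an off-center red ROI.
    demo = np.zeros((100, 100, 3), np.uint8)
    cv2.circle(demo, (80, 20), 10, (0, 0, 255), -1)
    rotated = rotate_about_point(demo, (80, 20), 20)

After the warp, the ROI that was near the corner at (80, 20) sits at the image center, and the output is still a 3-channel color image of the original size.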