Problem getting information from the Azure Cognitive Service (OCR)


I'm developing a Windows service that downloads an image from the web, flips it horizontally and sends it to the Azure Cognitive Services OCR endpoint; the idea is to capture the text in this image. However, when I send the image in the request, the response comes back empty.

I have tested the same method with different images and it worked normally.

I have also tested via the online demo on the Azure site (https://azure.microsoft.com/pt-br/services/cognitive-services/computer-vision/), which also worked normally (with both images).

public void FlipaImagem(string url)
{
    using (var wc = new WebClient())
    {
        using (var imgStream = new MemoryStream(wc.DownloadData(url)))
        {
            using (var objImage = System.Drawing.Image.FromStream(imgStream))
            {
                // Rotate180FlipY is equivalent to a horizontal flip
                objImage.RotateFlip(RotateFlipType.Rotate180FlipY);
                if (File.Exists("arquivoFlipado.jpeg"))
                    File.Delete("arquivoFlipado.jpeg");
                objImage.Save("arquivoFlipado.jpeg", ImageFormat.Jpeg);
                objImage.Dispose();
            }
        }
    }
}

The above method saves the "flipped" image; if I take this saved image and send it via the online test (mentioned above), it works.

Below is how I'm sending it to Azure (remembering that if I pass the URL of other images, it works normally):

public RespostaAzure PostServicoAsync(string urlImagem)
{
    httpClient = new HttpClient();
    httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionkey);
    FlipaImagem(urlImagem);
    var imagem = Image.FromFile(@"arquivoFlipado.jpeg");

    ImageConverter _imageConverter = new ImageConverter();
    byte[] xByte = (byte[])_imageConverter.ConvertTo(imagem, typeof(byte[]));

    MemoryStream stream = new MemoryStream(xByte);
    var streamContent = new StreamContent(stream);
    streamContent.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

    var queryString = HttpUtility.ParseQueryString(string.Empty);
    queryString["language"] = "pt";
    queryString["detectOrientation "] = "true";
    var novo = urlAzure + queryString;

    try
    {
        var response = httpClient.PostAsync(novo, streamContent).Result;
        var x = response.Content.ReadAsStringAsync().Result;
        var obj = JsonConvert.DeserializeObject<RespostaAzure>(x);

        return obj;
    }
    catch (Exception ex)
    {
        throw ex;
    }
    finally
    {
        _imageConverter = null;
        xByte = null;
        imagem.Dispose();
    }
}
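
For completeness, RespostaAzure is just a class that mirrors the OCR JSON; a minimal sketch of a shape that would deserialize it (the property names here are illustrative, my real class may differ slightly) is:

// Minimal sketch of a class matching the OCR JSON
// ({"language","orientation","textAngle","regions":[...]}).
// Names are illustrative; Json.NET matches them case-insensitively.
public class RespostaAzure
{
    public string Language { get; set; }
    public string Orientation { get; set; }
    public double TextAngle { get; set; }
    public List<RegionOcr> Regions { get; set; }
}

public class RegionOcr
{
    public string BoundingBox { get; set; }
    public List<LineOcr> Lines { get; set; }
}

public class LineOcr
{
    public string BoundingBox { get; set; }
    public List<WordOcr> Words { get; set; }
}

public class WordOcr
{
    public string BoundingBox { get; set; }
    public string Text { get; set; }
}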

Any idea what it might be?

Thank you.

===================UPDATE===================

When testing directly against the API (https://brazilsouth.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fc/console), I realized it shows the same behavior, failing to recognize even "easy to analyze" images, while on the Azure site it works perfectly.

P.S.: The Azure site says it uses API 2.0, but when I use it I get the same behavior as 1.0, and when checking via the browser console I saw that the Azure site sends the request to a totally different URL, which I believe is restricted to them.

Maybe it will be clearer if I include an image I'm trying to read where the problem occurs, along with the links:

  • I don't understand: if you send an image flipped by your code in the sandbox, does it return the OCR text?

  • I didn't quite understand what you meant, but when I send the image Azure returns 200, and the response comes back like this: {"language":"pt","orientation":"Notdetected","textAngle":0.0,"Regions":[]}. The Regions array, which is where the text would be, comes back empty. But when I take this same image and test it on the Azure site, it works normally.

  • the same image generated by the code shown?

  • Yes, I am saving the image before sending it via POST; if I take the image I'm saving and upload it to the Azure site, it is read perfectly. And stranger still, if I grab some other picture from the web (other than the ones I need), it works.

  • So your error is elsewhere, not in the code you posted.

  • I'm sorry, I'm new here. The code I posted was to show the way I'm saving the image being tested. But in a nutshell, my question is how an image can produce results online but not via the request, you know? And I tested the sending method with other images, passing their URLs, and it worked normally.

  • As I commented, if you test the image generated by your method directly against the API and it works there, the problem is not with the image, but with how you are sending it. And that is a different piece of code.

  • I edited the question to include the method that sends to Azure.

  • Shouldn't the stream content be JSON? Have you already simulated a request via Postman?

  • Only if I were sending the image by URL would I send it as JSON, with the link inside the content; but since I needed to flip the image, it only exists locally. I haven't tested in Postman; I will test and bring back the result (a sketch of what that JSON request would look like is below).

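For reference, a minimal sketch of what sending the image by URL as JSON would look like (assuming the same subscriptionkey and urlAzure fields from the question, and an image that Azure can reach on the web):

// Sketch only: posts a JSON body with the image URL instead of the raw bytes.
// This only works for images Azure can download itself, so it does not cover
// the locally flipped file.
public async Task<RespostaAzure> PostPorUrlAsync(string urlImagem)
{
    var httpClient = new HttpClient();
    httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionkey);

    var body = JsonConvert.SerializeObject(new { url = urlImagem });
    var content = new StringContent(body, Encoding.UTF8, "application/json");

    // Same endpoint/query-string concatenation used in the question.
    var novo = urlAzure + "language=pt&detectOrientation=true";

    var response = await httpClient.PostAsync(novo, content);
    var json = await response.Content.ReadAsStringAsync();
    return JsonConvert.DeserializeObject<RespostaAzure>(json);
}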

1 answer

Apparently the API used by the Azure example is a different one, in this case Recognize Text. I took this topic as the "solution". Even though it wasn't exactly what I was looking for, it works around the problem and delivers the same answer as the website.
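
A rough sketch of what that Recognize Text call looks like (a sketch only: the v2.0 recognizeText operation is asynchronous, so the result is fetched by polling the Operation-Location header; the region URL and key handling here are assumptions based on the question):

// Sketch: POST the image bytes to recognizeText, then poll the URL returned
// in the Operation-Location header until the analysis finishes.
public async Task<string> RecognizeTextAsync(byte[] bytesImagem)
{
    var httpClient = new HttpClient();
    httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionkey);

    var endpoint = "https://brazilsouth.api.cognitive.microsoft.com/vision/v2.0/recognizeText?mode=Printed";
    var content = new ByteArrayContent(bytesImagem);
    content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

    var post = await httpClient.PostAsync(endpoint, content);
    var operationLocation = post.Headers.GetValues("Operation-Location").First();

    string resultado;
    do
    {
        await Task.Delay(1000);
        resultado = await httpClient.GetStringAsync(operationLocation);
    } while (!resultado.Contains("\"Succeeded\"") && !resultado.Contains("\"Failed\""));

    // When the status is "Succeeded", the JSON carries recognitionResult.lines with the text.
    return resultado;
}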
