Serialization of a large list in JSON.Net


3

I’m having a problem serializing an object (a list of a class with approximately 5000 items).

I am using JSON.NET to generate the JSON string, but I am running into the following problem: in the middle of the output there is text like this:

,{"State":0,"DataAlteracao":null,"Id":0,"IdDadosRastreamento":0,"CodigoPeriferico":"0","ValorPeriferico":"0"}
,{"State":0,"DataAlteracao":null,"Id":0,"IdDadosRastreamento":0,"CodigoPeriferico":"0","ValorPeriferico":"0"}
,{"State":0,"DataAl:..."0","ValorPeriferico":"1840"}
,{"State":0,"DataAlteracao":null,"Id":0,"IdDadosRastreamento":0,"CodigoPeriferico":"0","ValorPeriferico":"1380"}
,{"State":0,"DataAlteracao":null,"Id":0,"IdDadosRastreamento":0,"CodigoPeriferico":"0","ValorPeriferico":"62"}

Note in the third line that the property name was cut off and replaced with "...", after which the file continues normally.

Does anyone know what this problem might be and how I can solve it? The code that performs the serialization is as follows:

string jsonReq = Newtonsoft.Json.JsonConvert.SerializeObject(request);

where request is the list with 5000 items.
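If the intermediate string is the problem, JSON.NET can also serialize straight to a stream instead of building one large string in memory. A minimal sketch, assuming a placeholder list since the asker's real type is not shown:

```csharp
using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

// Placeholder for the asker's 5000-item list; the real class is not shown.
var request = new List<object> { new { State = 0, ValorPeriferico = "0" } };

// Serialize straight to a file (or any TextWriter) instead of building
// one ~12 MB string in memory first.
using (StreamWriter file = File.CreateText("request.json"))
{
    new JsonSerializer().Serialize(file, request);
}
```

The same `JsonSerializer.Serialize(TextWriter, object)` overload also works against the request stream of an `HttpWebRequest`, which avoids holding the whole payload as a string at all.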

  • Are you looking directly at the result of jsonReq, or are you handling this string later? Also, what is this request object? Is it a list of a typed class, or are you using an internal serializer in the class?

  • Rodrigo, I write jsonReq directly to an HttpWebRequest. request is a class that has two properties, Authdata: {} and Authorship: [], the latter being the array with 5000 positions. The goal is to consume a REST API.

  • Folks, I found that the JSON is actually being generated normally; the problem is in transmitting the file via POST to the API URL. The .json file being generated is 12 megabytes. Is there any configuration I have to do in the WinForms app.config, the ASP.NET web.config, or IIS itself to be able to do this POST? What do you advise?

  • Look, I don’t know exactly why the result is so large. Ideally you would pass a filter parameter, or at least use pagination. Requests ideally should not exceed a few KB; the rest is paged by the client. This avoids timeouts and excessive server time, and in case of a failure the client requests only the required page.

  • 1

    Rodrigo, I was able to locate the resolution in the Microsoft documentation. In my case the list being large would not be a problem, because the transmission happens over the company’s internal network; it is a service that continuously obtains vehicle tracking data and inserts those records into the database in batches.

2 answers

1

As noted in the comments, I believe that if you are returning a very large list, the ideal is to add the following behaviors on your server:

  1. Receive pagination variables: the requested page number and the number of records per page (the client can specify these, but on the server check both that the page number starts at 1 and does not go past the number of existing pages, and that the number of records per page is not higher than, for example, 100 records, or whatever keeps the response from getting too big);

  2. Set a default sort order even if the client is not allowed to choose it. In general this depends on the context, but assuming it is an incremental list, it may be good to sort by descending date. If the client asks for the first page with 100 records, it gets the 100 most recent. If it sees that it did not yet have any of that information, it uses it and moves on to the next page, until it reaches a record it already has locally; then it does not need to ask for the earlier pages. Overall this approach makes more sense than going from the oldest date to the most recent. If the returned list is never ordered, your client can never perform this kind of operation.

  3. In general it is good to add this information to the headers of the response. That way your client can check, for example, that it asked for the first 100 records but the server’s X-Record-Count header says there are 1 million records; if the idea is to always paginate to the end, it will have an idea of how much is left. Using headers also means you do not need to wrap the result in another object just to carry this information.
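The three points above could be sketched as an ASP.NET Web API action. Names such as TrackingController, ITrackingRepository and the Tracking class are illustrative assumptions, not taken from the question:

```csharp
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;

// Hypothetical data source and entity, stand-ins for the asker's tracking data.
public interface ITrackingRepository { IQueryable<Tracking> GetAll(); }
public class Tracking { public System.DateTime? DataAlteracao { get; set; } }

public class TrackingController : ApiController
{
    private const int MaxPageSize = 100;              // point 1: cap the page size
    private readonly ITrackingRepository _repository; // hypothetical repository

    public TrackingController(ITrackingRepository repository) => _repository = repository;

    public HttpResponseMessage Get(int page = 1, int pageSize = 100)
    {
        // Point 1: validate the paging parameters on the server.
        if (page < 1 || pageSize < 1 || pageSize > MaxPageSize)
            return Request.CreateResponse(HttpStatusCode.BadRequest, "Invalid paging parameters.");

        // Point 2: a fixed default sort, most recent first.
        var all = _repository.GetAll().OrderByDescending(t => t.DataAlteracao);
        var items = all.Skip((page - 1) * pageSize).Take(pageSize).ToList();

        // Point 3: the total count goes in a header instead of wrapping the body.
        var response = Request.CreateResponse(HttpStatusCode.OK, items);
        response.Headers.Add("X-Record-Count", all.Count().ToString());
        return response;
    }
}
```

With this shape the client keeps requesting `?page=2`, `?page=3`, … until X-Record-Count tells it there is nothing left, or until it hits a record it already has.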

-1


Folks, I found out it was due to the size of the POST being made. I configured the web.config according to the Microsoft documentation:

I put the following additional items in the web.config:

  <system.web>
    <httpRuntime maxRequestLength="2147483647" targetFramework="4.5" />
  </system.web>

and

  <system.web.extensions>
    <scripting>
      <webServices>
        <jsonSerialization maxJsonLength="50000000"/>
      </webServices>
    </scripting>
  </system.web.extensions>
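When the application runs under IIS 7 or later, request size is also capped by request filtering, so a companion setting is often needed as well. This is an assumption on my part, not something mentioned in the answer:

  <system.webServer>
    <security>
      <requestFiltering>
        <!-- maxAllowedContentLength is in bytes (here ~2 GB) -->
        <requestLimits maxAllowedContentLength="2147483647" />
      </requestFiltering>
    </security>
  </system.webServer>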
  • This is one way around the problem, but without filters, ordering and paging you run the risk of the client application not getting all the information it needs, and you also cannot control server processing time. If your database has 1 million records, you will query them all just to serialize everything and then throw most of it away. With a maximum page size, you know the response will not take long, even if the service has to be called more often.

  • This answer has nothing to do with the problem. Changing the message size limit of an HTTP request has no relation to the serialization result. That is why it was downvoted.
