Decode a JSON response faster

The system captures the user's address from the ZIP code (CEP). I am using the ViaCEP API, with front-end validation in jQuery Validate and back-end validation in PHP. The jQuery validation fetches the address very quickly and shows it in a disabled field, but the PHP side takes a while to process, which slows the script down. What is the best way to speed up reading this JSON from a PHP page?

Example of the ViaCEP response:

{
  "cep": "01001-000",
  "logradouro": "Praça da Sé",
  "complemento": "lado ímpar",
  "bairro": "Sé",
  "localidade": "São Paulo",
  "uf": "SP",
  "unidade": "",
  "ibge": "3550308",
  "gia": "1004"
}

URL: https://viacep.com.br/ws/01001000/json/

In PHP:

$cep = 'xxxx';

$cepUrl = "https://viacep.com.br/ws/{$cep}/json/";

$ch = curl_init();

curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, $cepUrl);
$result = curl_exec($ch);

curl_close($ch);
$result = json_decode($result, true); // decode into an associative array

1 answer

From JavaScript the request is made client-side, i.e. by the browser. Once a URL has been accessed, the browser can cache its content on the user's machine.

On the server side this cache is not saved automatically when you access a URL. To optimize the back end, save a cache of the result yourself.

Note also that other factors influence the response time, such as the network environment and the various other processes running in the back end.

In general, what can greatly optimize the process is to save a cache of the result.

Of course, before that you should evaluate whether caching is feasible for you. The data returned by the URL may change; a ZIP code's address does not change easily, but it is not impossible. That decision depends on your business model.

Enough of the blah blah blah, let's get down to business.

One optimization, caching the results with PHP:

$cep = '01001000';

/*
Location where the cache will be saved.
It is worth organizing this better to avoid ending up with tens of thousands
of files in a single folder, but that is not the focus of the question.
*/
$dir = __DIR__.DIRECTORY_SEPARATOR.'cep';
if (!is_dir($dir)) {
    mkdir($dir, 0755, true); // make sure the cache directory exists
}
$file = $dir.DIRECTORY_SEPARATOR.$cep.'.php';

if (file_exists($file)) {
    /*
    A cache file was found, so the result is read from it.
    The file is already in PHP format, so there is no need to convert from JSON to a PHP array.
    */
    $result = include $file;
} else {

    $cepUrl = 'https://viacep.com.br/ws/'.$cep.'/json/';

    $ch = curl_init();

    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_URL, $cepUrl);
    $result = curl_exec($ch);

    curl_close($ch);
    $result = json_decode($result, true);

    /*
    Here we save the cache already in PHP format, which also optimizes away the json_decode() step:
    you will not need to call json_decode() again when reading this data from the cache.
    var_export() generates valid PHP and escapes quotes in the values safely.
    */
    $content = '<?php'.PHP_EOL.'return '.var_export($result, true).';';
    file_put_contents($file, $content);
    unset($file, $content);
}

print_r($result);

On the first lookup, when the cache does not exist yet, the process takes about 1 to 1.8 seconds. When the result comes from the cache, the process takes around 0.0001 to 0.00016 seconds.
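
If you want to reproduce this comparison yourself, a minimal, purely illustrative measurement with microtime():

$start = microtime(true);

// ... run the lookup here, either hitting ViaCEP or reading the cache file ...

echo 'Took '.(microtime(true) - $start).' seconds';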

The example above is purely didactic. I suggest creating a more sophisticated control that, for example, checks the date the cache file was saved. If it was saved six months ago, force the script to fetch the data online again and update the cache. This keeps the cached data reasonably trustworthy.
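
A minimal sketch of such an age check, using the cache file's modification time (the 180-day threshold is only an illustrative value):

$maxAge = 60 * 60 * 24 * 180; // roughly six months, in seconds

if (file_exists($file) && (time() - filemtime($file)) < $maxAge) {
    // Cache exists and is still recent enough: use it.
    $result = include $file;
} else {
    // Cache is missing or stale: fetch from ViaCEP again and rewrite the file,
    // using the same cURL + file_put_contents() logic shown above.
}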

In JavaScript you can force cache-less requests like this:

    $(document).ready(function() {
        $("#button").click(function() {
            $("div").html("");

            // Append the current timestamp as a query string so the browser does not reuse a cached response.
            var d = new Date();
            var n = "?" + d.getTime();
            // n = ""; // set n to an empty string to let the browser cache the response again
            console.log(n);

            $.getJSON("https://viacep.com.br/ws/01001000/json/" + n, function(result) {
                $.each(result, function(i, field) {
                    $("div").append(field + "<br>");
                });
            });
        });
    });

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>

    <input type="button" id="button" value="request">
    <br>
    <div class="foo"></div>

Even when the request is not cached, the browser still responds quickly. There is a small delay due to the connection, but it is still much faster than going through an uncached back end, because fewer processes are involved.

If you think about it, you could even drop PHP cURL entirely and leave this lookup to JavaScript. Even if you need to save or access the data in the back end, you can send the JSON from the browser to the back end and save the results there. Of course, this should not be open to the public; only implement this logic in private environments where you can trust the user.
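
A minimal sketch of what that back-end receiver could look like; the file name salvar-cep.php is hypothetical, the browser would send the JSON via $.ajax with contentType: 'application/json', and in a real application you would add proper authentication:

<?php
// Hypothetical endpoint (e.g. salvar-cep.php) that receives the ViaCEP JSON
// already fetched by the browser and stores it in the same cache folder.
// Only expose something like this in a trusted, private environment.
$data = json_decode(file_get_contents('php://input'), true);

// Very basic sanity check on the CEP before using it as part of a file name.
if (is_array($data) && isset($data['cep']) && preg_match('/^\d{5}-?\d{3}$/', $data['cep'])) {
    $cep  = str_replace('-', '', $data['cep']);
    $file = __DIR__.DIRECTORY_SEPARATOR.'cep'.DIRECTORY_SEPARATOR.$cep.'.php';
    file_put_contents($file, '<?php'.PHP_EOL.'return '.var_export($data, true).';');
}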

Anyway, this is just an off-the-cuff idea. As mentioned above, you can improve and adapt it to your needs.

  • I understand. I had already thought about caching, but since it is such a simple query I figured there must be some other way to improve it. And even though the client side is fast, it is easy for the user to tamper with it in the browser and break the whole back end. Thanks for the reply :)

  • There is no magic; the optimization is the cache. If you are waiting for some fancy technique or magic function, I don't think you will find one. Maybe file_get_contents() can shave a few millionths of a second compared to cURL because it is a lighter function (see the sketch after these comments), but it will not make a significant difference. When something always returns the same result, cache it: it is redundant to run the same processes over and over to get the same result. In this example the time saving is huge, from about 1 second down to 0.0001 seconds. Beyond that, only micro-optimizations are possible.

  • I implemented the cache, the difference was huge, thanks again.
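
Following up on the file_get_contents() remark above, a minimal sketch of the same lookup without cURL (the 5-second timeout is just an illustrative value, and the openssl extension must be enabled for HTTPS URLs):

$cep = '01001000';

// Stream context with a timeout so a slow connection does not block the script indefinitely.
$context = stream_context_create(['http' => ['timeout' => 5]]);

$json   = file_get_contents("https://viacep.com.br/ws/{$cep}/json/", false, $context);
$result = ($json !== false) ? json_decode($json, true) : null;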
