Azure Cache for Redis


I signed up for the Azure Cache for Redis service, but to import my existing database I had to scale up to Premium P1. After the import I tried to go back to C0: impossible. Azure Cache for Redis only scales up, never down.

I need to build my database, but on C3 it times out. By default I believe it would run on a B2, but the database creation process is heavy and complex. So how do I import or generate the initial database if scaling only goes up?

If I open two connections, one to my current Redis server and the other to Azure, and for each key do a read on the source and then a SET on Azure, will that be fast? There are 90,000 keys to generate, each one very large.
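For what it's worth, the two-connection copy described above can be made much faster by pipelining the writes and moving keys with DUMP/RESTORE (one round trip per batch instead of per key, and TTLs are preserved). A minimal sketch in the style of redis-py; the host names, access key, and the `copy_keys` helper are all invented here for illustration:

```python
def copy_keys(src, dst, batch=100):
    """Copy every key from src to dst using DUMP/RESTORE,
    flushing the destination pipeline in batches."""
    pipe = dst.pipeline(transaction=False)
    pending = 0
    for key in src.scan_iter(count=1000):
        payload = src.dump(key)   # serialized value, works for any type
        ttl = src.pttl(key)       # remaining TTL in ms (-1 = no expiry)
        pipe.restore(key, ttl if ttl > 0 else 0, payload, replace=True)
        pending += 1
        if pending >= batch:
            pipe.execute()
            pending = 0
    if pending:
        pipe.execute()

# Usage with redis-py (hypothetical endpoints):
# import redis
# src = redis.Redis(host="my-current-server", port=6379)
# dst = redis.Redis(host="mycache.redis.cache.windows.net", port=6380,
#                   password="<access key>", ssl=True)
# copy_keys(src, dst)
```

With 90,000 large values the batch size may need tuning so a single pipeline flush does not exceed the server's client buffer limits.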

Information on the Azure Redis plans: https://azure.microsoft.com/pt-br/pricing/details/cache/

Feedback request for downscale support in the Azure Redis service: https://feedback.azure.com/forums/169382-cache/suggestions/10560624-smaller-premium-redis-instances

  • The "downscale request" link doesn't seem to be correct; it covers a different subject.

2 answers



Unfortunately the Azure Cache for Redis (PaaS) service still has some limitations, such as:

  • Import is available only in the Premium tier;
  • It is impossible to downgrade the tier: once you move to Premium for the import, you are stuck there.

My solution was to provision a Windows VM and install Redis on it (a Linux VM would work too); the cost was lower and the performance superior to Premium P1.


I imagine you're not getting the downscale because, after loading 90K keys, only a few plans can hold all that data.

I don't know the size of your dataset, but, for example, C0 supports only a 250 MB cache. The memory required grows with your dataset, but the ratio is not 1 to 1.
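One way to check whether the dataset actually fits a given tier before committing to it is to sample Redis's MEMORY USAGE command over the keyspace and extrapolate. A rough sketch, duck-typed against a redis-py-style client (which exposes `scan_iter`, `memory_usage`, and `dbsize`); the `estimate_memory` helper is a name invented here:

```python
def estimate_memory(client, sample=1000):
    """Rough total-memory estimate: average MEMORY USAGE over a
    sample of keys, extrapolated to the full keyspace (DBSIZE)."""
    total = seen = 0
    for key in client.scan_iter(count=500):
        total += client.memory_usage(key) or 0
        seen += 1
        if seen >= sample:
            break
    if seen == 0:
        return 0
    return total * client.dbsize() // seen

# Usage with redis-py (hypothetical endpoint):
# import redis
# r = redis.Redis(host="my-current-server", port=6379)
# print(estimate_memory(r), "bytes, approximately")
```

Note that server-side overhead (replication buffers, fragmentation) comes on top of this figure, so it is a lower bound rather than an exact answer.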

About the timeout: if you can share the snippet of code you are using for the migration, I can try to help identify points for improvement. In any case, I recommend the following steps:

  1. Create your Azure Cache for Redis service on a plan that suits your business - in your case, probably B2;
  2. Provision a VM in the same datacenter - if possible in the same resource group - as your cache service;
  3. Copy your backup to that VM and import the data from there.
  4. After the import finishes, delete the VM.

This way the import runs over the "local network", reducing the chance of timeouts or packet loss.
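If redis-cli is available on that VM, the import itself can be driven with Redis's mass-insertion mode (`redis-cli --pipe`), which streams everything over a single connection. A minimal sketch following the official mass-insertion recipe; the input file, host name, and access key below are placeholders:

```shell
#!/bin/sh
# Emit one command in the Redis wire protocol (RESP),
# as in the official "mass insertion" recipe.
gen_redis_proto() {
  printf '*%d\r\n' "$#"
  for arg in "$@"; do
    printf '$%d\r\n%s\r\n' "${#arg}" "$arg"
  done
}

# Example: turn a tab-separated key/value file into SET commands
# and pipe them into the cache (run from the VM):
# while IFS="$(printf '\t')" read -r key value; do
#   gen_redis_proto SET "$key" "$value"
# done < dump.tsv | redis-cli -h mycache.redis.cache.windows.net \
#     -p 6379 -a "<access key>" --pipe
```

Generating the raw protocol rather than echoing plain commands is what lets redis-cli stream the data without waiting for a reply per key.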

  • 90k keys isn't even 180 MB

  • 180 MB of exported data. If you add redundancy, indexes, etc., it can indeed exceed 250 MB.

  • I agree, but the plan I tested was C3, with more than 2 GB.

  • Right, but that means I can't do the downscale.

  • But that wasn't the problem; according to the Azure documentation, Redis does not yet support downscaling.
