Are you sure the `long` will overflow? There are about 18 quintillion possible values. Even if you stored only the ID and nothing else, that table alone would occupy hundreds of exabytes, or millions of the largest HDDs in existence. And it would take hundreds of thousands of years to write all of them non-stop (assuming you can insert a million rows per second, which an HDD cannot do; only SSDs can, and those are much smaller, so you would need even more of them). It is true that you could get it down to under a year by parallelizing across all those SSDs, but I would like to see someone coordinate millions of them in a single database. If you really do have multiple computers doing this, a GUID may be useful, but only because it allows distribution, not because it allows more rows and therefore more IDs.
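To make the scale concrete, here is a back-of-the-envelope calculation; the assumptions (8 bytes per ID and a sustained one million inserts per second) are mine, just for illustration:

```java
// Rough back-of-the-envelope numbers for exhausting a 64-bit ID space.
// Assumptions (illustrative, not from the question): 8 bytes stored per ID
// and 1,000,000 inserts per second on a single machine.
public class LongExhaustion {
    public static void main(String[] args) {
        double ids = Math.pow(2, 64);               // ~1.8e19 possible values
        double bytes = ids * 8;                     // storage for the IDs alone
        double exabytes = bytes / 1e18;             // ~148 EB
        double seconds = ids / 1_000_000;           // at 1M inserts per second
        double years = seconds / (365.25 * 24 * 3600);
        System.out.printf("IDs: %.2e%n", ids);
        System.out.printf("Storage for the IDs alone: %.0f exabytes%n", exabytes);
        System.out.printf("Time to insert them all: %.0f years%n", years);
        // Prints roughly 1.84e19 IDs, ~148 exabytes, ~584,000 years.
    }
}
```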
Why do you think the GUID is so much bigger? It can hold larger numbers, but not necessarily more usable distinct numbers. You cannot use the vast majority of the theoretical GUIDs available. Look at how a GUID is composed: it is not sequential and cannot represent every possible value. Its purpose is not to be bigger; it is to allow IDs to be created on different machines in parallel so that, if that data is ever brought together one day, there are no collisions. And in many problems that do involve distribution, the data never even needs to be brought together. But if that is your problem, it has nothing to do with a `long` not being big enough.
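To see why, look at how a random GUID is laid out; a minimal Java sketch using `java.util.UUID`, which follows the same structure as a GUID:

```java
import java.util.UUID;

// A version 4 GUID reserves bits for version and variant information and
// fills the rest with random data, so the values are neither sequential nor
// able to use the full 128-bit space the way a plain counter would.
public class GuidLayout {
    public static void main(String[] args) {
        UUID id = UUID.randomUUID();
        System.out.println(id);            // e.g. xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx
        System.out.println(id.version());  // always 4 for randomUUID()
        System.out.println(id.variant());  // always 2 (the IETF variant)
    }
}
```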
The practicality of needing something this big is questionable; you are probably worrying about a theoretical problem that does not exist in practice. Of course, if the actual problem had been made explicit, we could say something more specific.
If you really insist on using something bigger, you need to look at the context; you can create a specific format that meets the need. Since it will be impossible to ever use all of the numbers, there has to be some logic for how they are generated. A hypothetical example of such a format is sketched below.
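For illustration only, here is a hypothetical sketch of such a generation logic, loosely inspired by Snowflake-style IDs but widened beyond 64 bits: timestamp, machine id and a per-machine sequence packed into a 128-bit value. The field widths and names are my own choices, not anything from the question:

```java
import java.math.BigInteger;

// Hypothetical "bigger than long" ID with an explicit generation logic:
// 64 bits of millisecond timestamp + 16 bits of machine id + 48 bits of
// per-machine sequence, packed into a 128-bit value via BigInteger.
// The field widths here are illustrative choices, not a recommendation.
public class WideIdGenerator {
    private final long machineId;   // 0..65535
    private long sequence = 0;      // per-machine counter, 48 bits used

    public WideIdGenerator(long machineId) {
        this.machineId = machineId & 0xFFFF;
    }

    public synchronized BigInteger nextId() {
        long now = System.currentTimeMillis();
        sequence = (sequence + 1) & 0xFFFFFFFFFFFFL;   // wrap at 48 bits
        return BigInteger.valueOf(now)
                .shiftLeft(64)
                .or(BigInteger.valueOf(machineId).shiftLeft(48))
                .or(BigInteger.valueOf(sequence));
    }
}
```

Note how the structure itself guarantees that the overwhelming majority of the 128-bit space will never be used, which is exactly the point above.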
If you want the theory of doing it sequentially: you use two `long`s and increment only the second one, keeping the first always at zero; when you exhaust all the numbers of that `long`, you add 1 to the first `long` and reset the second to zero so it starts counting all over again. All of this has to be done manually, or by a function or stored procedure. Of course, if this were really useful, the database would already let you do it automatically; there is a reason it does not come ready-made.
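A minimal Java sketch of that carry logic, just to show the idea (in a real database it would live in a function or stored procedure, as said above):

```java
// "Two longs" counter: increment the low long; when it wraps around after
// using all of its values, carry 1 into the high long and start the low one
// from zero again. The low long is treated as unsigned so all 2^64 values
// are consumed before the carry happens.
public class TwoLongCounter {
    private long high = 0;   // stays at zero until the low part is exhausted
    private long low  = 0;

    public synchronized long[] next() {
        low++;
        if (low == 0) {      // the low long wrapped: all its values were used
            high++;          // carry into the high long
        }
        return new long[] { high, low };
    }
}
```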
If you ever need to change it one day, that is easy to do; it just takes the basic care required when changing any primary key, since you have to update every reference to the row that now has a different key. Of course, external references need to change as well. If you cannot guarantee that you can change all of them, then you will have to keep the old ID as a secondary key, detect when something is still accessing the row by the old ID, and look it up through that key instead of the primary one.
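A purely illustrative sketch of that fallback; the names and the in-memory maps are mine, and in practice this would be a query against a secondary unique column holding the old ID:

```java
import java.util.HashMap;
import java.util.Map;

// Rows keep their old long ID in a secondary unique column; callers that
// still present the old ID are resolved through that column instead of the
// new primary key. Everything here is illustrative, not from the question.
public class LegacyIdLookup {
    private final Map<String, String> byNewKey = new HashMap<>();   // new PK -> row data
    private final Map<Long, String> oldToNewKey = new HashMap<>();  // old ID -> new PK

    public String findByNewKey(String newKey) {
        return byNewKey.get(newKey);
    }

    public String findByOldId(long oldId) {
        String newKey = oldToNewKey.get(oldId);   // secondary-key lookup
        return newKey == null ? null : byNewKey.get(newKey);
    }
}
```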