In the comment I think you are actually talking about an N:1 relationship. Many people confuse it with 1:1. Unless the text is just unclear because of the short character limit imposed there.
The N:1 comes from normalization. In the example it happens to end up as 1:1, but by circumstance, not by obligation; it is not formally 1:1. By isolating the data you can maintain it in one place, and the change is reflected everywhere it is used. If it stays in the main table itself, it becomes repetitive, and any maintenance has to track down everywhere it was used and change it all. You also run the risk of it being entered in a non-standard way, so you can't even identify where it was used. Normalization creates a canonical way to deal with it. Many people don't realize that this is DRY, probably because of the idea that if you fully master one concept you don't need to learn the others.
We often treat 1:(0,1) as 1:1. True 1:1 obliges both sides to have exactly one, but in some cases one side must have one while the other may have zero.
1:1 is usually used for a few reasons:
The domain (object) is completely different, and each side can only have one of the other.
One example I give a lot is a person who has details of their role as customer, supplier, employee, etc. The person can have only one record for each of these roles (each role has its own table) and each role record can belong to only one person. This is clearly 1:1, or rather 1:(0,1), since most of the time a person will not have a given role. Many people don't see it that way because they duplicate entities in the database or create "god" tables/classes. Most models out there are copies of what already existed; nobody thinks it through, and it comes out wrong.
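A minimal sketch of that idea; the table and column names are only illustrative, they are not from the question:

```sql
-- Hypothetical schema: each role table uses the person's key as its own
-- primary key, so a person has at most one record per role (1:(0,1))
-- and each role record belongs to exactly one person.
CREATE TABLE pessoa (
    id   INT PRIMARY KEY,
    nome VARCHAR(100) NOT NULL
);

CREATE TABLE cliente (
    pessoa_id      INT PRIMARY KEY REFERENCES pessoa(id),
    limite_credito DECIMAL(10,2) NOT NULL
);

CREATE TABLE funcionario (
    pessoa_id INT PRIMARY KEY REFERENCES pessoa(id),
    salario   DECIMAL(10,2) NOT NULL
);
```

Making the primary key double as the foreign key is what enforces the (0,1) side: a person can exist with no role at all, but cannot have two records of the same role.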
Other examples of 1:1 or 1:(0,1):
A person takes a cell phone from the company. A person can only take one cell phone at a time (company rule) and that cell phone is with only one person (a person can have 0 cell phones). But if you want to keep a history of the cell phones already handed out, this model no longer works (see the sketch after this list).
There is only one head in a department and one head manages only one department (some rule may differ, but let's assume it is always like this in this company). But if the rule ever changes, it breaks.
A person has only one current spouse (there are countries where this is not the case).
A guest can book only one room at the hotel and the room can be reserved for only one guest (assuming there is no overbooking).
Wikipedia has more (the capital example has exceptions, although you can work around them and keep it 1:1). That is, all of these only work if the rule is rigid enough.
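For the cell phone example, a hedged sketch of how the rule could be enforced, reusing the illustrative funcionario table from the sketch above:

```sql
-- Hypothetical sketch: the nullable UNIQUE foreign key makes the association
-- (0,1):(0,1) -- an employee holds at most one phone and a phone is held
-- by at most one employee (or by nobody).
CREATE TABLE celular (
    id             INT PRIMARY KEY,
    numero         VARCHAR(20) NOT NULL,
    funcionario_id INT UNIQUE REFERENCES funcionario(pessoa_id) -- NULL = unassigned
);
```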
Many columns are optional; in theory you could go all the way to sixth normal form to solve this. Often, when this happens, it is because the domain was actually different and the person didn't realize it.
If people knew how to use the relational model correctly, and if RDBMSs had added certain features earlier, maybe there would be no NoRel (what people call NoSQL).
Secondary parts of a table's/object's data are rarely accessed together with the main part, so keeping them separate is an optimization: you don't have to touch (read or write) those parts, which helps the cache, reduces the area subject to automatic locking under concurrency, and can mean fewer accesses to permanent mass storage, which is slow, during long operations. This optimization needs to be very well thought out because it can often turn out worse than expected. It can also help when someone abuses SELECT *, but that is a solution to a problem that shouldn't even exist.
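A hedged sketch of that kind of vertical split; the tables and columns are made up for illustration:

```sql
-- Hypothetical vertical partitioning: bulky, rarely-read data goes into a
-- secondary table so common queries never have to touch it.
CREATE TABLE produto (
    id    INT PRIMARY KEY,
    nome  VARCHAR(100) NOT NULL,
    preco DECIMAL(10,2) NOT NULL
);

CREATE TABLE produto_detalhe (
    produto_id      INT PRIMARY KEY REFERENCES produto(id),
    descricao_longa TEXT,   -- long text almost never shown in listings
    ficha_tecnica   TEXT    -- likewise; exact types vary per DBMS
);
```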
What is faster depends on a number of factors that we often cannot anticipate before the system is actually in use. And we often optimize things that didn't need it, or that end up worse than we expected. Doing this before you know you need it is almost always premature optimization.
People usually ignore logical modeling and only think about physical modeling. It is possible to make the logical model in the most organized way and the physical model in the most optimized way.
Many people use it with this in mind, but deep down the data belonged to a different domain, and the real reason they used it, without realizing, was the previous one.
Some cases have so many columns that the table becomes almost unmanageable, so people split it. But again, they are probably building a wrong model and mixing data from objects that should be separate.
Each application/operation must have privileges to access one part but not another, and to grant those privileges you usually have to separate the tables (a database deficiency). It was probably a different domain anyway.
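A small illustration of that point, using standard GRANT syntax with made-up table and role names:

```sql
-- Hypothetical: once the sensitive columns live in their own table,
-- each application role can be granted access to only the part it needs.
GRANT SELECT ON pessoa TO app_relatorio;           -- reporting app: basic data only
GRANT SELECT, UPDATE ON pessoa_salario TO app_rh;  -- only the HR app touches salary
```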
Adding a new column to a table can take a long time and can block applications from using the table for a long time (a DBMS deficiency), so it becomes better to create a new table and associate it. Of course, that is a workaround.
Because it has to be that way for replication reasons or some other specific strategy.
The ORM or other tool the person uses, or some legacy-related issue, requires something like this.
The person has no idea what they are doing.
You avoid update/delete anomalies and loss of information; a database table is not an Excel spreadsheet xD. Highly recommended reading => What is database normalization?
– rray
For example, if the funcionario table has a column with the description of the cargo (job title), let's say that when the last intern is promoted or fired (a delete in this case), no other employee can ever have that position because it no longer exists. Now, if instead of the job description you had the job id, that event would not cause any loss of information.
– rray
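A minimal, standalone sketch of what this comment describes (the DDL is illustrative, not from the question):

```sql
-- Hypothetical normalized version: the job title lives in its own table,
-- so deleting the last intern does not erase the fact that the position exists.
CREATE TABLE cargo (
    id        INT PRIMARY KEY,
    descricao VARCHAR(60) NOT NULL
);

CREATE TABLE funcionario (
    id       INT PRIMARY KEY,
    nome     VARCHAR(100) NOT NULL,
    cargo_id INT NOT NULL REFERENCES cargo(id)
);
```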
I get it. And can I gain anything in performance? Given that in 95% of the cases where I query this table, I won't fetch all the data, only specific columns.
– Erico Souza
Yes and no. Yes, because breaking it into several tables eliminates duplicated values, which results in fewer rows per table, and when you need something specific you fetch only what is necessary. No, because performance generally decreases when joins are needed to bring together all the information the query requires, since the data is now separate.
– rray
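To make that trade-off concrete, a couple of hypothetical queries over the cargo/funcionario sketch above:

```sql
-- Cheap case: only the columns you need, no join required.
SELECT nome FROM funcionario WHERE cargo_id = 3;

-- More expensive case: a join is needed to also show the job description.
SELECT f.nome, c.descricao
FROM funcionario f
JOIN cargo c ON c.id = f.cargo_id;
```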