Well, I start with the saying "Just because you can do something doesn't mean you should."
The solution presented serves only to illustrate what can be done with nothing more than basic comparisons between columns and temporary tables. As the number of columns grows, the solution becomes hard to read, error-prone, and very likely slow.
The assumption is that, within each row, the values in the columns are always distinct.
The solution, which can be seen here at Fiddler, uses only 4 columns (instead of 10), but the reasoning can be extrapolated (again: that does not mean it should be). I leave here only a summary of the reasoning, which is admittedly confusing.
In the first step the value of each column is replaced by a chain of NVLs. That is, the first column becomes the result of NVL(COLUMN1, NVL(COLUMN2, NVL(COLUMN3, NVL(COLUMN4, NULL)))), the second NVL(COLUMN2, NVL(COLUMN3, NVL(COLUMN4, NULL))), and so on.
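The first step can be sketched as a query (the table name T and the aliases S1..S4 are my own assumptions, not from the original solution):

```sql
-- Step 1: each column is replaced by the first non-null value found
-- scanning from that column towards the right.
SELECT
    NVL(COLUMN1, NVL(COLUMN2, NVL(COLUMN3, NVL(COLUMN4, NULL)))) AS S1,
    NVL(COLUMN2, NVL(COLUMN3, NVL(COLUMN4, NULL)))               AS S2,
    NVL(COLUMN3, NVL(COLUMN4, NULL))                             AS S3,
    NVL(COLUMN4, NULL)                                           AS S4
FROM T;
```

For a row (NULL, 'x', NULL, 'y') this yields ('x', 'x', 'y', 'y'): each original value is now repeated leftwards until the position it should finally occupy.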
After this step, the values in the original table no longer matter; only the values returned by the previous step are used. Then, for each column:
- Check whether its value equals that of any column to its left. If it does, go on to the second check; otherwise the column keeps its own value;
- Check for equality with the columns to the right (starting with the one immediately to the right, up to the last). The column takes the value of the left-most column to the right that holds a different value.
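The second step can be sketched with CASE expressions over the step-1 results (I call them S1..S4; this is my own reconstruction of the described step and relies on the distinct-values assumption, so equality means "this value already appears further left"):

```sql
-- Step 2: keep only the first occurrence of each value, compacted left.
-- Note how each further column needs a progressively longer CASE --
-- exactly why this does not scale to 10 columns.
SELECT
    S1 AS R1,
    CASE WHEN S2 <> S1 THEN S2
         WHEN S3 <> S2 THEN S3
         WHEN S4 <> S3 THEN S4 END AS R2,
    CASE WHEN S2 <> S1 AND S3 <> S2 THEN S3
         WHEN S4 <> S3 AND (S2 <> S1 OR S3 <> S2) THEN S4 END AS R3,
    CASE WHEN S2 <> S1 AND S3 <> S2 AND S4 <> S3 THEN S4 END AS R4
FROM step1;
```

For the row ('x', 'x', 'y', 'y') from step 1 this produces ('x', 'y', NULL, NULL): the duplicates introduced by the NVL chains are discarded and the surviving values end up packed to the left.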
As I said, it is possible to do, but it is not a good idea.
This can't be done in a single query; it needs programming. You would have to write a procedure in the database for this, but it seems to me very complicated to do it row by row.
– Ricardo Pontual
Believe me, it would be easier with a real example: 1) "unpivot" the columns with UNION, eliminating the nulls; 2) "pivot" again with MAX and CASE. But the problem seems "nebulous", at least to me, and the data model also looks bad.
– Motta
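The unpivot/pivot idea from the comment could look roughly like this (a sketch only: the table T, a key column id, and the use of ROW_NUMBER to renumber the surviving values are all my assumptions, not part of the comment):

```sql
-- 1) "Unpivot" via UNION ALL, dropping the nulls and remembering
--    each value's original column position.
WITH unpivoted AS (
    SELECT id, 1 AS pos, COLUMN1 AS val FROM T WHERE COLUMN1 IS NOT NULL
    UNION ALL
    SELECT id, 2, COLUMN2 FROM T WHERE COLUMN2 IS NOT NULL
    UNION ALL
    SELECT id, 3, COLUMN3 FROM T WHERE COLUMN3 IS NOT NULL
    UNION ALL
    SELECT id, 4, COLUMN4 FROM T WHERE COLUMN4 IS NOT NULL
),
-- Renumber the remaining values per row, preserving their order.
renumbered AS (
    SELECT id, val,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY pos) AS rn
    FROM unpivoted
)
-- 2) "Pivot" back via MAX + CASE: value number rn lands in column rn.
SELECT id,
       MAX(CASE WHEN rn = 1 THEN val END) AS COLUMN1,
       MAX(CASE WHEN rn = 2 THEN val END) AS COLUMN2,
       MAX(CASE WHEN rn = 3 THEN val END) AS COLUMN3,
       MAX(CASE WHEN rn = 4 THEN val END) AS COLUMN4
FROM renumbered
GROUP BY id;
```

Unlike the NVL/CASE approach above, this version grows only linearly with the number of columns, which is presumably why the comment suggests it.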