Is doing unstable code commits in Git bad practice?

I have a very big change to make because of a class that has many relationships. Do I need to leave the project in a stable state before making local commits? Or is tested, stable code only required for the pull request?

I ask because I would like to keep versions of my own work for myself, even when they are not stable yet, but I don’t know whether that might bother the person who will validate the merge request.

2 answers

5


There is controversy about this. There is no single "right" way; there is the way that is right for you.

Many say yes, but before pushing to the remote repository you should rebase to simplify things. The rebase "cleans up" the history.
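
As a sketch of that cleanup: on a real branch it is usually done with `git rebase -i` before pushing. Since an interactive rebase can’t be shown non-interactively, the demo below uses a throwaway repository and `git reset --soft` to get the same squashing effect; every path and commit message here is invented for illustration.

```shell
# Throwaway repository just for the demo; on a real branch you would
# run `git rebase -i origin/main` and mark the WIP commits as "squash".
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "base"

echo "step 1" > feature.txt
git add feature.txt
git commit -qm "wip: first attempt"
echo "step 2" > feature.txt
git commit -qam "wip: still fiddling"

# Squash the two WIP commits into one clean commit before pushing:
git reset -q --soft HEAD~2
git commit -qm "feature: done, tests passing"

git log --oneline   # only "feature: ..." and "base" remain
```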

Others say you don’t even need to worry about rebasing. So start doing it and see whether it meets your needs, and your team’s needs too, which matters even more.

In fact, if you worry too much about your commits, you won’t commit when you need to, and Git loses much of its usefulness. One of the reasons version control was invented, and was greatly improved by decentralized systems, is that you can experiment, improve, keep a history, and return to a previous state, all with confidence. This encourages making more restrained changes, handling one problem at a time and giving more granularity to your work.

4

This is not exactly about good practice, but Kent Beck proposed an experimental workflow called test && commit || revert.

I haven’t had the chance to try it, but the idea works as a kind of quick dojo to sharpen coding skill and confidence.

Preliminaries: explaining the name

Anyone used to Unix shells has probably already understood the name.

The && and || come from conditional execution in the shell: they are conditional command execution operators with boolean short-circuiting.

So what does test && commit || revert mean? Just this: run the tests; if they pass, commit; otherwise revert the changes.
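
For example, in a POSIX shell (the echoes are just stand-ins for real commands):

```shell
true && echo "runs: left side succeeded"
false && echo "skipped, never printed"
false || echo "runs: left side failed"

# Same shape as the workflow's name:
#   test && commit || revert
# if `test` succeeds, run `commit`; on failure, run `revert`.
```

One caveat of chaining all three: in `a && b || c`, `c` also runs if `b` itself fails, not only if `a` does; for this workflow that is usually harmless, but it is worth knowing.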

Comparison with Classic TDD

Classic TDD expects changes to production code to be driven by a failure. For example, a build error because the class Parafuseta now needs a new method Rebimboca to exist. On top of that failure, you change the production code to make the test pass.

One advantage of TDD is that all new production code is already covered, at least for the known, easily testable cases it applies to. So it is code that guarantees safety for part of the universe of inputs.

Even though this advantage is significant, other factors come into play. One of them is spending resources on things that have no commercial value. I am not judging the merits of TDD here; the focus is the comparison.

So TDD code roughly goes through the following evolution (the color indicates whether the tests passed or failed; in parentheses is the action that led to that color):

  • green (pre-existing code, possibly none)
  • red (new test case written)
  • green (production code changed to make the test pass)
  • green (cleanup to remove any leftover mess from the change, i.e. refactoring)

In a classic environment of atomic, indivisible commits (this definition of commit atomicity is not very solid, but let’s keep an informal idea of it), writing the test and then making it pass would be one atom; maybe the cleanup would come along too, since before this change that specific mess did not exist.

Another approach to atomizing commits would be one commit for the test code (a delta that stands on its own), then another for the fix and the cleanup.

In the first approach, you end up making one large commit after a long stretch of coding. And the longer you go without committing, the harder committing becomes.

In the second approach, you may push "unstable" code to the repository, since that is the intention of TDD (to record executable instabilities so they can be repaired).
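
To make the second approach concrete, here is a throwaway sketch following the Parafuseta example above; the file names, contents and messages are all invented for illustration:

```shell
# Throwaway repo; the point is the shape of the two commits.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo "def test_rebimboca(): ..." > test_parafuseta.py
git add test_parafuseta.py
git commit -qm "test: Parafuseta should expose Rebimboca (red)"

echo "class Parafuseta: ..." > parafuseta.py
git add parafuseta.py
git commit -qm "feat: add Rebimboca to Parafuseta (green, cleaned up)"

git log --oneline   # two commits: the test delta, then the fix
```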

In test && commit || revert, by contrast, you write a refactoring or a new test case along with the new feature. If it worked, you commit everything that was done, without exception. If there is a failure, all the changes are discarded. What effect does this have in practice?

Well, to begin with, the repository will always be "green". You never commit unstable code (after all, committing is tied to the tests passing). Also, programmers teach themselves to run the tests earlier. The earlier you test, the fewer changes you have made and, therefore, the smaller the window for introducing instability.
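
As a sketch, the whole loop can be a one-line shell function. `run_tests` below is a stand-in for whatever runs your suite (make test, npm test, ...), and the demo repository is throwaway:

```shell
# The smallest shape of TCR: tests pass -> commit; tests fail -> revert.
# (`-a` only covers modified tracked files; a fuller version would
# `git add .` before committing.)
tcr() {
    run_tests && git commit -qam "tcr" || git reset -q --hard
}

# Demo in a throwaway repository:
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'echo pass' > code.sh
git add code.sh
git commit -qm "green baseline"

run_tests() { sh code.sh | grep -q pass; }   # toy "test suite"

echo 'echo fail' > code.sh            # this change breaks the test...
tcr                                   # ...so it is thrown away
echo 'echo pass # tidier' > code.sh   # this change stays green...
tcr                                   # ...so it is committed
```

In Kent Beck’s article the commit and revert are wired into the editor/watcher; the inline function above is only the smallest shape of the idea.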

With this, the code also gains several "save points", where it is easy to take a game over and respawn close to the problem.

Frequent commits, an always-green repository; it all sounds like sunshine and roses, doesn’t it?

Personally, I think TDD keeps one great advantage: the guarantee that the old code failed for that specific scenario and that the new code no longer does.

Initial criticism by Kent Beck

Paraphrasing the second section of the article:

How could you make progress if the tests always have to work? Don’t you make mistakes sometimes? What if you write a bunch of code and it just gets wiped out? Won’t you get frustrated?

The interesting thing is that he answers himself, after experiencing this flow:

  1. yes, it is possible
  2. yes, you will do silly things, but everything gets cleaned up before you move on (avoiding the "sunk cost fallacy")
  3. if you don’t want to lose a pile of code between two green states, then don’t write a pile of code between two green states
  4. yes, it is very frustrating, but the solution found next is usually better, more confident and more incremental

On the relevance of your question

It all depends on the development workflow adopted. As Maniero himself said: there is controversy.

test && commit || revert is a proposed workflow that values automated tests (as long as it doesn’t turn into XGH) and small, incremental changes with properly atomized commits.

The idea of small, incremental changes is not new. Moreover, if you work in a team and your code goes through review, the less change context the reviewer has to carry, the better. Fixing a bug? Then avoid touching that whole patched-up class that has absolutely nothing to do with the bug; leave it for another pull request. Although a pull request is reviewed by considering its entire delta, knowing how to atomize at the commit level will make it easier to "atomize" the pull request itself.

Since this coding workflow requires a minimum of maturity with automated testing, it may not be the best environment for learning how to code. And since it is a high-frustration workflow, it may not be the best way to learn version control tools either.

But in the end, if you manage to adopt test && commit || revert properly, know that your code will always be in good shape, and that you will easily be able to make small, incremental changes (which is good both for the health of the code and for the reviewer’s sanity).
