Your question is somewhat broad, since drag-and-drop can mean many things in the context of software development.
For me, for example, what comes to mind first is the interaction pattern in which the user clicks the mouse pointer to select an object, holds the button down to "hold" that object and move it to some other place of interest (the drag), and then releases the button to let the object "fall" in the new location (the drop). But your question can also be read as asking why programming tools and languages (specifically for building graphical interfaces) make so little use of this feature to be more WYSIWYG, where the programmer builds the graphical interface literally by drawing it instead of writing textual commands.
I think your original intention with the question leans toward the second interpretation, but this is unclear; it would be nice to edit the question to clarify. Either way, I still think the question can yield useful content under both interpretations, so I'll try to provide an answer.
1. The Drag-and-Drop Interaction Pattern
Actually, the drag-and-drop interaction pattern is widely used, simply because it makes a lot of sense in the desktop metaphor that most modern operating systems adopt. In fact, it makes so much sense that it is the second most important interaction in virtually any mobile app (the first, of course, being the click/tap). This type of interaction with the hands (or their equivalent, if you consider the mouse almost an "avatar of your hand") is so natural that babies try to use magazines the same way they use tablets. So, in a computer system, if the metaphor is a book or magazine, you use your finger to "drag" the pages; if it is a physical game, you use your finger to "kick" a ball; and so on. All of these can be seen as variations of the interaction originally performed with the mouse pointer.
Of course, there can be problems depending on how the interface is built, especially when the elements that support this type of interaction do not make that option clear to the user (I suggest reading item 3, "No Perceived Affordance", of this great article). That is why, on PCs, it is common to use a mouse pointer that simulates an open or closed hand (the latter literally "carrying" something) to indicate that something can be grabbed or is being dragged:
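Just to make the three phases (press, drag, release) concrete, here is a minimal sketch in Python/tkinter. The language and toolkit are my choice for illustration only, and the rectangle and the "fleur" cursor are assumptions of mine, not something from the original discussion:

import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=300, height=200, bg="white")
canvas.pack()

# A rectangle standing in for the object the user will grab.
item = canvas.create_rectangle(20, 20, 80, 60, fill="steelblue")
last = {"x": 0, "y": 0}

def on_press(event):
    # Click: "grab" the object and give visual feedback (a move cursor,
    # playing the role of the closed hand).
    last["x"], last["y"] = event.x, event.y
    canvas.config(cursor="fleur")

def on_motion(event):
    # Drag: move the object by however much the pointer moved.
    canvas.move(item, event.x - last["x"], event.y - last["y"])
    last["x"], last["y"] = event.x, event.y

def on_release(event):
    # Drop: restore the cursor; the object stays where it "fell".
    canvas.config(cursor="")

canvas.tag_bind(item, "<ButtonPress-1>", on_press)
canvas.tag_bind(item, "<B1-Motion>", on_motion)
canvas.tag_bind(item, "<ButtonRelease-1>", on_release)

root.mainloop()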
Image source: http://www.shutterstock.com/pic.mhtml?id=156119834&language=pt
2. Use in Programming and Interface Building
I also don't think this type of interaction is little used in programming. As colleagues have mentioned in the comments, proprietary tools like Delphi and Visual Studio have very good features in this regard. Others leave something to be desired. Qt Designer, for example, which I use often, lets you simply draw the interface (dragging the components and defining their properties in the appropriate dialogs) for the more basic things: creating toolbars, adding buttons, menus and text, organizing layouts, and so on. However, when I needed to put a toolbar inside a dockable window (dock area), I had to do it manually in code because the tool simply doesn't allow it (a sketch of what that manual step can look like follows below). I imagine things work in a very similar way in Netbeans, for example (I can't say; I haven't used it in a long time).
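Just to illustrate, here is a minimal sketch of that manual step using PyQt5, the Python binding for Qt. The widget names, actions, and layout are my own illustrative assumptions, not code from any real project; the point is only that a few lines of code cover a case the visual tool does not:

from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import (QApplication, QDockWidget, QMainWindow,
                             QTextEdit, QToolBar, QVBoxLayout, QWidget)

app = QApplication([])
win = QMainWindow()
win.setCentralWidget(QTextEdit())

# Qt Designer can draw the dock widget itself, but a toolbar *inside* it
# has to be assembled by hand: a container widget whose layout holds the
# toolbar above the dock's actual content.
dock = QDockWidget("Tools", win)
container = QWidget()
layout = QVBoxLayout(container)
layout.setContentsMargins(0, 0, 0, 0)

toolbar = QToolBar(container)
toolbar.addAction("Open")   # illustrative actions only
toolbar.addAction("Save")

layout.addWidget(toolbar)
layout.addWidget(QTextEdit())   # the dock's content area

dock.setWidget(container)
win.addDockWidget(Qt.LeftDockWidgetArea, dock)

win.show()
app.exec_()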
However, note that the purpose of these tools is precisely to support software development. Since they are used by programmers, it often ends up being easier and more natural to simply build the graphical interface in code than to draw it, to the point of making it seem that the feature is little used.
ADDENDUM: It was not always so. In the fairly recent past, when the interface-construction standards called Layout Managers did not exist (or were not widely used), programmers literally had to write into the program text the screen coordinates where each component should be positioned. For example, the AdvPL code below, reproduced from the blog siga0984, does exactly that (and @Maniero will remember this legacy from FiveWin! hehehe):
#include 'protheus.ch'

User Function APPINT01()
    Local oDlg
    Local oBtn1, oSay1

    // Every component is placed by explicit pixel coordinates (row, column).
    DEFINE DIALOG oDlg TITLE "Exemplo" FROM 0,0 TO 150,300 COLOR CLR_BLACK,CLR_WHITE PIXEL
    @ 25,05 SAY oSay1 PROMPT "Apenas uma mensagem" SIZE 60,12 OF oDlg PIXEL
    @ 50,05 BUTTON oBtn1 PROMPT 'Sair' ACTION ( oDlg:End() ) SIZE 40,013 OF oDlg PIXEL
    ACTIVATE DIALOG oDlg CENTER
Return
It should be easy to imagine that, under these conditions, a visual interface-building tool would be very welcome (however basic it might be).
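For contrast with the AdvPL example above, here is a rough sketch of a similar dialog in Python/tkinter using a layout manager (the toolkit choice is mine, for illustration only): no pixel coordinates appear, just relationships between components and some padding:

import tkinter as tk

root = tk.Tk()
root.title("Exemplo")

# No coordinates anywhere: the layout manager (pack) computes positions
# from the declared order and padding of the components.
tk.Label(root, text="Apenas uma mensagem").pack(padx=20, pady=10)
tk.Button(root, text="Sair", command=root.destroy).pack(pady=(0, 10))

root.mainloop()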
I say it only "seems" that this feature is little used because, in my understanding, it exists in many of the tools we use, generally as a support for building graphical interfaces. Users less experienced with a tool/language most likely make heavy use of these features, perhaps depending on them less as they gain experience. You are probably quite experienced with the languages/tools you use, and simply don't miss the feature much.
This is not the case for most users (I'm using the term broadly, since we programmers are users of development tools). In fact, I think there is a strong trend of moving further and further away from text input as the main way of building scripts, defining behaviors, and even programming proper (whether for work, leisure, or education). The intention is clearly to make the task easier for a wider audience. There are numerous examples of this trend; among them, I think these are worth mentioning:
- Fungus in Unity 3D. Unity 3D is a game-development tool that already makes life much easier for developers. Even so, you still need to write code (in C# or JavaScript). A fairly popular add-on called Fungus allows programming the narrative flow in a graphical, visual way. For programming narratives (dialogue sequences) in particular, a graphical flow makes much more sense than a block of code text, and that is where the tool's popularity comes from.
- Nodes in Blender. Blender is an open-source and very complete 3D modeling program. Building textures and the rendering order of objects, for example, are complicated processes that used to be carried out/automated through action sequences across multiple property windows (hard to memorize, you might say!) or through Python code (generally inaccessible to the tool's target audience). A feature called Nodes lets you configure these actions through a graph of nodes with properties and input and output plugs, in a way that is much easier for the general public.
- Scratch. Scratch is an MIT project intended essentially to make learning to program easier. In it, the logical flow is constructed just as in any algorithm in the structured approach (flow-control instructions, loops, etc.). However, this is done using visual blocks that are, look at that, dragged and combined with the mouse pointer! :) This interface became so well known that it has been used in several other tools, among them Stencyl (an engine for building games).
Example of a piece of a program (an event in an object) "written" in Scratch.
Wouldn't the answers be opinion-based? I, for example, have my own opinion on why I don't usually use the graphical interface to draw screens on Android. On iOS, on the other hand, I use it a lot.
– Paulo Rodrigues
@Paulorodrigues I believe not, because I want to know, for example, whether the application gets heavier.
– Giovani
I don't believe there is a valuable answer for such a broad context. In that project, they must have drawn the screens by writing code simply because Swing (and, along with it, Netbeans' drag-and-drop) is terrible, one of the worst beasts ever conceived, and the only way to achieve a more or less acceptable result is to forget drag-and-drop and do everything in code. Delphi and Visual Studio, on the other hand, offer incredibly easy drag-and-drop, so everyone who uses those IDEs prefers to draw the screens with the mouse. Drag-and-drop is actually used a lot.
– Caffé
As for whether the application gets heavier using one or the other: in Delphi, .NET, and Java, the answer is no. There is no difference between dragging the controls onto the form and adding them at runtime, as long as, when writing code, you use the techniques recommended by each platform.
– Caffé
In the case of Java for Android, drag-and-drop in Eclipse is very weak. In Android Studio, on the other hand, I use it a lot because it is a precious help.
– Jorge B.