Too many screens or a screen with too much information?


37

I see an increasing trend (although it has existed before) of creating multiple screens, i.e. several steps to perform a single action.

Of course, the advent of smaller screens encourages this. But I’m talking about a normal desktop. The same goes for "normal" web applications (which are not made to run on mobile).

I also understand that screens overpopulated with information, letting you do various, often unrelated, things, do not make sense. But I notice that what used to take one click now, in modernized interfaces, ends up requiring two, three or more clicks to achieve the same purpose. The screen is often cleaner, but it becomes harder to reach the desired operation.

This kind of philosophy, it seems to me, undermines the discoverability of operations.

Does having a lot of information really get in the way? Why?

Are there studies, or at least consistent information, showing that one approach is really better than the other? Why is hiding operations better? And if it isn't, why is it being adopted? It would be interesting to see a comparison between the two situations to understand the advantages and disadvantages of each, and thereby help decide when to choose one over the other.

Is there any way to facilitate discovery and access to operations without "cluttering" the screen? Are there exceptions where we should not apply one of these philosophies?

  • 1

    Do you have an example of each of the two situations? It would help to better understand the "philosophies" you are referring to, and their real differences, difficulties and "hidden" operations.

  • I’m trying to find a before/after to post; if I find something good, I’ll edit it in to show it clearly. But I’m speaking in general terms. Just so you don’t leave without any example: to get to http://answall.com/help/mcve you need 4 clicks; it used to take fewer. Another example is the use of tabs instead of split views. I see applications that go so far as to require switching screens to compare information. While this is understandable on mobile, Windows 8 took it further: http://ux.stackexchange.com/questions/31207/ Gmail is a good example of how this has changed.

7 answers

32


TL;DR

Does having a lot of information really get in the way? Why?

In theory, yes, because the capacity of human attention is limited. The user may not notice what they should, and even if they do, it may demand too much cognitive effort for more continuous interaction.

Are there studies, or at least consistent information, showing that one is really better than the other?

There are numerous ones, but the more general studies serve only as a starting point. The best study for you will be the one you run with your own users, using prototypes of the product you are preparing for them.

Why is hiding operations better? And if it isn't, why is it being adopted?

Hiding can be better because it eases the user's local interaction (they don't have to worry about what isn't important in the current context). However, the user should never need to remember too much information; again, human capacity is limited.

Is there any way to facilitate discovery and access to operations without "cluttering" the screen?

There are numerous ways, all of them using some of the human senses (after all, the user needs to be able to perceive the information conveyed). Games, for example, make methodical use of sounds, fitting them to the fantasy (i.e., the user's prior knowledge). The classic example is the sound of swords clashing in Age of Empires, which indicates that a battle has started off-screen. Mobile apps also vibrate the device. In short, you don't necessarily need to use the screen to convey some piece of information.

Are there exceptions where we should not apply one of these philosophies?

Certainly there are. The most obvious is building applications for blind users: in that case, no matter the amount of information, the screen is not the best way to transmit it. It may seem like an extreme example, but the point is that such exceptions stem from your own analysis of your users, with their preferences, expectations and needs.

Original and more complete version

There are two things that are very important to consider in designing interaction with a product: appeal and engagement.

Appeal is directly related to user preferences and to aesthetic attributes of the product, such as how nice, beautiful and intriguing it looks. It is a first level of interaction, in which the user chooses to start using the product. Engagement, on the other hand, although it also involves attributes of preference and aesthetics, has a stronger relationship with satisfaction in the use experience. It is a more continuous contact, in which the user chooses to keep interacting with the product. If the expectations created in the first contact with the product are met or positively exceeded as contact continues, engagement usually follows.

Appeal, Amount of Information and Cognitive Effort

Appeal is closely linked to curiosity, which is a basic human need to understand the world. This need arises from and leads to interaction. We humans interact with the world not only by altering it through our actions, but also by perceiving the changes that we ourselves and other agents make in it. It turns out that the mechanism of attention (which filters the enormous amount of sensory data we continually receive from the world and decides what is relevant or not according to our intentions) is limited. It is estimated* that each person can process 126 bits of information per second, which means we are able to pay attention to a theoretical maximum of three simultaneous conversations (if we can completely ignore everything else, such as the internal perceptions of our own organism).

* The source of this information is the work of psychologist Mihaly Csikszentmihalyi, in the book Flow: The Psychology of Optimal Experience. Those who wish to read about interaction, attention and motivation in a (perhaps) slightly easier way can read my article on fun.

If you don't believe in this limitation of attention, take the test: try to count the number of passes by the white team in this famous video. :)

That's why evolution has made the brain so interested in slightly unknown patterns. When we observe something that has nothing new, that something is simply uninteresting (we already know the subject and do not care about it, because it will not help the organism act better in the world). On the other hand, if that something is completely unknown and cannot even be compared to something else we know, it is chaotic and consequently also uninteresting. That is why a TV screen showing only static, which has a lot of information from the point of view of Information Theory, is simply boring and does not spark even the beginning of appeal. But that is also why very difficult puzzles, as appealing as they may be initially, make engagement difficult, because users simply can't figure out how to proceed.

All this theoretical blah-blah is to demonstrate that there is an ideal point in the amount of information to be displayed. It should be enough to create appeal and excite curiosity (and don't think only of games; an alarm in a factory control system needs to be able to attract attention quickly!), but it cannot exceed the natural limitations of human beings to the point of causing discomfort and thus hindering engagement. Even if a lot of information is understood and creates appeal, it can require a lot of cognitive effort and thus simply tire the user. That is why the Principles of Usability preach, among other things, preventing the user from having to remember the path through the menus to reach a command, for example.

Note, however, that the mere existence of a lot of information is not always bad. What is bad is flooding the user with all of it at once, making them unable to extract something useful from it. The results of a Google search, for example, contain a lot of information, but it is spread over easily navigated pages. And, above all, it is very clear to the user that this possibility exists (they realize there is more data than what they are currently seeing). In fact, here lies the link to the physical effort of the next topic: the user may not mind having to navigate to the next item many times, as long as this browsing action is relevant and simple.

The example you used in the question (the interface full of fields) is bad more because the user doesn't know what to do next than because all that information is available to them. This fantastic article, called Demystifying UX Design, has much more relevant information in this regard.

Engagement and Ergonomic Effort

Ergonomic or use effort (such as the number of clicks) is more relevant to engagement than to appeal. This is natural, because during the first contact the user has not yet really used the product and only has expectations about how they will interact with it. Cognitive effort, treated in the previous topic, however, begins at the first contact, when the user seeks to understand how the product works.

There is a tendency to believe that too much physical effort undermines engagement because it simply makes the user tired of interacting with the product. For example, the Nintendo Wii boxing game may seem very interesting on first contact (it has great appeal), but after a few matches the physical tiredness of constantly punching the air can make the experience less satisfying than imagined. Still, this is not a universal truth, because everything depends on the preferences and expectations created in users during the appeal phase. There are users who take immense satisfaction in expending physical effort, and they will surely realize from the first interactions that this is the type of game for them.

In other words, it is not the effort itself that undermines engagement, but how users perceive its relevance. In the article I quoted earlier on demystifying UX, there is the example of the greater number of clicks in a wizard (assistant) interface, which is nevertheless perceived as helpful by users because each step individually requires little physical or cognitive effort, does not overload the user with questions, and lets them reach the goal gradually.
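To make the idea concrete, here is a minimal sketch of that kind of wizard (my own illustration, not code from the article; the `WizardStep` and `Wizard` names are hypothetical). Each screen asks for very little, so per-step effort stays low even though the total number of clicks grows:

```typescript
// Minimal wizard sketch: one large form split into small, low-effort steps.
interface WizardStep {
  title: string;
  fields: string[]; // what the user is asked for on this step
}

class Wizard {
  private current = 0;
  constructor(private steps: WizardStep[]) {}

  // Showing "step X of Y" keeps the remaining effort visible to the user.
  get progress(): string {
    return `${this.current + 1} of ${this.steps.length}: ${this.steps[this.current].title}`;
  }

  next(): void {
    if (this.current < this.steps.length - 1) this.current++;
  }

  back(): void {
    if (this.current > 0) this.current--;
  }
}

// Usage: three small screens instead of one screen with eight fields.
const signup = new Wizard([
  { title: "Account", fields: ["email", "password"] },
  { title: "Shipping", fields: ["address", "city", "zip"] },
  { title: "Payment", fields: ["card number", "expiry", "cvv"] },
]);
console.log(signup.progress); // "1 of 3: Account"
signup.next();
console.log(signup.progress); // "2 of 3: Shipping"
```

The point of the sketch is only that each step presents a small, coherent chunk; whether the extra clicks are acceptable still has to be verified with real users, as the rest of this answer argues.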

Unfortunately I no longer have the reference for this study, but the one-touch mechanism for raising a car's power windows came from tests that Japanese automotive companies ran with their users, in which precisely this fact was observed. In early versions, users had to keep pressing the button for the window to go up or down. However fast the window went up, users always complained that the mechanism was too slow (even though it was much faster than turning a crank, as had been done until then). Keeping the button pressed means performing physical effort, even if much less than turning a crank, but the perceived result of that effort was comparatively very small. While holding the button (a task so simple as to be ridiculous) time seemed to pass more slowly, because the feeling of idleness was huge. That's why the Japanese created the one-touch auto up/down. :)

Incidentally, there are studies that have also observed this phenomenon of the relationship between action and time. After intentionally distorting the perception of time ("fooling" participants with clocks that run faster), it was found that even tasks considered very boring (such as counting matchsticks!) are experienced as more pleasurable because of this perception that time is flying...

And How to Plan the Interaction

The best way, according to all the UX sources I have studied and to my own experience, is to evaluate the interaction directly with users, mainly using low-fidelity prototypes (such as ones built on paper). The low fidelity of the prototype makes it cheap and quick to build, and keeps the designer from getting attached to what was created (this interface was so cute and the user didn't like it... ah, it is they who don't know what they want... my Precious!). In addition, the evaluation lets you observe the critical points in all the aspects discussed above. On that subject, I also suggest reading this other question about what Wireframes, Mockups and Prototypes are, and the book Interaction Design: Beyond Human-Computer Interaction.

  • 3

    Who would have thought that you, of all people, would give an answer of this level on this subject? :D

  • 2

    +1 If anyone saw the TL;DR at the beginning of this answer and skipped the rest... go back and keep reading to the end, because it's worth it!

13

Well, in this case the answer may change over time, because 3 to 5 years ago today's interfaces would not have been possible.

There is no "best way of working" convention in interface design, especially because each project has different requirements.

When developing an interface, some criteria should be taken into account, such as:

Clarity
Clarity, according to Kevin Matz, is one of the main objectives when developing an interface, because the intention is to build an interface that the user can interact with and understand how it works.
Of course the "number of clicks" should be taken into account, but it should not outweigh the clarity and usefulness of your interface.

Attention
We live with various factors competing for our attention as the day goes on. Your interface should be able to deal with these external factors. Do not put information on your "screen" that can divert the user's attention; your application should be able to handle this factor.

Goal
Adding several features to one screen to make it "easier" ends up harming the user's interaction with your application, because amid so many features the user ends up getting "lost" and access becomes difficult. Each screen should focus on one final goal, which makes the user's learning and memorization easier. After all, when we develop an application, be it desktop, web or mobile, it will be used by N types of users, each with their own way of learning.

Other Operations
When it is necessary to add "secondary operations" to your screen, make sure the user knows that it is a secondary operation, that it is there only as an add-on, and not meant to be the main focus. A minimal sketch follows below.
Example: an option to share a photo on Facebook is there only as an add-on; the main goal is for people to see and interact with the photo, and sharing it is a secondary factor.
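Purely as an illustration (the markup and styling choices here are mine, not from this answer), a secondary operation can be given less visual weight than the screen's main goal:

```typescript
// Sketch: the main action is prominent; sharing is visually secondary.
function renderPhotoActions(container: HTMLElement): void {
  const view = document.createElement("button");
  view.textContent = "View full size"; // the screen's main goal
  view.style.fontSize = "1.2em";
  view.style.fontWeight = "bold";

  const share = document.createElement("a"); // secondary, add-on operation
  share.textContent = "Share on Facebook";
  share.href = "#share";
  share.style.fontSize = "0.8em";
  share.style.opacity = "0.7"; // lower visual weight, still discoverable

  container.append(view, share);
}

renderPhotoActions(document.body);
```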

Answering your question directly: as long as having a larger number of screens makes your application "easier" to interact with, there is no problem, quite the opposite. The intention of a "user-friendly interface" is to create a form of interaction in which the user does not have difficulty interacting with your application.
In short: do not take into account the "number of screens or buttons", but rather "the way of interacting with the user", as this is what will define whether your interface is clear or not.

I will leave some sources, directly and indirectly related to your question:

Gestalt - indirectly related to your question
Gestalt Interface Design
User Interface
UI Design
The Future of UI
Evaluation of User Interfaces - Concepts and Methods

11

First, an introduction

Application x User Interface

There is a relationship between application and user which, when lasting, shapes both.

First contact

When the relationship is established, that is, when the user has first contact with the application, what happens is that the user will like it or not based on their personal tastes... which, in my opinion, are closely related to past experiences.

If the user is used to clean interfaces, or more populated interfaces, or command-line interfaces, or GUIs, they will immediately form an impression based on past experiences.

But what does that have to do with anything?

For me, there are no good or bad interfaces. What there are, for me, are:

  • interfaces that follow established standards,
  • or interfaces that try to create their own standard.

The first case will surely cause much less impact on users, and will fit the experiences of a wider range of users.

The second case will have more difficulty gaining users, precisely because things are not where they expect them to be.

Bidirectional shaping over time

Over time, both the application and the users change, and each shapes the other. If the application changes and someone dislikes it, that user may migrate to a competitor. If many users ask for a change, the software maker will probably listen to them, thus shaping the software.

Thus interfaces change, following trends and indicators, based on the users and their objectives.

Now try to change something that is culturally embedded, like the Start button in Windows. Or leave the interface totally clean, as they wanted to do with Microsoft Office.

That is, there is a question of past experience even between versions of the same software.

Relation Application x Objective

The specific organization of any application, besides being related to the user's experiences, is also related to its goal (what a surprise!).

But this has implications for your questions: after all, a more populated interface gives agility to the trained user, while a cleaner interface is less intimidating to occasional users, laypeople and the masses in general.

Answers:

Does having a lot of information really get in the way? Why?

Based on what I said earlier, for me the answer is that no kind of organization gets in the way, as long as the user is already used to it and the use of the application matches its objective.

Are there studies, or at least consistent information, showing that one is really better than the other?

Sure... but I'm going to go Socratic and try to answer only on the plane of ideas, with my own arguments.

If it's really important, I can look for some. They are usually studies related to UX topics. I have even seen studies on where labels should be positioned relative to the boxes where you enter data... I just can't remember where.

Why is hiding operations better?

I don't think it is better... for me, it's a marketing strategy: you reach the masses because it doesn't scare off the newcomer, the explorer. And it also doesn't limit the person who will dig deeper into the application, even though that person loses some agility.

And if it isn't, why is it being adopted?

Because the market of users who require agility is smaller. That doesn't mean it doesn't exist. Imagine software for monitoring a nuclear reactor... everything has to be a finger away... a glance away.

Is there any way to facilitate discovery and access to operations without "cluttering" the screen?

I think so.

In the future, maybe just thinking about something will make it happen... but sticking to the present, I think the nice thing would be to combine the best of both worlds (a rough sketch of these ideas follows the list):

  1. the application ships from the factory very clean
  2. it is customizable, so the user can assemble their own mega-powerful control panel
  3. if the organization (company) wishes, there is a way to enforce a layout, and then train employees on that layout
  4. there could be a way to share a globally customized layout, thus creating a global standard for users of a certain application, whether or not endorsed by the application's creator
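As an illustration only (none of these names come from any real product; the whole structure is hypothetical), such a shareable layout could be little more than a serializable description of which commands are pinned and which stay hidden:

```typescript
// Hypothetical sketch of a shareable, enforceable layout description.
interface PanelLayout {
  name: string;
  pinnedCommands: string[]; // commands promoted to the main screen
  hiddenCommands: string[]; // commands tucked away behind menus
  locked: boolean;          // true when an organization enforces the layout
}

// The "factory clean" default: only the essentials are visible.
const defaultLayout: PanelLayout = {
  name: "factory-clean",
  pinnedCommands: ["open", "save"],
  hiddenCommands: ["batch-export", "macro-editor"],
  locked: false,
};

// A power user or a company derives its own layout from the default.
const teamLayout: PanelLayout = {
  ...defaultLayout,
  name: "accounting-team",
  pinnedCommands: [...defaultLayout.pinnedCommands, "batch-export"],
  hiddenCommands: ["macro-editor"],
  locked: true,
};

// Sharing the layout is then just exchanging the serialized description.
console.log(JSON.stringify(teamLayout, null, 2));
```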

Are there exceptions where we should not apply one of these philosophies?

For sure.

It depends on the level of agility users require. It depends on the specific client's aesthetic requirements. There are many requirements that can constrain the interface.

Epilogue

This is my opinion... as a developer, user, thinker... My past experiences certainly influenced this text. So, for sure, all of this is less than a small fraction of the possibilities.

  • 1

    Very good answer. Indeed, some indication of sources was missing, but your Socratic approach is on the right track. :)

8

A priori, having a lot of information neither hurts nor helps. It depends on who the focus of your application is.

If you apply the Pareto Principle / 80-20 rule (which is the basis of the Long Tail concept), you can expect that 80% of visitors will use 20% of the features/information and 20% of visitors will use 80% of the features/information. In other words, providing more functionality/information ensures a more refined experience for a smaller but specialized audience, yet may alienate those who are less specialized and less familiar with the content; while less information ensures greater understanding by a larger audience but can frustrate the more specialized.

I think the discussion relates to the text Getting Real, from the former 37signals, now Basecamp. http://gettingreal.37signals.com/GR_por.php#ch01

They’re very radical at this point. Briefly, they put it as follows:

  • Getting Real is the smallest, fastest and best way to build software.

  • Getting Real is less. Less mass, less software, fewer features, less paperwork, less of everything that is not essential (and most of what you think is essential really isn't).

  • Getting Real is staying small and being nimble.

  • Getting Real is about iterations and lowering change costs.

  • Getting Real has everything to do with launching, refining and constantly improving, which makes it the perfect approach for web-based software.

With the one-page site fever, it is very common to see creators trying to fit their content into this template without even evaluating whether this solution is the best one for their goal.

In fact, it may be simpler to put all the information on a single page to tell a story, for example. Scrolling the screen is much more practical and intuitive than clicking 4 or 5 buttons to visit all the content. But depending on the size of the content and how it unfolds, the ideal is to have multiple screens.

Take signing up at an online store, for example. Perhaps the simplest thing is to split the checkout into multiple screens instead of offering a single large page with everything.

There is no single answer to the question. The ideal is to start simple, monitor, and increase the application's complexity on demand, based on what the monitoring shows, using the statistics that differentiate this medium from other, non-digital ones. When I say simpler, I mean thinking of the default user, who is sometimes really a "dummy", and that doesn't mean the project isn't good, right?

  • I think you focused your answer on the functional side of the application (offering fewer or more functions). However, the issue is more about access to functions (in whatever quantity) through the UI. The reference you gave is interesting, but especially for this passage you did not quote: "Getting Real starts with the interface, the real screens that people are going to use. It begins with what the customer actually experiences and builds backwards from there." That is, it starts from UI prototyping to evaluate the users' interaction. :)

  • 1

    @Luizvieira You are right about the focus of the answer, but doesn't the same reasoning apply to what is in the question? Objectively evaluating use (whether in a prototype or in "eternal beta") is a good way to decide what to hide, or remove, depending on the case. Leaving aside the fad of hiding everything, there is a lot of "it depends" involved. When your answer talks about an "ideal point", I understand that it varies according to what the application does and the target audience, right?

  • @bfavaretto I am not sure the exact same reasoning applies. See, for example, this passage of the answer: "providing more functionality/information ensures a more refined experience for a smaller but specialized audience [...]". What is a specialized audience? And that is not necessarily true. Perhaps AR is imagining an expert Bash user on Unix, for example, who already knows many of the existing commands. But the fact is that the interface they use is simple, and does not have all the commands available there, displayed all the time.

  • @bfavaretto Moreover, although a prototype can be used to understand the customer's needs, it makes no sense to hide a function that the user will never use. If it is something unnecessary, it should not even be implemented (let alone displayed or hidden). So it seems to me that the focus of the question is more about the experience and usability of the necessary/existing functions. Anyway, I think this answer is nice in the sense that it also helps with the kind of discussion we are having. :)

3

Excuse my pragmatism but, after reading the answers so far and reading some articles on the subject, my answer can only be:

Put yourself in the user's shoes when developing/designing any UI.

Note: please do not penalize me for posting this as an answer and not as a comment.

  • The answer makes sense, I don't think it is bad, but I think it would strengthen it if you explained a bit more what this means. For example, my question arose because I don't like having to make several clicks to get where I want. I'm not sure if this is what other people think.

  • 2

    That's it. :) A tip on where to continue grounding your answer: User-Centered Design. "The chief difference from other product design philosophies is that user-centered design tries to optimize the product around how users can, want, or need to use the product, rather than forcing the users to change their behavior to accommodate the product."

  • "I don't like having to make several clicks to get where I want": does that refer to particular situations, or is it always so? At that point, did you think "I would do it differently"? It's a matter of opinion. This is a widely studied subject, but the results have to be seen only as principles to be taken into account, not as absolute truths. In my opinion, common sense should rule here: a balance between the amount of information and its relationship with usability. Putting myself in the user's shoes is the first rule I follow; the feedback received is what "fine-tunes" the solution.

  • @Luizvieira, I’ll read it.

2

The book Designing Web Interfaces, from the publisher O'Reilly (focused only on interface patterns and some "conventions" that aim to provide a better user experience), "says" that we should limit the amount of information by hiding it or postponing its appearance to the user, because a large amount of content (whether in a registration form or a news page) can bother, discourage and even scare the user. For this it presents some visual patterns, a few of which are cited below (a minimal sketch of the first one follows the list):

  • Accordion: limits the amount of content and options visible to the user.
  • Dialog overlay: focuses the user's view/concentration on a specific piece of content (it can be a relevant piece of information or a form).
  • Detail overlay: shows the details of a certain element when it is hovered by the mouse.
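Purely as an illustration of the first pattern (this is my own sketch, not code from the book), an accordion keeps all the content on the page but shows only the section the user expands, so only a small slice demands attention at a time:

```typescript
// Minimal accordion sketch: every section exists in the page, but only the
// expanded one is visible, limiting the amount of information shown at once.
interface Section {
  header: HTMLElement;
  body: HTMLElement;
}

function makeAccordion(sections: Section[]): void {
  sections.forEach((section) => {
    section.body.style.display = "none"; // everything starts collapsed
    section.header.addEventListener("click", () => {
      // Collapse every section, then expand only the one that was clicked.
      sections.forEach((s) => (s.body.style.display = "none"));
      section.body.style.display = "block";
    });
  });
}

// Usage, assuming pairs of .accordion-header / .accordion-body elements:
const sections: Section[] = Array.from(
  document.querySelectorAll<HTMLElement>(".accordion-header")
).map((header) => ({
  header,
  body: header.nextElementSibling as HTMLElement,
}));
makeAccordion(sections);
```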

1

I think it varies a lot from application to application. In my view, apps with a lot of information can be useful for those who need more agility in the process and want to find everything they need in less time. However, for some users a "heavy" screen can turn your application into a tiring experience that does not meet their goal (since many people are too lazy to search for things).

The choice between one screen with lots of information and several screens with little information each should be carefully analyzed according to the purpose of the software and the target audience.

  • 2

    Could you substantiate this? Or is it just your opinion? Note that I wrote specific questions. They don't need to be answered directly, but you need information to back up what you state, or at least to help with making a decision. Your answer just tells me to do what I'm asking about. How do you analyze carefully? What criteria do you use?

  • It was just my own opinion, but I will look for some sources to better substantiate it. I found the discussion interesting :)
