Here are some interesting UI/UX finds of the week!
Three Methods to Increase User Autonomy in UX Design
Wars have been waged over autonomy. Having the freedom to do things in your own way is considered by some to be one of…
Increasing User Autonomy in UX Design. Another great article, this one hailing from the Nielsen Norman Group, on the topic of enhancing user autonomy in product journeys. The article looks at aspects such as customization, scannability, and timing & sequences. Much like the evolution of e-commerce (which also involves customization, alongside aspects such as AI integration and measurement across all devices, to name but a few), increasing user autonomy is all about shaping product journeys where the user feels in control of what they’re doing, in a secure manner. Well worth a read and reflection. Highlight of the article includes:
“Users notoriously spend very little time or effort reading content on the web. This is largely because they are trying to move fast and look only for information that is relevant to them. Many eyetracking studies reveal that users follow a variety of patterns for scanning content to decide whether it is worth spending time to read it in detail. Making it difficult for users to scan information limits autonomy because it forces users to interact with the design in a very specific way: they must either read everything to determine whether it is helpful for them or leave. The good use of headings and subheadings gives users autonomy because it allows them to quickly choose what is relevant to read in more detail. This principle applies to every type of design, including printed materials and emails. Consider the scannability in the two examples below from the Morning Brew, a daily news-briefing email.”
Mark Zuckerberg and Stripe Co-Founder Patrick Collison Both Use the Same Question to Choose Who to…
"Personnel is policy," they say in government circles, meaning that the kind of work leaders end up accomplishing is…
Choosing Candidates. Very insightful article from Inc. Magazine and author Jessica Stillman on the topic of choosing candidates and co-workers. While some of the interviewees are polarizing personalities, the insights of the article are undeniable, particularly when it comes to interviewing and weighing in on who should actually join a team. Questions such as “can I work with this person on a daily basis?” or “would I like this person as a boss?” are well worth pondering. Highlight of the article includes:
“So how does Zuckerberg recommend you choose the people you spend time with? He suggests you ask yourself a simple question: Would I like this person to be my boss? This holds true, Zuckerberg explains, whether you’re picking colleagues, employees, or even peers for a collaborative project. Whatever your real-life relationship, imagine an alternate reality where this individual is your boss. If that makes you want to run screaming from the room, you might want to run (metaphorically) in real life. Zuckerberg explains his philosophy on choosing collaborators in response to a question about how young people can chart a course in life they’re proud of, but he goes on to clarify that he still uses the same approach to hiring today.”
AI must be developed responsibly to improve mental health outcomes
Many mental health startups are integrating AI within their product offerings, but that tech is still far from perfect.
AI and Improving Mental Health Outcomes. Another pertinent article, from Fast Company and author Dan Adler, on the topic of AI and its relationship with mental health startups. In the past I’ve mentioned how AI will have a profound impact on everything: it allows us to understand users at scale, it will introduce new interfaces, it optimizes permutations, it enables prediction of features and personalization, and it can identify new opportunities. What this article raises questions about is how AI can specifically address issues in healthcare, and the nuances that surround mental health. Well worth a read. Highlight of the article includes:
“Yet we’ve also seen that AI is far from perfect. There have been notable bumps on the road in other areas of medicine that are telling about the limitations of AI and, in particular, the machine learning models that power its decision-making. For example, Epic, one of the largest EHR software developers in the United States, deployed a sepsis prediction tool across hundreds of hospitals. Researchers found that the tool performed poorly across many of these hospital systems. A widely-used algorithm used to refer people to “high-risk care management” programs was less likely to refer black people than white people who were equally sick. As mental health AI products are launched, technologists and clinicians need to learn from past failures of AI tools in order to create more effective interventions and limit potential harms.”