UI/UX Articles and Interesting Tidbits of the Week
November 12, 2021
Here are some interesting finds on UI/UX of the week!
1.
Tips for Online Surveys for User Testing. Hailing from the Marvel Design Blog, this article on surveys, and how to effectively create them, run them, and gather data from them, is an opportune reminder of just how useful this technique can be for getting insightful and dimensional feedback from your users. The article details aspects to be mindful of when creating surveys, including crafting the testing goals, writing and reviewing the questions thoroughly, always aiming for a concise and substantial approach, and recruiting enough testers, to name but a few. While not overwhelmingly insightful, it’s still worth reading through. Highlight of the article includes:
“Be concise. The shorter the survey, the more likely people will complete it. Make sure your questions are specific and easy to understand. Include a mix of open-ended and closed-ended questions. Open-ended questions are great for identifying where you can improve and helping you see any blind spots in your thinking. Pay attention to formatting. The survey should be easy to read with no cluttered fields or small fonts, and it should be accessible on both laptops and mobile devices.”
2.
Onboarding Experiences in Trying Times. Interesting article hailing from the InVision Design blog, focused on onboarding experiences, this time around specifically on the case of Salesforce. Hopefully the irony isn’t lost that within a company whose overall product experience is so unevenly delivered, this approach to onboarding is grounded in making the process simultaneously accessible, humane and, dare I say it, personalized. I’ve personally written on the topic of onboarding Designers before, but this approach demonstrates a deliberate effort to bring unity and coherence to a process that can at times be challenging for a new team member. Highlight of the article includes:
“Just as virtual happy hours and shared Spotify playlists help the Salesforce Experience team bond, the team’s leaders acknowledge that everyone responds differently during a crisis. Many are looking to their jobs as a way to take control amidst the uncontrollable. The team has prioritized a few large, meaty projects that offer new and existing team members a different way to bond. The real business challenges at hand–such as turbocharging the design enablement curriculum or developing a research program to measure design’s business and social value–allow the team to practice their craft, use their creativity, and they build relationships. New team members are being given a chance to bring their skills and unique backgrounds–from behavioral economics, social justice activism, brand strategy, or design systems–to relevant and motivating work.”
3.
Suicide Notes, their Statements and Creating Empathetic Chatbots. A very interesting, and emotionally conflicting, article hailing from The Next Web. As chatbots have become progressively more refined in their execution and in their capacity to address questions and relate to people, there remains a subtlety and complexity to human language that they can’t necessarily grasp. This article dives into a research endeavor in which suicide notes have been studied in the context of health care, with the study being prompted and supported by the Australian e-Health Research Centre since 2014. As Natural Language Processing continues to evolve, this article poses questions not only about chatbot virtuosity itself, but also about how we communicate. Highlight of the article includes:
“Idioms such as “the grass is greener on the other side” were also common — although not directly linked to suicidal ideation. Idioms are often colloquial and culturally derived, with the real meaning being vastly different from the literal interpretation. Such idioms are problematic for chatbots to understand. Unless a bot has been programmed with the intended meaning, it will operate under the assumption of a literal meaning. Chatbots can make some disastrous mistakes if they’re not encoded with knowledge of the real meaning behind certain idioms. In the example below, a more suitable response from Siri would have been to redirect the user to a crisis hotline.”