Returning to the Basics After More Than a Decade in UX

Over the last couple of months I finally had the time - and the mental space - to slow down and look at where I am as a designer after more than 11 years in product and UX roles.
Working for many years in product and UX can take a real toll.
The constant pace, shifting requirements, unclear goals, low design maturity, pressure to “just deliver”, endless context switching, the need to repeatedly explain the role of design, and the feeling that you rarely have time for deep thinking - all of this accumulates.
At some point, you realise that you’re not growing anymore. You’re just adapting. And you’re tired.
A large-scale survey published by Noam Segal and Lenny Rachitsky in May 2025 highlights this clearly - more than 8,200 people responded, and nearly half reported significant burnout. Design and research roles were among the most affected.
And many experienced designers will recognise something else: not only burnout, but also a lack of motivation, a feeling of stagnation, and simply not having the energy to think about “growth” after work.
Our industry has always relied (maybe sometimes unintentionally) on exploiting our passion for the craft. Passion is powerful, but it’s not infinite.
For me, this was a signal to reset. Not a dramatic reinvention, but a deliberate return to fundamentals.
So, despite a long career and a postgraduate degree in service design (earned in 2017), I completed the Google UX Design Certificate. This might look unusual for someone at my stage, but I knew exactly what the course would and wouldn't teach.
It will not teach you how to handle the hardest parts of real projects: unclear requirements, misalignment inside teams, interpersonal conflicts, organisational chaos, low design maturity, or building long-term product strategy. Nor will it teach some of the most critical skills - communicating design decisions, and adapting the design process, tools and methods to the specific context of the team, the product and the organisation.
These skills come only from many years of real-world practice. But what the course did give me was something I genuinely needed: a clean, structured reset of the fundamentals. Revisiting the basics step by step reminded me why I entered this field in the first place. It brought back clarity that had been slowly fading in the background noise of everyday work.
Alongside this, I spent time on areas that I now consider essential to how I work:
Critical thinking and logical reasoning - focusing on how to evaluate arguments, evidence and assumptions
Introduction to Psychology (Yale University) - which helped me understand the basics of how we perceive, learn and make decisions, and why our intuitive impressions often differ from how our minds actually work
Computer Science fundamentals (Harvard University, CS50) - which gave me a clearer understanding of how software works under the hood: algorithms, data structures, memory, and the logic behind the systems we design for
Deep learning foundations and generative AI (Stanford University, including the CS230 course sequence) - which helped me understand how neural networks learn, where generative models fail, and why the limitations matter as much as the capabilities
Ethics of Artificial Intelligence (Politecnico di Milano) - which strengthened my awareness of risks, biases, accountability, and the societal impact of deploying AI without proper safeguards
Google AI Essentials - which helped me build a practical understanding of how to integrate AI tools responsibly into daily workflows: prompting techniques, evaluating outputs, recognising limitations, and combining AI with human judgement rather than replacing it
I also revisited Thinking, Fast and Slow, which helped me recognise how deeply our cognitive biases shape our decisions, perceptions, communication, and even how we interpret our own work. Being aware of these biases matters for research, for product decisions, for team communication, and privately, in everyday life. Equally important is knowing what you can actually do to avoid getting trapped by them.
Why mention the survey and these topics together?
Because the broader conversation around artificial intelligence sits right inside this environment. There is a lot of excitement, which is natural, but also a tendency to move too fast and rely on very surface-level narratives. Fast narratives often hide slow risks.
I am not rejecting these technologies. I’m not “anti-AI”. Artificial intelligence is already transforming how we work, and it will continue to do so.
What I’m trying to understand is how we can find a responsible middle ground - using these tools with a real understanding of their limitations, biases, and long-term implications, instead of relying on enthusiasm alone.
To me, responsible adoption means slowing down first, learning deeply, asking better questions, and sometimes taking one or two steps back in order to move four steps ahead with more clarity and confidence. It means building understanding not from hype, but from evidence and thoughtful reflection.
This certificate is not just a line on a resume. It is part of a broader, slower process of rebuilding my foundations, reconnecting with the craft, and creating space for deeper learning. I’m genuinely glad I did it.
I’ll write more soon about cognitive biases, AI’s real limitations, insights from psychology, and how we can responsibly incorporate AI into product and UX work.
This post is just the first step… at least, I hope so :)
Sources mentioned in this post:
State of Tech 2025 (Segal & Rachitsky survey): https://www.lennysnewsletter.com/p/the-state-of-tech-2025
Generative AI limitations & hallucinations (MIT Sloan): https://sloanreview.mit.edu/article/the-problem-with-generative-ai/
Links to the courses I took:
Google UX Design Certificate: https://www.coursera.org/professional-certificates/google-ux-design
Critical Thinking: A Brain-Based Guide for the ChatGPT Era Specialization: https://www.coursera.org/specializations/critical-thinking
Yale Introduction to Psychology: https://www.coursera.org/learn/introduction-psychology
Harvard CS50 Computer Science Fundamentals: https://cs50.harvard.edu/x/ (video: https://www.youtube.com/watch?v=LfaMVlDaQ24)
Stanford CS230 Deep Learning: https://cs230.stanford.edu/ (Autumn 2025, Lecture 1: Introduction to Deep Learning)
Politecnico di Milano – Ethics of AI: https://www.coursera.org/learn/ai-ethics
Google AI Essentials: https://grow.google/certificates/ai/