Author Spotlights

The Skills That Bridge Technical Work and Business Impact

Maria Mouschoutzi discusses her writing process, how to break down complex topics, and the non-technical skill she wishes she had focused on earlier.

Photo courtesy of Maria Mouschoutzi

In the Author Spotlight series, TDS Editors chat with members of our community about their career path in data science and AI, their writing, and their sources of inspiration. Today, we’re thrilled to share our conversation with Maria Mouschoutzi.

Maria is a Data Analyst and Project Manager with a strong background in Operations Research, Mechanical Engineering, and maritime supply chain optimization. She blends hands-on industry experience with research-driven analytics to develop decision-support tools, streamline processes, and communicate insights across technical and non-technical teams.

In “What ‘Thinking’ and ‘Reasoning’ Really Mean in AI and LLMs,” you address the semantic gap between human and machine reasoning. How does understanding this distinction impact the way you approach model development and interpretation in your professional work?

AI has generated huge hype recently. All of a sudden, many old-school ML-based products are being rebranded as AI, and there seems to be renewed demand for anything with AI slapped on it. Because of this, I believe it is now essential for everyone to have a basic technical understanding of what AI is and how it works, so that they can evaluate what it can and cannot do for them. The truth is that we carry lots of baggage about the very nature of AI, originating in narratives from our sci-fi legacy. This baggage makes it easy to get carried away by AI’s exciting and promising potential and forget its actual current capabilities, ultimately misjudging it as some kind of magic solution that will solve all our problems.
Non-technical business users are the most prone to this overexcitement about AI, sometimes imagining it as a black-box superintelligence, able to provide correct answers and solutions to anything. For better or for worse, this couldn’t be further from the truth. LLMs — the main scientific breakthrough all the AI fuss is really about — are impressively good at certain things (for instance, generating emails or summaries), but not so good at others (for example, performing complex calculations or analysing multilevel cause-and-effect relationships).

Having a technical understanding of what AI is and how it fundamentally works has immensely helped me in my professional work. Primarily, it allows me to identify valid AI use cases and to manage business users’ expectations of what can and cannot be done. On a more technical level, it allows me to distinguish which components are needed in which contexts, so that the delivered solution has real value for the business. For example, if a RAG application needs to search specific technical documentation and perform calculations based on information found in that documentation, then a code terminal component needs to be included in the application to perform the calculations, instead of letting the model answer directly.

Where do you draw the initial inspiration for your articles, especially the more philosophical ones like the “Water Cooler Small Talk” series?

The initial inspiration for my “Water Cooler Small Talk” series came from actual discussions I’ve experienced in an office, as well as from friends’ stories. I think that, because people tend to avoid unnecessary conflict in corporate settings, some really outrageous opinions can be expressed in casual discussions around a water cooler. And usually, no one calls out incorrect facts, so as not to create conflict or challenge their colleagues.
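To make the earlier RAG point concrete — routing calculations to a code component instead of letting the model answer — here is a minimal sketch. This is my own illustration, not the author’s implementation: the `safe_eval` helper and the keyword-free router below are invented for demonstration, and the LLM path is stubbed out.

```python
import ast
import operator

# Arithmetic operators the "code terminal" component supports;
# anything outside this whitelist is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr):
    """Evaluate a plain arithmetic expression without calling eval()."""
    def _ev(node):
        if isinstance(node, ast.Expression):
            return _ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_ev(node.left), _ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_ev(node.operand))
        raise ValueError(f"unsupported expression: {expr!r}")
    return _ev(ast.parse(expr, mode="eval"))

def route(question, expression=None):
    """If the retrieval step produced a formula, compute it with the
    calculator tool; otherwise fall back to the (stubbed) LLM answer."""
    if expression is not None:
        return f"calculator: {safe_eval(expression)}"
    return "LLM answer (stubbed)"
```

The point of the sketch is the division of labor: the model retrieves and assembles the formula from the documentation, while a deterministic component does the arithmetic it is unreliable at.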
Even though such conversations are benevolent and well-intended — really just a casual break from work — they sometimes lead to the perpetuation of incorrect scientific facts. Especially for complex, not-so-intuitive topics like statistics and AI, we can easily oversimplify things and perpetuate invalid opinions. The very first opinion that pushed me to write an entire piece was that ‘if you play enough rounds of roulette, you are eventually going to win, because the probabilities are about 50/50, and the results are going to balance out eventually.’ Now, if you’ve ever taken a statistics class, you know that this is not how it works; but if you haven’t, and no one calls this out, you may leave the discussion with some strange ideas about how gambling works.

So, my initial inspiration for that series was mainly misunderstood statistics topics. Nonetheless, the same — if not more — misunderstandings apply nowadays to topics related to AI. The huge hype around AI has resulted in people imagining and spreading all kinds of misinformation about how it works and what it can do, sometimes with incredible confidence. This is why it is so important to educate ourselves on the fundamentals, whether in statistics, AI, or any other topic.

Can you walk us through your typical writing process for a detailed technical article, from initial research to final draft? How do you balance deep technical accuracy with accessibility for a general audience?

Every technical post starts with a technical concept that I want to write about — for instance, demonstrating how to use a specific library or how to structure a certain problem in Python. For example, in my Pokémon post, the goal was to explain how to structure an operations research problem in Python.
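In that same spirit, a toy version of structuring such a selection problem in Python might look like the sketch below. The data, objective, and constraint here are entirely invented for illustration — this is not the code from her post, just the general shape of a small operations research problem: decision variables (which Pokémon to pick), an objective (maximize power), and a constraint (a cost budget).

```python
from itertools import combinations

# Invented toy data: (name, power, cost). Not from the actual post.
POKEMON = [
    ("Pikachu", 55, 3),
    ("Charizard", 84, 6),
    ("Bulbasaur", 49, 2),
    ("Gyarados", 92, 7),
    ("Eevee", 52, 2),
]

def best_team(budget, team_size):
    """Brute-force the team of `team_size` Pokémon that maximizes total
    power while keeping total cost within `budget`."""
    best, best_power = None, -1
    for team in combinations(POKEMON, team_size):
        cost = sum(c for _, _, c in team)
        power = sum(p for _, p, _ in team)
        if cost <= budget and power > best_power:
            best, best_power = team, power
    return best, best_power
```

A real post would typically hand the same formulation to a solver library instead of brute force, but the modelling step — naming the variables, objective, and constraints — is the part the tutorial teaches.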
After identifying the core technical concept I want to focus on, my next step is usually to search for an appropriate dataset to demonstrate it. I believe this is the most challenging and time-consuming part — finding a good, open-source dataset that can be freely used for your analysis. While there are lots of datasets out there, it is not so trivial to find one that is freely available, complete, and interesting enough to tell a good story. In my view, the flavor of the dataset you use can have a big impact on the popularity of your post. Structuring an operations research problem using Pokémon sounds much more fun than using employee shifts (eww!). Overall, the dataset should thematically fit the technical topic I’ve chosen and make for a somewhat coherent story.

Having identified the technical topic of the post and the dataset I am going to use, I then write the actual code. This is a rather straightforward step: write the code using the dataset, and get it to run and produce correct results.

After I’ve finished the code and made sure it runs properly, I start drafting the actual post. I usually start my posts with a brief intro on what initially sparked my interest in the specific topic (for example, I wanted to make a complex visualization for my PhD, and the searoute Python library made my life easier) and how the topic can be useful to the reader (reading my tutorial explaining API calls to the Pokémon data API can help you understand how to write calls to any API). I also add some brief general explanations, wherever appropriate, of the underlying theoretical premise of the use case I am demonstrating, as well as a short introduction to the code libraries I will be using.
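As a small illustration of the API-call pattern mentioned above, here is a sketch using the free, public PokéAPI. The helper names are mine, not from her tutorial, but the two steps — composing an endpoint URL and decoding the JSON response — carry over to calls against any REST API.

```python
import json
from urllib.request import urlopen

BASE_URL = "https://pokeapi.co/api/v2"  # the free, public Pokémon data API

def build_url(resource, name):
    """Compose an endpoint URL from a resource type and an item name."""
    return f"{BASE_URL}/{resource}/{name.lower()}"

def fetch(resource, name):
    """GET the endpoint and decode the JSON payload into a dict."""
    with urlopen(build_url(resource, name)) as resp:
        return json.load(resp)

# Example (requires internet access):
# fetch("pokemon", "pikachu") returns a dict describing Pikachu.
```

Swapping `BASE_URL` and the resource path is usually all it takes to point the same pattern at a different API.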
In the main part of a technical post, I typically show how to structure the code with Python snippets, and present step-by-step explanations of how everything plays out and the results you should expect if everything runs correctly. I also like to add GIF screenshots demonstrating any interactive diagrams incorporated in the code — I believe they make the posts a lot more interesting, easier to understand, and visually appealing to the reader. And there you have it! A technical tutorial!

What initially motivated you to start sharing your knowledge and insights with the broader data science community, and what does the process of writing give back to your professional practice?

Back in 2017, while writing my diploma thesis, I stumbled upon Medium and the Towards Data Science publication for the very first time. After reading a couple of posts, I remember being completely mesmerized by the abundance of technical material, the variety of topics, and the creativity of the posts. It felt like a data science community, with writers of diverse backgrounds and at different technical levels — there were articles for every level and for various domains.

But apart from appreciating the technicality of the tutorials, which allowed me to learn and understand more about data science, I also liked the creativity and storytelling of the posts. Unlike a GitHub page or a Stack Overflow answer, there was a certain artistry in most of them. I really enjoyed reading such posts — they helped me learn a lot about data science and machine learning — and over time, I silently developed the desire to write such posts myself. After thinking about it for a while, I reluctantly drafted and submitted my very first post, and that is how I published with TDS for the first time in early 2023. Since then, I’ve written several more posts for TDS, enjoying each one as much as that first post.
One thing I really enjoy about writing technical pieces for TDS is sharing things that I myself found challenging to understand, or especially interesting. Sometimes complex topics like operations research, probability, or AI can feel scary and intimidating, discouraging people from even starting to read and learn about them — I myself am guilty of this. By creating a simplified, straightforward, even seemingly fun version of a complex topic, I feel like I give people a gentle, not-so-formal start, so they can see for themselves that it is not so scary after all.

On the flip side, writing has greatly helped me on a personal and professional level. My written communication has improved considerably. Over time, it has become easier for me to present complex technical topics in a way that non-technical business audiences can grasp. Ultimately, putting yourself in a position to explain a topic to someone else in simple terms forces you to understand it completely and leave no ambiguous spots.

Looking back at your career progression, what is a non-technical skill you wish you had focused on earlier?

In a data career, the most important non-technical skill is communication. While communication is valuable in any field, it is especially critical in data roles: it is essentially what bridges the gap between complex technical work and practical business understanding, and it helps make you a well-rounded data professional. No matter how strong your technical skills are, if you cannot communicate the value of your deliverables to business users and management, those skills won’t take you very far. It is important to be able to explain the value of your work to non-technical audiences, speak their language, understand what matters to them, and communicate your findings in a way that shows how your work benefits them. Data and math, as valuable as they are, can often feel intimidating or incomprehensible to business users.
Being able to translate data into meaningful business insights, and then communicate those insights effectively, is ultimately what allows your data analysis projects to have a real impact on a company.

To learn more about Maria’s work and stay up to date with her latest articles, you can follow her on TDS or LinkedIn.

Source: https://towardsdatascience.com/the-skills-that-bridge-technical-work-and-business-impact/