Hans Rusinek: "When all companies use AI, the human factor becomes decisive"
By adopting "practical optimism", workers and organisations can maintain agency even in the age of AI
Hans Rusinek describes himself as a “pracademic”: a practitioner who is also an academic researcher. In his work, he combines organisational research, management, and social impact. Hans teaches at the University of St. Gallen (Switzerland) with a focus on the transformation of work and sustainability. His most recent book “Work-Survive-Balance” (Herder) was published in 2023 and shortlisted for the non-fiction book award of the Friedrich Ebert Foundation.
Hans, you have written a lot about hope recently. How much hope do you have that AI will make work better?
I am not hopeless. My hope is that through the use of stochastic intelligence, we will rediscover our genuinely human forms of intelligence. History gives us reason for this optimism: When repetitive tasks were automated, this often led to an increase in the genuinely human input to work.
Technology itself is neither good nor bad. Everything depends on the social practices we develop around it. In organisations I work with, many already save time with AI. The crucial question is: What do they do with those savings? Do they fire people? Do they simply produce more? Or do they use the space for reflection – working on the system rather than merely in it? If it’s the latter, I am hopeful.
The optimistic scenario you describe is close to what large tech companies promise. Critics point out that previous digital tools never truly freed us; instead, we ended up with overstuffed calendars and endless communication. Why should AI be different?
It depends on the time horizon. Communication tools did not deliver liberation. But if you look more broadly at automation, there are powerful stories. The invention of the washing machine, for example, contributed substantially to the emancipation of women.
Techno-optimism and cultural pessimism both encourage passivity: In the first narrative, everything will be fantastic; in the second, we merely witness disaster.
Between them lies what I would call “practical optimism” or the art of the possible. A recent paper described AI as a “normal technology”. That framing helps. Automation has a long history, starting with windmills. AI is impressive, but normalising it allows us to act, rather than simply echoing the narratives of tech firms.
In “practical optimism”, leadership matters. Recently many CEOs issued memos on AI, but they often seem to have generated anxiety rather than inspiration. How would you frame AI differently?
First, avoid fear-based communication. Prohibition does not work; people use AI anyway. I see this in universities as well: If we tell students not to use AI, they simply hide it. I prefer a transparent approach; for example, I give my students AI-generated texts and ask them to correct them by applying their own expertise.
Second, language matters. “Artificial intelligence” is really just a sales pitch: nothing about it is artificial, and its “intelligence” is narrow. Using terms such as “stochastic communication” or “machine usefulness” can help employees see AI as less mystical, more manageable.
Third, communication should follow the principles of change management. Too often, leaders frame AI adoption in terms of fear: Adapt or lose your job. That produces performative compliance, not genuine engagement. In workshops, I often use a method called storytelling polarities: Here, you start by acknowledging what is good about the status quo, then highlight its downsides, and only then discuss possible changes. Finally, you underline that every change has costs and requires effort. This balanced approach creates motivation.
Companies often work in silos, which are now being shaken up by AI. Is this an opportunity for companies to rethink their purpose and the way they work?
Generative AI makes it painfully clear how damaging silos can be: If your data is poorly organised, AI cannot help you. At the same time, reliance on chatbots might increase silos. If people ask a bot instead of their colleagues, loneliness may rise.
There is also a paradox: Because most organisations use similar AI tools, the technology itself will not create an advantage. The human factor becomes a differentiating factor. Management, culture, and the so-called “soft factors” will matter even more, precisely because they cannot be copy-pasted. You cannot buy good leadership with a subscription to OpenAI.
Everyone talks about upskilling, but you have also warned about downskilling. What do you mean?
Recent U.S. data show that graduate unemployment has, for the first time, exceeded general unemployment. One explanation is that AI can do much of the work graduates used to do: research, presentations, drafting. Generative AI will not become a project manager or partner, but it erodes the lower rungs of the career ladder.
I feel this personally: Knowing that ChatGPT can draft in “Hans Rusinek style” makes it harder for me to start writing myself. I deliberately go offline and write in my garden shed to preserve the sequence of starting with “human intelligence” first and only then asking AI for assistance.
The order matters: If students brainstorm with AI first, they rarely add new thoughts afterwards. That risks losing the very friction that true learning requires. AI should take over routine scheduling and predictable emails so that humans can focus on genuine thinking, the kind of thinking that philosopher Byung-Chul Han says begins with goosebumps, with moments that make us pause and reflect. When I work with companies, I often ask: "What was the last thing that made you think and gave you goosebumps?"
The real barriers to productivity in today’s workplace are not technological but organisational: Constant interruptions and endless urgency. Technology alone cannot fix that.
Carl Benedikt Frey told us that the trade-off for graduates has changed: Companies agreed to train graduates because they still did valuable work for them, such as research and presentations. These tasks can now easily be done with AI. How do your students prepare for this?
Anecdotally, many of my students now want to become solopreneurs and self-employed advisors. Twenty years ago, they might have strived to become bankers or strategy consultants; ten years ago, start-up founders. Today, it seems they no longer want anything to do with teams, hierarchies, and organisations. I find that worrying: A withdrawal from work itself.
There are no shortcuts in thinking. AI can help summarise literature, but you cannot have innovative ideas without digesting the material yourself. Writing research papers trains intuition and expertise. Without that, you produce only artefacts.
So I encourage students to do things that cause friction and to ask themselves what kind of work they would be willing to suffer for. Good work is not always fun. Career ladders must also be leaned against the right wall: Too many people follow expectations from their environment, even if their heart lies elsewhere. My advice is: If you love something, pursue it wholeheartedly. You will excel more than if you half-heartedly chase status.
In the past, mastery typically took decades to achieve, but maybe AI can compress this process. Why should a 25-year-old graduate not be able to advise a CEO, with AI as their team?
Perhaps. Decentralised work with digital agents may make traditional employment less necessary. But real learning still requires friction and time. AI delivers information faster, but without struggle, it does not become knowledge.
Thanks to Google Maps, for example, I no longer remember which subway lines run past my door. Technology has not made me more knowledgeable, it made me less knowledgeable.
That said, some students may indeed use AI to specialise quickly in a niche and strike out on their own. But broadly speaking, I see a worrying trend: Many debates about the future of work are really debates about withdrawal. Remote work, the four-day week, "quiet quitting," universal basic income, influencer careers: all of these trends are forms of stepping away. This reflects disappointment with work, whether due to the burden of care work or work that is simply taxing.
I find it sad that work is in such a bad state. Work is perhaps the last space where we must engage with people we did not choose ourselves, unlike friends, partners, or peers. Work is a fellowship of people we haven’t chosen to be with, and while this can be very exhausting, it is also magical. And these social qualities of work are also essential for society and democracy.
So my wish is that work remains a social space where we meet, recognise problems, and solve them together. If AI takes over some routine burdens, we might reclaim that space for reflection and genuine collaboration.
Key Takeaways
Make good use of your time: AI clearly saves time. But it’s important to clarify what to use the saved time for: Do you continue to work in the system, or do you work on the system, improving the way work gets done?
AI should be approached with “practical optimism”: Techno-optimism and pessimism encourage passivity; seeing AI as a “normal technology” gives you more agency to deploy it in a beneficial way.
There is no shortcut to thinking: AI may help you think faster, but innovation still needs more: intuition, expertise, and even suffering. It’s worthwhile to think about what you would be willing to suffer for. Without struggle, information will not become knowledge.
Season 1: AI and the Labor Market | Episode 1: The Future of Work, In Progress | Episode 2: Carl Benedikt Frey: “Professionals are not prepared for the coming changes” | Episode 3: Jonas Andrulis: “Digitize the state! That’s the foundation we all stand on” | Episode 4: Cindy Richter: “AI creates roles that didn’t even exist before” | Episode 5: Matt Nigh: “You can’t force AI from the top down – it needs energy from the ground up”