Breaking the Mold: Generative AI and the Future of Interaction

When cars first replaced horse-drawn carriages, their designs still resembled the old: a box on wheels with reins swapped for a steering wheel.

Early MP3 players looked like miniature stereos, complete with fake knobs and sliders on their digital interfaces.


This is called the skeuomorphic phase, where new technologies borrow heavily from what came before—not because they must, but because it feels familiar.


Where are we when it comes to this phase with Generative AI? Are we still mimicking the past, or are we beginning to redefine what interaction truly means?


For much of technological history, innovation has followed familiar patterns: new tools often look and feel like the ones they replace. Early cars resembled carriages, digital music players mimicked stereos, and even today, much of our interaction with software still relies on decades-old metaphors like keyboards, menus, and desktop icons.


Generative AI introduces something new—not just another layer of abstraction, but the potential to shift the entire paradigm. It doesn’t simply automate tasks or make existing tools faster; it changes the way we engage with technology itself.


But are we fully embracing what this means, or are we still designing with old assumptions in mind? To understand where we are, we need to look back at the evolution of interaction and ask:


What has brought us here, and how could or should this be different moving forward?


How Interaction Layers Evolve


Human-computer interaction wasn’t always about sleek screens and intuitive designs. In the early days of computing, it was messy, physical, and slow.


Imagine for a moment the Bombe, the electromechanical machine Alan Turing designed to help break Germany’s Enigma code during WWII: a mechanical marvel of dials and wires where each tiny adjustment represented a different function. If you wanted to compute something, you didn’t just press a button; you reconfigured the entire machine. Each task demanded deep technical knowledge and meticulous effort.


This was the first layer of interaction with the digital world: raw complexity.


Machines spoke their own language of math, and it was up to us to find the best way to interact with them, to speak it.


But humans are pattern-seekers. We simplify, abstract, and build bridges. Enter the first major leap in interaction: the keyboard and the programming languages behind it.


Suddenly, we didn’t need to think about machine logic as much.


Pressing the letter “H” on a keyboard triggered a symphony of processes that translated a simple keystroke into machine-readable code. What once required rewiring or a complex set of machine commands could now be done with a tap.


From there, interaction layers kept evolving.


Graphical User Interfaces (GUIs) introduced visual metaphors like folders and desktops, allowing us to manipulate digital information as if it were physical.


The internet added yet another layer: instead of being confined to local systems, we could interact with vast networks of information.


Each layer abstracted more complexity, making computers more accessible while simultaneously shaping how we thought about them.


Through all this progress, one challenge has persisted, and it remains a challenge today: coding into these systems and interactions the most important thing of all, intent.


Every tool we use—keyboards, GUIs, apps—assumes a structured workflow.


If you want to design something, you open specific software. If you want to adjust a setting, you navigate menus. These tools are powerful, but they demand that users adapt to their logic. The machine doesn’t know what you’re trying to accomplish—it only responds to the inputs it recognizes.


Take a modern example: designing a user interface.


You start by opening a tool like Figma, selecting templates, tweaking sliders, and dragging elements into place. But what if you didn’t need to do any of that? What if the software understood your intent immediately?


Instead of you working within the constraints of the tool, the tool would work to express your vision exactly as you need or want it.

This gap between intent and execution is what generative AI begins to solve.


It doesn’t rely on predefined paths or rigid structures. Instead, it interprets what you want or say and dynamically creates what you need (at least for the most part).
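To make that concrete, here is a rough sketch in Python of what an intent-driven tool might look like under the hood. Everything in it is hypothetical: call_model stands in for any text-generation API, and the JSON “UI spec” is just one way such a system could represent its output.

```python
import json

def call_model(prompt: str) -> str:
    """Stand-in for any text-generation API (hypothetical stub)."""
    # A real system would send `prompt` to a generative model here.
    # This canned response keeps the sketch self-contained and runnable.
    return json.dumps({
        "layout": "single-column",
        "components": [
            {"type": "header", "text": "Playful Retro Landing Page"},
            {"type": "button", "label": "Get Started", "style": "bold"},
        ],
    })

def realize_intent(user_intent: str) -> dict:
    """Turn a plain-language request into a structured UI spec."""
    prompt = (
        "Translate this design intent into a JSON UI spec "
        f"with 'layout' and 'components' keys:\n{user_intent}"
    )
    return json.loads(call_model(prompt))

spec = realize_intent("Something playful, retro, and bold.")
print(spec["components"][0]["text"])
```

The user never picks a template or navigates a menu; they state an intent, and the structure is generated on their behalf.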



The problem isn’t just with the tools—it’s with how we think about them.


Even today, much of our technology remains stuck in skeuomorphic thinking.


Keyboards, mice, and touchscreens are designed to mimic the physical tools they replaced. Even our interaction with AI, through chatboxes or code systems, feels rooted in these paradigms.


Why do we still think of modern software as something we need to control directly?


Imagine a customizable keyboard where sliders, knobs, and buttons adapt to your needs in real time.


The generative AI behind it doesn’t just interpret your input—it understands the context of your work, dynamically changing the functionality of your interface to suit your goals.


This isn’t a distant vision. It’s the beginning of a new layer of interaction, one that’s already starting to take shape.

For decades, technology has been about static interfaces, fixed paths, and predictable outputs.


We may have the opportunity to shatter this mold, opening the door to tools that respond, adapt, and even anticipate, while still keeping the intuitive structure and standards we need to work together and share information.


In the past, every interaction with software required us to speak its language.


Generative AI flips that dynamic. Instead of adapting to a tool, the tool adapts to you.


It’s no longer about choosing the right feature or navigating endless menus; it’s about expressing your intent and letting the system collaborate with you to achieve it.


We can already do this to some extent. Say you are brainstorming a design idea. Instead of opening templates, tweaking settings, and adjusting elements manually, you describe your vision in plain language: “I want something playful, retro, and bold that reminds me of this and gives people the feeling of this by doing this and that....”


The AI not only generates options but evolves them based on your feedback. It becomes a creative partner, refining and reshaping the output in real time.
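Here is a minimal sketch of that feedback loop, again with a hypothetical call_model stub in place of a real text-generation API. What matters is the shape of the interaction: each round of feedback folds back into the next generation, which is what makes the system feel like a partner rather than a vending machine.

```python
def call_model(prompt: str) -> str:
    """Stand-in for any text-generation API (hypothetical stub)."""
    return f"[design concept generated for: {prompt[:60]}...]"

def refine_loop(vision: str, feedback_rounds: list[str]) -> str:
    """Generate a first draft, then fold each round of user feedback
    back into the prompt so the output evolves over time."""
    draft = call_model(f"Design brief: {vision}")
    for feedback in feedback_rounds:
        draft = call_model(
            f"Previous draft: {draft}\n"
            f"User feedback: {feedback}\n"
            "Revise the draft to reflect this feedback."
        )
    return draft

result = refine_loop(
    "playful, retro, and bold",
    ["warmer colors", "less text, bigger type"],
)
print(result)
```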


This isn’t limited to design. Writers, researchers, and even engineers are already working with generative AI to draft content, organize ideas, and solve problems (and create some).


What makes this unique is how the AI doesn’t just react—it contributes, expanding possibilities and offering insights you may not have considered.


Now, I understand there is a whole other world of issues and ethics surrounding this, and I write about those often, but they are outside the scope of this article, and I don’t want to detract from the very important concept here.


Real-World Applications


Now, I know I have been constantly bringing up how AI is challenging the boundaries of how we interact with technology, but this isn’t about tearing down the systems we’ve built.


It’s about understanding what is essential in the way we work and interact, and remolding those elements into something more fluid, adaptable, and intuitive.


This isn’t about replacing tools; it’s about refining the idea of them: finding their essence and reimagining how that essence can be expressed in more complex, yet accessible, ways.


Take Plato’s concept of a chair. To him, the “real” chair isn’t the physical object itself but the idea of it—the essence of “chairness.”


This concept encompasses its purpose, its function, and its meaning, while leaving the specific form open to interpretation.


A chair can be made of wood, metal, or stone. It can be carved, assembled, or even naturally occurring, like a rock perfectly shaped to sit on. What matters is not how it’s made but that it fulfills the idea of what a chair is meant to be.


Generative AI works the same way. It allows us to focus on the “idea” of our tools and workflows—their essence—without being confined by the specific materials, processes, or constraints of the past.


By leveraging this flexibility, we can create systems that standardize complexity at higher levels while making the experience of using them simpler, more natural, and more aligned with human intent.


Let me try to lay out a few examples of how I think this would look:


Adaptive Interfaces

Imagine a keyboard or UI that evolves to suit the task at hand.


When you’re editing video, sliders and buttons appear for trimming and adjusting clips, surfacing as and when you need them within the context of your work.


Switch to coding, and the same device shifts to shortcuts and debugging tools. The essence of the keyboard—its ability to translate input into action—remains constant, but its physical and digital expression changes dynamically.
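As a rough sketch of the idea (the context detector and the palettes below are entirely hypothetical), the mapping from context to controls is chosen at runtime rather than fixed at design time:

```python
# Hypothetical sketch: the active tool palette is chosen (or generated)
# from the user's current context instead of being fixed in advance.

PALETTES = {
    "video_editing": ["trim slider", "clip scrubber", "audio fader"],
    "coding":        ["run shortcut", "debugger toggle", "test runner"],
}

def detect_context(active_window: str) -> str:
    """Stand-in for a model that infers what the user is doing."""
    return "coding" if active_window.endswith(".py") else "video_editing"

def current_palette(active_window: str) -> list[str]:
    """The keyboard's essence (input -> action) stays constant;
    only its expression, the visible controls, changes."""
    return PALETTES[detect_context(active_window)]

print(current_palette("timeline.prproj"))  # video-editing controls
print(current_palette("main.py"))          # coding controls
```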


This isn’t about abandoning the keyboard or standard application as a tool right now. It’s about elevating its concept, allowing it to mold itself to your needs while maintaining a sense of familiarity and purpose.


My goal isn’t to completely demolish or get rid of what UI/UX or “keyboards” do, but rather to reinvent them, keeping their essence intact while freeing them from static limitations.


This idea takes us back to the Bombe—the ultimate expression of raw computation, where every task required reconfiguring dials and wires.


The machine itself wasn’t the problem; it was the rigid interaction with it that limited its potential.


Generative AI flips this dynamic. Just as transistors, assembly languages, and eventually keyboards freed us from the physicality of rewiring machines, generative AI frees us from static interfaces and workflows.


It keeps the essence of what current UI and apps are striving for—translating human intent into action—but reimagines how that intent is expressed and fulfilled.

Let’s look at tools that are attempting to do this in the physical world right now: VR and AR.


They have immense potential, but their adoption remains limited across industries. It’s not just the technology that holds them back; it’s how we think about their role in our lives.


Gaming communities and the gaming industry as a whole embrace VR because it immerses users in a clearly defined space—a world created for play. But what about work? What about everyday life? The barrier isn’t technical; it’s conceptual.


This same logic extends to our workspaces and to current UI and UX design.


Imagine a workspace where tools, data, and interfaces emerge exactly when and where you need them, fading away when they’re no longer relevant.


These tools don’t abandon the principles of productivity or organization—they embody them in a form that feels natural, intuitive, and tailored to your intent.


The essence of a workspace—accessibility, clarity, and focus—remains, but its expression becomes fluid, adaptable, and deeply personal.


At its core, a workflow is about turning raw materials—ideas, data, or actions—into something useful. Many concepts within machine learning and generative AI take this essence and elevate it.


Instead of rigid processes requiring multiple tools and steps, workflows become flexible, adaptable, and almost instantaneous.


For example, researchers can take messy datasets, feed them into this generative ecosystem, and receive clean, actionable outputs in minutes.


The essence of data analysis—insight and understanding—stays the same, but the steps to get there are streamlined, opening possibilities for entirely new ways of working with information.
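As a sketch of the pattern (the call_model stub and the normalize helper below are hypothetical), you hand the model a messy value plus the target categories and let it do the normalization someone would otherwise script by hand:

```python
# Hypothetical sketch: normalizing a messy column with a generative model.
# `call_model` stubs any text-generation API; its answer is canned here
# so the example stays self-contained and runnable.

def call_model(prompt: str) -> str:
    return "usa"  # a real model would infer this from the prompt

def normalize(value: str, target_categories: list[str]) -> str:
    """Ask the model to map a messy value onto a known category."""
    prompt = (
        f"Map the value '{value}' onto one of these categories: "
        f"{target_categories}. Reply with the category only."
    )
    return call_model(prompt)

messy = ["U.S.A.", "United States", "us", "USA "]
clean = [normalize(v, ["usa", "uk", "canada"]) for v in messy]
print(clean)  # ['usa', 'usa', 'usa', 'usa']
```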


The Cultural Shift: Rethinking Our Tools


What makes these changes profound isn’t just the technology itself—it’s how it forces us to rethink the role of our tools and, more importantly, what our part is in relation to them.


This is something many great technologies before it have done: the smartphone, the internet, the very first computers, machines in general.


The opportunity to shift is no different now; the difference is in how it is shifting.


Integrating Generative AI into our workflows or tools doesn’t aim to erase what we know or how we work.


Instead, it asks: What is the essence of what we’re trying to do? 


By focusing on this essence, we can let go of outdated forms while preserving the functions, ideas, and standards that matter most.



This isn’t about rejecting the systems we rely on but about evolving them.


Just as a rock or branch can fulfill the idea of a chair, generative AI enables new forms of interaction that fulfill the essence of our tools. It creates spaces where old and new coexist, where the familiar is reimagined, and where the possibilities for expression and interaction are limited only by our ability to embrace them.


And so we find ourselves asking:


Are we ready to move beyond the surface of our tools yet again and embrace the idea of them, the essence of what they were always meant to be?

