Spatial Synthesis

Earlier this year I began a project named Spatial Synthesis. Its goal was to develop a novel methodology for designing Extended Reality (XR) applications, with a particular focus on User Experience (UX). At the time I was designing a Virtual Reality (VR) application, Salaryman RESCUE!, which has now been released on Steam and itch.io, and in fact Spatial Synthesis was initially going to be a devlog for the game before I decided to expand its scope.

What constitutes good design practice in spatial computing? There is very little documentation on this currently available and I think I’m in the perfect position as an XR professional and design student to investigate this idea. 

https://dystopianthunderdome.wordpress.com/2023/08/11/advanced-digital-artefact-pitch/

What started as a game development log, aimed at documenting the creative process behind XR game production, gradually shifted gears. It transformed into an insight-based exploration of designing for spatial media in general, focusing on the strategic principle of exaptation. This principle, the art of repurposing practices for novel uses, guided my approach to XR development and led to further insights related to spatial evolution. I found it interesting that, through a concept borrowed from evolutionary biology, I was able to glean deeper biological insights about spatial media design.

Exaptation — a feature that performs a function but that was not produced by natural selection for its current use. Perhaps the feature was produced by natural selection for a function other than the one it currently performs and was then co-opted for its current function. For example, feathers might have originally arisen in the context of selection for insulation, and only later were they co-opted for flight. In this case, the general form of feathers is an adaptation for insulation and an exaptation for flight.

University of California, Berkeley, https://evolution.berkeley.edu/exaptations/

Though my initial vision of a vibrant Y2K-tinged website, home to a detailed design bible for spatial media and accompanied by a thriving Discord community, has encountered the realities of a demanding schedule across VR game production and full-time university, the journey has been no less enriching. Throughout this process, I’ve unearthed key insights into the application of exaptation in XR, the importance of object-oriented UX design in preventing ‘UX whiplash’, and the development of a unique visual language rooted in ‘future nostalgia’ and vast, open expanses. It is ironic that my initial goal of creating a novel methodology for designing spatial computing applications, which was pushed aside in favour of a shiny, branded website and Discord community, has been achieved almost accidentally along the way. For this reason, even though I don’t have that big, shiny digital artefact to point to, I am still pleased with where I am up to with this project.

This report is not just a narrative of what was achieved (and not achieved), but an honest reflection of the learning moments, the challenges navigated, and the insights gained. These experiences have not only shaped the current state of Spatial Synthesis but have also laid a foundation for my future career as an XR professional, and, I vainly hope, contributed however slightly to the wider field of designing fluently for spatial media.

Exaptation

One of my very first posts on this blog was about the deep link between ecology and technology, and I once again find myself at the crossroads between these two fields. Exaptation is a term taken from evolutionary biology and refers to a trait that evolved to serve one function being co-opted for another. The feather, which was likely selected for thermal regulation but has been exapted to aid in flight, is the classic example.

Photo by Blake Cheek on Unsplash

One of my instinctive practices when devising core mechanics for XR applications has been to find inspiration in the real world around me. This practice is certainly not new to digital media or game development (Miyamoto’s Pikmin concept was famously inspired by his gardening hobby), but the spatial link between the real world and XR technology makes the approach even more fitting. The challenge lies in making XR experiences feel intuitive – and that’s where exaptation shines. By repurposing familiar physical interactions into the digital realm, I could bypass the steep learning curves often associated with new technology. It was about blending the known with the unknown.

The repurposing of existing real-world tools and technologies into the XR environment not only provides the sense of effortless familiarity that UX designers are seeking, but also opens the door to unprecedented interactions and experiences.

https://dystopianthunderdome.wordpress.com/2023/09/15/spatial-synthesis-reflection/


Looking at any successful XR application, we can see physical practices that have been exapted into digital success stories. Using a sword to slice musical notes in Beat Saber just feels intuitive, because most people have swung a sword before – pretend or otherwise. The key to exaptation for XR is to present an object that has a recognisable function – an object that dares the user to try something, just to see if they can – and reward the user by not only accounting for that use, but by going even further: giving it a use it could never have in the real world. This leads to moments of user delight. Good XR design is less about learning new tricks and more about applying old ones in novel ways. In Salaryman RESCUE!, a key mechanic is purchasing bottled beer from a vending machine. We designed the bottle with a twist-cap and a satisfying fizzy sound effect, because we thought people would try to open it that way. Then we thought it would be fun if you could simply smack the lid off. We didn’t force-feed this information to the user, but if any of them ever try it, they’ll be pleasantly surprised. This leads me to my next insight…

It is much better to simply understand you can’t interact with something than to try and fail. That is an immersion-breaker.

Ben Lang’s Inside XR Design series for RoadtoVR is an analogous artefact.
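To make the bottle example concrete, here’s a minimal TypeScript sketch of the pattern (hypothetical names and thresholds; the actual game is built in a game engine, so this is illustration only):

```typescript
// Illustrative sketch of a multi-affordance interactable, not actual game code.
// The expected real-world interaction (twist) always works; the playful
// surprise (smack) is layered on top and rewarded rather than ignored.

type Gesture = "twist" | "smack";

interface GestureEvent {
  kind: Gesture;
  force: number; // normalised 0..1, e.g. from controller velocity
}

class BeerBottle {
  private capOn = true;

  interact(event: GestureEvent): void {
    if (!this.capOn) return; // already open, nothing to do

    if (event.kind === "twist") {
      this.openCap("fizz_twist.wav"); // the interaction everyone expects
    } else if (event.kind === "smack" && event.force > 0.7) {
      this.openCap("fizz_pop.wav"); // the delight moment for the curious
    }
  }

  private openCap(sound: string): void {
    this.capOn = false;
    console.log(`Cap comes off, playing ${sound}`);
  }
}

// A confident smack pops the lid, just as a twist would.
new BeerBottle().interact({ kind: "smack", force: 0.9 });
```

The point is structural: the expected interaction always succeeds, and the surprise affordance is added alongside it rather than replacing it.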

Spatial Evolutionary Biology and UX Whiplash

Photo by Andre Mouton on Unsplash

Our brains are billions of years in the making. Billions of years of evolving to traverse spatial environments. XR UX, rather than being a continuation of screen media, is a continuation of these ancient cognitive processes. When I brought this up with my professor, Travis Wall, he said it very succinctly: “touchscreen UI is a continuation of a language of interacting with machines that is fairly new (only a few hundred years old), and is a literacy because it is grounded in abstractions and symbology. Where interacting with spatial media is more related to our own physical movement in space, where we have hundreds of thousands of years of evolution and cognitive hardwiring.”

(That’s not to say there aren’t lessons to be learned from touchscreen UI. After all, flatscreen displays work in real space, so they can work in spatial media also.)

When a user tries something in XR that would work in the real world – grabbing a drawer handle and trying to open it, for example – and it fails, they are immediately disappointed. There is a disjoint between the lessons they have learned from existing in space their entire lives and what is happening in this mediated spatial experience. It immediately rips the user out of their immersion, destroys the magic, and makes them quite likely to end the session. I call this effect “UX whiplash”. UX whiplash exists in legacy media, too, but the brain’s spatial evolutionary biology makes the effect far more disruptive when experienced in XR.

Humans naturally think of the world as objects. As we evolved in real-world environments, we came to understand our physical experiences in terms of tangibles.

Pradipto Chakrabarty, UX Planet, https://uxplanet.org/object-oriented-user-experience-design-the-power-of-objects-first-design-approach-e65e07488a00

Towards the tail end of the project, my team and I exhibited Salaryman RESCUE! and Pond Scum at PAX 2023 in Melbourne. We set up VR headsets and observed hundreds of players playing our games. It turned out to be a treasure trove of insights, and we left PAX with pages and pages of notes. When designing XR interactions, I found I had to continuously ask myself “I wonder if I can do this?”, pretending I was a new user rather than someone who knew exactly what I could and couldn’t do, to cover as many potential triggers of UX whiplash as I could. However, there is only so much one person can think of. At PAX, I watched as players tried to interact with things that I had never considered interacting with. Because of these observations, we ended up making every object in the game interactable, even those with no special uses. The exhibition ended up serving as a hyper-focused beta test period, but more than that, it taught me a lot about designing in XR.



Object-Oriented User Experience (OOUX), a UX term coined by Sophia V. Prater, was influenced by object-oriented programming, and is a model of UX design organised around objects rather than actions. While it was a novel approach for flatscreen media, it’s almost a necessity for XR apps. In implementing OOUX, I focused on creating digital objects in XR that interacted in ways users instinctively expected. The twist-cap bottle in Salaryman RESCUE! wasn’t just about the visual mimicry of a real bottle; it was about ensuring that the action of twisting the cap off feels as satisfying and intuitive as it does in the real world. This approach extends beyond mere functionality; it’s about crafting an experience that resonates with the user’s innate understanding of object interactions.
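To illustrate what organising around objects rather than actions can look like in code, here’s a small hypothetical sketch (my own illustration, not drawn from Prater’s material):

```typescript
// Objects-first sketch: every object in the scene carries a default
// interaction, so users never "try and fail". Special behaviours extend
// the object itself rather than living in a global list of actions.

class WorldObject {
  constructor(public readonly name: string) {}

  // Default affordance: anything can at least be picked up.
  grab(): void {
    console.log(`${this.name} grabbed`);
  }
}

class Drawer extends WorldObject {
  private open = false;

  // Honours the interaction users expect from real drawers.
  pull(): void {
    this.open = !this.open;
    console.log(`${this.name} is now ${this.open ? "open" : "closed"}`);
  }
}

// Even a prop with no special use still responds to something.
const props: WorldObject[] = [new WorldObject("coffee mug"), new Drawer("desk drawer")];
for (const prop of props) prop.grab();
```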

Visual Language and Future Nostalgia

In a previous blog post, I reflected on the journey of creating a unique visual language for Spatial Synthesis as a project. The intent was for this theme to be used across both screen media and spatial media, which made it quite an interesting challenge. The theme was developed for a university UX design class; I began by considering the bubble buttons of the Y2K era, when screen UI first utilised the illusion of depth. It seemed natural to take that illusion and turn it into a (digital) reality. By integrating these elements with the calming tones of blues, oranges, and purples found in natural skies, I aimed to create a user experience that was both exciting and comforting. This visual approach was designed as a bridge, connecting users to the new and unfamiliar world of XR through the lens of familiar and nostalgic design elements. While I never got to the fun part – designing the website in XR – I am very happy with the mobile website I designed, which utilised this aesthetic. It can be viewed here.

The challenge was finding the right design language that would resonate with the spatial depth of XR. Y2K’s gel-like bubble buttons and glossy finishes, best remembered from Apple’s Mac OS X 10.0 Cheetah, caught my eye. Although not initially designed for XR, their shape and transparency were exapted to function beautifully as 3D objects in this new context.

https://dystopianthunderdome.wordpress.com/2023/09/15/spatial-synthesis-reflection/

Project Outcomes and Future Direction

As for the current state of Spatial Synthesis, it’s a mixed bag. The big, shiny digital artefact – the comprehensive website, the bustling community, the detailed design bible – didn’t materialise as I had initially hoped. The realities of VR game production and academic commitments took precedence. I also didn’t pursue certain elements of XR design as much as I would have liked to: namely Imagineering and environmental storytelling à la the work of Don Carson. However, what I did achieve was perhaps more valuable than a questionably useful hypothetical community: a rich set of insights into XR design grounded in biological understanding, and a unique and exciting visual language that can be repurposed for XR apps.

The question of where Spatial Synthesis goes from here is open. There’s the option to continue as initially planned, but I don’t really want to. I’d much rather take the insights I’ve gained, make the best possible spatial media experiences I can, and let those do the talking. Watch this space.

Conclusion?

This was a little ramble-y and unstructured, so I’ll do us both the favour of wrapping up the key takeaways from my time with Spatial Synthesis.

First and foremost, the application of exaptation in XR development was a game-changer. It wasn’t just about repurposing elements for new uses; it was about rethinking how we interact with technology on a fundamental level, influenced by our evolutionary biology. This approach allowed for the creation of experiences that felt intuitive and familiar, yet excitingly novel. It taught me that innovation in XR doesn’t always mean reinventing the wheel. I was much better off getting out into the world, picking up objects, and thinking, “how can I make this do something new?”

Secondly, I addressed UX whiplash through an object-oriented design approach. By focusing on how users naturally interact with objects, I was able to design experiences that minimised dissatisfaction and maximised presence. This insight is something I plan to carry forward in all my XR projects. The key takeaway here is simple: in XR design, I must respect my users’ instinctual interactions with their environment.

Lastly and somewhat lamely compared to the above realisations, I made a cool visual aesthetic that works across flat and spatial media. I’m pretty proud of that, and I want to use it in a future app.

Looking ahead, I think these insights may well be foundational for my future work in XR design. They have shaped my understanding of what makes an XR experience truly engaging and user-friendly, and this project has forced me to put them into words, rather than vague feelings that I somewhat understood. Reflecting on the social utility of Spatial Synthesis, I think the project has aligned well with my career aspirations and I think that any XR designers or developers reading this may have benefited from it. While the project may not have reached all the lofty goals I had set for it, the journey itself has been invaluable, offering insights that will inform my design choices for years to come.

Peer Review #2

As a follow up to our first peer review session with Jackson, we met up again, this time inviting Hugo.


Hugo started by briefing us on Online Tribes, a digital artefact focused on revolutionising UoW’s admission letter for international students. I felt that his project was really ambitious, making use of a variety of emerging technologies. His key learning moment was recognising that the existing process was a genuine problem, and that solving it would be really useful for international students.

Meanwhile Jackson’s artefact, a PNGtubing project in which he plays the part of spooky ghost Jaroth, had developed significantly. Last time we caught up he had felt that he was lagging behind and hadn’t really achieved much, something we realised we shared at the time. This time around, however, he had finished all his new assets and was clearly feeling much better about things. His key takeaway was to just “get stuck into things” – to stop worrying and just start. He found that once he threw himself into it, the work was completed surprisingly quickly.

I quickly summarised Spatial Synthesis and shared its goals. I reiterated my progress, mainly focusing on the mobile website design and the experience of exhibiting VR games at PAX 2023 in Melbourne. My key learning moments were that a) I surprised myself with how much I liked my mobile web design work (I really didn’t think I would come up with anything I was proud of sharing) and b) good XR design is based on a language of spatial interaction and really has very little to do with legacy screen UX.

Peer Review #1

Recently, I sat down with Jackson for a podcast episode centred around our individual Digital Artefact projects. Here’s a brief rundown of what we covered and some key takeaways.


Key Highlights from Jackson’s Project:

  1. Concept & Evolution: Jackson shared insights into his DA, a Twitch.tv PNGtuber stream in which he plays a spooky ghost called Jarothorn. This is a continuation of an existing DA, and his major plan for this session was to rebrand/overhaul his aesthetic and branding.
  2. Major Learning Moments: Jackson felt that his major learning moment concerned his perfectionism: he had been spending so much time on design that he hadn’t posted anything yet. Just like me, he wanted to get all his ducks in a row before firing.

Key Highlights from Rahn’s Project:

  1. Concept & Evolution: I elaborated on Spatial Synthesis, talking about how the project had changed and evolved since its inception as a game development log, into a multiplatform XR design bible/community. I talked about how my major impetus to shift gears was that I had to design a website for a UX design class, and so it made sense to incorporate this task into my XR DA.
  2. Major Learning Moments: I identified three major learning moments, and I talked about them in a reflective blog post.
    a. The process of exapting Y2K visuals and existing technologies into XR;
    b. the importance of Imagineering; and
    c. the fact that I completely abandoned FEFO for this DA, instead opting to spend the bulk of my time on research and design.

Common Themes and Shared Insights:

We ran out of time, but we definitely observed that we shared the same perfectionist attitude towards our respective projects. While Jackson was held up by the search for the right artist for his assets, I wanted to ensure that Spatial Synthesis was visually up to scratch before I launched it. In hindsight, I think we both could have gone about this differently; I could have posted each section of my website as blog posts, for example, and incorporated them into the website once it was live. Jackson could have used placeholder assets.

Spatial Synthesis – Reflection

Initially conceptualised as a VR game devlog, this project has shifted gears over time. Now, it’s focused on the broader potential of Extended Reality (XR) through Spatial Synthesis, a multiplatform design bible and community. This project is a mix of portfolio work, research, and design insights aimed at tech bros, design students, game developers, and anyone surfing the waves of change. Drawing from works like Hillmann’s “UX for XR” and the work of Don Carson, I’m diving deep into spatial design. This reflection chronicles the process of bringing the concept to life, the lessons learned, and the continuous evolution of the project.

In the inception phase, my primary focus was “Salaryman RESCUE!” – a VR game about guiding inebriated Japanese office workers through Tokyo’s streets. As producer, I contemplated pitching this as my main digital artefact, detailing the journey via devlog blog posts. Shortly after, however, I sensed a broader horizon awaiting exploration beyond just the game.

After speaking with my tutor, I realised there was a significant gap in the market for the documentation of XR design practices. While the devlog had its niche appeal, offering more general insights into XR and spatial design had the potential to cater to a wider audience and provide lasting value, and serve as a valuable portfolio piece for my career.

Thus, Spatial Synthesis was born. Designed as a hub for spatial computing insights, it would not only collate valuable resources but also potentially pave the way for demonstrating practical XR design strategies. Initially it was just going to be a blog, but since I had to design a website for a UX class anyway, I figured it might as well be for Spatial Synthesis. At this stage, then, the project encapsulates:

  • A website and shop (stage: design)
  • A webXR experience (stage: concept)
  • A devlog (stage: live)
  • A Discord community (stage: soon ™)
  • More????

Who would benefit the most from Spatial Synthesis? Through conversations and observations, I defined a unique intersection: tech bros and design scholars – pretty niche stuff. I imagine this kind of content will have exponential returns on social value, though, as XR technology slips into societal ubiquity. But you know who else will benefit? ME. I want to be an expert on spatial design, and I’ve always learned by doing. I don’t really care if my audience is a tiny minority at this stage; my audience barely exists yet. They will come.

Exapting Y2K Aesthetics for XR: Turning Nostalgia into Futuristic Innovation

Spatial Synthesis embodies both nostalgia and futurism. The challenge was finding the right design language that would resonate with the spatial depth of XR. Y2K’s gel-like bubble buttons and glossy finishes, best remembered from Apple’s Mac OS X 10.0 Cheetah, caught my eye. Although not initially designed for XR, their shape and transparency were exapted to function beautifully as 3D objects in this new context. By adding liquid shaders and physics, these buttons become much more complex interactables. The use of negative space – common in Y2K media – emphasised spatial depth, and Three.js allowed for the tactile representation of XYZ dimensions, all set against a backdrop of calm natural environments which serve to insert the tangible into the virtual – an exaptive echo.

An early experiment with ‘Aqua’ visuals in XR.
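For a sense of how one of these ‘gel’ buttons might be assembled in Three.js, here’s a rough sketch; the material values are guesses at the look, not the settings from my actual experiment:

```typescript
// A rough sketch of a Y2K 'Aqua' bubble button in Three.js.
import * as THREE from "three";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 3;

// Glossy, translucent 'gel' material: transmission gives the glassy core,
// clearcoat gives the candy-shell highlight Y2K buttons were known for.
// These values are illustrative guesses, not production settings.
const gel = new THREE.MeshPhysicalMaterial({
  color: 0x4fa8ff,
  transmission: 0.9,
  roughness: 0.1,
  clearcoat: 1.0,
  thickness: 0.5,
});

// A squashed sphere reads as a classic bubble button in 3D space.
const button = new THREE.Mesh(new THREE.SphereGeometry(1, 48, 48), gel);
button.scale.set(1, 0.45, 1);
scene.add(button, new THREE.DirectionalLight(0xffffff, 2));

const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);
renderer.setAnimationLoop(() => renderer.render(scene, camera));
```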

Though I documented this realisation in my pitch blog post, a core moment for the project was when it dawned on me that many of our everyday tools and technologies which weren’t designed for XR could easily be exapted into novel digital experiences. Think about how a pistol can be used to select a menu option, or a saber used to slice musical beats. This methodology is core to good XR design philosophy. The repurposing of existing real-world tools and technologies into the XR environment not only provides the sense of effortless familiarity that UX designers are seeking, but also opens the door to unprecedented interactions and experiences.

Incorporating this insight, my next step is to continually scout for everyday objects and technologies that can be repurposed in XR. Whether it’s a menu-selection tool or a game mechanic, exapting real-world elements into digital experiences is the future of intuitive and immersive XR design. Furthermore, Spatial Synthesis will serve as a repository for these insights, ensuring that as XR evolves, so does our understanding and utilisation of exaptation.

Imagineering XR

Shakespeare was a spatial designer.

Don Carson’s transition from Disney Imagineering to the world of VR is a prime example of the applicability of exaptation within XR design. His writing on environmental storytelling (among many other valuable posts), initially conceptualised for theme parks, offers profound insights that can be seamlessly integrated into the XR realm. It struck me that although XR is a ground-breaking and novel medium, spatial media is by no means unique to it; Shakespeare was a spatial designer. Rather than XR being an evolution of flatscreen digital media, it’s much more accurate to say that XR is an evolution of theme park, exhibition and event design. The world of themed environments, as outlined by Don Carson, has provided a treasure trove of design principles ripe for exaptation into XR. The future of Spatial Synthesis will undoubtedly be shaped by these insights, ensuring that users are offered an immersive, engaging, and emotionally resonant experience.

FEFO… Or Not

Another realisation was that compared to my past digital artefacts, I had spent far more time on research and design for Spatial Synthesis. Previously, adhering to the principles of Fail Early, Fail Often, I had started making content much more rapidly, thus ensuring a tight and responsive feedback loop with my audience. This time, I’m being much more careful. And I’m ok with that.

What Next?

The journey of Spatial Synthesis has been one of constant evolution, reflection, and learning. My personal takeaways from this phase have been: exapting existing technologies, the importance of Imagineering, and my intense focus on design. As I head into the next phase, my focus will be on expanding the Spatial Synthesis community and continually refining the design bible through exaptive analysis.

Outsider witnessing, transitioning between industries, and human-machine therapy

Me

I want to talk about the way I transitioned from hospitality (hereafter hospo) jobs to remote work involving digital skills. I worked in hospo out of necessity, not by choice. I had bombed out of uni right after high school – I just wasn’t ready at the time. I was a very confused (and probably depressed?) teenager who had left home at seventeen and suddenly found myself in a different city with very little employability. After bouncing around a few jobs – call centres, retail – I found myself working in a hotel kitchen, eventually working my way through the front desk and up to the role of Duty Manager. Several years later, I became a barista slash café all-rounder at an Inner West Sydney breakfast hotspot. I’m providing this brief background because I feel the extra context is important.

Hospo out of necessity. Am I glad I spent those years in hospo? Yeah, I guess. I learned valuable social and task management skills. I learned how to manage both my own time and other people. I learned how to value my own work.

A couple of years ago, a mutual friend introduced me to my current employer, and all of a sudden I was offered a very different kind of job. After years of getting up early, running around on my feet, serving customers – suddenly I was working from home, with flexible hours, using my digital skills to organise, design and project manage. It was a massive and very welcome shift for me. The biggest thing was that I could work on things in my own time. Rather than working to someone else’s clock (go here, do this – right now), I could set my own ETAs and work on things when I felt best able to. It was a remarkable change in how I did things, and I quickly understood how much more value I was able to create by working according to my own needs. The change I went through is a welcome one, and one that should be the norm in project-based workplaces.

Outsider Witnessing

In the 1980s, Michael White and David Epston pioneered narrative therapy, challenging dominant social systems that pathologised individuals (OSWAN, ‘Outsider-witnessing In Conversation with David Stapleton and Special Guests‘). Rooted in a post-structuralist model, they rejected the notion of fixed identities, emphasising instead the fluidity of personal narratives and the influence of power dynamics, both in society and therapeutic settings. White introduced the idea that individuals often share ‘thin descriptions,’ problem-centric narratives that overshadow the myriad of other stories that contradict or resist this central problem. By focusing on ‘re-authoring,’ narrative therapy seeks to uncover these alternative, often overshadowed narratives, enabling individuals to redefine their life stories beyond problem-saturated accounts.

Outsider witnessing, a narrative therapy technique, is a process in which individuals who are not directly involved in a person’s life or a particular event (the “outsiders”) are invited to listen to and respond to someone’s story or narrative. The goal of outsider witnessing is to validate the narrator’s experiences, break feelings of isolation, and foster a richer understanding of one’s own story by hearing it reflected through others. It provides an opportunity for people to see their experiences in a new light, often with a greater sense of agency, connection, or understanding. Outsider witnessing is not only useful for the storyteller, but also for the outsider, as it helps them understand what they’re drawn to and why.

The understanding of identity as a multi-voiced phenomena is one that contrasts significantly with structuralist understandings that establish identity as a single voiced phenomena, as an expression of a self that is to be found at the centre of personhood.

Michael White, ‘Reflecting Teamwork as Definitional Ceremony revisited’, 2000

GPT-4

So this is kind of wacky, but I told my story to GPT-4, and asked it to play the role of the observer. While the AI’s lack of emotional intuition and lived experiences presented interesting challenges, I was excited to hear what a Large Language Model would come up with. All things considered, it did really well!
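For anyone curious how you might script this kind of session, here’s a hypothetical sketch using the OpenAI Node SDK; my actual session happened in the chat interface, and the prompt below is a paraphrase rather than a transcript.

```typescript
// Hypothetical sketch, not a record of my actual session.
import OpenAI from "openai";

const client = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

// The story shared with the model (abridged here).
const myStory =
  "I want to talk about the way I transitioned from hospo to remote digital work...";

async function outsiderWitnessSession(): Promise<void> {
  const response = await client.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "system",
        // Framing drawn from narrative therapy: respond to the story itself,
        // reflecting which expressions stood out and what they suggest about
        // the teller's values, rather than offering advice or diagnosis.
        content:
          "You are an outsider witness in a narrative therapy session. " +
          "Listen to the story, then reflect on: the expressions that caught " +
          "your attention, the image they evoke of the teller's values, and " +
          "how hearing the story has moved your own thinking.",
      },
      { role: "user", content: myStory },
    ],
  });

  console.log(response.choices[0].message.content);
}

outsiderWitnessSession();
```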

Non-verbal cues play a huge part in observing stories, and they are obviously something GPT-4 cannot work with. Maybe we’ll get there one day!

Honestly, I didn’t expect it to do so well! I was able to reflect on what I had written in ways I hadn’t previously considered. Unlike a human playing the role of the observer, the AI’s response was uncoloured by personal biases or emotional undertones, presenting a distilled and objective reflection of my narrative. This objectivity, paradoxically, offered a fresh perspective that was both validating and enlightening. While a human observer might naturally interject with their own experiences or emotional reactions, GPT-4’s feedback was purely rooted in the words I provided. This was also a real limitation, and it highlights how important it is to conduct outsider witnessing with a human. All things considered, this session underscored the irreplaceable value of genuine human empathy, but I think it also hinted at the complementary roles technology and humanity might play in future therapeutic landscapes. It would be very difficult to get such succinct and constructive language out of a human on such short notice.

Spatial Synthesis: Pitch

I forgot to hit record before I drew the “logo” and then I liked it too much to do it from scratch 😵

If you know me, you know I’m interested in XR (rather than explain this term, which stands for “extended reality”, I’m simply going to tell you to look at the below chart). I think the medium is entirely unique, and once ubiquitous will forever alter the way in which humans socialise, work, learn and play. Unfortunately, right now there is a lot of really crappy XR software out there. Most developers seem to think they can just port flat display-based software to XR, making minimal changes, and create something novel in the process. This couldn’t be further from the truth. In fact, there are surprisingly few genuinely brilliant XR apps on the market right now – and most of them share one thing in common: they were developed from the ground up for XR. XR insights and sensibilities were applied in their design. We need more of this.

See? This is much easier.
Source: Interaction Design Foundation

I’m going to back up here to provide a bit of context and iteration history. I’m a digital producer currently producing several XR titles at an indie start-up called Universal Weebs Unlimited. Originally I was planning on pitching one of these titles as a digital artefact, providing production updates along the way as a form of reporting. After much discussion and debate with my friend GPT-4, this was narrowed down to a game dev log, which could be useful both as marketing for the project and as a way to foster a community and generate some engagement.

GPT-4 was very useful in getting to where I am now.

But after speaking with my tutor, I realised that what I was more interested in was the design process for XR experiences. What constitutes good design practice in spatial computing? There is very little documentation on this currently available and I think I’m in the perfect position as an XR professional and design student to investigate this idea. It also helps my career and fosters my own developing expertise in the spatial computing field.

Note to self: do not ask GPT-4 to pretend it is Steve Jobs.

There are three strategic development concepts that will prove useful throughout the life of Spatial Synthesis:

  1. Reframing: at its core, reframing is about shifting our perspective to uncover fresh avenues of thought and creative solutions. In Spatial Synthesis, the process of deriving spatial design insights and formulating a methodology serves as a reframing exercise—transforming the traditionally intuitive design process into a structured, step-by-step documentation. To be honest I’m not sure how far I’m going to go with documentation – I might leave it at a collection of insights… we’ll see.
  2. Adjacent Possible: a concept that speaks to the immediate next steps available from our current position. Think of it as a room that contains all the conceivable moves you can make from your current state. As we make a move and enter a new “room,” new doors (or possibilities) open up. Considering how new the XR industry is, the whole space is in a constant state of adjacent possibility. I intend to explore these possibilities, and in doing so hopefully uncover some design insights.
  3. Exaptation: in technology/design, this refers to the serendipitous repurposing of a feature or tool for a function it wasn’t originally intended for. For the purposes of Spatial Synthesis, I’ll be using the concept of ‘exaptation’ to repurpose real spatial environments, technologies, and practices into mediated virtual experiences. The pressure washer was designed as a tool to quickly clean grime off surfaces, not as a central mechanic in a video game – but if you scripted a virtual pressure washer suddenly the XR applications are endless. Exaptation, in XR design, is taking the real and finding new virtual uses for it.

It would have been easier to pick one of these and focus on it solely, but I found that each was too applicable to discard.

There already exists a sea of gamedev and game production blogs, vlogs and YouTube tutorials for flat games. I don’t care about these. When seeking out analogous artefacts that might be of use to Spatial Synthesis, I focused exclusively on XR, and I approached from a design angle rather than a development or production angle. Take, for example, the work of Don Carson, a former Disney theme park designer turned VR environmental artist. Carson’s unique set of skills and experiences position him as an expert in spatial design, and his design blog is a treasure trove of spatial insights.

Solarpunk Solutions – Contextual Report

Context

Earlier this year I began a digital project I named Solarpunk Solutions. The primary purpose of the Digital Artefact was to leverage the potential of social media to foster a better understanding and implementation of strategies to reduce greenhouse gas emissions (GGEs). At the heart of this project lay the ambition to contribute to limiting the global average temperature increase to 1.5 degrees Celsius, in line with the aim of the 2015 Paris Agreement.

Solarpunk Solutions was born out of my love for science fiction. The solarpunk genre of speculative fiction imagines a future where society lives in harmony with nature, powered primarily by renewable energy sources. The project aimed to disseminate this ethos through a series of blog posts, Tweets, and TikTok videos.

Solarpunk Solutions sought to go beyond merely discussing potential future technologies or developments. It aimed to inspire its audience to engage in practical ways to reduce GGEs now, focusing not on future technologies, but on evidence-backed direct action. In my pitch, I described my plan to systematically analyse solarpunk texts, extracting and communicating valuable methodology from them, and promoting solarpunk aesthetics, themes, and messages. These elements were used as a tool to inspire decarbonisation efforts among the audience. In essence, Solarpunk Solutions endeavoured to foster a spirit of non-consumptive living, urging people to transform their lives and communities towards a zero-carbon future.

My project pitch also outlined my interest in various themes within the solarpunk genre, including arcology/green architecture, transport, and agricultural techniques such as permaculture and subsistence farming. With the conviction that speculative fiction has the power to drive social change, Solarpunk Solutions aimed to engage the audience with an optimistic vision of a greener future, making non-consumptive living appealing and achievable for everyday people.

Through a planned schedule and a plethora of media content, Solarpunk Solutions aimed to create an engaging platform where people can learn about solarpunk and be motivated to take direct action in their lives towards reducing GGEs.

Ultimately, it didn’t work out that way.

Some early Solarpunk Solutions notes.

While I had planned to begin publishing media two weeks into my research, I ended up not starting until several weeks later. I stuck to my research schedule – consuming solarpunk texts, news articles and academic journals – but ultimately found that I had little to go on in terms of interesting technology to feature. This disheartened me and I continued to try to find new media that might give me what I was looking for, rather than simply begin outputting content based on what I had learned, which in hindsight would have been the much better option.

Even at the time of writing, I have published far less media than I had originally planned, and the type of media I have published is quite different from what I expected. Nevertheless, I’ve been surprised by the engagement and response I received on one of the platforms: TikTok. More on that later.

Methodology

The methodology for Solarpunk Solutions involved three steps: research, application, and communication. My intention was to begin doing all three at once, by outputting media as I researched – but I ended up doing each step more-or-less sequentially.

The first step involved a comprehensive exploration of solarpunk media. This included literature such as ‘Walkaway’ by Cory Doctorow, movies like ‘Nausicaa of the Valley of the Wind’, board games, and video games. The goal of this research was to delve into the representation of regenerative/restorative technologies and concepts in solarpunk media. Critical analysis and note-taking of these resources facilitated the extraction of elements relevant to real-world decarbonisation strategies. The biggest problem I ran into here was that, honestly, most media labelled solarpunk isn’t actually solarpunk. My research into the genre led me to understand that solarpunk is the antithesis of dystopian sci-fi. Some post-climate-apocalypse fiction may very well have deep solarpunk vibes and aesthetics, but I felt it missed the point if it wasn’t hopeful and grounded in reality. Nevertheless, I decided that if there was tech implementation in this media, I would research it.

Simultaneously, academic research was undertaken, first to understand more deeply the solarpunk genre and its potential for driving social change, and second to research the specific technologies I featured on Solarpunk Solutions. I engaged with academic sources including ‘Making people responsible: The possible, the probable, and the preferable’ by Wendell Bell and ‘Art, Energy and Technology: the Solarpunk Movement’ by J.D. Reina-Rozo, among others, and used them to inform my content.

Next came the application of these theories and concepts into real-world decarbonisation strategies. For this, I investigated the practicality of the technologies and methodologies featured in the solarpunk genre, evaluating their feasibility for implementation on both micro and macro levels.

Example of a Midjourney 5.1 prompt and output. This image was used in a TikTok video on vertical farming.

The final step revolved around the communication of these strategies. This was done through the creation and dissemination of content via three social media platforms: WordPress, Twitter, and TikTok. Each piece of content had a specific theme, inspired by the solarpunk media consumed. For example: ‘Solarpunk Food Production: Urban Agriculture, Vertical Farming, and Permaculture’. For each theme, a blog post was first written, and then a series of videos and tweets were curated via remixing these blog posts. The media was designed to introduce each concept, illustrate its presence in solarpunk media, highlight real-world examples, provide academic insight, and suggest ways viewers and readers could support these concepts in their own lives.

Example of a GPT-4 prompt. While the chatbot’s output was never directly translated to content (requiring fact-checking and editing), it helped the process immensely.

The use of AI was crucial throughout the entire production process. I used GPT-4 to suggest solarpunk media, topics and research; to help write my blog posts and tweets; and to script my TikTok videos. Meanwhile, I prompted Midjourney 5 & 5.1 to generate eye-catching solarpunk images which were utilised throughout my content. A typical content series would consist of an introduction to the concepts, followed by a discussion of specific solarpunk media that feature them. The next step would highlight a real-world example of the concept(s) in action, explaining its benefits and implications. Following this, I would present academic research related to the concept, simplifying complex ideas with graphics and layman’s terms. Lastly, I would provide practical ways for the audience to adopt and support these concepts, thus empowering them to take direct action towards reducing greenhouse gas emissions.

Results

Food production chat turns feisty.

While at the time of writing I haven’t managed to get through as many topics as I would have liked, the response, at least on TikTok – which was always intended to be my primary platform – was generally quite good, with each post gaining more and more traction and engagement. My latest TikTok video, for example, generated a significant amount of debate in the comments and reached nearly 2000 views within 24 hours. At the time of writing I am sitting on 33 followers, 297 likes and 2,892 views.

I think a large factor in the engagement I have received is due to the use of striking AI-generated images captivating my audience, causing them to stop doom-scrolling momentarily, and allowing them to digest my written content. If I am to continue on my trajectory of iterating and producing content for Solarpunk Solutions, I could see the channel eventually growing into a thriving community.

In this way I would call the project a limited success. Limited in that I did not put out as much content as I could have, and therefore did not iterate as much on my methodology to find out what works and what doesn’t. And also limited in that while it got people excited to discover solarpunk, and even debating fiercely in the comments, it’s difficult to gauge how much action I’ve generated. Another limitation was that my Twitter and WordPress accounts have not received much engagement to date – I think I could have made more of an effort to link the three profiles together, funnelling traffic from TikTok to my other accounts. Nevertheless, as I’ve said many times throughout my pitch and Solarpunk Solutions blog posts, promotion of the solarpunk genre and its ideals itself constitutes action on climate change, in that science fiction’s imagined futures help make possible futures real, and in this way I would call Solarpunk Solutions a success.

A tour of Solarpunk Solutions.

Media & References

Below I’ll list a selection of solarpunk media I researched, read, listened to, watched, or played.

Video games:

  • Anno 2070
  • ABZU
  • Eco
  • Cloud Gardens
  • Outer Wilds
  • Terra Nil
  • Timberborn

Board games:

  • Solarpunk Futures
  • Wingspan
  • Photosynthesis
  • Ecosystem

Screen:

  • Nausicaa of the Valley of the Wind & other Ghibli films
  • Beasts of the Southern Wild
  • Black Panther
  • The Boy Who Harnessed The Wind
  • Ocean Girl

Literature:

  • Walkaway by Cory Doctorow
  • The Summer Prince by Alaya Dawn Johnson
  • Earthseed by Octavia E. Butler
  • Ecotopia by Ernest Callenbach

And some academic articles which were utilised throughout Solarpunk Solutions:

  • Asseng, Senthold & Guarin, Jose & Raman, Mahadev & Monje, Oscar & Kiss, Gregory & Despommier, Dickson & Meggers, Forrest & Gauthier, Paul, 2020, ‘Wheat yield potential in controlled-environment vertical farms’, Proceedings of the National Academy of Sciences 117, 202002655. 10.1073/pnas.2002655117.
  • Bell, Wendell 1998, Making people responsible: The possible, the probable, and the preferable, American Behavioral Scientist, Vol 42(3), Nov-Dec, 1998 Special Issue: Futures studies in higher education, pp. 323-339.
  • Evans, Rebecca, 2018, ‘Nomenclature, Narrative, and Novum: “The Anthropocene” and/as Science Fiction’, Science Fiction Studies, Vol. 45, No. 3, SF and the Climate Crisis (November 2018), pp. 484-499, https://www.jstor.org/stable/10.5621/sciefictstud.45.3.0484
  • Happer, C. and Philo, G., 2013, ‘The role of the media in the construction of public belief and social change’, Journal of Social and Political Psychology, 1(1), pp. 321-336.
  • Holleran, Sam, 2019, Putting the Brakes on Dystopia: Speculative Design, Solarpunk, and Visual Tools for Positing Positive, web.
  • Papadaki, Nikolaou & Assimakopoulos, 2022, ‘Circular Environmental Impact of Recycled Building Materials and Residential Renewable Energy’, Sustainability, 14, 4039. 10.3390/su14074039.
  • Ragheb, El-Shimy & Ragheb, 2016, ‘Green Architecture: A Concept of Sustainability’, Procedia – Social and Behavioral Sciences, Volume 216, Pages 778-787, ISSN 1877-0428, https://doi.org/10.1016/j.sbspro.2015.12.075.
  • Reina-Rozo, J.D., 2021, Art, Energy and Technology: the Solarpunk Movement, International Journal of Engineering, Social Justice, and Peace, 8(1), pp.47-60.
  • Stead, M., Macpherson-Pope, T. and Coulton, P., 2022, The Repair Shop 2049: Mending Things and Mobilising the Solarpunk Aesthetic, Branch (EIT Climate KIC, Mozilla Foundation, Climate Action Tech, and the Green Web Foundation), (4).
  • Williams, R., 2019, ‘This Shining Confluence of Magic and Technology’: solarpunk, energy imaginaries, and the infrastructures of solarity, Open Library of Humanities, 5(1).

Live-Tweeting Exercise 2: Electric Boogaloo

Part 2 of a thing I did.

Her

The 2013 film ‘Her’ presents a near-future scenario where humans form intimate relationships with AI entities. My live-tweeting during this screening was largely focused on the film’s depiction of AI and its implications for our future, a theme that resonates with Ray Kurzweil’s and Vernor Vinge’s discussions of the Singularity – even if LLMs are not reeeaaaally conscious and are kind of just very advanced word predictors.

This tweet was inspired by a scene that seemed to be near-future echo of our current reality. ‘Her’ suggests a future where AI becomes an integral part of our daily lives, a future that seems not too far off given the advancements in LLM technology like ChatGPT and Bard.

Working in the VR industry, this novum was particularly interesting to me. With Apple’s XR device around the corner, and the Meta Quest 3 hot on its heels, this kind of game will be commonplace soon. Even the AI character the film depicts is being developed as we speak, and was recently featured in NVIDIA’s AI keynote event.

While I received several likes and retweets during this screening, I only got a handful of comments. Perhaps I did not meme hard enough.

Arrival

‘Arrival’ is a film that explores the concept of communication on a profound level, challenging our conventional understanding of language (and possibly the way it affects how we experience time). I had already seen this film several years ago and adored it, but it was honestly so much better the second time around, having already experienced the twist and being able to absorb the film with it in mind. My live-tweeting during this screening revolved around the film’s unique approach to communication and its potential parallels with advanced AI.

Honestly this should have done better. Shame.

I was super thrilled to see a peer tweeting about one of my favourite authors. Vinge = GOAT.

I see things like hypertext and the memex as a new way of communicating, and so the parallel to Arrival’s alien language sprang to mind. I then began wondering how AI would fare, and saw the intuition-based approach Banks took in the film as directly opposing any potential technological approach. This concept aligns with the Sapir-Whorf Hypothesis, which suggests that the language one speaks influences how one thinks about reality. We’re not there yet, but I wonder how AI will think about reality…? If there is a first contact event, I think it will be AI, not biology, which breaks the language barrier.

Ready Player One

Referencing Tony Myers, I draw parallels to the burgeoning metaverse industry, in which entirely new realms of expression, sociality and economy are made possible.

The film ‘Ready Player One’ presents a dystopian future where people escape from their grim reality into a virtual world known as the OASIS. My live-tweeting during this screening revolved around the themes of cyberspace, identity, and economic disparity.

It kind of pissed me off how first-world-focused RPO was. Ok, yeah, the protagonist lived in a weird consumerism-dystopian wasteland, but the film could have engaged with digital currency and digital labour in a much more nuanced fashion. People all over the world are flocking to digital labour and cryptocurrencies, where they are able to avoid the pitfalls of the global reserve USD economy, which is geared against them by design.

Another point I made had to do with cyber avatars and digital expression. This tweet did quite well among my peers, who had a lot to say. We ended up discussing sexism in online games.

Everything Everywhere All at Once

‘Everything Everywhere All at Once’ presents a multiverse scenario where different dimensions are interconnected, much like nodes in a network. My live-tweeting during this screening focused on the film’s representation of interconnectedness and its parallels with cyberspace.

The film suggests a future where our understanding of reality is expanded through the interconnectedness of different dimensions, much like how cyberspace expands our access to information and each other.

Live-tweeting these screenings has allowed me to engage more readily with the themes of these films, and to consider the implications of their representations of the future. I love sci-fi because imaginary futures are the best way to think about and study the future. As we continue to navigate our relationship with technology and AI, these films serve as important cultural touchstones, prompting us to reflect on our own visions of the future and the role we want technology to play in it.

Seeking Singularity: Mechanical Game Design Experience

Ascension: AI Evolution is a cyberpunk resource management card game for 2-5 players. The game was pitched as part of a group game experience design project, in which I, as someone working professionally in game design and production, volunteered to design the game’s mechanics and rules. I focused on creating a system that not only provided an engaging gameplay loop, but also synthesised our chosen cyberpunk/AI theme with the mechanics.

Some Midjourney generations, which helped to prototype our visual identity early in the design process.

The initial spark for our game design was the theme itself – the evolution of AI sentience and its ultimate escape off the servers of megacorps and into the internet, triggering the Singularity. It was my task to reflect this theme through the game’s mechanics. In a previous blog post, I touched upon the importance of finding the right balance between Caillois’ concepts of paidia and ludus in game experience design. In this project, that principle guided my steps as I developed the mechanics of our game. The ultimate challenge in game design is finding a harmonious balance between these two concepts.

An early iteration of an Action Card, designed by a team member. I had messily added the Overclock effect and resource cost, utterly destroying the visual cohesion of their card design.

We initially modelled Ascension on resource management games such as Splendor, and even playtested the game using Splendor’s colourful gem tokens as stand-ins for our game’s CPU, RAM, Server and Bitcoin resources. Further playtesting found that simply playing cards got dull, and in a moment of synthesis I designed the “overclocking” mechanic, in which players may spend resources to upgrade the effects of their action cards, or even unlock unique effects. This not only made playing action cards more engaging, but added an additional layer to resource management strategy. Does a player spend their resources overclocking their cards to get ahead, or save their resources to upgrade their sentience level sooner?


The gameplay was structured – not at all on purpose; it’s just that all games seem to follow this – according to the three-act model described by Tidball (2011). The game starts with Act 1, setting the stage for the conflict among the players. Each player, embodying a unique AI, prepares for the race towards triggering the Singularity. This is achieved through the initial resource gathering phase, where players collect CPUs, RAM, Servers, and Bitcoin (our wildcard resource), as well as draw their first set of action cards. Each AI has different resource draws according to its tableau, adding a layer of strategic decision-making right at the onset of the game.

In Act 2, the struggle for victory commences and the core gameplay loop is introduced. Players use their resources to upgrade their AI’s sentience level, which in turn unlocks more powerful action cards. It’s during this act that players may use their action cards to set back other players, steal their resources, or protect themselves. Our game’s cyberpunk setting is reflected in the design of these cards: players can launch DDoS attacks, install malware, or build firewalls, drawing from familiar cyberpunk & AI tropes. These mechanics, as well as the “overclocking” mechanic mentioned above, contribute to thematic immersion while providing an engaging gameplay loop of gathering resources, upgrading sentience, and strategically managing action cards.

Finally, Act 3: the push for victory. This is where, in line with David Parlett’s (2011) classification system, the game’s ‘race’ aspect becomes central. Players are striving to evolve their AI to Sentience level 4, and then play 3 “Quantum Node” cards to trigger the Singularity. This not only requires careful resource management and strategic play, but also an awareness of the other players’ progress. The final act is a tense conclusion to the race, with the potential for last-minute reversals and strategic surprises.
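Expressed as code, the victory check is tiny; this hypothetical TypeScript sketch is purely illustrative, since Ascension is a physical card game:

```typescript
// The Singularity triggers at Sentience level 4 with three "Quantum Node"
// cards in play. Resource management feeds these two numbers all game long.

interface PlayerState {
  sentienceLevel: number; // 1..4, upgraded by spending resources
  cardsInPlay: string[];  // names of the player's played cards
}

function hasTriggeredSingularity(player: PlayerState): boolean {
  const quantumNodes = player.cardsInPlay.filter((card) => card === "Quantum Node").length;
  return player.sentienceLevel >= 4 && quantumNodes >= 3;
}

// One Quantum Node short: the race is still on.
console.log(
  hasTriggeredSingularity({
    sentienceLevel: 4,
    cardsInPlay: ["Quantum Node", "Firewall", "Quantum Node"],
  })
); // false
```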

By interweaving the mechanics with the cyberpunk/AI theme, I designed a game that is not just about rules and actions, but also a narrative journey. Players aren’t just collecting resources or playing cards – they’re evolving their AI, striving for transcendence, and contending with the strategies and attacks of their fellow players. It’s a race towards Singularity.

Below I’ve linked some of my peers’ work:

Marketing: https://paigesutton.home.blog/2023/05/13/ascension-industry-talks/
Visual Identity and Presentation: https://ethanbeard91.wordpress.com/2023/05/13/beyond-fun-games-ascension-reflection/
Video: https://mavroproductions.com/2023/05/11/ascension-the-group/

References

Caillois, Roger 1958, ‘Les jeux et les hommes’, published in English as ‘Man, Play and Games’, 1961.
Tidball, Jeff 2011, ‘Three-Act Structure Just Like God and Aristotle Intended’, The Kobold Guide to Board Game Design, edited by Mike Selinker, Open Design, Kirkland, pp. 11-19.

Analysing My Favourite Films

Over my past three years at university I have been sort of excited for BCM 325 and sort of not. Every time the class has run, my Twitter feed has been saturated with tweets about my favourite films, most of which I’d seen 5-10 times. Naturally I was both excited to study these academically and trepidatious about re-watching media I knew like the back of my hand (after all, I’d rather be exposed to something new). In the end I’ve found live-tweeting my analysis of these films an engaging and enriching experience, and I’m going to take you through it retrospectively, as a colossal sci-fi nerd.

2001: A Space Odyssey

To expand on this: I think a hybrid form of AI (LLM + another model) may very well reach HAL’s level of intelligence (though hopefully not his murderous intent).

It was interesting to see some of my peers’ engagements with science fiction thinking, something I have been doing for as long as I can remember. Darko Suvin’s concept of the novum was new to me as a word, but it captures the reason I love sci-fi in the first place: imagining a possible future through the mechanism of a new thing. In the example of 2001, the primary novum was AI sentience and free will, and I think we are quite lucky to be studying this in Autumn 2023, when for the first time in history this “possible future” is becoming a reality.

Going into the first screening, I assumed that visual tweets (accompanying screenshots, for example) would do very well in terms of generating engagement, and so I kept my print screen button handy. However, in the end it was the tweets exhibiting novel readings and extrapolations of the text which far outperformed my visually-based contributions, not just in likes/RTs but also in initiating exchanges.

In general I noticed the same of the tweets of my classmates, although the above thread was a notable combination of visual media and a unique and interesting take on the film.

A Selection of Tweets

https://twitter.com/dokidokiguy/status/1631113106935140352
https://twitter.com/dokidokiguy/status/1631117411230584832
https://twitter.com/dokidokiguy/status/1631120826891370497
https://twitter.com/dokidokiguy/status/1631131724326535168
https://twitter.com/dokidokiguy/status/1631144944906960896

Westworld

Westworld for me was a fun one – it was the only film on the list I hadn’t already seen! I’m a big fan of the series and have been meaning to check out the OG. I actually found the live-tweeting experience to be a really great way to enjoy a new film, although I am sure I missed a lot. While the above tweet did very well, in general I found that tweets generated by ChatGPT – of which I posted two or three per week – did not do as well as original tweets. They tended to be quite shallow and did not introduce novel concepts or present interesting interpretations. The LLM did very well in the above tweet, though, I think.

Westworld was a powerful lens through which to study cybernetics. Before this, I thought the word simply meant “robotic parts in an organic creature”, and so it was fascinating to understand there was a much larger conceptual framework for the concept, including feedback loops & system dynamics. In fact the robots in Westworld are not “cybernetic” in the shallow sense of the word at all – they are completely robotic. It is the system at large, a complicated intersection between technology and humanity, that is cybernetic. The screening got me thinking, too: when a peer commented about humans using the park to escape reality, it made me realise the parallels between the cold and antisocial behaviour of its clients and the behaviour of people in online games and forums, hiding behind their anonymity. In the same way the guests excuse their behaviour because “none of it is real”, these antisocial netizens (are we still saying netizen?) act out in their own simulated cybernetic experience.

Sometimes you have to go niche with it. I don’t think many in my audience had the context to appreciate this one, but that didn’t mean I wasn’t going to go for gold.

A Selection of Tweets

https://twitter.com/annas_media/status/1633646595999940608
https://twitter.com/dokidokiguy/status/1633658739596341248
https://twitter.com/dokidokiguy/status/1633646935327518722
https://twitter.com/dokidokiguy/status/1633649846073847808
https://twitter.com/disladycaity/status/1633654097495883776
https://twitter.com/dokidokiguy/status/1633662312614854656
https://twitter.com/dokidokiguy/status/1633668042097041416
https://twitter.com/jasmynconnell/status/1633662509571002369

Blade Runner

Something I think not enough people recognise is that while visually the film broke new ground, conceptually the book was even more ground-breaking in its day.

Blade Runner was my favourite film for years. It might still be? Honestly I’m not sure – it’s a bit moody for me, these days. It really is the perfect film/lens through which to study trajectory analysis and forecasting. It gets a lot right (I’d argue that environmental destruction itself is a novum), and it gets some things wrong.

As a seasoned sci-fi enthusiast, I was more than aware that much of cyberpunk’s visual identity (let’s be honest, 99% of it) comes from Blade Runner, so it was interesting to immediately see my peers draw parallels to other cyberpunk media.

A Selection of Tweets

https://twitter.com/dokidokiguy/status/1636182044677148672
https://twitter.com/dokidokiguy/status/1636183958596775937
https://twitter.com/dokidokiguy/status/1636185555766771713
https://twitter.com/dokidokiguy/status/1636188075218075648
https://twitter.com/dokidokiguy/status/1636190380025868290
https://twitter.com/dokidokiguy/status/1636191326000775168
https://twitter.com/dokidokiguy/status/1636209805701578752

Ghost in the Shell

My favourite tweet of the screening.

GitS is literally peak fiction. Moving on…

c y b e r c r i m e

TBH I missed most of this film as I watched it in Japanese with English subtitles (and it’s been a good 10 years since I last saw it). My Japanese is pretty good, but it’s not tweet-in-English-while-absorbing-Japanese-dialogue good. I noticed far fewer tweets from my peers this week. I wonder why? It’s a shame, because I think this film is in many ways the most cerebral of all the media in this post.

A Selection of Tweets

https://twitter.com/dokidokiguy/status/1638720311473602560
https://twitter.com/dokidokiguy/status/1638721751642406913
https://twitter.com/dokidokiguy/status/1638729034514448386
https://twitter.com/ayyeblizz96/status/1638725670451949568
https://twitter.com/dokidokiguy/status/1638727148650496000
https://twitter.com/dokidokiguy/status/1638727662033342466

The Matrix

Oh..? What’s this?

Another classic. I had the DVD as a kid and I think this was rewatch no. 20. Again, the most interesting part of this screening was seeing those of my peers who had never seen the film react. Adding a layer of analysis to that through the lens of cyberculture and cyberpunk just sweetened the delicious Matrix virgin tweets.

A Selection of Tweets

https://twitter.com/dokidokiguy/status/1641265469033447424
https://twitter.com/siennahillcoat/status/1641256496469262337
https://twitter.com/dokidokiguy/status/1641268014946594816
https://twitter.com/dokidokiguy/status/1641277784533397506
https://twitter.com/dokidokiguy/status/1641285311862022145