Improving human-to-human relationships should be a KPI for every technology product.

My AI Code of Conduct

When I work with AI tools, professionally and personally, these are my ethical guidelines.

1. Promote a sustainable ecosystem

Use of AI tools needs to create a value ecosystem in which all participants can thrive. This includes advocating for artists in commercial contracts and ensuring there are viable paths to a decent living, especially for creatives who supply training material and license output from the models.

2. Peer review and approval of AI tools before use in commercial pipelines

Representatives of each department will review and approve tools before they are put into general use.

3. Prepare everyone

Train employees on AI tools, empowering them to be more efficient and valuable in the future economy. At least in the short term, we expect jobs to shift to people using AI tools rather than to AI systems without human oversight, so trained employees keep skills that remain in demand.

4. Leaders are accountable

Ensure that the stakeholders responsible for decisions are clearly identified and held accountable for them.

5. Creative and Artistic standards prevail

Creativity is paramount: our value lies in thinking of novel solutions to complex problems. We will continue to rely on human aesthetic choices and never delegate them to automated tools.

6. Be Transparent

Encourage transparency both internally and externally, even when it leads to uncomfortable discussions.

7. Prioritise human-to-human relationships in our product designs

When our products are immersive experiences, we evaluate their impact on human-to-human relationships and choose to improve those relationships whenever possible, rather than optimising for efficiency or engagement. We want our products that leverage generative AI to foster empathy and connection as a key design component, instead of building systems where humans are encouraged to engage only with the product. We have all seen the dystopian sci-fi films, and we believe this will be key to fighting that tide.

AI Music Video Mega Rock Band (NDA)

A 7-minute-plus music video created from custom-trained Stable Diffusion LoRA models. Done in four (very long) days, led by Shynola and the amazing Nexus Studios department. Due to legal complexities around IP and generative AI, it will sadly likely never see the light of day.

Barbican / AI: More than Human

Interactive installations. Our series of seven installations tells the story of how integrated AI already is in our society, and of our relationship to it. The series makes use of GANs, YOLO real-time object detection, and computer vision technology, features work by artist Memo Akten, and culminates in ‘Meet the AI’, a chance to personally reflect on the role we play in AI’s development.

The exhibition is currently touring Europe.

This Top Does Not Exist / AI

Trained StyleGAN2 model. A personal project exploring the formulaic yet infinite space of Tom of Finland’s iconic illustrations. I scanned over 1,200 Tom of Finland illustrations by hand. Below are the art experiments with faces; head to Kink for the Giger-esque fever dreams.

Big Bang AR

Google + CERN + Nexus Studios. The first time we trained a computer vision model on hand-tracking, so you can kick off the universe in the palm of your hand. Small potatoes now, but in 2018 this was a big deal. A mixed-reality journey through the birth and evolution of the universe, as told by Tilda Swinton. Open your hand in front of your phone’s camera to see the universe form; move between ‘reality’ and ‘space’ modes; create the very first particles and atoms; make a star explode; create a supernova; and explore the nebula.

