On Earth as It Is in Heaven: The Power of AI Art in Reifying Our Visions

The presenter recounts discovering AI image generation in September 2022, when typing “cat” into Stable Diffusion and watching a neural network create an entirely new image felt like “pure magic.” Acknowledging both the creative potential and the risks of deepfakes and misinformation, he focuses on AI art’s positive applications—demonstrating through a commissioned Hindu icon of Krishna and Radha how tools like ControlNet, Regional Prompter, inpainting, and upscaling transform rough generations into photorealistic works. He argues that just as photography requires skill beyond pressing a shutter button, AI art demands mastery of complex processes, making it a legitimate new medium for “reifying the mystical” and manifesting visions “on earth as it is in heaven.”

Bryce Haymond

Bryce Haymond is an independent scholar and writer exploring the intersection of Mormonism, mysticism, and the potential for human transcendence. His interest in these areas was sparked five years prior to his MTA conference presentation, when he experienced a profound and transformative feeling he describes as an overwhelming creative muse. This experience led him on a journey to understand higher states of consciousness and mystical experiences. Haymond’s research has led him to engage with the work of scholars such as Mark Koltko-Rivera, Hugh Nibley, Margaret Barker, and William Hamblin, focusing on diverse forms of mysticism. He believes there is an ultimate shared divine reality behind mystical experiences. He has distilled his research into a forthcoming book and shares extended versions of his ideas on his website, thymindoman.com. In his MTA conference talk, Haymond described his unique views on the potential of humanity and overcoming assumed limitations.

Transcript

So, this was my first AI image. This blew my mind, okay? It was September 2022. I had just bought an NVIDIA graphics card for my computer and installed Stable Diffusion, a new open-source text-to-image generative AI platform. In the prompt, I simply typed the word cat, set a couple of other parameters, and clicked the generate button. Seconds later, a cat appeared on the screen. It was like pure magic, using a spell of words to conjure new realities into being.

Now, clearly, it wasn’t perfect, but my computer had seemingly understood a word I wrote and, by performing millions of floating-point calculations, transformed it into a visual image of that very thing. It wasn’t just searching Google for a preexisting image of a cat or retrieving it from a database somewhere. No, it was creating this image on the fly, from scratch, based on its training on millions of images of cats. A deep learning neural network had essentially learned what the word cat represents, and, using a latent diffusion model of that training data, had generated a brand new image of a cat on my computer screen, never seen before, never existing before.
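The iterative refinement behind latent diffusion can be sketched in miniature. This is a toy illustration only: a real model like Stable Diffusion predicts the noise with a trained U-Net conditioned on the prompt embedding, while here the noise predictor is faked so the loop's shape is visible.

```python
# Toy sketch of the denoising loop at the heart of latent diffusion.
# Illustrative only: the "perfect" noise predictor below stands in for
# the trained U-Net a real model would use.
import random

def toy_denoise(target, steps=50, rate=0.1, seed=0):
    """Start from pure noise and iteratively step toward 'target',
    the way repeated noise-removal steps refine a latent."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in target]   # initial latent: pure noise
    for _ in range(steps):
        # A real model estimates the noise from (latent, timestep, prompt);
        # we fake a perfect predictor: noise = current - target.
        predicted_noise = [xi - ti for xi, ti in zip(x, target)]
        x = [xi - rate * n for xi, n in zip(x, predicted_noise)]
    return x

target = [0.5, -0.25, 1.0, 0.0]                 # stand-in for a "cat" latent
result = toy_denoise(target)
error = max(abs(r - t) for r, t in zip(result, target))
```

Each pass removes a fraction of the estimated noise, so the latent converges on a coherent image rather than being looked up anywhere.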

A new art medium had been born. It was early days, of course, and a four-eyed cat was more Picasso than Michelangelo. But I could see even then that this was going to be a very powerful new tool for creative expression, a new artistic medium, giving us powers that we had only dreamed of before, perhaps only imagined in our mind’s eye.

Fast forward just six months to March 2023, and we saw this image begin making the rounds on social media. This was one of the first viral deepfakes made with this new technology, probably in Midjourney, which had become so powerful in its realism that it could fool people into thinking it was reality. And it did fool many people. The writing was on the Facebook wall.

As with any new technology, this could be used for good, as a new artistic tool for creativity and the benefit of humanity. But it could also be used for ill: for destruction, harm, misinformation, and deception of all kinds. Today, I’m just going to briefly discuss the positive side and how I’ve used this new technology to facilitate my work as a designer and artist. But there are clearly many downsides that will also need to be addressed, and that are being addressed, probably by some here today.

The visual arts have always been about creative expression, using our creative skills and talents to bring to light our hopes and dreams, giving form to our deepest visions, communicating the otherwise ineffable, manifesting beauty, and expressing emotions deeper than words. Art, like religion, is a means of reifying the mystical: the abstract, the vague, the imaginative, the visionary, making concrete new worlds of being from our deepest inspirations. Indeed, it is a way to incarnate into physical reality what may have only been loosely glimpsed through the veil of the mind or imagination, piercing that veil to make it on earth as it is in heaven.

Some have argued that this new generative AI technology is not art at all, as it requires little to no skill to write a prompt and click a button. But I suggest that this is like saying that photography is not art, because anyone can tap the shutter button on their phone’s camera app.

It has also been said that this is not art because it has been stolen from artists of the past, and even from some living today, since some models have been trained on their works. While I won’t go into that thorny subject today, and I do think we need to protect living artists, learning from the past to create a new present and a better future is what creativity is all about, and is what artists have been doing for ages.

We should be thrilled that we can see new art pieces in the style of Van Gogh, or even see what Van Gogh himself may have looked like, based on his own self-portraits. This kind of art isn’t generated by just writing a prompt and clicking a button. There is skill that goes into making good art using AI as another tool in the artist’s toolbox. I hope to give you a peek into my process so you can see how I use this new technology and the amazing new possibilities it offers us.

One of the first commissions I did with this new art tool was reimagining a modern photographic version of an icon of the Hindu god Krishna and his wife, the goddess Radha, who are regarded as the masculine and feminine realities of God in some Hindu traditions. We may think of them like Heavenly Father and Heavenly Mother in the Mormon tradition.

In recent times, Krishna has been depicted in art with a light blue skin tone, as on the left there. But the classical iconography more anciently portrayed Krishna with a very dark complexion, as on the right. Indeed, the Sanskrit origins of his name, Krishna, signify black, dark, or dark blue.

The patron who commissioned the art wanted a spiritual icon like this, but with Krishna’s more ancient skin tone, to visually juxtapose with Radha’s luminous golden skin. This would not only showcase the duality of their divine union, but also symbolize the complementary forces of light and dark, of the yin and yang of existence, epitomizing the perfect union of opposites, and demonstrating that love transcends all boundaries.

The first thing I did in recreating this icon with photorealism was configuring a process known as ControlNet OpenPose. This is a subsystem, an extension to Stable Diffusion, that allows you to condition the diffusion pipeline with specific poses, almost like puppetry: articulating the limbs of a skeleton, the hands, the face. This helps guide the diffusion process to generate that pose or stance.
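The pose conditioning described above starts from a control image: body keypoints rendered as a stick figure, which the ControlNet then uses to steer generation. A minimal sketch of building such a raster follows; the keypoint names and positions are made up for illustration and are not the real OpenPose keypoint format.

```python
# Sketch of building a pose control image: keypoints are rasterized as a
# stick figure, and that image conditions the diffusion pass.
# Keypoints here are illustrative, not the real OpenPose format.

def draw_line(canvas, p0, p1, steps=100):
    """Rasterize a line segment between two (x, y) keypoints."""
    (x0, y0), (x1, y1) = p0, p1
    for i in range(steps + 1):
        t = i / steps
        x = round(x0 + t * (x1 - x0))
        y = round(y0 + t * (y1 - y0))
        canvas[y][x] = 1

size = 64
canvas = [[0] * size for _ in range(size)]

# Made-up keypoints for a raised-arms pose: puppet-like articulation.
neck, hip = (32, 16), (32, 40)
left_hand, right_hand = (12, 8), (52, 8)
for a, b in [(neck, hip), (neck, left_hand), (neck, right_hand)]:
    draw_line(canvas, a, b)

lit_pixels = sum(sum(row) for row in canvas)
```

Moving the keypoints redraws the skeleton, which is why the speaker likens the process to articulating a puppet.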

Once that was set up, I configured another subsystem called Regional Prompter. This allows you to specify prompts for different areas of an image. With an icon as complex as this one, a single prompt would never capture that complexity. Regional Prompter lets you establish boundaries that you can then prompt independently, so the diffusion process will focus on those specific prompts for different areas of a single image. Here, for example, I could specify Krishna’s skin tone independently from Radha’s.
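The core idea of regional prompting can be sketched with masks: each prompt conditions only its own region, and the per-region results are combined. The numbers below are toy stand-ins for prompt-conditioned values, not real latents.

```python
# Sketch of the idea behind Regional Prompter: each prompt applies only
# inside its own masked region, and the regions are composited together.
# Values are toy stand-ins for prompt-conditioned latents.

width, height = 8, 4
krishna_value, radha_value = -1.0, 1.0   # e.g. "dark skin" vs. "golden skin"

# Region mask: left half belongs to the first prompt, right half to the second.
mask_left = [[1 if x < width // 2 else 0 for x in range(width)]
             for _ in range(height)]

combined = [
    [krishna_value if mask_left[y][x] else radha_value for x in range(width)]
    for y in range(height)
]
```

Because each region answers only to its own prompt, one figure's skin tone no longer bleeds into the other's, which is exactly the problem a single global prompt cannot solve.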

The first generations after this were low-resolution images like this. Gradually, the image was taking shape, but it was still very low resolution and had many anomalies, such as her foot looking a bit like a hand. AI has struggled particularly with hands and feet; it’s a lot better today.

A significant next stage was called inpainting. This means zooming in on all those little details, masking each one, and prompting the system to generate new variations to fix or change them. There are many methods to help control and guide the output at this stage, too. At the same time, I did a lot of manual editing work in Photoshop: painting, erasing, moving, shading, and so on.
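The compositing step at the heart of inpainting can be sketched simply: only the masked pixels take values from the newly generated patch, and everything outside the mask is preserved untouched. This is a toy illustration with single-channel pixel values.

```python
# Sketch of the core inpainting compositing step: masked pixels are
# replaced by newly generated ones; unmasked pixels are kept as-is.

def inpaint_blend(original, generated, mask):
    """mask[y][x] == 1 means 'regenerate this pixel'."""
    return [
        [generated[y][x] if mask[y][x] else original[y][x]
         for x in range(len(original[0]))]
        for y in range(len(original))
    ]

original  = [[10, 10, 10], [10, 10, 10]]
generated = [[99, 99, 99], [99, 99, 99]]
mask      = [[0, 1, 0], [0, 1, 0]]   # e.g. the area around a misshapen foot
fixed = inpaint_blend(original, generated, mask)
```

This is why inpainting can repair a foot that looks like a hand without disturbing the rest of an otherwise-good generation.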

The last major part of the process is upscaling the art from low resolution to high. In this case, I upscaled it to over thirty-three megapixels. This is an image-to-image type process, except that you can use a different ControlNet model, called Tile, to add detail. And this is quite different from simply enlarging an image in Photoshop: here the AI will actually add contextual details to the image as it increases in resolution. A leaf, for example, will actually look like a detailed leaf when it is blown up larger, rather than merely a big blob.
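The tile-based structure of this upscaling pass can be sketched as follows. The image is processed one tile at a time so the model can work at high resolution without exhausting memory; here a plain nearest-neighbor enlargement stands in for the diffusion pass that would actually synthesize new detail in each tile.

```python
# Sketch of tile-based upscaling: process the image tile by tile and
# reassemble. The per-tile "model" is nearest-neighbor enlargement, a
# stand-in for the detail-adding diffusion pass (ControlNet Tile).

def upscale_tile(tile, factor=2):
    """Enlarge one tile by repeating pixels (placeholder for the model)."""
    out = []
    for row in tile:
        big_row = [v for v in row for _ in range(factor)]
        out.extend([big_row[:] for _ in range(factor)])
    return out

def tiled_upscale(image, tile_size=2, factor=2):
    h, w = len(image), len(image[0])
    result = [[0] * (w * factor) for _ in range(h * factor)]
    for ty in range(0, h, tile_size):
        for tx in range(0, w, tile_size):
            tile = [row[tx:tx + tile_size] for row in image[ty:ty + tile_size]]
            up = upscale_tile(tile, factor)
            for dy, row in enumerate(up):
                for dx, v in enumerate(row):
                    result[ty * factor + dy][tx * factor + dx] = v
    return result

image = [[1, 2], [3, 4]]
big = tiled_upscale(image, tile_size=1, factor=2)
```

In the real pipeline, each tile would be re-diffused with the Tile ControlNet so that enlarged regions gain plausible detail, rather than just bigger pixels.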

So after all this work, which took several days, this was the final result. And this is a bit of a closer look.

So, this is how we can use AI to make new art. Depictions like this that were nearly impossible before have become possible, making our dreams and visions a reality on earth as it is in heaven.

This is another recent commission I made, which went viral. It helps convey the message that inasmuch as we do it unto the least of these, we do it unto Christ.

Thank you.