Work from the wretched year of twenty twenty, the beginning of the downfall of all civilization on planet Earth. Artificial intelligence and neural networks were once again recruited as generative tools to create interesting and evocative shapes. Then even more neural networks competed against each other to increase depth and detail in the 2D textures, which were then mapped onto roughly modelled 3D shapes. We bow down to our new machine overlords – as we always have and always will.
We tried out a few variants of machine-learning art headshots generated with StyleGAN but were quickly bored once the initial novelty of being able to generate new, perfect humanoids at will wore off. These portraits represent the effort of trying to “break” the system and finding a novel aesthetic at its margins. The generated images were then mapped onto a rough 3D geometry, using the textures as input for specularity, metalness and roughness. This gives them a depth and physicality that the low-resolution, flat GAN art images lack.
Below you can find some placeholder text, which consists of a rewritten Wikipedia article (with the help of some machine learning as a matter of course).
The generative network generates candidates while the discriminative network evaluates them. The contest operates in terms of data distributions. Typically, the generative network learns to map from a latent space to a data distribution of interest, while the discriminative network distinguishes candidates produced by the generator from the true data distribution. The generative network’s training objective is to increase the error rate of the discriminative network (i.e., to “fool” the discriminator network by producing novel candidates that the discriminator thinks are not synthesized, but part of the true data distribution).
A known dataset serves as the initial training data for the discriminator. Training it involves presenting it with samples from the training dataset until it achieves acceptable accuracy. The generator trains based on whether it succeeds in fooling the discriminator. Typically the generator is seeded with randomized input that is sampled from a predefined latent space (for example, a multivariate normal distribution). Thereafter, candidates synthesized by the generator are evaluated by the discriminator. Backpropagation is applied in both networks, so that the generator produces better images while the discriminator becomes increasingly skilled at flagging synthetic images. The generator is typically a deconvolutional neural network, and the discriminator a convolutional neural network.
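The adversarial loop described above can be sketched in miniature. The following is an illustrative toy example, not the networks used for this work: generator and discriminator are each reduced to a two-parameter model on one-dimensional data, with the gradients of the usual binary cross-entropy losses written out by hand, so the whole training dynamic is visible end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data distribution of interest: a 1-D Gaussian centred at 4.
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

# Generator maps latent z ~ N(0, 1) to a * z + b.
a, b = 1.0, 0.0
# Discriminator is a logistic classifier D(x) = sigmoid(w * x + c).
w, c = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, batch = 0.05, 64
for step in range(2000):
    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    x_real = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Hand-derived gradients of the cross-entropy loss, batch-averaged.
    w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # --- Generator update: "fool" D via the non-saturating loss -log D(G(z)) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad_out = -(1 - d_fake) * w      # dLoss/dx_fake
    a -= lr * np.mean(grad_out * z)
    b -= lr * np.mean(grad_out)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean ≈ {samples.mean():.2f}, target mean = 4.0")
```

In practice, of course, both players are deep (de)convolutional networks and the gradients come from automatic differentiation rather than being written out by hand; the structure of the loop is the same.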
GANs often suffer from “mode collapse,” in which they fail to generalize properly and miss entire modes of the input data. For example, a GAN trained on the MNIST dataset, which contains many samples of each digit, may nevertheless omit a subset of the digits from its output. Some researchers see the root problem as a weak discriminative network that fails to notice the pattern of omission, while others assign blame to a bad choice of objective function. Many solutions have been proposed.
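What mode collapse looks like in a model’s output can be made concrete with a small, hypothetical check (not part of the work described here): run generated MNIST samples through a digit classifier and see which classes never appear in the resulting labels.

```python
from collections import Counter

def missing_modes(generated_labels, expected_modes=range(10)):
    """Return the modes (e.g. MNIST digit classes) that are absent
    from a batch of labels assigned to generated samples."""
    counts = Counter(generated_labels)
    return sorted(m for m in expected_modes if counts[m] == 0)

# A collapsed generator might emit only a handful of digit classes:
labels = [1, 1, 7, 7, 7, 1, 9, 7, 1, 9]
print(missing_modes(labels))  # → [0, 2, 3, 4, 5, 6, 8]
```

A healthy generator would cover all ten classes in roughly the proportions of the training data; a large, persistent set of missing modes is the symptom the paragraph above describes.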
Many developments in the history of everything have started out as a mimesis of one kind or another. The arm became the lever while the horse became the steam engine and the mind became the computing machine. At some moment then typically comes a sort of inflection point at which the mimic surpasses its model: suddenly, there were hundreds of horses in the space of one. Often, this leads to other effects, ones much less obvious, unintended and almost impossible to foresee. Those horses, history tells us, facilitated a fundamental change in the urban landscape of North America; a change that came with a universe of social, ecological and economic transformations, not all of them for the better.
Cognitive technologies are likely to follow a similar pattern, although their mode of mimicry is much less linear. Consequently, inflection points may differ: instead of being an analog of our own thinking apparatus, they started off as apparatuses of logic. Running mechanically at first, such as the Antikythera mechanism, Charles Babbage’s difference engine or Gottfried Wilhelm Leibniz’s stepped reckoner, those machines could perform as many simple calculations as mechanical resistance (the arm) would allow for. The rise of electrical power and the vast paradigm shift it initiated then changed the mode of resistance into one of scale and integration: logical formations, materialized into ever-shrinking circuits, now powered by an invisible force at the speed of light. A sense of inflection followed: what if our souls fundamentally work the same way? But it turned out to be a mirage: our brains are not digital computers, any more than the steam engine is a horse.
After more decades of trying to construct an apparatus that can think, we may finally be witnessing the fruits of those efforts: machines that know. That is to say, not only machines that can measure and look up information, but ones that seem to have a qualitative understanding of the world. A neural network trained on faces not only knows what a human face looks like; it has a sense of what a face is. Although the algorithms that produce such para-neuronal formations are relatively simple, we do not fully understand how they work. A variety of research labs have also successfully trained such nets on functional magnetic resonance imaging (fMRI) scans of living brains, enabling them to effectively extract images, concepts and thoughts from a person’s mind. This is where the inflection likely happens, and as a double one: a technology whose workings are not well understood, qualitatively analyzing an equally unclear natural formation with a degree of success.
Andreas N. Fischer’s work Computer Visions II seems to be waiting just beyond this cusp, where two kinds of knowing beings meet in a psychotherapeutic session of sorts, consistent with the ideas that Joseph Weizenbaum first raised half a century ago with his software ELIZA. Yet, in Fischer’s interpretation, this relationship presents itself as a peculiar clash of surreal images and a voice tending to the very human. It is perhaps no coincidence then, that some of the images, particularly the carcass of an animal, are reminiscent of Werner Herzog’s 1971 film Fata Morgana, which depicts the Sahara and Sahel deserts to the sound of Lotte Eisner’s voice reciting the Mayan creation myth.
Like Herzog, Fischer created the images first and the voice-over followed, almost in an effort to decode them and with them offer an experimental analysis of a future to come. Herzog’s film, after all, was initially intended as a science fiction narrative and only later turned into an exegesis of the origin of the world. In both films, the images serve as surreal divining rods to explore the nature of dreams and visions. “What kind of life is it?” asks the therapist. We do not hear the answer, but perhaps we have not heard the question right either: in a time of talk, simultaneously, of both the Anthropocene and the possibility of a posthuman condition, should the question not rather be what the dreams are, at their base of bases? And would it not be only fitting if—after passing the epochal inflection point of a machine that truly knows—its first words would be: “Hi there, do you want me to play back some of your dreams for you?”
Sascha Pohflepp, September 2017