
Wk 16 // jigsaws, deep listening, and Swiss electronica (misc infodump)

Updated: Jun 5, 2023

A quick note session on:

  • the work of artist Zac Langdon-Pole and his exploration of the production and history of 'landscape' images

  • Pauline Oliveros' experimental music making practice and the concept of "deep listening"

  • the history of the main digital programmes I use in my practice: Blender and RealityCapture




The Jigsaws of Zac Langdon-Pole (The Dog God Cycle, 2022)



Notes from RNZ interview:

  • The images we are seeing from the James Webb Telescope are "sublime and stunning" but are composed of data points. The images we see are interpretations of raw data - often infrared or other wavelengths that would look quite alien and unreadable to us. NASA states that it prepares images for public view using what is called the "Hubble palette", a colour scheme that references the sublime landscape paintings of artists like Albert Bierstadt and John Constable.

  • Langdon-Pole delves into these narratives and processes, bringing connections together between the 19th century landscape painting tradition and celestial images via jigsaw puzzle assemblages. In doing so, he questions what has brought these images into existence, and the coding that has gone into creating them.

  • "There is a great science involved in what NASA has done to produce these images... Ultimately, to get them what they say press ready, to make them have maximum impact in culture, they have attuned this palette to one that we already know...within a history of sublime image making and paintings"

  • there's a history beneath these images of space, and in turn in art, in landscapes, in image making in general

  • "These things call back to what we already know...beyond what we know and discovering new depths of the universe, but then tracing that back to what is already known within culture and history"

  • Jigsaws - giant collages; relate to a history of translation (i.e. artists working through/via black-and-white reproductions of works from grand masters); a medium of translation. Also fragile in nature - a feeling of them being able to come apart, pieced together; a sense of connection but also instability

  • The Dog God Cycle (2022) - four large (4 x 3m) puzzles that combine images from the Hubble and James Webb telescopes with 19th century landscape paintings; a 'ghost template' is used to combine them, "a hidden figure within these images" - a Gestalt image (a high-contrast black-and-white image of a dog)

  • "Work begets work"

  • On meteorite works - "knowledge that this material is older than time itself"; working with magnets to collect dust from carvings in his studio, but came to the realisation that that itself could become a work; magnets as reflective of the solar system and forces at play in the universe, just on a very small scale. A back and forth between method and material...

  • Scales - geological, human, celestial, etc... Combining objects in "layers of history and time" that are still with us in the present; "what art can do beautifully is fold these things together and make us realise how we're constituted in the present... an ongoing part of my work is looking at how these assumptions of history as something distant or fictional or remote to us are actually very much with us right now"

  • "two words that have been rolling around my head in relation to this show are 'tender scrutiny'... taking care to look at these objects and images from history but then also scrutinising them and allowing a quality of attention for them"
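As an aside on the process those notes describe: "press-ready" space images are typically false-colour composites, where separate grayscale exposures are assigned to the red, green and blue channels. A toy sketch of the idea in Python - the filter-to-channel assignment follows the commonly cited SII/Hα/OIII "Hubble palette", and the pixel data here is made up, not real telescope output:

```python
import numpy as np

def hubble_palette(sii, h_alpha, oiii):
    """Combine three grayscale narrowband exposures into one RGB image.

    Mapping follows the so-called Hubble palette:
    SII -> red, H-alpha -> green, OIII -> blue.
    Each input is a 2D float array of intensities.
    """
    rgb = np.stack([sii, h_alpha, oiii], axis=-1)
    # Normalise so the brightest pixel maps to full intensity
    peak = rgb.max()
    return rgb / peak if peak > 0 else rgb

# Made-up 2x2 "exposures"
sii = np.array([[0.2, 0.4], [0.1, 0.8]])
h_alpha = np.array([[0.5, 0.3], [0.6, 0.2]])
oiii = np.array([[0.1, 0.1], [0.3, 0.4]])

image = hubble_palette(sii, h_alpha, oiii)  # RGB array of shape (2, 2, 3)
```

The aesthetic choices Langdon-Pole points to live in exactly this step: which filter is sent to which colour channel, and how the result is stretched for "maximum impact".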


 

Pauline Oliveros - Deep Listening


Pauline Oliveros was a composer, performer and humanitarian. Her concept of 'deep listening' drew on a childhood fascination with sounds. She describes Deep Listening as "listening in every possible way, to everything it’s possible to hear, no matter what you’re doing." This listening includes "the sounds of daily life, of nature, or one’s own thoughts, as well as musical sounds... Deep Listening represents a heightened state of awareness and connects to all there is."


Deep Listening involves going below the surface of what is heard, expanding to the whole field of sound while finding focus. This is the way to connect with the acoustic environment, all that inhabits it, and all that there is.

Oliveros went on to publish a manifesto, Quantum Listening, which proposed that deep listening was a vehicle for radical social transformation - in which "compassion and peace form the basis of our actions in the world."

“Quantum listening is listening to more than one reality simultaneously. Listening for the least differences possible to perceive - perception at the edge of the new. Jumping like an atom out of orbit to a new orbit - creating a new orbit - as an atom occupies both spaces at once one listens in both places at once. Mothers do this. One focuses on a point and changes that point by listening. Quantum Listening is listening in as many ways as possible simultaneously - changing and being changed by the listening.”


What is the difference between hearing and listening? Does sound have consciousness? Can you imagine listening beyond the edge of your own imagination?

I see and hear life as a grand improvisation - I stay open to the world of possibilities for interplay in the quantum field with self and others - community - society - the world - the universe and beyond.


Deep listening resonates with me (no pun intended) in relation to both my studio practice currently and the literature I've been engaging with. It makes me think back to The Book of Stone and the unheard, unseen 'voices' of landscapes; to Lucy Mackintosh's exploration of Tāmaki Makaurau's deep social, political, cultural histories around space and geology; and to the questions I've been asking around the interconnection of volcanic material and maunga. Practically, it also makes me think of the work of Jez Riley French, and the recent field recording experiments I've begun to make in relation to the geologies I'm investigating - like the smartphones and computer programs I use to make my digital works, the sensitive contact microphones offer an entry point into experiencing and engaging with the environment on a level that goes beyond what is there in front of us...


 


A brief history of Blender


Blender was created by Ton Roosendaal, a Dutch art director and self-taught software developer who founded the 3D animation studio NeoGeo in 1989. Blender's name refers to a song of the same title by the Swiss electronic band Yello - I couldn't find any information about this connection, so I have to assume that it was either a favourite of Roosendaal's or just a popular tune at the time!


(Opening verse below:)

My name is Random Tox
I have the great honor and pleasure
To present to you
Turnex, the son of Durex
The blender for the next millennium
This is a revolution for your kitchen
Smashed potatoes, sliced tomatoes
Apple juice, blueberry, raspberry
Cherries and peaches in a fraction of a second
Turnex, the son of Durex
The only blender which can be turned
Into the most powerful vacuum cleaner

Roosendaal's first files were written on 2 January 1994, which is still considered Blender’s birthday today. Originally, the programme was intended only for in-house use at NeoGeo, created to address the question: when a difficult client requires multiple changes to a project, how do you implement those changes painlessly? I think this question is behind much of Blender's ongoing growth and expansion over the years... The programme's first iteration launched in January 1995 and, despite a lack of popularity around 3D animation at the time, Roosendaal describes having fallen in love with its “magical ability to create a whole world in a computer” - once you know how, you can create almost anything within it.


Following the closing of NeoGeo studio, Roosendaal and partner Frank van Beek went on to found various companies that promoted and grew the program, before founding the Blender Foundation - a non-profit organisation with the aim of making Blender open-source and free to use. Roosendaal's hope was to allow everyone the chance to use the programme for their own portfolios, and on October 13th 2002 this goal was achieved - the programme and all of its source code were made free to the public for any use and purpose whatsoever. To this day, one of Blender's biggest strengths is the community behind its development. Not only did this community initially drive its development and popularisation, it continues to drive its expansion and engagement today. With capabilities for modelling, sculpting, animation, character rigging, video editing and many other things, as well as the integration of Python scripting into its system, the community that uses Blender is vast and varied.


As a way to "stress test" the programme, from 2005 the Blender Foundation began challenging the community to make animated 3D short films - images from each competition's winning film are frequently used as the promotional material for the associated version of the program. Each challenge places demands on Blender's 3D creation capabilities, which in turn leads to further upgrades and development. The first film to win was the surreal adventure Elephants Dream (2006). Bassam Kurdali, the director, explained the plot of the film:

"The story is very simple... It is about how people create ideas/stories/fictions/social realities and communicate them or impose them on others. Thus [the main character] Proog has created (in his head) the concept of a special place/machine, that he tries to "show" to [the other main character] Emo. When Emo doesn't accept his story, Proog becomes desperate and hits him. It's a parable of human relationships really—You can substitute many ideas (money, religion, social institutions, property) instead of Proog's machine—the story doesn't say that creating ideas is bad, just hints that it is better to share ideas than force them on others. There are lots of little clues/hints about this in the movie—many little things have a meaning—but we're not very "tight" with it, because we are hoping people will have their own ideas about the story, and make a new version of the movie. In this way (and others) we tie the story of the movie with the "open movie" idea."

In July 2019, Blender broke into the 3D mainstream with its 2.8 version - the interface was redesigned to be more accessible to users with no prior industry knowledge or training, while the programme was expanded to allow for 2D animation abilities that rivalled other programmes in commercial use. Although industry recognition for Blender had grown over the decades, 2.8 marked the moment when it gained momentum globally. Today, Blender has just reached its 3.5 iteration, with several more versions currently in the works; and with each revision, its functionality and capacity to build digital worlds and environments grows more expansive.


For me, much like for Roosendaal, Blender offers a way to 'world-build'. By allowing me to import objects and visualised data systems (for example, point clouds) into the programme, it gives me the ability both to modify them directly (in terms of material, size, lighting, etc.) and to create camera views within them that would not be possible through any other process. While I don't do much editing of the objects I work with, having this freedom and space to work digitally allows me to think about the work both three-dimensionally and as a 2D image, and I am able to digitally walk through the scenes I create in order to experience them before generating final pieces.
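For what it's worth, the 'camera views' I describe reduce, mathematically, to a look-at transform: given a camera position and a target, build the rotation that points the camera at the target. A minimal numpy sketch of the idea - this is an illustration of the underlying geometry, not Blender's internal code (in Blender itself this is done through its Python API or a Track To constraint):

```python
import numpy as np

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """Return a 3x3 rotation whose local -Z axis points from eye towards target.

    Follows Blender's convention of a camera looking down its local -Z,
    with world +Z as 'up'.
    """
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    # Columns are the camera's local X, Y, Z axes expressed in world space
    return np.column_stack([right, true_up, -forward])

# Camera 5 units in front of the origin on -Y, looking at the origin
R = look_at(eye=(0.0, -5.0, 0.0), target=(0.0, 0.0, 0.0))
```

Placing a camera anywhere in a scanned scene and aiming it at a rock, a cavity, or a whole landscape is just repeated applications of this one transform.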



An even briefer history of RealityCapture


RealityCapture is a photogrammetry software used for creating 3D models out of unordered photographs (terrestrial and/or aerial) or laser scans. It is currently used in the fields of art, archaeology, architecture, full body scanning, gaming, surveying, mapping, 3D printing, and virtual reality, among others. The public version of RealityCapture was released by the Slovak company Capturing Reality on 2 February 2016. Because of its application in creating 3D artefacts and environments, the company was acquired by the major gaming brand Epic Games in March 2021, which plans to integrate RealityCapture into Unreal Engine, one of the industry's most widely used game development programs.


Unlike Blender, which focuses on animation and allows for the creation of entire original scenes and narratives, RealityCapture is centred around creating artefacts (3D models, for example) from image data. This may be a singular object, or an entire environment or space - for example a building, the objects within it, or the land it is on; all are possible so long as one has the images of them to process. As a relatively young program, there is far less information surrounding it than Blender, but the basic principle behind its creation was to offer a "better, faster and simpler photogrammetry solution" for users. Compared to other photogrammetry programmes on the market, RealityCapture definitely feels more intuitive to me; and although some of its functions are behind a paywall, there are many free-to-use functions within its design, making it an accessible option for people wanting to explore the possibilities of photogrammetry. With its upcoming integration into Unreal Engine, I'm curious to see how work between the two will become more fluid... I imagine it will open up more possibilities for seamless environment creation, particularly in the fields of VR and game design or development, for both commercial application and individual use.


To my understanding, some of RealityCapture's recent success and its acquisition by Epic Games was driven by the Covid-19 pandemic. As many film studios and production companies were forced to put projects on hold during lockdowns, filming on set became impossible. This has led to a rise in the need for methods of environment creation at a distance - something programmes like Unreal Engine, Unity, Blender and others are capable of. Combined virtual filming spaces have been on the rise for some time - for example, using motion capture to map the movement of actors directly onto their characters in video game production - and are becoming increasingly common in film production, allowing those involved to mitigate issues around the pandemic, access to filming space, appropriate weather conditions and so on.


One of the strengths of this programme (and what drew me to it) is the ability to create artefacts with minimal technical fiddling. As long as one has access to a digital camera of some sort, they can upload images into the programme - from there, you can create 3D object files, topographical renditions, calculate distances and volumes, and generate point cloud systems from camera and image data. It allows me to bypass more complicated processes to achieve the same outcome: I can create the point clouds and meshes I then import into Blender by processing images of the rocks I'm studying through the programme's built-in features, rather than having to work with more expensive or technical programmes and devices.
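Under the hood, photogrammetry rests on a simple model: each photo is a pinhole camera projecting 3D points onto a 2D image, and the software inverts that projection across many overlapping photos to recover the points. A toy sketch of the forward direction only - the camera parameters below are made up for illustration, and RealityCapture's actual solver is, of course, far more involved:

```python
import numpy as np

def project(points, focal_length, principal_point):
    """Project 3D points (camera coordinates, Z > 0) to 2D pixel coordinates
    using the pinhole model: u = f * X / Z + cx,  v = f * Y / Z + cy."""
    points = np.asarray(points, dtype=float)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cx, cy = principal_point
    u = focal_length * x / z + cx
    v = focal_length * y / z + cy
    return np.column_stack([u, v])

# Two made-up points in front of a camera with f = 1000 px, centre (320, 240)
pts = [[0.0, 0.0, 2.0], [0.5, -0.25, 2.0]]
pixels = project(pts, focal_length=1000.0, principal_point=(320.0, 240.0))
# The first point lands on the image centre; the second is offset from it
```

Seeing the same point from many camera positions lets the software run this projection in reverse, which is why coverage and overlap between photographs matter so much when capturing the rocks.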



© 2023 Anna Bensky
