As artificial intelligence reshapes composition and tastes, the future of natural sound hangs in the balance
Given the sudden and widespread dominance of artificial intelligence, one wonders what the shape of music will be in the future.
Technology plays an all-pervasive role now in every aspect of life – including the arts. Looking back, it is easy to trace the changes technology brought to music – from its forms and intonations to performance styles and patterns of patronage.
The first and perhaps most crucial impact of technology was the severing of the direct relationship between the artiste and the audience. Imagine a world with no recording technology – where the only way to hear music was live, with the vocalist (or instrumentalist) performing in real time before a listening audience. There was no concept of sound being captured and played back as an intermediary between the performer and the listener.
This shift marked the emergence of a shadowy new presence in the musical ecosystem – a different kind of patron who enabled musical performances through access to resources and recording technologies.
The length of early recordings – particularly the 78 rpm discs – shaped the structure and duration of musical compositions. This limitation was exploited creatively, especially in film music, which adapted to these constraints with remarkable aplomb.
Before recording technology, music was inseparable from the musician or performer. To listen to music, one had to provide for the performer. In most parts of the subcontinent, where princely states and local royalty held sway in various forms, the musician had to be cared for. This care extended beyond the artiste to include their immediate and extended family. They had to be housed, fed and sustained. If there were several high-calibre artistes, the support system had to be all-encompassing.
Because these families often lived in seclusion, the transmission of musical knowledge had a personalised, insular character. It was passed down the family lines in a closely guarded tradition.
This model of patronage also resulted in a monopolisation of the artiste – and, by extension, the art – by the patron. The artiste would perform only for their patron, or for those invited by the patron. This meant that audiences were more or less of the same cultural and social standing, often having a highly developed appreciation for the art form.
This cultural homogeneity, though it lacked diversity, may have demanded greater technical expertise and virtuosity from performers. The standards were exacting – the bar for excellence set high and clearly defined. This uniformity in cultural expectations enabled the consolidation of certain aesthetic values and performance qualities. Virtuosity was thus shaped within a specific framework, contributing to the solidification of distinctive tonalities and musical forms.
Through this process, the markers of a particular tradition became more visible; and the contours of form – its identity, boundaries and style – more sharply defined.
Gradually, the post-production process came to play an increasingly significant role in music. The natural sounds were treated through various technological interventions to add texture and depth. Instrumental layering – incorporating multiple sonic strands – was introduced to enrich the perceived one-dimensionality of natural sound. Existing compositions were subjected to these innovations, leading to a gradual shift in how musical quality was measured. Natural sound began to recede as the benchmark for musical expression.
With the advent of the digital age, natural sound entered an uneasy alliance with computer-generated sound – an ‘unhappy marriage’, some might say. Over time, the latter began to replace the former. The intonation patterns and development of melodic or harmonic lines were increasingly shaped by the logic of digital sound processing. As a result, musical tastes entered a transformative phase, shaped more by software than by voice or instrument.
The arrival of artificial intelligence is set to further overhaul the process of musical composition. In mere seconds, AI can scan trillions of existing compositions and generate not just one prototype – but millions – from which to choose. This marks a radical departure from the slow, intuitive and often personal craft of composition.
In this new paradigm, the musician may no longer be defined by the ability to produce sound through voice or instrument, but rather by their proficiency in software manipulation. The performer at the keyboard becomes the coder, not the vocalist or instrumentalist.
Today, streaming platforms are the dominant mode of music consumption. These services may eventually render all other formats obsolete. Perhaps the next technological leap will create platforms embedded in the human body, activated at will without the need for any physical movement.
If this trajectory continues, it seems increasingly likely that natural sound – once the soul of music – may become its greatest casualty.
The writer is a Lahore-based culture critic