The reason musical data processing matters so much to us now is that it has progressively developed tools that have fundamentally changed how we think about music. Yet its history is brief. It is bound up with the advance of digital technologies: computers first of all, together with the development of symbolic programming languages, and later a large number of digital inventions. Early in its history, data processing proved mature enough to handle concerns of every kind, from accountancy to scientific research, while naturally ignoring what interests us here: artistic expression.

It is undoubtedly vital to distinguish between what stems from data processing itself and what belongs to the larger field of digital technology as a whole. Both domains provide music with a wealth of new resources. The distinction is crucial, however, since the field of sound has now been converted to digital audio. Musical data processing grew out of the interplay between musical concerns, the environment created by digital technologies and the uniqueness of the computer on the one hand, and the scientific disciplines that define its research areas on the other. If musical composition occupies a favoured place there, almost all other musical activities can be found there too. And since data processing, acoustics, signal processing and even cognitive psychology have each cleared some ground for musical research, musical data processing stands at the centre of many musical, scientific and technical domains.

What distinguishes its approach, however, is the use of the specific contributions of data processing. Artificial intelligence keeps supplying new conceptual tools, given concrete form by languages such as Lisp or Prolog; these are immediately placed at the service of compositional abstraction or of assistance to the musicologist. Research on real-time systems and interactive interfaces makes it possible to imagine new links between the instrumentalist and the electronic environment.


With regard to the beginnings of musical data processing, two distinct types of activity stand out. If these activities seem self-evident today, it is because the original vision that gave rise to them was remarkably far-sighted. These two categories of activity are sound creation and musical composition. In both cases, the computer is responsible for producing the desired result, and both activities remain thoroughly current today. The earliest comprehensive studies of musical composition by computer date back to 1956, when Lejaren Hiller computed a score on the Illiac I computer at the University of Illinois using rules encoded in the form of algorithms. That same year, the WQXR string quartet performed three movements of the Illiac Suite for String Quartet. Lejaren Hiller describes in detail the steps he took to program the Illiac computer to produce the score of his string quartet in a famous book published in 1959 and subtitled “Experimental Music: Composition with an Electronic Computer.”

To situate this moment, recall that 1956 is also the year John McCarthy coined the phrase “artificial intelligence.” One year later, Max Mathews, a researcher at the Bell Telephone Laboratories in New Jersey, created the first digital sound synthesis program for the IBM 704 computer. It is the first of a large family of acoustic compilers and is commonly referred to as “Music I.” A psychologist, Newman Guttman, created the first piece with it, a one-and-a-half-minute study called “In the Silver Scale.” The first four movements of Lejaren Hiller’s Illiac Suite for String Quartet were published in 1957, the same year the first version of FORTRAN (FORmula TRANslator) appeared. Let us note that Max Mathews arranged a recording of the WQXR string quartet’s premiere of Hiller’s work, later released on a disc produced by the Bell Laboratories in 1960 under the title Music from Mathematics: even though these two pioneers followed independent paths, those paths clearly did cross.

From these two virtually contemporaneous events, developments would continue steadily in the directions indicated: composition and the creation of sound. We will follow their evolution from a distance. A third direction, however, was not long in emerging, for it rests on an observation Hiller had already made: at that time, the computer was first and foremost a powerful calculator. Indeed, the English word “computer” formerly denoted the workers responsible for carrying out calculations. At the same time there was a certain apprehension, for people then spoke freely of “electronic brains.” An artist could not approach a computer without feeling something, which no doubt explains the allure, and occasionally the terror, that data processing would exert on artists over the following decades. These early studies, however, were the work of two scientists: Hiller was a practising chemist and Mathews a research engineer. That each of them, on his own side and with wholly independent aims, set such approaches in motion is unquestionably what accounts for their exceptional character.

In 1957, at the Bell laboratories, Max Mathews wrote the first digital sound synthesis program for the IBM 704 computer, which had 4,096 words of memory. It is the first member of an illustrious family and is known as Music I. The notion of instrument modularity was introduced with Music III in 1960. Max Mathews’s model owes more to laboratory equipment and the electronic music studio than to traditional acoustic instrument making. The software provides a variety of independent modules (Unit Generators), each in charge of a basic task: an oscillator with programmable waveform, a signal adder, a multiplier, an envelope generator, a random signal generator. The musician creates an “instrument” by interconnecting a number of modules. The output signals of the oscillators or generators are routed to other modules, where they can be combined or modified. Several instruments, each retaining its own identity, make up an “orchestra.” In contrast to what happens in the physical world, there is no limit on the number of modules that can be used at once, apart from the computer’s memory. Once the instrument is in place, the complex sound wave is calculated step by step as a series of numbers which, placed end to end, represent the sound. These numbers are called “samples.” Today, one second of sound is represented by 44,100 samples per channel for consumer applications and 48,000 samples per channel for professional applications.
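To make the unit-generator model more concrete, here is a minimal Python sketch of such a patch; the module names, parameters and the two-oscillator “instrument” are invented for illustration and do not reproduce Mathews’s programs.

```python
import math

SAMPLE_RATE = 44100  # samples per second, as for consumer audio

def oscillator(freq, duration, waveform=math.sin):
    """Unit generator: a periodic signal with a programmable waveform."""
    n = int(duration * SAMPLE_RATE)
    return [waveform(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def envelope(samples, attack=0.05, release=0.2):
    """Unit generator: shapes the amplitude of an incoming signal."""
    n = len(samples)
    a, r = int(attack * SAMPLE_RATE), int(release * SAMPLE_RATE)
    out = []
    for i, s in enumerate(samples):
        gain = min(1.0, i / a if a else 1.0, (n - i) / r if r else 1.0)
        out.append(s * gain)
    return out

def adder(*signals):
    """Unit generator: mixes several signals sample by sample."""
    return [sum(values) for values in zip(*signals)]

# An "instrument" is simply a patch of interconnected unit generators.
note = envelope(adder(oscillator(440.0, 1.0), oscillator(660.0, 1.0)))
```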

Because of the relative slowness of the machines and the computational burden involved, the time required to generate the sound wave is considerably longer than the duration of the sounds themselves; this is why these programs are said to operate in “deferred time.” The numerically calculated sound wave was originally written, sample after sample, onto a digital tape at the output of the computer’s arithmetic unit. This method of sound creation is known as “direct synthesis.” The result is a “sound file,” which the musician then reads in real time with a second program that sends the samples through a digital-to-analog converter to an amplifier and loudspeakers.
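A small sketch of direct synthesis in deferred time, assuming only Python’s standard library: every sample is computed first, then stored in a sound file for later playback through a digital-to-analog converter. The file name and the choice of a plain sine tone are illustrative.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

# "Deferred time": compute every sample first, however long that takes...
samples = [math.sin(2 * math.pi * 440.0 * i / SAMPLE_RATE)
           for i in range(SAMPLE_RATE * 2)]  # two seconds of A440

# ...then store the result as a sound file, to be read back later and
# sent to a digital-to-analog converter (here, any ordinary audio player).
with wave.open("direct_synthesis.wav", "wb") as f:
    f.setnchannels(1)          # mono
    f.setsampwidth(2)          # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
    f.writeframes(frames)
```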

To set the orchestra in motion, the musician must write a “score” containing all the information required by the instruments’ modules. This score takes the form of lists of numbers or codes, each “note” or event being the subject of its own list. The lists are arranged in chronological order.
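Purely as an illustration (the exact fields and their order varied from one program to another), such a score could be organised as chronologically ordered lists of parameter values:

```python
# Each event is one list: [instrument, start time (s), duration (s),
# amplitude (0-1), frequency (Hz)] -- loosely modelled on Music-N scores.
score = [
    [1, 0.0, 0.5, 0.8, 440.0],   # instrument 1 plays A4
    [1, 0.5, 0.5, 0.6, 494.0],   # then B4, slightly softer
    [2, 0.5, 1.0, 0.7, 220.0],   # instrument 2 enters on A3
]
score.sort(key=lambda event: event[1])  # events kept in chronological order
```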

But specifying each parameter is a demanding task, made more difficult by the fact that musicians are not trained to assign numerical values to the qualities of the sounds they manipulate. Of the languages created to ease the writing of scores and overcome this difficulty, the best known is Leland Smith’s Score program (1972). Score is not an automatic composition program, but it allows parameters to be specified in terms drawn from musical practice (pitches, dynamics, durations), changes of tempo or dynamics to be calculated automatically, and even notes to be added to parts following a trajectory supplied by the composer.
With the release of Music IV (1962), the instrument-score model became firmly established. This program gave rise to numerous variants, some of which are still in use today. Among them, let us mention Music 4BF (1966–1967), of which there is currently a Macintosh version (Music 4C, 1989), and Barry Vercoe’s Music 360 (1968); this descendant of Music IV has the distinctive feature of being presented as a true programming language, which no doubt explains why it became, alongside Cmusic, today’s most popular acoustic compiler. It was adapted to the DIGITAL PDP-11 minicomputer in 1973 and, after being completely rewritten in the C language in 1985, took the name Csound; it was then quickly ported to a variety of data-processing platforms, including microcomputers such as the Atari, Macintosh and IBM. Music V, created in 1969 to make the writing of instruments and scores easier, is still widely used today, most often in the form of Richard Moore’s Cmusic adaptation (1980).

The computer also proved successful in musicological analysis, a highly speculative discipline. From the 1960s onwards, data processing, still somewhat enigmatic and difficult to access, revealed possibilities for unusual musical work in composition, in musicology and finally, confined to the Bell laboratories, in sound creation. Modular synthesisers, called “analogue” because they contain no digital circuitry, appeared in 1964 and caused a major musical upheaval that would last through the rest of the decade. Developed independently by Paolo Ketoff (Rome), Robert Moog and Donald Buchla (United States), these synthesisers answered the technological aspirations of many musicians, particularly after the commercial success of Walter Carlos’s Switched-On Bach disc, which effectively brought the instruments to the attention of a broad audience. During this time, Mathews’s program was being modified at other institutions, including the universities of New York, Princeton and Stanford.

Another application is the use of the computer to drive analogue equipment. The machine produces slowly varying signals that alter the settings of studio devices: oscillator frequencies, amplifier gains, filter cut-off frequencies. The first installation of this technique, known as “hybrid synthesis,” was set up in Stockholm’s Elektronmusikstudion (EMS) in 1970, funded by the Royal Academy of Music and directed by Knut Wiggen. It comprised twenty-four frequency generators, a white-noise generator, two third-octave filter banks, ring and amplitude modulators and reverberators, all controlled by a PDP-15/40 computer. The Stockholm system was unusual in that the composer could set the synthesis parameters by touching a panel of numbers with a metal stylus on a remarkably ergonomic console. Another studio worth mentioning is Peter Zinovieff’s in London (1969), run by a DIGITAL PDP-8 minicomputer for which Peter Grogono created the Musys control language.

The Groove system (Generated Real-time Output Operations on Voltage-controlled Equipment, circa 1969), developed at the Bell laboratories by Max Mathews and Richard Moore, is another outstanding achievement. Groove is an instrument for shaping, over time, the parameters that drive sound synthesis equipment. It places the musician closer to the role of a conductor than to that of a composer or an instrumentalist, though it should be noted that the composer of electronic music frequently takes on the conductor’s role by interpreting the composed music directly.

The introduction of the microprocessor in the mid-1970s marked the start of an inevitable expansion of the world of musical data processing. With whole computers placed on a single integrated circuit, the microprocessor, a data-processing instrument-making industry gradually became feasible. The user interface also had to be improved, moving from punched cards to a more interactive mode of input carried by the keyboard and the cathode-ray screen.

The hybrid synthesis principle remained in use throughout the 1970s before being completely supplanted by digital synthesisers at the start of the 1980s. The first microprocessor, the 4004 circuit, had been sold by the American company Intel since 1971. Such devices made it possible to build true microcomputers, such as the Intellec 8 (derived from the 8008 microprocessor of 1972), the Apple I and the Altair (1975).

The musical experiments of the Art and Data Processing Group of Vincennes (GAIV) are a good illustration of this period of transition. The group, known for publishing a bulletin disseminating research in contemporary art and data processing, was founded at the University of Paris 8 by Patrick Greussay and a team of artists and architects. The composer Giuseppe Englert coordinated its musical activities. English EMS VCS3 synthesisers were controlled by the microcomputer through digital-to-analog converters, which turned the binary data calculated by interactive programs into control voltages. The microcomputer, an eight-bit Intellec 8, was used for compositional work and for research on musical formalisation.

The second consequence of the arrival of the microprocessor was the development of “mixed synthesis”: synthesisers that are in fact true computers dedicated to calculating the sound wave in real time. A number of achievements of this kind date from the second half of the 1970s; let us note, among others, the work of James Beauchamp, Jean-François Allouis and William Buxton, as well as the Systems Concepts synthesiser of Peter Samson and the Synclavier of New England Digital Corporation, built by Syd Alonso and Cameron Jones around the composer Jon Appleton. Let us also note that the word “synthesiser” disappears with Ircam’s 4X (1980), replaced by “digital signal processor,” which unquestionably shifts the emphasis onto the machine’s general-purpose character.

The electronic instrument industry was quick to adapt to these new developments. The first stage consisted of placing microprocessors inside analogue synthesisers (the Prophet synthesisers of the company Sequential Circuits), an arrangement still classed as “hybrid synthesis,” with the processor controlling the voltage-controlled modules. The second stage soon followed: building genuinely digital musical instruments. This is the much-noticed arrival of the Synclavier II and the Fairlight.

Today’s industry consists essentially of the market for synthesisers and sound processors, together with the software that enables their use. All modern synthesisers are digital, and all comply with the Midi standard. Within this field, one category deserves particular mention: samplers, devices designed to reproduce previously recorded and memorised sounds or sounds stored in mass memory. These devices are frequently equipped with a keyboard and offer a selection of preprogrammed sounds whose parameters can be modified.

It should be mentioned that the private musician now has access to all of these technologies within the framework of what is generally referred to as the “personal studio” (home studio).

But these devices, and the personal studio in particular, cannot function without specialised software. Sequencers manage the performance of a piece directly from the computer, while sound editors are designed to process, assemble and mix sound sequences. Programs for writing scores also exist and are now widely used in music publishing. Finally, composition programs can themselves be used to control the devices.

The most novel figure in today’s musical data-processing instrument making is the “workstation.” Building a workstation means assembling applications of various kinds, designed for sound analysis, synthesis, control or composition. These applications are integrated into a data-processing “environment” built around a computer and its peripherals and are designed to process sound directly. This is the case with plug-in cards which, driven by software, make it possible to read “sound files” stored on disc in response to a command, for example from a Midi source. The technique, too new to have received a settled name, is usually described as “hard disk recording” or “direct-to-disk.”

Representing music

Since the computer, unlike the electronic music studio, demands that the data be specified, and therefore written down, the question of musical representation is a recurring concern in the field. Two kinds of response have emerged. The first, that of Xenakis, exemplifies an a priori compositional approach. The second, the Midi standard, is more general.

Iannis Xenakis introduced an innovation with the UPIC system (Unité Polyagogique Informatique du CEMAMu). The idea for this system, developed in the mid-1970s, grew out of the composer’s approach to sound synthesis. With the team he had assembled, initially called EMAMu (Team for Mathematics and Musical Automation, 1966), and with the help of the Gulbenkian Foundation, Xenakis had built a high-quality digital-to-analog converter. UPIC offers a complete compositional environment, with sound synthesis of the resulting piece. The team around Xenakis, which in 1971 became CEMAMu (Centre for Mathematics and Musical Automation) with the creation of premises intended to house its research, conceived a system allowing the composer to draw “time-pitch arcs” on a large architect’s table, choosing for each arc a temporal trajectory, a waveform and a dynamic. The music is thus first represented graphically. The first UPIC programs were written for the Solar 16/65 minicomputer, which was also equipped with two magnetic tape drives for storing samples and programs, a digital-to-analog converter, and a cathode-ray screen for displaying the waveforms drawn with a graphic pen. The composer had to wait until the computer had finished calculating all the samples before hearing the page just drawn. A high-quality digital-to-analog converter ensured the sound output. UPIC has recently been adapted to microcomputers and now operates in real time.
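A minimal sketch of what rendering such “time-pitch arcs” might involve, written in Python; it illustrates the principle only and is not UPIC’s actual algorithm, and every name and value in it is invented.

```python
import math

SAMPLE_RATE = 44100

def render_arc(start, end, f_start, f_end, amplitude):
    """Render one 'time-pitch arc': a tone whose frequency glides
    linearly from f_start to f_end between the two instants."""
    n = int((end - start) * SAMPLE_RATE)
    samples, phase = [], 0.0
    for i in range(n):
        f = f_start + (f_end - f_start) * i / n   # linear pitch trajectory
        phase += 2 * math.pi * f / SAMPLE_RATE
        samples.append(amplitude * math.sin(phase))
    return start, samples

def render_page(arcs, total_duration):
    """Mix all arcs of one drawn 'page' into a single sample stream."""
    out = [0.0] * int(total_duration * SAMPLE_RATE)
    for start, end, f0, f1, amp in arcs:
        offset, samples = render_arc(start, end, f0, f1, amp)
        base = int(offset * SAMPLE_RATE)
        for i, s in enumerate(samples):
            out[base + i] += s
    return out

# A hypothetical "page": two arcs, each with its own trajectory and dynamic.
page = [(0.0, 2.0, 220.0, 440.0, 0.5), (1.0, 3.0, 880.0, 660.0, 0.3)]
audio = render_page(page, 3.0)
```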

The Phonogramme program, created by Vincent Lesbros at the University of Paris 8, aims to represent sound as an editable image. In the manner of sonograms, the application displays a spectral analysis as a drawing that can be modified. The modified representation can then be resynthesised, as a sound file or even in Midi form.
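The analyse-edit-resynthesise cycle can be suggested with a short-time Fourier transform; the sketch below, which assumes NumPy and SciPy are available, illustrates the general idea and is not Lesbros’s implementation.

```python
import numpy as np
from scipy.signal import stft, istft

SAMPLE_RATE = 44100

# A test sound: one second of A440, standing in for any recorded sound.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
sound = np.sin(2 * np.pi * 440.0 * t)

# Analysis: the short-time Fourier transform yields a time-frequency
# image comparable to a sonogram (rows = frequencies, columns = time).
freqs, times, spectrum = stft(sound, fs=SAMPLE_RATE, nperseg=1024)

# "Drawing" on the image: here we simply erase everything above 1 kHz.
spectrum[freqs > 1000.0, :] = 0.0

# Resynthesis: the modified image is turned back into a sound signal.
_, modified_sound = istft(spectrum, fs=SAMPLE_RATE, nperseg=1024)
```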

Today, the younger generation of musicians who approach technology through the Midi environment is often criticised for being insufficiently aware of the issues raised by musical data processing. But it should not be forgotten that, in some respects, the Midi standard was created without any real connection to the earlier stages of what we are calling musical data processing here. The Midi phenomenon is by no means an anomaly in this regard.

The Midi standard was created in 1983 to allow several synthesisers to be controlled from a single keyboard; messages are transmitted in digital form according to a precisely defined protocol. Because of these roots in the control of instrumental gesture, Midi is a system for representing not the sound but the gestures of a musician playing a Midi instrument. The Prophet 600 from Sequential Circuits, the first synthesiser fitted with a Midi interface, was released in 1983. What had not been foreseen, on the other hand, was the success this standard would so quickly achieve: it is now used to connect all the equipment of an electronic music studio, including stage lighting rigs.
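Concretely, a Midi message encodes the gesture in a few bytes. The sketch below builds a standard “note on”/“note off” pair in Python; the helper functions are hypothetical, but the byte layout follows the Midi specification.

```python
def note_on(channel, pitch, velocity):
    """Build the three bytes of a Midi 'note on' message: the standard
    encodes the gesture (which key, how hard), not the sound itself."""
    return bytes([0x90 | (channel & 0x0F), pitch & 0x7F, velocity & 0x7F])

def note_off(channel, pitch):
    """The matching 'note off' gesture (release of the key)."""
    return bytes([0x80 | (channel & 0x0F), pitch & 0x7F, 0])

# Middle C (Midi note 60) played mezzo-forte on channel 1, then released.
messages = note_on(0, 60, 80) + note_off(0, 60)
```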

Musical analysis

The Illiac Suite for String Quartet, which Lejaren Hiller began writing in 1956, represents both the true beginning of musical data processing and the anchoring of this field in research, here applied to automatic composition. The computer thus emerged as a tool able to handle the intricate series of operations involved in composing large musical works. The French composer Pierre Barbaud, who founded the Algorithmic Group in 1958 in partnership with the company Bull-General Electric and began his research into automatic composition, would reinforce this orientation. The following year, Barbaud’s first algorithmic work was created:

Unpredictable Innovations (Algorithm 1), written with the assistance of Pierre Blanchard. The Musicomp software, created for the Illiac computer at the same time as the Illiac Suite, made the University of Illinois one of the leading centres for musical data processing at the time. The golden age of computer-aided composition began in 1962 with Iannis Xenakis’s ST/10, 080262, made possible by the stochastic program ST, under development since 1958 on an IBM 7090 computer. Project 1 (1964), written in the Netherlands by Gottfried Michael Koenig, was soon followed by Project 2 (1970). Computer-assisted composition was then founded on stochastic theory and mathematics, drawing largely on the resources of Markov processes (Hiller, Barbaud, Xenakis, Chadabe, Manoury).
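As an illustration of the principle, and not of any of these composers’ actual programs, a first-order Markov process over pitches can be sketched in a few lines of Python, using an invented transition table:

```python
import random

# First-order Markov chain over pitch names: from each note, the next
# note is drawn according to fixed transition probabilities.  The table
# below is invented purely for illustration.
transitions = {
    "C": [("C", 0.1), ("E", 0.5), ("G", 0.4)],
    "E": [("C", 0.3), ("E", 0.2), ("G", 0.5)],
    "G": [("C", 0.6), ("E", 0.3), ("G", 0.1)],
}

def next_note(current):
    notes, weights = zip(*transitions[current])
    return random.choices(notes, weights=weights)[0]

def compose(length, start="C"):
    melody = [start]
    for _ in range(length - 1):
        melody.append(next_note(melody[-1]))
    return melody

print(compose(16))
```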

A new trend emerged with the arrival of microcomputers: assistance to composition, which became computer-aided composition (CAC). The demiurge program, capable of generating an entire composition from a model, gave way to an ecosystem of data-processing tools charged with solving specific musical problems. Let us quote a few of the most notable: HMSL (Hierarchical Music Specification Language, 1985), developed at Mills College in California; Xavier Rodet’s Formes; Esquisse and Patchwork at Ircam, on Jean-Baptiste Barrière’s initiative; and David Cope’s Experiments in Musical Intelligence at the University of California, Santa Cruz. These programs are open: interactive with the composer and connected to the universe of Midi devices. With the exception of Joel Chadabe and David Zicarelli’s M and Jam Factory, they are structured using non-numerical languages from the field of artificial intelligence, such as Forth and above all Lisp. This explains why they rely not on mathematics, as the first generation of computer-assisted composition did, but on formal languages and generative grammars.
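To suggest what a generative grammar means in this context, here is a toy Python sketch of rewriting rules that expand into a note sequence; the rules are invented and stand for the principle only, not for any of the programs cited.

```python
import random

# A toy generative grammar: non-terminals rewrite into sequences of
# symbols until only notes (terminals) remain.
rules = {
    "PHRASE": [["MOTIF", "MOTIF", "CADENCE"]],
    "MOTIF": [["C", "E", "G"], ["E", "G", "C"]],
    "CADENCE": [["G", "C"]],
}

def expand(symbol):
    if symbol not in rules:            # terminal: an actual note
        return [symbol]
    production = random.choice(rules[symbol])
    return [note for s in production for note in expand(s)]

print(expand("PHRASE"))
```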

The computer and the musical world in real time

The Eighties saw the computer make its way into the concert setting; with the advent of real-time digital synthesisers or, more generally, real-time digital sound processors, conditions became favourable for revisiting a venerable strand of twentieth-century music: live electronic music. Most often, the first step is to connect a computer and its processing power to sound synthesis or treatment equipment, ideally with the participation of the performers. By integrating the treatment processes into the writing itself, Pierre Boulez’s Répons (1981) showed how the computer could become an instrument fully incorporated into the orchestra. This work was followed by the technique known as “score following,” in which the computer tracks the instrumentalist’s performance. Let us cite Roger Dannenberg’s contributions to automatic accompaniment and to languages establishing the conditions for computer-instrument communication, Max Mathews’s work on the Radio Drum and on simulating the conductor’s baton, and Miller Puckette’s Max program.
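A deliberately naive Python sketch of the score-following idea, purely illustrative: the machine advances a pointer through the expected score each time an incoming pitch matches, which is where an automatic accompaniment could be triggered.

```python
# The expected score is a list of Midi pitches the soloist should play.
expected_score = [60, 62, 64, 65, 67]

def follow(incoming_pitches, score):
    """Advance a pointer through the score as matching pitches arrive."""
    position = 0
    for pitch in incoming_pitches:
        if position < len(score) and pitch == score[position]:
            position += 1
            print(f"matched note {position}/{len(score)}: trigger accompaniment")
        else:
            print("unexpected note, waiting")  # real systems tolerate errors
    return position

follow([60, 62, 61, 64, 65, 67], expected_score)
```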

This is why it has also seemed interesting to give orchestral instruments this capability by equipping them with sensors allowing the computer to follow the performance (flute, vibraphone, etc.). Although the method to be adopted has not yet been settled, the whole field of instrument making is concerned by this trend. Will it be electromechanical (physical sensors placed at strategic points on the instrument, conductive membranes, and so on), or will it be necessary to analyse the sounds themselves to determine their pitch, spectral structure and mode of playing?

The community organises itself

The maturation of musical data processing went hand in hand with the community of artists and researchers taking charge of itself. An awareness of belonging to a distinct field gradually took shape. International congresses appeared in the wake of local conferences. The papers presented there are published in proceedings accessible to the whole community. These gatherings also serve as venues for concerts, which tend even further to fuse the aesthetic and the scientific within the consciousness of a new field. Thus the “International Computer Music Conferences” (ICMC) were born. The “Computer Music Association,” renamed the “International Computer Music Association” (ICMA) in 1991, was founded in 1978 to oversee the running of the congresses and communication among members. The organisers alternate: the conference is held in North America one year and on another continent the next. Over the course of these congresses the ICMA has played a growing role in supporting the local organisers, for instance in disseminating the publications arising from these meetings, and even in commissioning works to be performed during the ICMC (ICMA Commission Awards, 1991).