<< preface

this blog is nina wenhart's collection of resources on the various histories of new media art. it consists mainly of unedited or only lightly edited material i found while flaneuring on the net, sometimes with my own annotations and comments, sometimes text parts i retyped from books that are out of print.

it is also meant to be an additional resource of information and recommended reading for my students of the prehystories of new media class that i teach at the school of the art institute of chicago in fall 2008.

the focus is on the time period from the beginning of the 20th century up to today.

>> search this blog

2008-09-29

>> Peter Foldes, "Hunger", 1974

"Hunger" or "La Faim" is an 11 min 2d-computer animation by Peter Foldes, composed of hand drawn images and digital metamorphosis, nominated for the acadamy award in 1974 (didn't win).
Peter Foldes was born in Hungary in 1924, emigrated to France. Hunger was produced with the help of the National Filmboard of Canada.
















part 1:




part 2:

>> Bill Buxton, "The Role of the Artist in the Lab", 1988

from: http://www.billbuxton.com/artistRole.html

"For centuries, there has been a kind of love/hate relationship between the arts and science and technology. From the artist's perspective, this has sometimes take the form of confrontation. At other times, it has resulted in new materials and techniques that enabled artistic breakthroughs. (Architecture, for example, is full of such examples.)

Our own era is no different. If anything, artist/technology synergies and confrontations are more visible today than in any previous period in history. Taking this view, as the result of my experience over the past twenty years, I believe the arts/sciences relationship begs scrutiny and discussion. That is my objective in what follows. The perspective that I take will be that of the artist.

My underlying view is that in the art/technology equation, the artist is too often viewed as some kind of welfare case begging for resources and at the mercy of the benevolent technologist who controls the means of production. Admittedly, this is an oversimplification. Nevertheless, I feel strongly that the artist brings far more to the relationship than is generally acknowledged.

My central thesis is, therefore, that I believe that there is a vital role for the artist in the laboratory and that this role is equally beneficial to artist and scientist, alike. To summarize before the fact, the objective of what follows is to give my arguments for why I believe "all" research labs should have an artist in residence program. (I will leave it for another time to make the converse argument, why all art colleges should have a scientist in residence program.)

My first argument for the mutual benefit of art/science collaboration is based on precedent. First, there are some good examples of artists collaborating with scientists to produce art. An example is the whole arts-science mix orchestrated by Billy Klüver and EAT for the Pepsi Pavilion at Expo '70. In the other direction, there is a long history of artists contributing to scientific research. There is the mix of art and science in the work of Da Vinci. Then there is the case of music playing a role in the experiments that resulted in Galileo's discovery of the law of falling bodies.

A more recent example can be seen in research undertaken at the National Research Council of Canada in the early 1970s. This work in human-computer interaction took the form of two case studies: one in music, and one in animation. I conjecture that as a result of this collaborative study, the researchers at NRC knew more about human-computer interaction 15 years ago than 90% of today's "specialists". And from the other perspective, some important art was produced - most notably Pierre Foldes' film Hunger.

This mutual benefit is not isolated. At Bell Labs, for example, Max Mathews' work in computer music has contributed to numerous compositions, as well as our knowledge of psychoacoustics and speech synthesis. And at the same lab, work in animation by Ken Knowlton and Lillian Schwartz resulted in several films, and also contributed towards new techniques for displaying molecules. Finally, at the University of Pennsylvania and at Simon Fraser University, work in dance notation involving computer scientists, dance notators, kinesiologists and a sculptor has resulted in new tools for the study of human motion. This work has contributed new insights into the biomechanics of human locomotion. "Bubbleman", a computer model of a human that was developed, has even been used by NASA in studies of cockpit design and crash simulation.

To summarize our first argument, there are several clear examples of successful collaborations between artists and scientists (of which we have mentioned only a few). The point to emphasize is that the benefits have gone both ways.

This brings me to my second argument. I believe that technology is currently faced with a set of research problems to which the artist can make a particular contribution. The problem is in realizing the full potential of the evolving microelectronic and communications technologies: potential from the educational, recreational, commercial, social, and information-providing perspectives. Until very recently, computer scientists have designed systems for computer scientists. However, designing for a more general public presents a whole new set of problems - problems which require a multi-disciplinary approach. We can no longer afford to partition teams by profession. The industrial engineer, electrical engineer, behavioral scientist, computer programmer, psychophysicist, graphic designer, animator, and composer all have their respective roles to play.

Visually rendering data so that it effectively informs is something that graphic artists do well. An excellent example of this is the work of Aaron Marcus who has made an important contribution to augmenting computer scientists' knowledge about effective display lay-out and typography.

In a similar way, animators understand the use of visual images over time, and therefore have a contribution to make in the effective graphical representation of computer simulations. Finally, musicians (especially those who write for film) have a wealth of knowledge in how to use sound to highlight key features of what is being viewed, or to communicate information in the absence of visual contact. For example, the work of Sarah Bly on using sound cues in the analysis of statistical data could not have been carried out without the contribution of work in computer music.

My third argument is the easiest to state, but the most difficult to measure. I believe that there is a very important socio-political benefit that accrues from collaboration. I believe that our society is increasingly being polarized into two groups: those intimidated by technology (cyberphobics) and those who place all their faith in it (cyberphillics). To break down this polarization, the technologist must acknowledge the legitimacy of the cyberphobic's fears, just as the cyberphobic must understand the positive role that technology can play in society. Technical literacy will permit the humanist to function as an informed watch-dog as well as make some contribution to the enlightened use of technology. Involvement in the arts will help the scientist remain conscious of the human impact of technological developments. Polarization between camps is just downright dangerous.

In breaking down these barriers, artists have an important role to play. Their "business" is communication, and I see no better group with which to begin our assault on this artificial and counter-productive humanist/rationalist partitioning.

My final argument is a natural outgrowth of the issues just raised. I believe that the visual and audio arts are important areas in which to apply the emerging technologies. We have already discussed art with technology from the professional's perspective. Perhaps more important, however, is the potential impact of the technology for the amateur artist.

I subscribe to the Platonic view that everyone has some innate artistic ability, and that this creative potential is worth developing. In an era of (largely technologically induced) increased leisure time, I see it as only fitting that we capture that same technology's potential to aid in addressing problems that it helped create. Technology can provide a strong catalyst for artistic development. The New York composer Laurie Spiegel speaks of the microprocessor as "the folk instrument of the 80's", and there is compelling evidence that she is right. The point is, to fully develop the potential of this "digital folk art" will require the active collaboration of both professional artists and scientists.

If there is so much to benefit from arts-science collaboration, then why is it so hard to get projects going?

In my opinion, one reason is that the existing funding agencies are not well equipped to deal with most such projects. Having separate funding bodies for the arts, humanities, and engineering sciences makes interdisciplinary projects difficult to put forward. In addition, the environment for research in most universities is one which rewards increased specialization rather than encourages interdisciplinary cooperation. Clearly we are in need of some new mechanisms and policies.

Next, there needs to be a re-education on the side of both the artist and the scientist in order that each learn to respect the other, understand that they have common problems, and recognize that each stands to benefit as a result. The artist can no more afford to be scientifically illiterate than the scientist can afford to be illiterate in the humanities and arts.

Finally, I do not believe it overly dramatic to conclude by stating that without mutual respect and collaboration between the "two cultures", what future there is will be pretty bleak. Artists are not second class citizens who have access to technologies only at the whim of some scientific "guardian of the castle." Their importance and value are real, and have been sold short for too long. It is time for a change.

Acknowledgements

I must acknowledge the contribution made to this article by Catherine Richards, Ron Baecker, K.C. Smith, and Alain Fournier. Our (often animated) conversations have not only helped shape my opinions but have also provided yet another example of the benefits of cross-discipline pollination. In addition, Lillian Schwartz, Norm Badler, Max Mathews and Ken Knowlton have all made valuable comments and suggestions."

>> Leslie Mezei

published in Ruth Leavitt's book "Artist and Computer", 1976
article taken from: http://www.atariarchives.org/artist/sec7.php


Computer Art, as many new endeavors, has reached a plateau of stagnation after an exhilarating start full of promise. The computer specialists who first played with these possibilities soon exhausted their ideas and their interest. They merely did what was easy and obvious with their hardware and their even more limited software. Since they were first, the results were unique and interesting, but generally 'artless,' and not very innovative. [--> compare to G. Youngblood, p.192: "However, there is a tendency to regard any computer-generated art as highly significant— even the most simplistic line drawing, which would be meaningless if rendered by hand."]

The first wave of artists—really only a small ripple—that came to the computer expected miracles from it without a serious effort of learning and exploring and creation on their part. The results were in a way even more disappointing, except in the cases where the artist was already doing a type of art which could be directly assisted by computer techniques, such as modular art. Some instead succeeded in prettifying the output of their technical collaborator, without any real understanding of the processes involved. The rest were confined to existing programs and repeated the technicians and each other's work [--> compare to "creative" software applications such as photoshop & co. what freedom and what constraints do they give to artists?]. Those first class artists that deigned to inquire into the possibilities were quickly discouraged by the lack of convenient control over the computer, the difficulty of communicating visually with it, and the amount of effort required to do it really well.

Today we are left with a small number of people from both sides, each of whom is aware of the long term effort needed to exploit the potential. The promise is as great as ever, but, as usual, requires more application and ingenuity than at first realized. The artists, and especially the art students, are willing to learn programming and some mathematics, and to learn to think in an algorithmic, process-oriented manner. More importantly, in my view, they are ready to transcend the technological art so far pursued, and learn something of the underlying scientific ideas. [Applying any new technology slavishly results in imitative work, often foreshadowed by visionary artists long before the new technology. (Compare Picasso's drawings with some of our transformations, such as my BIKINI SHIFTED).] It is the new concepts and ideas, the new ways of thinking provided by the information sciences that will provide this. I am referring to our enriched understanding of system, structure, randomness and process as well as of the very process of communication and language, and the more realistic accounts of the methods of discovery in the sciences and the arts.

I have developed an Interdisciplinary course on the Concepts of the Information Sciences, in which we explore many of the concepts which come from cybernetics and computer science, communication theory and linguistics, general systems research and morphology, mathematics and operations research, etc.

The technical computer specialists, on the other hand, have to become aware of the potential contribution of the artists, develop a respect for their pattern perceiving and pattern generating abilities, for their trained sensitivity to the exploration of novelty, their ability to select what is most significant; indeed—at their best—to make concrete the future before it happens, before we can define it, formalize it and verbalize it. We may well end up in the next few years with a few individuals who have mastered both sides reasonably well. Programmer-artists and artist-programmers. Collaboration and multimedia are not impossible, only extremely hard and rarely successful. But then, so is most activity of a high ambition, high risk, innovative nature.

Of course, both should have an awareness of what has been already done, and what directions have been pointed to. My own book (Computer Art), which does just this, is still making the rounds of the publishers, and the book introducing some of the information theoretic ideas applied to this field is in the German language ("Ästhetik als Informationsverarbeitung," Frieder Nake, Springer-Verlag). Though Franke's book covers too large an area too superficially, it is the only book in English I can recommend ("Computer Graphics, Computer Art," H.W. Franke, Phaidon). In any case no exciting new ideas and results have appeared in the last few years; the next wave of creativity in this field is probably still a few years away.

image

'Bikini Shifted'

image

'BEAVER SCALED'

image

'BABEL SHOOK'

What we ask of the artist is to use the science and technology to explore and expand our reality, and make statements of significance to today's tortured but expectant world. We have all filled pages and pages of programmatic notes, enough aims for a lifetime. Now it is time to raise the standards, to stop applauding the fact that we can do art with the aid of a computer at all, and apply as critical judgment to our results as to any other works of art. The hardware and software are becoming more flexible and less expensive. Our own Dynamic Graphics Group, for example, is developing, under the leadership of Ron Baecker, a system with both a high speed line display and a digital color video tube, with sophisticated software for interactive dynamic graphics for artistic and simulation purposes. We are now making an arrangement with the local art college for a few of us each to 'adopt' one art student to work with us, sit in on our courses and develop themselves in their own way gradually.

image

'SCALE of RANDOMNESS'

My own work, all done a few years ago, has tried to make a novel beginning in the exploration of controlled randomness, of various distortions and transformations. These were neither systematic enough to be scientific, nor did they try to achieve the ultimate exploitation of their medium to be really good art. They merely tried to point the way toward new possibilities. From the still graphics I shifted to animation, and some successful films were produced on our system by a number of artists working with the help of a programmer. But I was not sufficiently involved with this work, merely the producer allowing it to happen. As soon as our equipment and software are advanced enough to undertake ambitious concepts easily, I intend to combine my developing understanding of graphic simulation methods and of the new concepts of feedback, structure, system, randomness and so on to try to create a new combination of science and art.

My background was in mathematics, physics and meteorology by training, and for the last 21 years computers, learned on the job. An early interest in the possibility of computer art (first paper on the subject in 1964) led me to become an academic, and to computer graphics research, as well as many other fascinating ideas and people. There is a constant struggle within me between the symbolic mathematical, the visual artistic and the verbal literary modes of expression, with the verbal winning at the moment. I do have a fascination with the visual possibilities, especially as seen in the incredible complexity and variety in nature—combined within many organizing aspects. However, to express this is—at least for me—a difficult, time consuming and indirect process.

We need to find those things which uniquely suit these new media, which can only be expressed with their help, and thus make the effort worthwhile. I look for the fresh wind of ideas from the new wave of art students who will be literate in the information sciences, and conversant with interactive computers and the new processes which they can help visually explicate.

Toronto, Canada
July 1975

2008-09-27

>> Gene Youngblood, Expanded Cinema, 1970 (full pdf and excerpts)

This excerpt is part 4 - Cybernetic Cinema and Computer Films - of Youngblood's seminal book "Expanded Cinema" (1970). The whole book can be downloaded from the Vasulkas' archive: http://www.vasulka.org/Kitchen/PDF_ExpandedCinema/ExpandedCinema.html



THE HUMAN BIO-COMPUTER AND HIS ELECTRONIC BRAINCHILD

The verb "to compute" in general usage means to calculate. A computer, then, is any system capable of accepting data, applying prescribed processes to them, and supplying results of these processes. The first computer, used thousands of years ago, was the abacus.
There are two types of computer systems: those that measure and those that count. A measuring machine is called an analogue computer because it establishes analogous connections between the measured quantities and the numerical quantities supposed to represent them. These measured quantities may be physical distances, volumes, or amounts of energy. Thermostats, rheostats, speedometers, and slide rules are examples of simple analogue computers.
A counting machine is called a digital computer because it consists entirely of two-way switches that perform direct, not analogous, functions. These switches operate with quantities expressed directly as digits or discrete units of a numerical system known as the binary system.7 This system has 2 as its base. (The base of the decimal system is 10, the base of the octal system is 8, the base of the hexadecimal system is 16, and so on.) The binary code used in digital computers is expressed in terms of one and zero (1-0), representing on or off, yes or no. In electronic terms its equivalent is voltage or no voltage. Voltages are relayed through a sequence of binary switches in which the opening of a later switch depends on the action of precise combinations of earlier switches leading to it.
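
[--> my own annotation, not part of Youngblood's text: a tiny python sketch of the number systems he lists, showing the same number written in decimal, octal, hexadecimal and binary. each binary digit (bit) stands for one of the two-way switches: 1 = voltage, 0 = no voltage.]

n = 1970
print(format(n, "d"))  # decimal, base 10 -> 1970
print(format(n, "o"))  # octal, base 8 -> 3662
print(format(n, "x"))  # hexadecimal, base 16 -> 7b2
print(format(n, "b"))  # binary, base 2 -> 11110110010
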
The term binary digit usually is abbreviated as bit, which is used also as a unit of measurement of information. A computer is said to have a "million-bit capacity," or a laser hologram is described as requiring 10^9 bits of information to create a three-dimensional image.
The largest high-velocity digital computers have a storage capacity from four thousand to four million bits consisting of twelve to forty-eight digits each. The computer adds together two forty-eight digit numbers simultaneously, whereas a man must add each pair of digits successively. The units in which this information is stored are called ferrite memory cores. As the basic component of the electronic brain, the ferrite memory core is equivalent to the neuron, the fundamental element of the human brain, which is also a digital computer. The point at which a nerve impulse passes from one neuron to another is called a synapse, which measures about 0.5 micron in diameter. Through microelectronic techniques of Discretionary Wiring and Large Scale Integration (LSI), circuit elements of five microns are now possible. That is, the size of the computer memory core is approaching the size of the neuron. A complete computer function with an eight-hundred-bit memory has been constructed only nineteen millimeters squared.8
The time required to insert or retrieve one bit of information is known as memory cycle time. Whereas neurons take approximately ten milliseconds (10^-2 second) to transmit information from one to another, a binary element of a ferrite memory core can be reset in one hundred nanoseconds, or one hundred billionths of a second (10^-7 second). This means that computers are about one hundred thousand times faster than the human brain. This is largely offset, however, by the fact that computer processing is serial whereas the brain performs parallel processing. Although the brain conducts millions of operations simultaneously, most digital computers conduct only one computation at any one instant in time.9 Brain elements are much more richly connected than the elements in a computer. Whereas an element in a computer rarely receives simultaneous inputs from two other units, a brain cell may be simultaneously influenced by several hundred other nerve cells.10 Moreover, while the brain must sort out and select information from the nonfocused total field of the outside world, data input to a computer is carefully pre-processed.
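
[--> my own annotation: a quick check of the speed comparison in the paragraph above, not from the book.]

neuron_time = 10e-3   # ~10 milliseconds (10^-2 s) per neuron-to-neuron transmission
core_time = 100e-9    # ~100 nanoseconds (10^-7 s) to reset a ferrite memory core
print(neuron_time / core_time)  # -> 100000.0, the "one hundred thousand times faster"
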


HARDWARE AND SOFTWARE
It is often said that computers are "extraordinarily fast and extraordinarily accurate, but they also are exceedingly stupid and therefore have to be told everything." This process of telling the computer everything is called computer programming. The hardware of the human bio-computer is the physical cerebral cortex, its neurons and synapses. The software of our brain is its logic or intelligence, that which animates the physical equipment. That is to say, hardware is technology whereas software is information. The software of the computer is the stored set of instructions that controls the manipulation of binary numbers. It usually is stored in the form of punched cards or tapes, or on magnetic tape. The process by which information is passed from the human to the machine is called computer language. Two of the most common computer languages are Algol, derived from "Algorithmic Language," and Fortran, derived from "Formula Translation."
The basis of any program is an algorithm— a prescribed set of rules that define the parameters, or discrete characteristics, of the solution to a given problem. The algorithm is the solution, as opposed to the heuristics or methods of finding a solution. In the case of computer-generated graphic images, the problem is how to create a desired image or succession of images. The solution usually is in the form of polar equations with parametric controls for straight lines, curves, and dot patterns.
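
[--> my own annotation: a hypothetical python sketch of what "polar equations with parametric controls" might look like as a program that emits xy plotting coordinates for a curve. a modern illustration, not the historical software Youngblood describes.]

import math

def rose(theta, petals=5, radius=1.0):
    # polar equation r = radius * cos(petals * theta)
    return radius * math.cos(petals * theta)

points = []
steps = 720
for i in range(steps):
    theta = 2 * math.pi * i / steps
    r = rose(theta)
    # convert polar (r, theta) into the xy coordinates a plotter or CRT expects
    points.append((r * math.cos(theta), r * math.sin(theta)))

print(points[:3])
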
Computers can be programmed to simulate "conceptual cameras" and the effects of other conceptual filmmaking procedures. Under a grant from the National Science Foundation in 1968, electrical engineers at the University of Pennsylvania produced a forty-minute instructional computer film using a program that described a "conceptual camera," its focal plane and lens angle, panning and zoom actions, fade-outs, double-exposures, etc. A program of "scenario description language" was written which, in effect, stored fifty years of moviemaking techniques and concepts into an IBM 360-65 computer.11
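
[--> my own annotation: a purely speculative sketch of what a "scenario description language" could look like, as a shot list of conceptual-camera commands. the command names and parameters are invented for illustration; this is not the University of Pennsylvania program.]

scenario = [
    ("fade_in",  {"frames": 24}),
    ("pan",      {"degrees": 30, "frames": 48}),
    ("zoom",     {"factor": 2.0, "frames": 72}),
    ("fade_out", {"frames": 24}),
]

frame = 0
for command, args in scenario:
    print("frame %4d: %s %s" % (frame, command, args))
    frame += args["frames"]
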
In the last decade seventy percent of all computer business was in the area of central processing hardware, that is, digital computers themselves. Authorities estimate that the trend will be completely reversed in the coming decade, with seventy percent of profits being made in software and the necessary input-output terminals. At present, software equals hardware in annual sales of approximately $6.5 billion, and is expected to double by 1975.12 Today machines read printed forms and may even decipher handwriting. Machines "speak" answers to questions and are voice-actuated. Computers play chess at tournament level. In fact, one of the first instances of a computer asking itself an original question occurred in the case of a machine programmed to play checkers and backgammon simultaneously. A situation developed in which it had to make both moves in one reset cycle and thus had to choose between the two, asking itself: "Which is more important, checkers or backgammon?" It selected backgammon on the grounds that more affluent persons play that game, and since the global trend is toward more wealth per each world person, backgammon must take priority.13
Machine tools in modern factories are controlled by other machines, which themselves have to be sequenced by higher-order machines. Computer models can now be built that exhibit many of the characteristics of human personality, including love, fear, and anger. They can hold beliefs, develop attitudes, and interact with other machines and human personalities. Machines are being developed that can manipulate objects and move around autonomously in a laboratory environment. They explore and learn, plan strategies, and can carry out tasks that are incompletely specified.14
So-called learning machines such as the analogue UCLM II from England, and the digital Minos II developed at Stanford University, gradually are phasing out the prototype digital computer. A learning machine has been constructed at the National Physical Laboratory that learns to recognize and to associate differently shaped shadows which the same object casts in different positions.15 These new electronic brains are approaching speeds approximately one million times faster than the fastest digital computers. It is estimated that the next few generations of learning machines will be able to perform in five minutes what would take a digital computer ten years. The significance of this becomes more apparent when we realize that a digital computer can process in twenty minutes information equivalent to a human lifetime of seventy years at peak performance.16
N. S. Sutherland: "There is a real possibility that we may one day be able to design a machine that is more intelligent than ourselves. There are all sorts of biological limitations on our own intellectual capacity ranging from the limited number of computing elements we have available in our craniums to the limited span of human life and the slow rate at which incoming data can be accepted. There is no reason to suppose that such stringent limitations will apply to computers of the future... it will be much easier for computers to bootstrap themselves on the experience of previous computers than it is for man to benefit from the knowledge acquired by his predecessors. Moreover, if we can design a machine more intelligent than ourselves, then a fortiori that machine will be able to design one more intelligent than itself.''17
The number of computers in the world doubles each year, while computer capabilities increase by a factor of ten every two or three years. Herman Kahn: "If these factors were to continue until the end of the century, all current concepts about computer limitations will have to be reconsidered. Even if the trend continues for only the next decade or so, the improvements over current computers would be factors of thousands to millions... By the year 2000 computers are likely to match, simulate or surpass some of man's most 'human-like' intellectual abilities, including perhaps some of his aesthetic and creative capacities, in addition to having new kinds of capabilities that human beings do not have... If it turns out that they cannot duplicate or exceed certain characteristically human capabilities, that will be one of the most important discoveries of the twentieth century.''18
Dr. Marvin Minsky of M.I.T. has predicted: "As the machine improves... we shall begin to see all the phenomena associated with the terms 'consciousness,' 'intuition' and 'intelligence.' It is hard to say how close we are to this threshold, but once it is crossed the world will not be the same... it is unreasonable to think that machines could become nearly as intelligent as we are and then stop, or to suppose that we will always be able to compete with them in wit and wisdom. Whether or not we could retain some sort of control of the machines— assuming that we would want to— the nature of our activities and aspirations would be changed utterly by the presence on earth of intellectually superior entities.''19 But perhaps the most portentous implication in the evolving symbiosis of the human biocomputer and his electronic brainchild was voiced by Dr. Irving John Good of Trinity College, Oxford, in his prophetic statement: "The first ultra-intelligent machine is the last invention that man need make."20


THE AESTHETIC MACHINE
As the culmination of the Constructivist tradition, the digital computer opens vast new realms of possible aesthetic investigation. The poet Wallace Stevens has spoken of "the exquisite environment of fact." Conventional painting and photography have explored as much of that environment as is humanly possible. But, as with other hidden realities, is there not more to be found there? Do we not intuit something in the image of man that we never have been able to express visually? It is the belief of those who work in cybernetic art that the computer is the tool that someday will erase the division between what we feel and what we see. Aesthetic application of technology is the only means of achieving new consciousness to match our new environment. We certainly are not going to love computers that guide SAC missiles. We surely do not feel warmth toward machines that analyze marketing trends. But perhaps we can learn to understand the beauty of a machine that produces the kind of visions we see in expanded cinema.
It is quite clear in what direction man's symbiotic relation to the computer is headed: if the first computer was the abacus, the ultimate computer will be the sublime aesthetic device: a parapsychological instrument for the direct projection of thoughts and emotions. A. M. Noll, a pioneer in three-dimensional computer films at Bell Telephone Laboratories, has some interesting thoughts on the subject: "...the artist's emotional state might conceivably be determined by computer processing of physical and electrical signals from the artist (for example, pulse rate and electrical activity of the brain). Then, by changing the artist's environment through such external stimuli as sound, color and visual patterns, the computer would seek to optimize the aesthetic effect of all these stimuli upon the artist according to some specified criterion... the emotional reaction of the artist would continually change, and the computer would react accordingly either to stabilize the artist's emotional state or to steer it through some pre-programmed course. One is strongly tempted to describe these ideas as a consciousness-expanding experience in association with a psychedelic computer..."

[image caption: successive stereo pairs from a film by A. Michael Noll of Bell Telephone Laboratories, demonstrating the rotation, on four mutually perpendicular axes, of a four-dimensional hypercube projected onto dual two-dimensional picture planes in simulated three-dimensional space. The viewer wears special polarized glasses such as those common in 3-D movies of the early 1950's. It was an attempt to communicate an intuitive understanding of four-dimensional objects, which in physics are called hyperobjects. A computer can easily construct, in mathematical terms, a fourth spatial dimension perpendicular to our three spatial dimensions. Only a fourth digit is required for the machine to locate a point in four-dimensional space.]


This chapter on computer films might be seen as an introduction to the first tentative, crude experiments with the medium. No matter how impressive, they are dwarfed by the knowledge of what computers someday will be able to do. The curious nature of the technological revolution is that, with each new step forward, so much new territory is exposed that we seem to be moving backwards. No one is more aware of current limitations than the artists themselves.
As he has done in other disciplines without a higher ordering principle, man so far has used the computer as a modified version of older, more traditional media. Thus we find it compared to the brush, chisel, or pencil and used to facilitate the efficiency of conventional methods of animating, sculpting, painting, and drawing. But the chisel, brush, and canvas are passive media whereas the computer is an active participant in the creative process. Robert Mallary, a computer scientist involved in computer sculpture, has delineated six levels of computer participation in the creative act. In the first stage the machine presents proposals and variants for the artist's consideration without any qualitative judgments, yet the man/machine symbiosis is synergetic. At the second stage, the computer becomes an indispensable component in the production of an art that would be impossible without it, such as constructing holographic interference patterns. In the third stage, the machine makes autonomous decisions on alternative possibilities that ultimately govern the outcome of the artwork. These decisions, however, are made within parameters defined in the program. At the fourth stage the computer makes decisions not anticipated by the artist because they have not been defined in the program. This ability does not yet exist for machines. At the fifth stage, in Mallary's words, the artist "is no longer needed" and "like a child, can only get in the way." He would still, however, be able to "pull out the plug," a capability he will not possess when and if the computer ever reaches the sixth stage of "pure disembodied energy."22

Returning to more immediate realities, A. M. Noll has explained the computer's active role in the creative process as it exists today:
"Most certainly the computer is an electronic device capable of performing only those operations that it has been explicitly instructed to perform. This usually leads to the portrayal of the computer as a powerful tool but one incapable of any true creativity. However, if 'creativity' is restricted to mean the production of the unconventional or the unpredicted, then the computer should instead be portrayed as a creative medium— an active and creative collaborator with the
artist... because of the computer's great speed, freedom from error, and vast abilities for assessment and subsequent modification of programs, it appears to us to act unpredictably and to produce the unexpected. In this sense the computer actively takes over some of the artist's creative search. It suggests to him syntheses that he may or may not accept. It possesses at least some of the external attributes of creativity."23
Traditionally, artists have looked upon science as being more important to mankind than art, whereas scientists have believed the reverse. Thus in the confluence of art and science the art world is understandably delighted to find itself suddenly in the company of science. For the first time, the artist is in a position to deal directly with fundamental scientific concepts of the twentieth century. He can now enter the world of the scientist and examine those laws that describe a physical reality. However, there is a tendency to regard any computer-generated art as highly significant— even the most simplistic line drawing, which would be meaningless if rendered by hand. Conversely, the scientific community could not be more pleased with its new artistic image, interpreting it as an occasion to relax customary scientific disciplines and accept anything random as art. A solution to the dilemma lies somewhere between the polarities and surely will evolve through closer interaction of the two disciplines.

When that occurs we will find that a new kind of art has resulted from the interface. Just as a new language is evolving from the binary elements of computers rather than the subject-predicate relation of the Indo-European system, so will a new aesthetic discipline that bears little resemblance to previous notions of art and the creative process. Already the image of the artist has changed radically.
In the new conceptual art, it is the artist's idea and not his technical ability in manipulating media that is important. Though much emphasis currently is placed on collaboration between artists and technologists, the real trend is more toward one man who is both artistically and technologically conversant. The Whitney family, Stan VanDerBeek, Nam June Paik, and others discussed in this book are among the first of this new breed. A. M. Noll is one of them, and he has said: "A lot has been made of the desirability of collaborative efforts between artists and technologists. I, however, disagree with many of the assumptions upon which this desirability supposedly is founded. First of all, artists in general find it extremely difficult to verbalize the images and ideas they have in their minds. Hence the communication of the artist's ideas to the technologist is very poor indeed. What I do envision is a new breed of artist... a man who is extremely competent in both technology and the arts."
Thus Robert Mallary speaks of an evolving "science of art... because programming requires logic, precision and powers of analysis as well as a thorough knowledge of the subject matter and a clear idea of the goals of the program... technical developments in programming and hardware will proceed hand in glove with a steady increase in the theoretical knowledge of art, as distinct from the intuitive and pragmatic procedures which have characterized the creative process up to now."


CYBERNETIC CINEMA
Three types of computer output hardware can be used to produce movies: the mechanical analogue plotter, the "passive" microfilm plotter and the "active" cathode-ray tube (CRT) display console.
Though the analogue plotter is quite useful in industrial and scientific engineering, architectural design, systems analysis, and so forth, it is rather obsolete in the production of aesthetically-motivated computer films. It can be and is used to make animated films but is best suited for still drawings. Through what is known as digital-to-analogue conversion, coded signals from a computer drive an armlike servomechanism that literally draws pen or pencil lines on flatbed or drum carriages. The resulting flow charts, graphs, isometric renderings, or realist images are incrementally precise but are too expensive and time-consuming for nonscientific movie purposes. William Fetter of the Boeing Company in Seattle has used mechanical analogue plotting systems to make animated films for visualizing pilot and cockpit configurations in aircraft design. Professor Charles Csuri of Ohio State University has created "random wars" and other random and semi-random drawings using mechanical plotters for realist images.
However, practically all computer films are made with cathode-ray tube digital plotting output systems. The cathode-ray tube, like the oscilloscope, is a special kind of television tube. It's a vacuum tube in which a grid between cathode and anode poles emits a narrow beam of electrons that are accelerated at high velocity toward a phosphor-coated screen, which fluoresces at the point where the electrons strike. The resulting luminescent glow is called a "trace-point." An electromagnetic field deflects the electron beam along predetermined patterns by electronic impulses that can be broadcast, cabled, or recorded on tape. This deflection capability follows vertical and horizontal increments expressed as xy plotting coordinates. Modern three-inch CRTs are capable of responding to a computer's "plot-point" and "draw-line" commands at a rate of 100,000 per second within a field of 16,000 possible xy coordinates— that is, approximately a million times faster and more accurate than a human draftsman. When interfaced with a digital computer, the CRT provides a visual display of electronic signal information generated by the computer program.
The passive microfilm plotter is the most commonly used output system for computer movies. It's a self-contained film-recording unit in which a movie camera automatically records images generated on the face of a three-inch CRT. The term "microfilm" is confusing to filmmakers not conversant with industrial or scientific language. It simply indicates conventional emulsion film in traditional 8mm., 16mm., or 35mm. formats, used in a device not originally intended for the production of motion pictures, but rather still pictures for compact storage of large amounts of printed or pictorial information. Users of microfilm plotters have found, however, that their movie-producing capability is at least as valuable as their storage-and-retrieval capability. Most computer films are not aesthetically-motivated. They are made by scientists, engineers, and educators to facilitate visualization and rapid assimilation of complex analytic and abstract concepts.
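
[--> my own annotation: an illustrative sketch of the kind of "plot-point" / "draw-line" command stream described above, generated for a simple square inside an integer xy coordinate field. hypothetical, not an actual plotter instruction set.]

def draw_square(x, y, size):
    corners = [(x, y), (x + size, y), (x + size, y + size), (x, y + size), (x, y)]
    commands = [("plot-point", corners[0])]
    for a, b in zip(corners, corners[1:]):
        commands.append(("draw-line", a, b))
    return commands

for cmd in draw_square(4000, 4000, 2000):
    print(cmd)
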
In standard cinematography the shutter is an integral part of the camera's drive mechanism, mechanically interlocked with the advance-claws that pull successive frames down to be exposed. But cameras in microfilm plotters such as the Stromberg-Carlson 4020 or the CalComp 840 are specially designed so that the shutter mechanism is separate from the film pull-down. Both are operated automatically, along with the CRT display, under computer program control.
Some computer films, particularly those of John Whitney, are made with active twenty-one-inch CRTs such as the IBM 2250 Display Console with its light pen, keyboard inputs, and functional keys, whose use will be described in more detail later on. This arrangement is not a self-contained filmmaking unit; rather, a specially modified camera is set up in front of the CRT under automatic synchronous control of a computer program. This system is called "active" as opposed to the "passive" nature of the microfilm plotter because the artist can feed commands to the computer through the CRT by selecting variables with the light pen and the function keyboard, thus "composing" the picture in time as sequences develop (during filming, however, the light pen is not used and the CRT becomes a passive display of the algorithm). Also, until recently the display console was the only technique that allowed the artist to see the display as it was being recorded; recent microfilm plotters, however, are equipped with viewing monitors.
Since most standard microfilm plotters were not originally intended for the production of motion pictures, they are deficient in at least two areas that can be avoided by using the active CRT. First, film registration in microfilm plotters does not meet quality standards of the motion-picture industry, since frame-to-frame steadiness is not a primary consideration in conventional microfilm usage. Second, most microfilm plotters are not equipped to accept standard thousand-foot core-wound rolls of 35mm film, which of course is possible with magazines of standard, though control-modified, cameras used to photograph active CRTs. Recently, however, computer manufacturing firms such as Stromberg-Carlson have designed cameras and microfilm plotters that meet all qualifications of the motion-picture industry as the use of computer graphics becomes increasingly popular in television commercials and large animation firms. Passive CRT systems are preferred over active consoles for various reasons. First, the input capabilities of the active scope are rarely used in computer animation. Second, passive CRTs come equipped with built-in film recorders. Third, a synchronization problem can arise when filming from an active CRT scope, caused by the periodic "refreshing" of the image. This is similar to the "rolling" phenomenon that often occurs in the filming of a televised program. The problem is avoided in passive systems since each frame is drawn only once and the camera shutter remains open while the frame is drawn.
The terms "on-line," "off-line," and "real time" are used in describing computer output systems. Most digital plotting systems are designed to operate either on-line or off-line with the computer.
In an on-line system, plot commands are fed directly from the computer to the CRT. In an off-line system, plot commands are recorded on magnetic tape that can instruct the plotter at a later time. The term "real time" refers specifically to temporal relationships between the CRT, the computer, and the final film or the human operator's interaction with the computing system. For example, a real-time interaction between the artist and the computer is possible by drawing on the face of the CRT with the light pen. Similarly, if a movie projected at the standard 24 fps has recorded the CRT display exactly as it was drawn by the computer, this film is said to be a realtime representation of the display. A live-action shot is a real-time document of the photographed subject, whereas single-frame animation is not a real-time image, since more time was required in recording than in projecting.
Very few computer films of significant complexity are recorded in real-time operation. Only one such film, Peter Kamnitzer's City-Scape, is discussed in this book. This is primarily because the hardware necessary to do real-time computer filmmaking is rare and prohibitively expensive, and because real-time photography is not of crucial importance in the production of aesthetically-motivated films.
In the case of John Whitney's work, for example, although the imagery is reconceived for movie projection at 24 fps, it is filmed at about 8 fps. Three to six seconds are usually required to produce one image, and a twenty-second sequence projected at 24 fps may require thirty minutes of computer time to generate.
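
[--> my own annotation: the arithmetic behind that estimate, assuming roughly 3.75 seconds of computing per image.]

projection_fps = 24
sequence_seconds = 20
seconds_per_image = 3.75                           # "three to six seconds" per image
frames = projection_fps * sequence_seconds         # 480 frames
compute_minutes = frames * seconds_per_image / 60  # -> 30.0 minutes of computer time
print(frames, compute_minutes)
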
Most CRT displays are black-and-white. Although the Sandia Corporation and the Lawrence Radiation Laboratory have achieved dramatic results with full-color CRTs, the color of most computer films is added in optical printing of original black-and-white footage, or else colored filters can be superposed over the face of the CRT during photography. Full color and partial color displays are available. As in the case of City-Scape, however, a great deal of color quality is lost in photographing the CRT screen. Movies of color CRT displays invariably are washed-out, pale, and lack definition. Since black-and-white film stocks yield much higher definition than color film stocks, most computer films are recorded in black-and-white with color added later through optical printing.
A similar problem exists in computer-generated realistic imagery in motion. It will be noted that most films discussed here are nonfigurative, non-representational, i.e., concrete. Those films which do contain representational images— City-Scape, Hummingbird— are rather crude and cartoon-like in comparison with conventional animation techniques. Although computer films open a new world of language in concrete motion graphics, the computer's potential for manipulation of the realistic image is of far greater relevance for both artist and scientist. Until recently the bit capacity of computers far outstripped the potentials of existing visual subsystems, which did not have the television capability of establishing a continuous scan on the screen so that each point could be controlled in terms of shading and color. Now, however, such capabilities do exist and the tables are turned; the bit capacity necessary to generate television-quality motion images with tonal or chromatic scaling is enormously beyond present computer capacity.
Existing methods of producing realistic imagery still require some form of realistic input. The computer does not "understand" a command to make this portion of the picture dark gray or to give that line more "character." But it does understand algorithms that describe the same effects. For example, L. D. Harmon and Kenneth Knowlton at Bell Telephone have produced realistic pictures by scanning photographs with equipment similar to television cameras. The resulting signals are converted into binary numbers representing brightness levels at each point. These bits are transferred to magnetic tape, providing a digitized version of the photograph for computer processing. Brightness is quantized into eight levels of density represented by one of eight kinds of dots or symbols. They appear on the CRT in the form of a mosaic representation of the original photograph. The process is both costly and time consuming, with far less "realistic" results than conventional procedures.
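
[--> my own annotation: a toy version of the Harmon/Knowlton idea, quantizing brightness into eight levels and printing one of eight symbols per point. illustrative only, not their actual system; the symbol set is my own choice.]

symbols = " .:-=+*#"   # eight density levels, light to dark

def to_mosaic(brightness_rows):
    # brightness values assumed to run 0..255, as from a scanned photograph
    lines = []
    for row in brightness_rows:
        lines.append("".join(symbols[b * 8 // 256] for b in row))
    return "\n".join(lines)

sample = [[0, 32, 64, 96, 128, 160, 192, 224],
          [224, 192, 160, 128, 96, 64, 32, 0]]
print(to_mosaic(sample))
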
The Computer Image Company of Denver, Colorado, has devised two unique methods of producing cartoon-like representational computer graphics in real-time, on-line operation. Using special hybrid systems with the advantages of both digital and analogue computers, they generate images through optical scanning or acoustical and anthropometric controls. In the scanning process, called Scanimate, a television camera scans black-and-white or color transparencies; this signal is input to the Scanimate computer where it is segmented into as many as five different parts, each capable of independent movement in synchronization with any audio track, either music or commentary. The output is recorded directly onto film or videotape as an integral function of the Scanimate process.
The second computer image process, Animac, does not involve optical scanning. It generates its own images in conjunction with acoustical or anthropometric analogue systems. In the first instance the artist speaks into a microphone that converts the electrical signals into a form that modulates the cartoon image on the CRT. The acoustical input animates the cartoon mouth while other facial characteristics are controlled simultaneously by another operator. In the second method an anthropometric harness is attached to a person— a dancer, for example— with sensors at each of the skeletal joints. If the person moves his arm the image moves its arm; when the person dances the cartoon character dances in real-time synchronization, with six degrees of freedom in simulated three-dimensional space. It should be stressed that these cartoon images are only "representational" and not "realistic." The systems were designed specifically to reduce the cost of commercial filmmaking and not to explore serious aesthetic potentials. It's obvious, however, that such techniques could be applied to artistic investigation and to nonobjective graphic compositions.
Professor Charles Csuri's computer film, Hummingbird, was produced by digital scanning of an original hand-drawing of the bird. The computer translated the drawing into xy plotting coordinates and processed variations on the drawing, assembling, disassembling, and distorting its perspectives. Thus the images were not computer-generated so much as computer-manipulated. There's no actual animation in the sense of separately-moving parts. Instead a static image of the bird is seen in various perspectives and at times is distorted by reversals of the polar coordinates. Software requirements were minimal and the film has little value as art other than its demonstration of one possibility in computer graphics.
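
[--> my own annotation: a sketch of "computer-manipulated" rather than computer-generated drawing: take the xy coordinates of a digitized drawing and distort them. the points and the transformation are invented for illustration; this is not Csuri's program.]

import math

drawing = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.0), (1.0, -0.5)]   # stand-in for digitized points

def distort(points, angle=0.3, squeeze=0.5):
    out = []
    for x, y in points:
        # a simple rotation plus a vertical squeeze, one of many possible transformations
        xr = x * math.cos(angle) - y * math.sin(angle)
        yr = (x * math.sin(angle) + y * math.cos(angle)) * squeeze
        out.append((xr, yr))
    return out

print(distort(drawing))
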
Limitations of computer-generated realistic imagery exist both in the central processing hardware as well as visual output subsystems. Advancements are being made in output subsystems that go beyond the present bit-capacity of most computers. Chief among these is the "plasma crystal" panel, which makes possible billboard- or wall-size TV receivers as well as pocket-size TV sets that could be viewed in bright sunlight. The Japanese firms of Mitsubishi and Matsushita (Panasonic) seem to be leaders in the field, each having produced workable models. Meanwhile virtually every major producer of video technology has developed its own version. One of the pioneers of this process in the United States was Dr. George Heilmeier of RCA's David Sarnoff Research Center in Princeton, New Jersey. He describes plasma crystals (sometimes called liquid crystals) as organic compounds whose appearance and mechanical properties are those of a liquid, but whose molecules tend to form into large orderly arrays akin to the crystals of mica, quartz, or diamonds. Unlike luminescent or fluorescing substances, plasma crystals do not emit their own light: they're read by reflected light, growing brighter as their surroundings grow brighter.
It was discovered that certain liquid crystals can be made opalescent, and hence reflecting, by the application of electric current. Therefore in manufacturing such display systems a sandwich is formed of two clear glass plates, separated by a thin layer of clear liquid crystal material only one-thousandth of an inch thick. A reflective mirror-like conductive coating is deposited on the inside face of one plate, in contact with the liquid. On the inside of the other is deposited a transparent electrically-conductive coating of tin oxide. When an electric charge from a battery or wall outlet is applied between the two coatings, the liquid crystal molecules are disrupted and the sandwich takes on the appearance of frosted glass. The frostiness disappears, however, as soon as the charge is removed.
In order to display stationary patterns such as letters, symbols, or still images, the coatings are shaped in accordance with the desired pattern. To display motion the conductive coatings are laid down in the form of a fine mosaic whose individual elements can be charged independently, in accordance with a scanning signal such as is presently used for facsimile, television, and other electronic displays.
To make the images visible in a dark room or outdoors at night, both coatings are made transparent and a light source is installed at the edge of the screen. In addition it is possible to reflect a strong light from the liquid crystal display to project its images, enlarged many times, onto a wall or screen. The implications of the plasma crystal display system are vast. Since it is, in essence, a digital system composed of hundreds of thousands of discrete picture elements (PIXELS), it obviously is suitable as a computer graphics subsystem virtually without limitation, if only sufficient computing capabilities existed. The bit requirements necessary for computer generation of real-time realistic images in motion are as yet far beyond the present state of the art.
This is demonstrated in a sophisticated video-computer system developed by Jet Propulsion Laboratories in Pasadena, California, for translation of television pictures from Mars in the various Mariner projects. This fantastic system transforms the real-time TV signal into digital picture elements that are stored on special data-discs.
The picture itself is not stored; only its digital translation. The JPL video system consists of 480 lines of resolution, each line composed of 512 individual points. One single image, or "cycle," is thus defined by 245,760 points. In black-and-white, each of these points, individually selectable, can be set to display at any of 64 desired intensities on the gray scale between total black and total white.
Possible variations for one single image thus amount to 64 times 245,760. For color displays, the total image can be thought of as three independent images (one for each color constituent, red, blue, and green) or can be taken as a triplet specification for each of the 480 times 512 points. With each constituent being capable of 64 different irradiating levels in the color spectrum, a theoretical total of 262,144 different color shadings are possible for any given point in
the image. (The average human eye can perceive only 100 to 200 different color shadings.) These capabilities are possible only for single motionless images. Six bits of information are required to produce each of the 245,760 points that constitute one image or cycle, and several seconds are necessary to complete the cycle. Yet JPL scientists estimate that a computing capability of at least two megacycles (two million cycles) per second would be required to generate motion with the same image-transforming capabilities.
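spelled out, the arithmetic above is simple: 480 lines of 512 points give 245,760 picture elements per frame, 64 gray levels need 6 bits per point, and three 64-level color constituents give 64^3 = 262,144 shadings per point. a quick sketch of the figures (the numbers come from the text, the Python is only my annotation):

# worked check of the figures quoted above; frame geometry and bit depths
# are taken from the text, the totals are simple arithmetic.

LINES, POINTS_PER_LINE = 480, 512
GRAY_LEVELS = 64                                   # 64 intensities -> 6 bits per point

points_per_frame = LINES * POINTS_PER_LINE              # 245,760 points per image
bits_per_point = GRAY_LEVELS.bit_length() - 1           # 6 bits
bits_per_bw_frame = points_per_frame * bits_per_point   # 1,474,560 bits per b/w frame

# color: three constituents (red, green, blue), 64 levels each
color_shadings_per_point = GRAY_LEVELS ** 3             # 262,144

print(points_per_frame, bits_per_point, bits_per_bw_frame, color_shadings_per_point)
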
It is quite clear that human communication is trending toward these possibilities. If the visual subsystems exist today, it's folly to assume that the computing hardware won't exist tomorrow. The notion of "reality" will be utterly and finally obscured when we reach that point. There'll be no need for "movies" to be made on location since any conceivable scene will be generated in totally convincing reality
within the information processing system. By that time, of course, movies as we know them will not exist. We're entering a mythic age of electronic realities that exist only on a metaphysical plane. Meanwhile some significant work is being done in the development of new language through computer-generated, nonrepresentational
graphics in motion. I've selected several of the most prominent artists in the field and certain films, which, though not aesthetically motivated, reveal possibilities for artistic exploration. We'll begin with the Whitney family: John, Sr., and his brother James inaugurated a tradition; the sons John, Jr., Michael, and Mark are the first second-generation computer-filmmakers in history.

2008-09-25

>> Newsknitter

project done in 2007 by Mahir Yavuz and Ebru Kurbak, students at the Interface Cultures department (Christa Sommerer, Laurent Mignonneau) at the Art University of Linz.


from: http://casualdata.com/newsknitter/:

News Knitter is a data visualization project which focuses on knitted garments as an alternative medium to visualize large scale data.

The production of knitted garments is a highly complex process which involves computer support at various steps, from the design of both the fabric and the shape of the garment until it is ready to wear. In recent years, technical innovations in machine knitting have focused especially on patterning facilities. Patterns are usually designed by individuals according to current fashion trends and the intended target markets, and are multiplied through mass production. News Knitter translates this individual design process into a world-wide collaboration by using live data streams as the basis for pattern generation. Due to the dynamic nature of live data streams, the system generates patterns whose appearance is unpredictable.

News Knitter converts information gathered from daily political news into clothing. A live news feed from the Internet, covering the preceding 24 hours or another chosen period, is analyzed, filtered and converted into a unique visual pattern for a knitted sweater. The system consists of two types of software: one receives the content from the live feeds, the other converts it into visual patterns, and a fully computerized flat knitting machine produces the final output. Each sweater of News Knitter is thus the evidence/result of a specific day or period.

The exhibition consists of ten unique sweaters that are produced as sample outputs of the News Knitter project. The patterns of the sweaters are generated by using online global news of a particular day or local Turkish news of a particular time period.
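the description boils down to a two-stage pipeline: one program collects the day's news, a second one turns the text into a grid of stitches the flat knitting machine can execute. the sketch below is only a guess at what such a mapping could look like (word frequencies folded onto a stitch grid with a small yarn palette); it is not the News Knitter software itself, and the feed URL, grid size and palette are placeholders:

# hypothetical sketch of a news-to-knitting-pattern mapping, NOT the actual
# News Knitter software; feed URL, grid size and yarn palette are placeholders.
import re
import urllib.request
from collections import Counter

FEED_URL = "https://example.com/news.rss"    # placeholder news source
ROWS, COLS = 120, 180                        # assumed stitch grid of one sweater panel
PALETTE = [0, 1, 2, 3]                       # assumed yarn colors, by index

def fetch_text(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="ignore")

def pattern_from_news(text):
    """Map word frequencies onto a stitch grid: frequent words -> higher color index."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = [c for _, c in Counter(words).most_common(ROWS * COLS)]
    counts += [0] * (ROWS * COLS - len(counts))        # pad if the feed is short
    grid = []
    for r in range(ROWS):
        row = counts[r * COLS:(r + 1) * COLS]
        grid.append([PALETTE[min(c, len(PALETTE) - 1)] for c in row])
    return grid

# usage (needs a reachable feed):
# grid = pattern_from_news(fetch_text(FEED_URL))
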
compare it to "Struckmaschine" by Patrick Rüegg and Fabienne Blanc, 2007

2008-09-23

>> Duck and Cover

another classic: a civil defense film about how to protect yourself from the effects of a nuclear strike.
produced in 1952 by Archer Productions for the US Federal Civil Defense Administration.


>> A is for Atom

the classic 1950s animation about the different uses of the atom.
produced in 1953 by John Sutherland Productions, Inc. for General Electric.




2008-09-15

>> Roland Barthes, "Death of the Author", 1967

full text available here:

http://www.ubu.com/aspen/aspen5and6/threeEssays.html#barthes

or:

http://www.pdf-search-engine.com/roland-barthes-%E2%80%9Cthe-death-of-the-author,%E2%80%9D-1968-in-his-story-sarrasine-html-www.mariabuszek.com/kcai/PoMoSeminar/Readings/BarthesAuthor.html

In his story Sarrasine Balzac, describing a castrate disguised as a woman, writes the following
sentence: “This was woman herself, with her sudden fears, her irrational whims, her instinctive
worries, her impetuous boldness, her fussings, and her delicious sensibility.” Who is speaking
thus? Is it the hero of the story bent on remaining ignorant of the castrato hidden beneath the
woman? Is it Balzac the individual, furnished by his personal experience with a philosophy of
Woman? Is it Balzac the author professing “literary” ideas on femininity? Is it universal
wisdom? Romantic psychology? We shall never know, for the good reason that writing is the
destruction of every voice, of every point of origin. Writing is that neutral, composite, oblique
space where our subject slips away; the negative where all identity is lost, starting with the very
identity of the body writing.
No doubt it has always been that way. As soon as a fact is narrated no longer with a view
to acting directly on reality but intransitively, that is to say, finally outside of any function other
than that of the very practice of the symbol itself, this disconnection occurs, the voice loses its
origin, the author enters into his own death, writing begins. The sense of this phenomenon,
however, has varied; in ethnographic societies the responsibility for a narrative is never assumed
by a person but by a mediator, shaman or relator whose “performance”—the mastery of the
narrative code—may possibly be admired but never his “genius.” The author is a modern figure,
a product of our society in so far as, emerging from the Middle Ages with English empiricism,
French rationalism and the personal faith of the Reformation, it discovered the prestige of the
individual, of, as it is more nobly put, the “human person.” It is thus logical that in literature it
should be this positivism, the epitome and culmination of capitalist ideology, which has attached
the greatest importance to the “person” of the author. The author still reigns, in histories of
literature, biographies of writers, interviews, magazines, as in the very consciousness of men of
letters anxious to unite their person and their work through diaries and memoirs. The image of
literature to be found in ordinary culture is tyrannically centered on the author, his person, his
life, his tastes, his passions, while criticism still consists for the most part in saying that
Baudelaire’s work is the failure of Baudelaire the man, Van Gogh’s his madness, Tchaikovsky’s
his vice. The explanation of a work is always sought in the man or woman who produced it, as if
it were always in the end, through the more or less transparent allegory of the fiction, the voice
of a single person, the author “confiding” in us.
Though the sway of the Author remains powerful (the new criticism has often done no
more than consolidate it), it goes without saying that certain writers have long since attempted to
loosen it. In France, Mallarmé was doubtless the first to see and to foresee in its full extent the
necessity to substitute language itself for the person who until then had been supposed to be its
owner. For him, for us too, it is language which speaks, not the author; to write is, through a
prerequisite impersonality (not at all to be confused with the castrating objectivity of the realist
novelist), to reach that point where only language acts, “performs,” and not “me.” Mallarmé’s
entire poetics consists in suppressing the author in the interests of writing (which is, as will be
seen, to restore the place of the reader). Valéry, encumbered by a psychology of the Ego,
considerably diluted Mallarmé’s theory but, his taste for classicism leading him to turn to the
lessons of rhetoric, he never stopped calling into question and deriding the Author; he stressed
the linguistic and, as it were, “hazardous” nature of his activity, and throughout his prose works
he militated in favor of the essentially verbal condition of literature, in the face of which all recourse to the writer’s interiority seemed to him pure superstition. Proust himself, despite the
apparently psychological character of what are called his analyses, was visibly concerned with
the task of inexorably blurring, by an extreme subtilization, the relation between the writer and
his characters; by making of the narrator not he who has seen and felt nor even he who is writing,
but he who is going to write (the young man in the novel—but, in fact, how old is he and who is
he?—wants to write but cannot; the novel ends when writing at last becomes possible), Proust
gave modern writing its epic. By a radical reversal, instead of putting his life into his novel, as is
so often maintained, he made of his very life a work for which his own book was the model; so
that it is clear to us that Charlus does not imitate Montesquieu but that Montesquieu—in his
anecdotal, historical reality—is no more than a secondary fragment, derived from Charlus.
Lastly, to go no further than this prehistory of modernity, Surrealism, though unable to accord
language a supreme place (language being system and the aim of the movement being,
romantically, a direct subversion of codes—itself moreover illusory: a code cannot be destroyed,
only “played off”), contributed to the desacrilization of the image of the Author by ceaselessly
recommending the abrupt disappointment of expectations of meaning (the famous surrealist
“jolt”), by entrusting the hand with the task of writing as quickly as possible what the head itself
is unaware of (automatic writing), by accepting the principle and the experience of several
people writing together. Leaving aside literature itself (such distinctions really becoming
invalid), linguistics has recently provided the destruction of the Author with a valuable analytical
tool by showing that the whole of the enunciation is an empty process, functioning perfectly
without there being any need for it to be filled with the person of the interlocutors. Linguistically,
the author is never more than the instance writing, just as I is nothing other than the instance
saying I: language knows a “subject,” not a “person,” and this subject, empty outside of the very
enunciation which defines it, suffices to make language “hold together,” suffices, that is to say,
to exhaust it.
The removal of the Author (one could talk here with Brecht of a veritable “distancing,”
the Author diminishing like a figurine at the far end of the literary stage) is not merely an
historical fact or an act of writing; it utterly transforms the modern text (or—which is the same
thing—the text is henceforth made and read in such a way that at all its levels the author is
absent). The temporality is different. The Author, when believed in, is always conceived of as
the past of his own book: book and author stand automatically on a single line divided into a
before and an after. The Author is thought to nourish the book, which is to say that he exists
before it, thinks, suffers, lives for it, is in the same relation of antecedence to his work as a father
to his child. In complete contrast, the modern scriptor is born simultaneously with the text, is in
no way equipped with a being preceding or exceeding the writing, is not the subject with the
book as predicate; there is no other time than that of the enunciation and every text is eternally
written here and now. The fact is (or, it follows) that writing can no longer designate an
operation of recording, notation, representation, “depiction” (as the Classics would say); rather, it
designates exactly what linguists, referring to Oxford philosophy, call a performative, a rare
verbal form (exclusively given in the first person and in the present tense) in which the
enunciation has no other content (contains no other proposition) than the act by which it is
uttered—something like the I declare of kings or the I sing of very ancient poets. Having buried
the Author, the modern scriptor can thus no longer believe, as according to the pathetic view of
his predecessors, that this hand is too slow for his thought or passion and that consequently,
making a law of necessity, he must emphasize this delay and indefinitely “polish” his form. For
him, on the contrary, the hand, cut off from any voice, borne by a pure gesture of inscription (and not of expression), traces a field without origin—or which, at least, has no other origin than
language itself, language which ceaselessly calls into question all origins. We know now that a
text is not a line of words releasing a single “theological” meaning (the “message” of the Author-
God) but a multi-dimensional space in which a variety of writings, none of them original, blend
and clash. The text is a tissue of quotations drawn from the innumerable centers of culture.
Similar to Bouvard and Pécuchet, those eternal copyists, at once sublime and comic and whose
profound ridiculousness indicates precisely the truth of writing, the writer can only imitate a
gesture that is always anterior, never original. His only power is to mix writings, to counter the
ones with the others, in such a way as never to rest on any one of them. Did he wish to express
himself, he ought at least to know that the inner “thing” he thinks to “translate” is itself only a
ready-formed dictionary, its words only explainable through other words, and so on indefinitely;
something experienced in exemplary fashion by the young Thomas de Quincey, he who was so
good at Greek that in order to translate absolutely modern ideas and images into that dead
language, he had, so Baudelaire tells us (in Paradis Artificiels), “created for himself an unfailing
dictionary, vastly more extensive and complex than those resulting from the ordinary patience of
purely literary themes.” Succeeding the Author, the scriptor no longer bears within him passions,
humours, feelings, impressions, but rather this immense dictionary from which he draws a
writing that can know no halt: life never does more than imitate the book, and the book itself is
only a tissue of signs, an imitation that is lost, infinitely deferred.
Once the Author is removed, the claim to decipher a text becomes quite futile. To give a
text an Author is to impose a limit on that text, to furnish it with a final signified, to close the
writing. Such a conception suits criticism very well, the latter then allotting itself the important
task of discovering the Author (or its hypostases: society, history, psyche, liberty) beneath the
work: when the Author has been found, the text is “explained”—victory to the critic. Hence there
is no surprise in the fact that, historically, the reign of the Author has also been that of the Critic,
nor again in the fact that criticism (be it new) is today undermined along with the Author. In the
multiplicity of writing, everything is to be disentangled, nothing deciphered; the structure can be
followed, “run” (like the thread of a stocking) at every point and at every level, but there is
nothing beneath: the space of writing is to be ranged over, not pierced; writing ceaselessly posits
meaning ceaselessly to evaporate it, carrying out a systematic exemption of meaning. In
precisely this way literature (it would be better from now on to say writing), by refusing to assign
a “secret,” an ultimate meaning, to the text (and to the world as text), liberates what may be
called an antitheological activity, an activity that is truly revolutionary since to refuse to fix
meaning is, in the end, to refuse God and his hypostases - reason, science, law.
Let us come back to the Balzac sentence. No one, no “person,” says it: its source, its
voice, is not the true place of the writing, which is reading. Another—very precise—example
will help to make this clear: recent research (J.-P. Vernant) has demonstrated the constitutively
ambiguous nature of Greek tragedy, its texts being woven from words with double meanings that
each character understands unilaterally (this perpetual misunderstanding is exactly the “tragic”);
there is, however, someone who understands each word in its duplicity and who, in addition,
hears the very deafness of the characters speaking in front of him—this someone being precisely
the reader (or here the listener). Thus is revealed the total existence of writing: a text is made of
multiple writings, drawn from many cultures and entering into mutual relations of dialogue,
parody, contestation, but there is one place where this multiplicity is focused and that place is the
reader, not, as was hitherto said, the author. The reader is the space on which all the quotations
that make up a writing are inscribed without any of them being lost; a text’s unity lies not in its origin but in its destination. Yet this destination cannot any longer be personal: the reader is
without history, biography, psychology; he is simply that someone who holds together in a single
field all the traces by which the written text is constituted. Which is why it is derisory to
condemn the new writing in the name of a humanism hypocritically turned champion of the
reader’s rights. Classic criticism has never paid any attention to the reader; for it, the writer is the
only person in literature. We are now beginning to let ourselves be fooled no longer by the
arrogant antiphrastical recriminations of good society in favor of the very thing it sets aside,
ignores, smothers or destroys; we know that to give writing its future, it is necessary to
overthrow the myth: the birth of the reader must be at the cost of the death of the Author.

... is a Media Art historian and researcher. She holds a PhD from the University of Art and Design Linz, where she works as an associate professor. Her PhD thesis, "Speculative Archiving and Digital Art", focuses on facial recognition and algorithmic bias; her master's thesis, "The Grammar of New Media", was on descriptive metadata for the media arts. For many years she has worked in archiving and documenting Media Art, most recently at the Ludwig Boltzmann Institute Media.Art.Research and before that as head of the Ars Electronica Futurelab's video studio, where she built up its archives and worked primarily with the archival material. She has taught the Prehystories of New Media class at the School of the Art Institute of Chicago (SAIC) and in the Media Art Histories program at Danube University Krems.