Christian Jutten, Athina Petropulu
The 75th anniversary of the IEEE Signal Processing Society (SPS) is an ideal time to look at the rapid advances in our field and the many ways that these increasingly powerful technologies have transformed our professions and the world. This is not just a time to celebrate past achievements and pat ourselves on the back, but also to educate young students and innovators about the history of our profession, the challenges we have overcome, and the breakthroughs that have led to the incredible growth of signal processing (SP). More importantly, this reflection will help shape our path forward by inspiring new innovations and by raising awareness of the ethical issues associated with evolving and emerging technologies. This awareness will help us develop meaningful safeguards and ensure responsible use of these technologies.
The 75th anniversary of the SPS coincides with another important 75th anniversary, that of a tiny yet mighty device: the transistor. This is not a mere coincidence. From its birth, signal and image processing (SIP) has been strongly associated with technological advances, especially in electronics and computers. In fact, SIP requires both sensors, for recording signals and images, and computers, for implementing smart and efficient processing.
For the sake of our younger SPS scientists, let’s start with a few milestones in the shared history of SIP and hardware. The first computers were enormous. ENIAC, built in 1943, occupied about 170 m², weighed 27 tons, and consumed 150 kW of power. Its computational speed was very limited: approximately 0.2 ms for an addition or subtraction, 2 ms for a multiplication, and up to 65 ms for a division or a square root! ENIAC was a decimal machine, but, a few years later, in 1946, in the framework of the Electronic Discrete Variable Automatic Computer (EDVAC) project, the concept of the von Neumann machine appeared, using binary coding and computing. The basic components of these machines were vacuum tubes.
From the 1950s up to the end of the 1960s, a few computers were built with transistors as discrete components. In 1958, Kilby (a Nobel Prize winner in 2000) invented the first integrated circuit, which was patented in 1964 by Texas Instruments. This invention had an incredible impact on the development of the digital world.
The first microprocessor appeared in 1971: the Intel 4004, a 4-bit microprocessor with 2,300 transistors and a clock frequency of about 100 kHz. This 16-pin integrated circuit, with a die of about 3.8 × 2.8 mm, had computational power similar to that of ENIAC! Of course, advances in microelectronics provided increasingly powerful integrated circuits and microprocessors. Here are just a few milestones that show this impressive growth.
The microprocessor Intel 8088 was the basic component of the first IBM personal computer, built in 1981. With 16 kB of random-access memory (RAM), extensible to 256 kB, and a 160-kB floppy disk, its price was quite high, and it was used primarily by companies and, later, by some laboratories.
Today, microprocessors are 64-bit and multicore, with billions of transistors and clock frequencies of about 5 GHz!
Until the 1980s, images were recorded using a Vidicon camera, based on a cathode-ray tube, which produces an image by scanning an electron beam. At that time, it was impossible to store such images in computer memory because the access time of the memory was not compatible with the speed of the scanning (30 frames/s and about 500 lines), and the capacity of the memory (even dynamic RAM) was too small: fewer than 256 kB [1]. In 1970, Boyle and Smith (Nobel Prize winners in 2009) published a paper on charge-coupled semiconductor devices (CCDs) [2], which could be used as image sensors. The first commercial CCD image sensors were proposed by Fairchild in 1974, with 100 × 100 pixels. Then, in 1983, Sony developed the first mass-produced consumer video camera based on a CCD sensor (CCD-G5), with 384 × 491 pixels. Now, the resolution of CCD or CMOS image sensors in a camera is about 8,000 × 6,000 pixels or better!
Advances in technology have also had a strong impact on the development of other electronic devices and sensors in many domains, especially medicine, remote sensing, transportation, and telecommunications.
Christian’s experiences as a researcher in his university lab in France provide some important perspectives on the impact of increasingly powerful technologies. “In 1980, about 20 researchers in three labs shared access to two 16-bit computers: an HP 1000 and a T1600 (from the French company Télémécanique). On average, we could use one machine for about 1 h per day, with a personal partition of 24 kB of memory—for both the program and the data! Programs were written in Fortran, and there was no graphical output: we had to manually draw curves from the numerical results.” One of Christian’s friends designed methods for handwritten character recognition: despite the small 128 × 32-pixel images, computations had to be done using integers, since coding and computing in floating point were impossible with 24 kB of memory.
Later, in 1985, Christian’s lab got its first PC, and it became possible to use other languages, like Pascal and Basic. “But the performance was still very limited,” he notes. “A very simple source separation program required about one hour to converge. In the lab, we did some simulations on computers, but, typically, Ph.D. students also built dedicated machines.” To overcome the computer’s slowness, Christian implemented the source separation algorithm with operational amplifiers, field-effect transistors, and other discrete components (Figure 1) [3]. The convergence of this analog implementation required only a few milliseconds, and he added a low-pass RC circuit to slow down the convergence so that it became observable!
Figure 1. This analog electronic implementation of a source separation algorithm was about 1 million times faster than the simulation on a PC available in 1985.
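For readers curious how such an algorithm looks in software, here is a minimal Python sketch of a Hérault-Jutten-style adaptive separation rule, in the spirit of the algorithm implemented in analog in [3]; the mixing matrix, the pair of nonlinearities, and the learning rate are illustrative choices, not the parameters of the original circuit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent sources: a sinusoid and uniform noise (illustrative).
n = 20000
t = np.arange(n)
s = np.vstack([np.sin(2 * np.pi * 0.01 * t),
               rng.uniform(-1.0, 1.0, n)])

# Unknown mixing matrix (illustrative values).
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])
x = A @ s                                # observed mixtures

# Recurrent separating network: y = x - C y, i.e., y = (I + C)^{-1} x.
C = np.zeros((2, 2))
mu = 1e-3                                # learning rate (illustrative)

for k in range(n):
    y = np.linalg.solve(np.eye(2) + C, x[:, k])
    # Hérault-Jutten-style rule: adapt the off-diagonal terms with two
    # different odd nonlinearities so that their cross-statistics vanish.
    dC = mu * np.outer(y ** 3, np.tanh(y))
    np.fill_diagonal(dC, 0.0)
    C += dC

print("Adapted separating matrix C:\n", C)
```

The design idea is that the adaptation drives the averaged product of the two nonlinear functions of the outputs toward zero, which, for independent sources, pushes the off-diagonal coefficients of C toward the unknown mixing coefficients.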
Christian’s team worked on artificial neural networks (ANNs), and, by the end of the 1980s, a few Ph.D. students had designed new systolic and parallel architectures, together with the related software, to overcome the limitations of classic computers for simulating ANNs.
Currently, e-mail and Internet access are essential tools in our lives, both personal and professional. We’ve become so used to fast, reliable, 24-h connectivity that we see it as a crisis if our web server is down for more than a few minutes. Young people wonder how it was possible to get work done, locate journal articles, do comprehensive research, share ideas, and communicate with each other without e-mail and the Internet. Christian remembers that one of the first e-mails that he sent in 1985 came back three weeks later, with an error message and the list of servers through which it had passed! Before reliable Internet and e-mail connectivity became available at the beginning of the 1990s, we had access to some printed journals in the lab or university library, and, when we had to share documents with collaborators outside our workplace, we did so by fax.
In that era, writing articles and papers for journals and conferences was also much trickier. “We used an electric typewriter,” says Christian. “If we needed to change the font, such as when typing equations or Greek characters, we had to replace the typeball.” When Christian’s lab acquired its first LaserJet in 1989, it became easy to print a text with different fonts in one step using PostScript. It also became possible to design figures on a computer and add them to the text, and later it became easy to include photos and images, too.
Now, tablets, laptops, PCs, and even smartphones are powerful and fast, with huge memories of tens of gigabytes and disks of a few terabytes. For very complex simulations and computations, researchers can share university-based and national computing centers with incredibly powerful machines. All of these means of high-performance computation seem commonplace today, but it’s important to be mindful that the growth of these tools has been extraordinarily fast over the last decades. The growth in SIP followed a similar trajectory, and it is easy to understand why image processing, computational imaging, wireless communications, and forensics, to name a few, didn’t appear until the 1990s: they required devices, sensors, and computers that didn’t exist or were not powerful enough.
By the 2020s, developments in integrated circuits had led to GPUs, whose highly parallel architectures are well suited to performing large numbers of operations efficiently. Multicore computers and GPUs provide researchers with the tools to train deep neural networks more quickly and efficiently than was previously possible. These tools enabled the development of large-scale deep learning frameworks, such as TensorFlow and PyTorch, making it easier for researchers and engineers to experiment with artificial intelligence (AI) models. They also supported the development of large-scale language models, such as ChatGPT, developed by OpenAI, enabling language translation, chatbots, and content generation.
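As a small, hypothetical illustration of what these frameworks make routine, the following PyTorch sketch trains a tiny classifier on synthetic data and moves the computation onto a GPU when one is available; the model, data, and hyperparameters are arbitrary choices made only for the example.

```python
import torch
import torch.nn as nn

# Use a GPU if one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Synthetic two-class data (purely illustrative).
X = torch.randn(1024, 16, device=device)
y = (X.sum(dim=1) > 0).long()

# A tiny fully connected network placed on the chosen device.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```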
With all of the computational power available today, AI can analyze large amounts of data and identify patterns and insights that might be difficult for humans to detect. AI is now capable of performing a wide range of tasks, including image recognition, natural language processing, decision making, and even creative tasks, such as music composition and art generation.
While AI can accelerate the pace of scientific discovery, it also poses several concerns. AI systems may perpetuate and amplify biases that exist in society, such as racial or gender bias. This can happen when the AI system is trained on biased data or when the algorithm itself is designed in a way that perpetuates bias. Another problem with AI methods is that they operate as “black boxes,” meaning that their inner workings are not transparent or easily understandable by humans. This can make it difficult to explain how the AI system arrived at a particular decision or prediction and can also make it challenging to identify and correct errors or biases in the system. When it comes to AI language models, such as ChatGPT, there are serious concerns stemming from the kind of information they access and from potential violations of data privacy and intellectual property rights. ChatGPT constructs its answers using various resources available on the web and other servers, but the accuracy of these sources cannot always be guaranteed. Additionally, it is important to consider whether such AI tools respect academic integrity. When scientists or students write a paper or report, they must carefully cite all sources used; otherwise, the work may be considered plagiarism. Unfortunately, in ChatGPT’s answers, sources are not always accurately referenced.
More research is needed to make AI systems more explainable and trustworthy by using interpretable or explainable machine learning algorithms. Such algorithms should produce results that can be easily understood by humans and provide insights into how the AI system arrived at its predictions or decisions. Additionally, it is crucial for such systems to provide a measure of uncertainty in their answers, such as confidence intervals or standard deviations, similar to scientific practices, where results are often reported with a range of values that account for possible variations in the data or measurement errors.
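As one concrete and deliberately simple way to attach uncertainty to a reported result, the sketch below computes a bootstrap confidence interval around a classifier’s test accuracy; the labels and predictions are made-up placeholders, and the bootstrap is a generic statistical device rather than a method prescribed by the text.

```python
import numpy as np

def bootstrap_accuracy_ci(y_true, y_pred, n_resamples=2000, alpha=0.05, seed=0):
    """Bootstrap confidence interval for classification accuracy.

    Instead of reporting a single accuracy number, resample the test set
    with replacement and report a (1 - alpha) interval, so the reported
    performance carries an explicit measure of uncertainty.
    """
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n = len(y_true)
    accs = np.empty(n_resamples)
    for b in range(n_resamples):
        idx = rng.integers(0, n, size=n)          # resample with replacement
        accs[b] = np.mean(y_true[idx] == y_pred[idx])
    lower, upper = np.quantile(accs, [alpha / 2, 1 - alpha / 2])
    return np.mean(y_true == y_pred), (lower, upper)

# Toy example with made-up predictions.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 1] * 20)
y_pred = np.array([0, 1, 0, 0, 1, 0, 1, 1, 1, 1] * 20)
acc, (ci_lo, ci_hi) = bootstrap_accuracy_ci(y_true, y_pred)
print(f"accuracy = {acc:.3f}, 95% CI = [{ci_lo:.3f}, {ci_hi:.3f}]")
```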
Another significant concern with AI methods, particularly deep learning algorithms, is that they consume a lot of power. These algorithms require large amounts of computational resources to train and run: they rely on huge servers and high-performance GPUs, which demand high power, large amounts of memory, and substantial communication between servers and GPUs. This energy consumption will only increase as the use of AI continues to grow and more powerful AI systems are developed. In addition to the environmental impact of energy consumption, high levels of power consumption can also result in higher operating costs and can limit the scalability and accessibility of AI systems.
There is increasing research and development focused on more energy-efficient AI systems. This involves a range of techniques and strategies, including the use of specialized hardware, such as tensor processing units; the development of more efficient algorithms and architectures; and the use of techniques, such as model compression and pruning, that reduce the computational requirements of AI systems. In addition to these technical approaches, there is also a need for broader policy and regulatory measures to encourage the development and adoption of energy-efficient AI systems. This could include incentives for energy-efficient design, regulations on the energy consumption of AI systems, and the development of standards and benchmarks to encourage the use of more energy-efficient AI technologies [4], [5], [6]. It is also important to consider whether AI is necessary to solve the problem at hand or whether simpler and less costly solutions exist. Furthermore, it’s crucial to evaluate the impact of any proposed AI solution on both humans and the environment. In evaluating and comparing AI systems, one should use metrics that take into account both performance and complexity or power consumption, such as the Akaike information criterion [7] or similar measures.
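To illustrate how such a criterion trades fit against complexity, here is a minimal sketch of the Akaike information criterion [7] in its common Gaussian-residual form, applied to two polynomial fits; the synthetic data and model orders are invented for the example.

```python
import numpy as np

def aic_gaussian(y, y_hat, k):
    """Akaike information criterion for a model with Gaussian residuals.

    AIC = n * ln(RSS / n) + 2k, where RSS is the residual sum of squares
    and k is the number of fitted parameters. Lower is better, so the
    extra parameters of a more complex model must pay for themselves
    through a genuinely better fit.
    """
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Synthetic data: a quadratic trend plus noise.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = 1.0 + 2.0 * x - 3.0 * x ** 2 + 0.2 * rng.standard_normal(x.size)

# Compare a 2nd-order and a 10th-order polynomial fit.
for degree in (2, 10):
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    print(f"degree {degree:2d}: AIC = {aic_gaussian(y, y_hat, degree + 1):.1f}")
```

The same idea carries over to comparing AI systems: a score that penalizes model complexity (or, by extension, power consumption) alongside raw performance discourages needlessly large solutions.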
The rapid evolution of technology has opened the doors to many extraordinary breakthroughs that have had an incredible impact on our field and will continue to transform the world. Let’s celebrate these achievements, but let’s also be mindful that, with these promising technologies, there can also be significant peril—to scientific progress, to society and human well-being, and to the ecological environment. Innovation comes with great responsibility. Let us all do our best to be smart and thoughtful as we navigate the future.
In this IEEE Signal Processing Magazine special issue celebrating the 75th anniversary of the SPS, you will find additional insights into the history of SPS during the last decades and more technical articles about the evolution, breakthroughs, and discoveries in different domains in SIP.
[1] S. Matsue et al., “A 256 K dynamic RAM,” in Proc. IEEE Int. Solid-State Circuits Conf. Dig. Tech. Papers, San Francisco, CA, USA, 1980, pp. 232–233, doi: 10.1109/ISSCC.1980.1156048.
[2] W. S. Boyle and G. E. Smith, “Charge coupled semiconductor devices,” Bell Syst. Tech. J., vol. 49, no. 4, pp. 587–593, Apr. 1970, doi: 10.1002/j.1538-7305.1970.tb01790.x.
[3] C. Jutten and J. Hérault, “Analog implementation of a permanent unsupervised learning algorithm,” in Proc. NATO Workshop Neurocomputing, Les Arcs, France, 1989, pp. 145–152, doi: 10.1007/978-3-642-76153-9_18.
[4] E. Azarkhish, D. Rossi, I. Loi, and L. Benini, “Neurostream: Scalable and energy efficient deep learning with smart memory cubes,” IEEE Trans. Parallel Distrib. Syst., vol. 29, no. 2, pp. 420–434, Feb. 2018, doi: 10.1109/TPDS.2017.2752706.
[5] W. J. McKibbin and A. C. Morris, Policy Challenges for the Global Transition to a Low-Carbon Economy. Washington, DC, USA: Brookings Institution, 2019.
[6] V. Galaz et al., “Artificial intelligence, systemic risks, and sustainability,” Technol. Soc., vol. 67, Nov. 2021, Art. no. 101741, doi: 10.1016/j.techsoc.2021.101741.
[7] H. Akaike, “A new look at the statistical model identification,” IEEE Trans. Autom. Control, vol. 19, no. 6, pp. 716–723, Dec. 1974, doi: 10.1109/TAC.1974.1100705.