A bit of history
"When I started working at the MIT Artificial Intelligence Lab in 1971, I became part of a software-sharing community that had existed for many years. Sharing of software was not limited to our particular community; it is as old as computers, just as sharing of recipes is as old as cooking. But we did it more than most. [...] We did not call our software free software, because that term did not yet exist; but that is what it was. Whenever people from another university or a company wanted to port and use a program, we gladly let them. If you saw someone using an unfamiliar and interesting program, you could always ask to see the source code, so that you could read it, change it, or cannibalize parts of it to make a new program." Richard Stallman, "The GNU Project" (originally published in the book Open sources) [208]
Although all the histories associated with IT are necessarily brief, free software's is one of the longest. In fact, we could say that in the beginning almost all developed software fulfilled the definition of free software, even though the concept did not yet exist. Later the situation changed completely, and proprietary software dominated the scene almost exclusively for a fairly long time. It was during that period that the foundations were laid for free software as we know it today, and when, bit by bit, free programs started to appear. Over time, these beginnings grew into a trend that has progressed and matured to the present day, when free software is a possibility worth considering in virtually all spheres. This history is largely unknown, to such an extent that for many IT professionals proprietary software is software "in its natural state". However, the situation is rather the opposite, and the seeds of change that could first be discerned in the first decade of the 21st century had already been sown in the early 1980s.

Bibliography

There are not many detailed histories of free software, and those that exist are usually papers limited to their main subject. In any case, interested readers can extend their knowledge of what we describe in this chapter by reading "Open Source Initiative. History of the OSI" [146] (http://www.opensource.org/docs/history.php), which emphasises the impact of free software on the business community in the years 1998 and 1999; "A brief history of free/open source software movement" [190], by Chris Rasch, which covers the history of free software up until the year 2000; or "The origins and future of open source software" (1999) [177], by Nathan Newman, which focuses to a large extent on the US Government's indirect promotion of free software or similar systems during the 1970s and 1980s.

Free software before free software

Free software as a concept did not appear until the beginning of the 1980s. However, its history can be traced back several years earlier.

And in the beginning it was free

During the sixties, the IT landscape was dominated by large computers, mainly installed in companies and governmental institutions. IBM was the leading manufacturer, way ahead of its competition. In this period, when a computer (the hardware) was bought, the software came bundled with it: as long as the maintenance contract was paid for, the manufacturer's entire software catalogue was available. Moreover, the idea of programs as something commercially "separate" was uncommon. Software was normally distributed together with its source code (in many cases only as source code) and, in general, with no practical restrictions. User groups such as SHARE (users of IBM systems) or DECUS (DEC users) participated in these exchanges and, to a certain extent, organised them. The "Algorithms" section of the journal Communications of the ACM was another good example of an exchange forum. We could say that during these early years of IT, software was free, at least in the sense that those who had access to it could normally also access the source code, and were used to sharing it, modifying it, and sharing those modifications. On 30th June 1969, IBM announced that as of 1970 it would sell part of its software separately (Burton Grad, 2002) [131]. This meant that its clients could no longer obtain the programs they needed included in the price of the hardware.
Software started to be perceived as something with intrinsic value and, consequently, it became more and more common to scrupulously restrict access to programs; users' ability to share, modify or study the software was limited as much as possible (both technically and legally). In other words, the situation changed to the one that continues to be the case in the world of software at the beginning of the 21st century.

Bibliography

Readers interested in this transition period can read, for example, "How the ICP Directory began" [226] (1998), in which Larry Welke discusses how one of the first software catalogues not associated with a manufacturer was born, and how during this process it was discovered that companies were prepared to pay for programs not made by their computer manufacturers.

By the mid-1970s it was already completely common, in the field of IT, to find proprietary software. This meant an enormous cultural change among professionals who worked with software, and marked the beginning of a boom in companies dedicated to this new business. It would still be almost a decade before what we now know as free software started to appear in an organised manner, as a reaction to this situation.

The 70s and early 80s

Even when the overwhelming trend was to explore the proprietary software model, there were initiatives that showed some of the characteristics of what would later be considered free software. In fact, some of them produced free software as we would define it today. Of these, we would mention SPICE, TeX and Unix, the latter being a much more complex case.

SPICE (Simulation Program with Integrated Circuit Emphasis) is a program developed at the University of California at Berkeley to simulate the electrical characteristics of an integrated circuit. It was developed and placed in the public domain by its author, Donald O. Pederson, in 1973. SPICE was originally a teaching tool, and as such it rapidly spread to universities worldwide, where it was used by many students of what was then an emerging discipline: integrated circuit design. Because it was in the public domain, SPICE could be redistributed, modified and studied. It could even be adapted to specific requirements, and that version could be sold as a proprietary product (which is what a large number of companies have done dozens of times throughout its history). With these characteristics, SPICE held all the cards to become the industry standard, with its different versions. And indeed, that is what happened. It was probably the first program with free software characteristics to capture a market for a certain period (that of integrated circuit simulators), and it was undoubtedly able to do so precisely thanks to those characteristics (in addition to its undeniable technical qualities).

Bibliography

More information on the history of SPICE can be found in "The life of SPICE", presented at the Bipolar Circuits and Technology Meeting, Minneapolis, MN, USA, in September 1996 [175]. You can find the SPICE web page at http://bwrc.eecs.berkeley.edu/Classes/IcBook/SPICE/.

Donald Knuth started to develop TeX during a sabbatical year in 1978. TeX is an electronic typesetting system commonly used for producing high-quality documents. From the start, Knuth used a licence that today would be considered a free software licence, and when the system was considered sufficiently stable, in 1985, he maintained that licence.
At that time, TeX was one of the largest and best-known systems that could be considered free software.

Bibliography

Some of the milestones in the history of TeX can be consulted online at http://www.math.utah.edu/software/plot79/tex/history.html [39]. For further details, the corresponding Wikipedia article is also extremely useful: http://www.wikipedia.org/wiki/TeX [233].

The early development of Unix

Unix, one of the first portable operating systems, was originally created by Thompson and Ritchie (among others) at AT&T's Bell Labs. It has continued to develop since its birth around 1972, giving rise to endless variants sold (literally) by dozens of companies. In 1973 and 1974, Unix arrived at many universities and research centres worldwide, with a licence that permitted its use for academic purposes. Although there were certain restrictions that prevented its free distribution, among the organisations that held a licence things worked in a way very similar to what would later be seen in many free software communities. Those who had access to the Unix source code were dealing with a system that they could study, improve and extend. A community of developers emerged around it, which soon gravitated towards the CSRG of the University of California at Berkeley. This community developed its own culture which, as we will see later, was very important in the history of free software. Unix was, to a certain extent, an early trial of what we would see with GNU and Linux several years later. It was confined to a much smaller community, and the AT&T licence was necessary, but in all other aspects its development was very similar (in a far less connected world).

Development methods inherent to free software

In Netizens. On the history and impact of Usenet and the Internet (IEEE Computer Society Press, 1997 [139], page 139) we can read a few lines that could refer to many free software projects: "Contributing to the value of Unix during its early development, was the fact that the source code was open and available. It could be examined, improved and customised". Page 142 of the same work states the following: "Pioneers like Henry Spencer agree on how important it was to those in the Unix community to have the source code. He notes how having the sources made it possible to identify and fix the bugs that they discovered. [...] Even in the late 1970s and early 1980s, practically every Unix site had complete sources". Marc Rochkind's "Interview with Dick Haight" (Unix Review, May 1986) [198] is even more explicit: "that was one of the great things about Unix in the early days: people actually shared each other's stuff. [...] Not only did we learn a lot in the old days from sharing material, but we also never had to worry about how things really worked because we always could go read the source."

Over time, Unix also became an early example of the problems that could arise from proprietary systems that at first sight seemed to "have some free software features". Towards the end of the 1970s, and especially during the 1980s, AT&T changed its policy, and access to new versions of Unix became difficult and expensive. The philosophy of the early years, which had made Unix so popular among developers, changed so radically that in 1991 AT&T even tried to sue the University of California at Berkeley for publishing the BSD Unix code that Berkeley's CSRG had created. But this is another story, which we will pick up later.
The beginning: BSD, GNU

All of the cases discussed in the previous section were either individual initiatives or did not strictly comply with the requirements of free software. It was not until the beginning of the 1980s that the first organised and conscious projects to create systems made up of free software appeared. During that period, and probably more importantly, the ethical, legal and financial foundations of these projects started to be established, and they have continued to develop and mature right up to the present day. And since the new phenomenon needed a name, this was when the term free software was first coined.

Richard Stallman, GNU, FSF: the free software movement is born

At the beginning of 1984, Richard Stallman, at the time employed at the MIT AI Lab, quit his job to start working on the GNU project. Stallman considered himself a hacker of the kind that enjoys sharing his technological interests and his code. He did not like the way that his refusal to sign exclusivity or non-sharing agreements made him an outcast in his own world, or how the use of proprietary software in his environment left him powerless in the face of situations that previously could easily be resolved. His idea on leaving MIT was to build a complete software system, for general use, but totally free ("The GNU Project", DiBona et al.) [208]. The system (and the project responsible for making it a reality) was called GNU ("GNU's Not Unix", a recursive acronym). Although from the beginning the GNU project included in its system software that was already available (such as TeX or, later, the X Window System), there was still a lot to be built. Richard Stallman started by writing a C compiler (GCC) and an editor (Emacs), both of which are still in use today (and very popular).

From the start of the GNU project, Richard Stallman was concerned about the freedoms that the users of the software would have. He wanted not only those who received programs directly from the GNU project to continue to enjoy the same rights (modification, redistribution, etc.), but also those who received the software after any number of redistributions and (potentially) modifications. For this reason he drafted the GPL, probably the first software licence designed specifically to guarantee that a program would remain free in this way. Stallman called the generic mechanism that GPL-type licences use to achieve these guarantees copyleft, which is still the name of a large family of free software licences (Free Software Foundation, GNU General Public Licence, version 2, June 1991) [118]. The note below sketches how such a licence is typically applied to a program. Richard Stallman also founded the Free Software Foundation (FSF) to obtain funds with which to develop and protect free software, and set out his ethical principles in "The GNU Manifesto" (Free Software Foundation, 1985) [117] and "Why software should not have owners" (Richard Stallman, 1998) [207].

From a technical point of view, the GNU project was conceived as a highly structured endeavour with very clear goals. The usual methodology was based on relatively small groups of people (usually volunteers) developing one of the tools that would then fit perfectly into the complete jigsaw (the GNU system). The modularity of Unix, which inspired the project, fitted this idea perfectly.
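Note

In practice, copyleft is applied by distributing a program under the GPL and placing a copyright and licence notice at the top of each source file. What follows is a minimal sketch of such a header for a hypothetical C program (the file, program and author names are invented for illustration; the wording is the notice recommended in the appendix of the GPL itself):

/*
 * frob.c - part of Frobnicator, a hypothetical GPL-covered program.
 * Copyright (C) 1991 A. N. Author
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

Because every copy and every derived version must carry this notice (together with the licence text and access to the source code), each recipient, no matter how many redistributions away from the original author, receives the same freedoms: this is the guarantee that copyleft provides.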
The working method generally relied on the Internet but, because at that time the Internet was not yet widely available, the Free Software Foundation also sold tapes on which it recorded the applications. This made it probably one of the first organisations to obtain financial compensation (albeit in a rather limited way) from creating free software. In the early 90s, about six years after the project was founded, GNU was very close to having a complete system similar to Unix. However, at that point it had not yet produced one of the key parts: the system's core (also known as the kernel, the part of the operating system that deals with the hardware, abstracts it, and allows applications to share resources and, essentially, to work). Nevertheless, GNU software was very popular among the users of the various Unix variants, at the time the most commonly used operating system in businesses. Additionally, the GNU project had managed to become relatively well known among IT professionals, and especially among those working at universities. In that period, its products already had a well-deserved reputation for stability and good quality.

Berkeley's CSRG

Since 1973, the CSRG (Computer Systems Research Group) of the University of California at Berkeley had been one of the centres where most of the Unix-related development took place, especially between 1979 and 1980. Not only were applications ported and new ones built to run on Unix, but important improvements were also made to the kernel and a lot of functionality was added. For example, during the 80s several DARPA contracts (under the US Department of Defense) financed the implementation of TCP/IP that is considered, even today, the reference implementation of the protocols that make the Internet work (in the process linking the development of the Internet with the expansion of Unix workstations). Many companies used the CSRG's developments as the basis for their Unix versions, giving rise to well-known systems of the time such as SunOS (Sun Microsystems) or Ultrix (Digital Equipment). This is how Berkeley became one of the two fundamental sources of Unix, together with the "official" one, AT&T.

In order to use all of the code that the CSRG produced (and the code of the collaborators in the Unix community, whose work it coordinated to some extent), it was necessary to have AT&T's Unix licence, which was becoming increasingly difficult (and expensive) to obtain, especially if access to the system's source code was required. Partly in an attempt to overcome this problem, in June 1989 the CSRG released the part of Unix related to TCP/IP (the implementation of the protocols in the kernel and the utilities), which did not include any AT&T code. It was called Networking Release 1 (Net-1). The licence under which it was released was the famous BSD licence which, except for certain problems with its advertising clause, has always been considered an example of a minimalist free software licence (one that, in addition to allowing free redistribution, also allows incorporation into proprietary products). In addition, the CSRG tried out a novel financing model (which the FSF was already using successfully): it sold tapes with its distribution for USD 1,000 each. Even though anybody could in turn redistribute the content of the tapes without any problem (because the licence allowed it), the CSRG sold tapes to thousands of organisations, thus obtaining funds with which to continue developing.
Having witnessed the success of the Net-1 distribution, Keith Bostic proposed rewriting all of the code that still remained from the original AT&T Unix. Despite the scepticism of some members of the CSRG, he made a public announcement asking for help to accomplish this task, and little by little the utilities (rewritten on the basis of specifications) were integrated into Berkeley's system. Meanwhile, the same process was carried out with the kernel, so that most of the code that had not been produced by Berkeley or volunteer collaborators was rewritten independently. In June 1991, after obtaining permission from the University of California's governing body, Networking Release 2 (Net-2) was distributed, with almost all of the kernel's code and all of the utilities of a complete Unix system. The set was once again distributed under the BSD licence, and thousands of tapes were sold at USD 1,000 per unit.

Just six months after the release of Net-2, Bill Jolitz wrote the code that was missing for the kernel to run on the i386 architecture, releasing 386BSD, which was distributed over the Internet. From that code later emerged, in succession, all the systems of the *BSD family: first NetBSD appeared, as a compilation of the patches that had been contributed over the Net to improve 386BSD; later FreeBSD appeared, as an attempt to focus on supporting the i386 architecture; several years later the OpenBSD project was formed, with an emphasis on security. And there was also a proprietary version based on Net-2 (although certainly an original one, since it offered its clients all the source code as part of the basic distribution), developed independently by the now defunct company BSDI (Berkeley Software Design Inc.).

Partly as a reaction to the distribution produced by BSDI, the AT&T subsidiary that held the Unix licence rights, Unix System Laboratories (USL), sued first BSDI and then the University of California, accusing them of distributing its intellectual property without permission. Following various legal manoeuvres (which included a countersuit by the University of California against USL), Novell bought the Unix rights from USL, and in January 1994 reached an out-of-court settlement with the University of California. As a result of this settlement, the CSRG distributed version 4.4BSD-Lite, which was soon used by all the projects of the *BSD family. Shortly afterwards (after releasing version 4.4BSD-Lite Release 2), the CSRG was disbanded. At that point some feared it would be the end of the *BSD systems, but time has shown that they are still alive and kicking, under a new form of management more typical of free software projects. Even in the first decade of the 2000s, the projects of the *BSD family are among the oldest and most consolidated in the world of free software.

Bibliography

The history of BSD Unix is illustrative of a peculiar way of developing software during the seventies and eighties. Readers interested in it can enjoy "Twenty years of Berkeley Unix" (Marshall Kirk McKusick, 1999) [170], which follows its evolution from the tape that Bob Fabry took to Berkeley with the idea of making one of the first versions of Thompson and Ritchie's code work on a PDP-11 (bought jointly by the departments of computer science, statistics and mathematics), through to the lawsuits filed by AT&T and the latest code releases that gave rise to the *BSD family of free operating systems.
The beginnings of the Internet

Almost since its creation in the 1970s, the Internet has been closely related to free software. On the one hand, from the beginning, the community of developers that built the Internet had several clear principles that would later become classics in the world of free software, for example the importance of users being able to help fix bugs or share code. The importance of BSD Unix in the Internet's development (providing, during the eighties, the most popular implementation of the TCP/IP protocols) made it easy to transfer many habits and ways of doing things from one community (the developers centred around the CSRG) to the other (the developers building what at the time was NSFNET and would later become the Internet), and vice versa. Many of the basic applications for the Internet's development, such as Sendmail (a mail server) or BIND (an implementation of the domain name service), were free and, to a great extent, the outcome of collaboration between these two communities. Finally, towards the end of the 80s and in the 90s, the free software community was one of the first to explore in depth the new possibilities that the Internet offered for collaboration among geographically dispersed groups. To a large extent, this exploration is what made possible the very existence of the BSD community and the FSF, and the development of GNU/Linux.

One of the most interesting aspects of the Internet's development, from the free software point of view, was the completely open management of its documents and its standards. Although it may seem normal today (because it is customary, for example, in the IETF or the World Wide Web Consortium), at the time the free availability of all its specifications and design documents, including the standards that define the protocols, was revolutionary and fundamental to its development. In Netizens. On the history and impact of Usenet and the Internet [139] (page 106) we can read:
"This open process encouraged and led to the exchange of information. Technical development is only successful when information is allowed to flow freely and easily between the parties involved. Encouraging participation is the main principle that made the development of the Net possible."
It is easy to see why this paragraph would almost certainly be endorsed by any developer speaking about the free software project in which he or she is involved. In another quote, from "The evolution of packet switching" [195] (page 267), we can read:
"Since ARPANET was a public project connecting many major universities and research institutions, the implementation and performance details were widely published."
Obviously, this is what tends to happen with free software projects, where all the information related to a project (and not only its implementation) is normally public.

In this atmosphere, and before the Internet became a fully fledged business (well into the nineties), the community of users and its relationship with developers was crucial. During that period many organisations learned to trust not a single supplier of data communication services, but rather a complex combination of service companies, equipment manufacturers, professional developers, volunteers, etc. The best implementations of many programs were not those that came with the operating system purchased with the hardware, but rather free implementations that would quickly replace them. The most innovative developments were not the outcome of large companies' research plans, but rather the product of students or professionals who tested ideas and collected the feedback sent to them by the many users of their free programs.

As we have already mentioned, the Internet also offered free software the basic tools for long-distance collaboration. Electronic mail, news groups, anonymous FTP services (which were the first massive stores of free software) and, later, web-based integrated development systems have been fundamental (and indispensable) for the development of the free software community as we know it today and, in particular, for the functioning of the immense majority of free software projects. From the outset, projects such as GNU or BSD made massive and intensive use of all these mechanisms, developing, at the same time as they used them, new tools and systems that in turn improved the Internet.

Bibliography

Readers interested in the evolution of the Internet can consult "A brief history of the Internet" (published by the ACM, 1997) [166], written by several of its key protagonists.
Other projects

During the 1980s many other important free software projects saw the light of day. For its importance and future relevance we highlight X Window (a windowing system for Unix-type systems), developed at MIT and one of the first examples of a free project receiving large-scale funding from a business consortium. It is also worth mentioning Ghostscript, a PostScript document management system developed by a company called Aladdin Software, which was one of the first cases of a search for a business model based on producing free software.

Towards the end of the 1980s there was already an entire constellation of small (and not so small) free software projects under way. All of them, together with the large projects mentioned so far, laid the foundations of the first complete free systems, which appeared in the early 1990s.
Everything in its way

Around 1990, most of the components of a complete system were already available as free software. On the one hand, the GNU project and the BSD distributions had completed most of the applications that make up an operating system. On the other hand, projects such as X Window or GNU itself had built everything from windowing environments to compilers, and these were often among the best in their class (for example, many administrators of SunOS or Ultrix systems would replace their system's proprietary applications with the free versions from GNU or BSD for their users). To have a complete system built exclusively of free software, just one component was missing: the kernel. Two separate and independent efforts came to fill that gap: 386BSD and Linux.

The quest for a kernel

Towards the end of the 1980s and beginning of the 1990s, the GNU project had a basic range of utilities and tools that made it possible to have a complete operating system. Even at the time, many free applications, including the particularly interesting case of X Window, were the best in their field (Unix utilities, compilers...). However, to complete the jigsaw a vital piece was missing: the operating system's kernel. The GNU project was seeking that missing piece with a project known as Hurd, which aimed to build a kernel using modern technologies.

The *BSD family

Practically at the same time, the BSD community was also on the path towards a free kernel. The Net-2 distribution was only missing six files to be complete (the rest had already been built by the CSRG or its collaborators). At the beginning of 1992, Bill Jolitz finished those files and distributed 386BSD, a system that ran on the i386 architecture and that in time would give rise to the NetBSD, FreeBSD and OpenBSD projects. Progress in the following months was fast, and by the end of the year the system was sufficiently stable to be used in non-critical production environments, including, for example, a windowing environment thanks to the XFree86 project (which had provided X Window for the i386 architecture) and a high-quality compiler, GCC. Although there were components under other licences (such as those from the GNU project, which used the GPL), most of the system was distributed under the BSD licence.

Note

Some episodes of this period illustrate the capability of free software development models. There is the well-known case of Linus Torvalds, who developed Linux while a second-year student at the University of Helsinki. But this is not the only case of a student who made his way thanks to his free developments. For example, the German Thomas Roel ported X11R4 (a version of the X Window System) to a 386-based PC. This development led him to work at Dell, and later to found the X386 and XFree86 projects, which were fundamental in quickly giving GNU/Linux and the *BSDs a windowing environment. You can read more about the history of XFree86 and Roel's role in it in "The history of XFree86" (Linux Magazine) [135].

Then came the lawsuit from USL, which made many potential users fear legal proceedings against themselves if the University of California were to lose the case, or simply that the project would come to a standstill. Perhaps this was the reason why, later on, the installed base of GNU/Linux became much greater than that of all the *BSDs combined. But we cannot know this for sure.
GNU/Linux comes onstage

In July 1991, Linus Torvalds (a 21-year-old Finnish student) posted the first message mentioning his (then) project to build a free system similar to Minix. In September he released the very first version (0.01), and every few weeks new versions would appear. In March 1994 version 1.0 appeared, the first to be called stable, although the kernel Linus was building had already been usable for several months. During this period, literally hundreds of developers turned to Linux, integrating around it all the GNU software, XFree86 and many other free programs. Unlike the *BSDs, the Linux kernel and a large number of the components integrated around it were distributed under the GPL licence.

Bibliography

The story of Linux is probably one of the most interesting (and best known) in the world of free software. You can find many links to information on it from the pages marking the 10th anniversary of its announcement, although probably one of the most interesting is the "History of Linux", by Ragib Hasan [138]. As a curiosity, you can consult the thread in which Linus Torvalds announced that he was starting to create what would later become Linux (in the newsgroup comp.os.minix) at http://groups.google.com/groups?th=d161e94858c4c0b9. There he explains that he had been working on his kernel since April, and that he had already ported some GNU project tools to it (specifically mentioning Bash and GCC).

Of the many developments to have emerged around Linux, one of the most interesting is the distribution concept. The first distributions appeared soon, in 1992 (MCC Interim Linux, from the University of Manchester; TAMU, from Texas A&M; and the best known, SLS, which later gave rise to Slackware, still being distributed in the first decade of the 2000s), bringing competition to the world of systems packaged around Linux. Each distribution tries to offer a ready-to-use GNU/Linux and, since they all start from the same software, each has to compete by offering improvements that its user base considers important. In addition to providing precompiled, ready-to-use packages, distributions also tend to offer their own tools for managing the selection, installation, replacement and uninstallation of these packages, as well as the initial installation on the computer and the management and administration of the operating system.

Over time, different distributions have succeeded one another as the most popular. Of them all, we would highlight the following:

- Debian, developed by a community of volunteer users.
- Red Hat Linux, first developed internally by the company Red Hat, which later adopted a more community-based model, giving rise to Fedora Core.
- SuSE, which gave rise to OpenSUSE, following an evolution similar to Red Hat's.
- Mandriva (successor of Mandrake Linux and Conectiva).
- Ubuntu, derived from Debian and produced by the company Canonical.

A time of maturation

Midway through the first decade of the 2000s, GNU/Linux, OpenOffice.org and Firefox appear in the media quite often. The overwhelming majority of companies use free software for at least some of their IT processes. It is difficult to be an IT student and not use large amounts of free software. Free software is no longer a footnote in the history of IT, but has become something very important for the sector.
IT companies, companies in the secondary sector (those that use software intensively, even though their primary activity is different) and public administrations are starting to consider it strategic. And slowly but surely it is reaching domestic users. In broad terms, we are entering a period of maturation. Underlying it all, an important question is starting to arise which in a way summarises what is happening: "are we facing a new model of software industry?". It may yet happen that free software becomes no more than a passing trend to be remembered nostalgically one day. But it may also be (and this seems increasingly likely) a new model that is here to stay, and perhaps to change radically one of the youngest but also most influential industries of our time.

End of the nineties

In the mid-1990s, free software already offered complete environments (distributions of GNU/Linux, *BSD systems...) that supported the daily work of many people, especially software developers. There were still many pending tasks (the main one being better graphical user interfaces, at a time when Windows 95 was considered the standard), but there were already several thousand people worldwide who used free software exclusively in their day-to-day work. New projects were announced in rapid succession, and free software embarked on its long path towards companies, the media and public awareness in general. This period is also associated with the take-off of the Internet as a network for everyone, in many cases led by free programs (especially in its infrastructure). The net's arrival in the homes of millions of end users consolidated this situation, at least as far as servers are concerned: the most popular web (HTTP) servers have always been free (first the NCSA server, followed by Apache).

Perhaps the beginning of free software's road to full public recognition is best described in the renowned essay by Eric Raymond, "The cathedral and the bazaar" (Eric S. Raymond, 2001) [192]. Although much of what it describes was already well known to the community of free software developers, putting it on paper and distributing it widely made it an influential tool for promoting the concept of free software as a development mechanism alternative to the one used by the traditional software industry. Another important paper of this period was "Setting up shop: the business of open source software" [141], by Frank Hecker, which for the first time described the potential business models for free software, and which was written in order to influence the decision to release the Netscape Navigator code.

Whereas Raymond's paper was a great tool for promoting some of the fundamental characteristics of free software, the release of the Netscape Navigator code was the first case in which a relatively large company, in a very innovative sector (the then nascent web industry), decided to release one of its products as free software. At that time, Netscape Navigator was losing the browser war against Microsoft's product (Internet Explorer), partly due to Microsoft's tactic of bundling it with its operating system. Many people believe that Netscape did the only thing it could do: try to change the rules in order to compete with a giant. And from this change in the rules (trying to compete with a free software model) the Mozilla project was born.
This project, which has had its own problems, led several years later to a browser that, although it has not recovered the enormous market share that Netscape once had, seems technically at least as good as its proprietary competitors. In any case, irrespective of its later success, Netscape's announcement that it would release its browser's source code had a great impact on the software industry. Many companies started to regard free software as worthy of consideration.

The financial markets also started paying attention to free software. In the euphoria of the dotcom boom, many free software companies became targets for investors. Perhaps the best-known case is that of Red Hat, one of the first companies to realise that selling CDs with ready-to-use GNU/Linux systems could be a viable business model. Red Hat started distributing its Red Hat Linux with a huge emphasis (at least by the standards of the time) on the system's ease of use and ease of maintenance for people without a specific IT background. Over time it diversified, staying within the orbit of free software, and in September 1998 it announced that Intel and Netscape had invested in it. "If it is good for Intel and Netscape, it must be good for us", is what many investors must have thought then. When Red Hat went public in the summer of 1999, the IPO was fully subscribed, and the share price soon rose spectacularly. It was the first time that a company had obtained financing from the stock exchange with a model based on free software. But it was not the only one: others, such as VA Linux or Andover.net (later acquired by VA Linux), did the same.

Note

Red Hat provides a list of its company milestones at http://fedora.redhat.com/about/history/.

During this period many companies were also born with business models based on free software. Despite not going public or achieving such tremendous market capitalisations, they were nevertheless very important for the development of free software. For example, many companies appeared that started distributing their own versions of GNU/Linux, such as SuSE (Germany), Conectiva (Brazil) or Mandrake (France), the latter two of which would later merge to create Mandriva. Others offered services to companies that wanted maintenance or adaptations of free products: LinuxCare (US), Alcove (France), ID Pro (Germany), and many more.

Meanwhile, the sector's giants started to position themselves in relation to free software. Some companies, such as IBM, incorporated it directly into their strategy. Others, such as Sun Microsystems, had a curious relationship with it: at times backing it, at times indifferent, at times confrontational. Most (Apple, Oracle, HP, SGI, etc.) explored the free software model with various strategies, ranging from the selective freeing of software to the straightforward porting of their products to GNU/Linux. Between these two extremes there were many other lines of action, such as the more or less intensive use of free software in their products (as in the case of Mac OS X) or the exploration of business models based on the maintenance of free software products.

From the technical point of view, the most remarkable event of this period was probably the appearance of two ambitious projects designed to bring free software to the desktop for inexperienced IT users: KDE and GNOME.
Put simply, the final objective of both was that users should not have to use the command line to interact with GNU/Linux or *BSD, or with the programs in those environments. KDE was announced in October 1996. Using the Qt graphical library (at that time a proprietary product of the company Trolltech, although free of charge for use on GNU/Linux), construction began of a set of desktop applications that would work in an integrated manner and have a uniform appearance. In July 1998, version 1.0 of the K Desktop Environment was released, soon followed by increasingly complete and mature versions. GNU/Linux distributions soon incorporated KDE as a desktop for their users (or at least as one of the desktop environments they could choose).

Mostly as a reaction to KDE's dependence on the proprietary Qt library, in August 1997 the GNOME project was announced (Miguel de Icaza, "The story of the GNOME Project") [101], with goals and characteristics similar to KDE's, but with the explicit objective that all its components be free software. In March 1999, GNOME 1.0 was released, and it too would improve and stabilise over time. From that moment on, most distributions of free operating systems (and many proprietary Unix derivatives) offered the GNOME or KDE desktop as an option, along with the applications of both environments.

Meanwhile, the main free software projects under way remained in good health, with new projects emerging almost every day. In various niche markets, free software was acknowledged, almost universally, as the best solution. For example, since its appearance in April 1995, Apache has maintained the largest market share among web servers; XFree86, the free project that develops X Window, is by far the most popular version of X Window (and therefore the most widespread windowing system for Unix-type systems); GCC is recognised as the most portable C compiler and one of the best in terms of quality; GNAT, the compilation system for Ada 95, captured most of the market for Ada compilers in just a few years; and so on.

In 1998 the Open Source Initiative (OSI) was founded, which decided to adopt the term open source software as a brand for introducing free software into the business world, avoiding the ambiguity of the word free (which in English can mean both "freedom" and "free of charge"). This decision sparked one of the fiercest debates in the world of free software (one that continues to this day), since the Free Software Foundation and others considered it much more appropriate to speak about free software (Richard Stallman, "Why free software is better than open source", 1998) [206]. In any case, the OSI ran a major promotional campaign for its new brand, which has been adopted by many as the preferred way to refer to free software, especially in the English-speaking world. To define open source software, the OSI used a definition derived from the one used by the Debian project to define free software ("Debian free software guidelines", http://www.debian.org/social_contract.html#guidelines) [104], which in turn fairly closely reflects the FSF's idea in this regard ("Free software definition", http://www.gnu.org/philosophy/free-sw.html) [120]. This means that, from a practical point of view, almost any program considered free software can also be considered open source, and vice versa. However, the free software and open source software communities (or at least the people who identify with them) can be very different.
Decade of 2000

In the early years of the 2000s, free software was already a serious competitor in the server segment and was starting to be ready for the desktop. Systems such as GNOME, KDE, OpenOffice.org and Mozilla Firefox can be used by domestic users and are sufficient for the needs of many companies, at least where office applications are concerned. Free systems (especially those based on Linux) are easy to install, and the complexity of maintaining and updating them is comparable to that of proprietary systems.

Right now, every company in the software industry has a strategy with regard to free software. Most of the leading multinationals (IBM, HP, Sun, Novell, Apple, Oracle...) incorporate free software to a greater or lesser extent. At one extreme are companies such as Oracle, which react by simply porting their products to GNU/Linux. At the other is IBM, which has the most decisive strategy and has run the biggest publicity campaigns around GNU/Linux. Among the leaders in the IT market, only Microsoft has positioned itself in clear opposition to free software, and particularly to software distributed under the GPL licence.

As for the world of free software itself, despite the debates that occasionally stir the community, its growth is massive. Every day there are more developers, more active free software projects, more users, etc. With each passing day free software is moving away from the sidelines and becoming a force to be reckoned with. In light of this, new disciplines are emerging that specifically study free software, such as free software engineering. Through research, bit by bit, we are starting to understand how free software operates in its various aspects: development models, business models, coordination mechanisms, free project management, developers' motivations, etc.

These years have also seen the first effects of the geographical decentralisation that free software development allows: countries considered "peripheral" are participating actively in the world of free software. For example, the number of Mexican or Spanish developers (both countries with a limited software industry tradition) in projects such as GNOME is significant (Lancashire, "Code, culture and cash: the fading altruism of open source development", 2001) [164]. And the role of Brazil is even more interesting, with its numerous developers and experts in free software technologies and decisive backing from the public administrations. gnuLinEx is a case that merits special attention, as an example of how a region with very little software development tradition can try to change its situation through an aggressive strategy of free software adoption.

From the perspective of decision-making on software solutions, it is worth highlighting that there are certain markets (such as Internet services or office applications) in which free software is a natural choice that cannot be overlooked when studying what type of system to use.

On the negative front, these years have seen the legal environment in which free software operates change rapidly worldwide. On the one hand, software patents are being adopted in more and more countries. On the other hand, new copyright laws make it difficult or impossible to develop free applications in some spheres, the best-known one being DVD players (due to the CSS encryption system that this technology uses).
gnuLinEx

At the beginning of 2002, the Regional Government of Extremadura publicly announced the gnuLinEx project. The idea was simple: to promote the creation of a distribution based on GNU/Linux, with the fundamental objective of using it on the thousands of computers to be installed in public schools throughout the region. Extremadura, situated in the west of Spain, bordering Portugal, has approximately one million inhabitants and had never stood out for its technological initiatives. In fact, the region had practically no software industry.

In this context, gnuLinEx has made a very interesting contribution to the free software panorama on a global scale. Beyond being a new GNU/Linux distribution based on Debian (which in itself is little more than an anecdote), and beyond its enormous impact on the mass media (it was the first time that Extremadura made the front page of The Washington Post, and one of the first times that a free software product did), what is extraordinary is the (at least apparently) solid backing of a public administration for free software. The Regional Government of Extremadura decided to try a different model for educational software, and then to extend this model to all the software used within its sphere of influence, making it the first public administration in a developed country to decisively adopt this approach. A lot of interest has been generated around the initiative, both within Extremadura and beyond: there are academies that teach IT using gnuLinEx, books have been written to support this teaching, and computers are sold with gnuLinEx pre-installed. In general, an educational and business fabric is being created around this experience in order to support it. And the experience has been exported: in the early years of the 21st century, several autonomous communities in Spain have backed free software in education (in one way or another), and in general its importance for public administrations is widely acknowledged.

Knoppix

There have been easy-to-install GNU/Linux distributions since the end of the nineties, but Knoppix, whose first version appeared in 2002, has probably taken this idea to its fullest expression. It is a CD that boots on almost any PC, converting it (without even having to format the disk, since it can be used "live") into a fully functional GNU/Linux machine with a selection of the most common tools. Knoppix combines good automatic hardware detection with a good choice of programs and "live" operation, allowing, for example, a rapid and direct experience of what it means to work with GNU/Linux. And it is giving rise to an entire family of distributions of the same type, each specialised for the specific requirements of a particular user profile.

OpenOffice.org

In 1999, Sun Microsystems bought a German company called Stardivision, whose star product was StarOffice, a suite of office applications similar in functionality to Microsoft Office. One year later, Sun distributed most of the StarOffice code under a free licence (the GPL), creating the OpenOffice.org project. This project released version 1.0 of OpenOffice.org in May 2002. OpenOffice.org has become a quality office suite with functionality similar to that of any other office product and, more importantly, it interoperates very well with the Microsoft Office data formats. These features have made it the reference free software application in the world of office suites.
The importance of OpenOffice.org, from the point of view of extending free software to a large number of users, is enormous. At last it is possible to change, almost without problems, from the proprietary environments common in office work (undoubtedly the star application in the business world) to totally free environments (such as GNU/Linux plus GNOME and/or KDE plus OpenOffice.org). Also, the transition can be made very smoothly: since OpenOffice.org also works on Microsoft Windows, it is not necessary to change operating systems in order to experiment in depth with using free software.

Mozilla, Firefox and the rest

From practically its appearance in 1994 until 1996, Netscape Navigator was the unchallenged market leader among web browsers, with market shares of up to 80%. The situation started to change when Microsoft included Internet Explorer with Windows 95, causing Netscape Navigator to gradually lose market share. At the beginning of 1998, Netscape announced that it was going to distribute a large part of its browser's code as free software, which it did in March of that same year, launching the Mozilla project. For quite a while the project was clouded by uncertainty, and even pessimism (for example, when its leader, Jamie Zawinski, abandoned it), because as time went by no product resulted from it. In January 2000, the project released Mozilla M13, considered the first relatively stable version. In May 2002, version 1.0 was finally published: the first officially stable version, over four years after the first Netscape Navigator code had been released.

Bibliography

In "Netscape Navigator", by Brian Wilson [234], you can find a detailed list of the main versions of Netscape Navigator and Mozilla and their main characteristics.

Mozilla had finally become a reality, although perhaps too late, bearing in mind the market share that Internet Explorer held in 2002 and 2003 (when it was the undisputed leader, leaving Mozilla and the others in a totally marginal position). But despite taking so long, the Mozilla project has borne fruit: not only the expected fruit (the Mozilla browser), but also "collateral" ones, such as Firefox, another browser based on the same HTML engine, which has become the project's main product and which, since it appeared at the end of 2004, has been managing, bit by bit, to erode other browsers' market share.

The Mozilla project has helped to fill a large gap in the world of free software. Before Konqueror (the KDE project's browser) appeared, there were not many free browsers with a graphical interface. Since the publication of Mozilla, an enormous number of projects based on it have emerged, producing a large number of browsers. At the same time, the combination of Mozilla Firefox and OpenOffice.org allows free software to be used for the most common tasks, even in a Microsoft Windows environment (both work not only on GNU/Linux, *BSD and other Unix-type systems, but also on Windows). For the first time in the history of free software, the transition from proprietary software to free software in office environments has become a simple task: one can start by using these two applications on Windows, without changing operating systems (for those who normally use it), and over time eliminate the only non-free part and move on to GNU/Linux or FreeBSD.
The case of SCO

At the beginning of 2003, the SCO corporation (formerly Caldera Systems and Caldera International) filed a lawsuit against IBM for alleged breach of its intellectual property rights. Although the case was complex, it centred on the accusation that IBM had contributed code belonging to SCO to the Linux kernel. By May 2007 the matter had still not been resolved, and it had even grown more complicated, with further lawsuits (IBM and Red Hat against SCO; SCO against AutoZone and DaimlerChrysler, two large IT users), SCO campaigns threatening to sue big companies that used Linux, and so on. Although no winner has yet emerged from this enormous legal battle, the case has highlighted certain legal aspects of free software. In particular, many companies have weighed the problems they might face when using Linux and other free programs, and whether they have guarantees that in doing so they are not in breach of third-party intellectual or industrial property rights. In some ways, this case and others (such as those related to the validity of the GPL licence, resolved in Germany in 2005) can also be interpreted as a sign of the maturity of free software: it has ceased to be a stranger to the business world and has become part of many of its activities (including those related to legal strategies).

Ubuntu, Canonical, Fedora and Red Hat

Although Canonical (the company that produces and distributes Ubuntu) could be considered a recent arrival in the business of GNU/Linux distributions, its activities deserve attention. In a relatively short time, Ubuntu has established itself as one of the best-known and most widely used distributions, with a reputation for good quality and great ease of installation and use. Ubuntu also stands out from most company-produced distributions for its greater attention to including fundamentally free software. However, the defining characteristic of Ubuntu (and of Canonical's strategy) has been to base itself on Debian, a distribution created and maintained by volunteers. In fact, Ubuntu is not the first distribution based on Debian (another well-known case is gnuLinEx), but it is perhaps the one to have received the most funding. For example, Canonical has hired a large number of Debian experts (many of whom participate in the project) and has pursued a strategy of collaboration with the volunteer project. To some extent, Canonical has tried to provide what it believes Debian lacks in order to gain acceptance among average users.

Red Hat, in turn, has followed a different path to end up in a fairly similar situation. Starting from a distribution produced entirely with its own resources, it decided to collaborate with Fedora, a group of volunteers that was already working on distributions based on Red Hat, in order to produce Fedora Core, its "popular" distribution. Red Hat maintains its own version for companies, but this collaboration with volunteers is, in the end, very similar to the one that has produced Ubuntu. Perhaps all of these movements are no more than the product of the fierce competition in the market for GNU/Linux distributions, and of a further notable trend: companies collaborating with volunteers (with the community) to produce free software.

Customised distributions

Since Linux came onto the scene, a large number of groups and companies have created their own distributions based on it.
But during these years, the phenomenon has spread to many organisations and companies that want versions customised to their own requirements. Customisation has expanded because the process has become cheaper and the technical knowledge needed to do it is widely available, to the point of creating a niche market for certain companies. Perhaps one of the best-known cases of customised distributions is that of Spain's autonomous communities. The Extremadura Regional Government, with its gnuLinEx, sparked a trend that many other autonomous communities have since followed. The process is so common that several of them regularly put out tenders for the creation and maintenance of new versions of their distributions. The creation of customised distributions realises an idea that the world of free software had discussed for a long time: adapting programs to users' specific needs without the original producers necessarily having to make the adaptation.

Bibliography

Some of the best-known GNU/Linux distributions of the Spanish autonomous communities include:
gnuLinEx: http://linex.org (Extremadura)
Guadalinex: http://guadalinex.org (Andalucía)
Lliurex: http://lliurex.net (Comunidad Valenciana)
Augustux: http://www.zaralinux.org/proy/augustux/ (Aragón)
MAX: http://www.educa.madrid.org/web/madrid_linux/ (Madrid)
MoLinux: http://molinux.info (Castilla-La Mancha)

Company-company and volunteer-company collaborations

Practically since the beginning of free software, there have been companies that collaborated with volunteers in developing applications. However, in these years of apparent maturity, a growing number of companies are using free software as part of their strategy for collaborating with other companies, when they find it advantageous. Two of the most significant cases, organised specifically with this objective, are ObjectWeb (an alliance formed in France which over time has clearly become international) and Morfeo (in Spain). In both cases, a group of companies has agreed to develop a set of free systems that are of interest to them, and has decided to distribute them as free software. In other cases, companies have actively sought to collaborate in free projects promoted by volunteers, or have tried to get volunteers to collaborate in their own free projects. The GNOME Foundation and the already-mentioned relationship between Ubuntu and Debian are examples of the first scenario; Sun with OpenOffice.org and OpenSolaris, or Red Hat with Fedora Core, are examples of the second.

Expanding to other spheres

Free software has proven that in the field of producing programs there is another way of doing things. In practice, we have seen how granting the freedom to distribute, modify and use can achieve sustainability, either through volunteer work or through the generation of business that allows companies to survive. As time passes, this same idea is being transferred to other spheres of intellectual work. The Creative Commons licences have made it possible to free works in spheres such as literature, music and video. Wikipedia is proving that a field as particular as the production of encyclopaedias can follow a very interesting path. And there are more and more writers, music bands and even film producers interested in free models of production and distribution. In all these domains there is still a long way to go, and in almost all of them practice has not yet fully proven that sustainable creation is possible with free models.
But it cannot be denied that experimentation with them is in full swing.

Free software as a subject of study

Although some works, such as the renowned "The cathedral and the bazaar", cleared the way for the study of free software as such, it was not until 2001 and subsequent years that the academic community started to consider free software as something worthy of study. Over time, the massive availability of data (almost everything in the world of free software is public and retrievable from public information archives) and the innovations that free software has introduced have drawn the attention of many research groups. By the mid-2000s there were already several international conferences centred specifically on free software, top-ranking journals frequently publish papers on it, and research-funding agencies are opening lines aimed specifically at it.

The future: an obstacle course?

Of course, it is difficult to predict the future, and that is certainly not our objective. Therefore, rather than trying to explain what the future of free software will be like, we will try to show the problems that it will foreseeably have to face (and has indeed been facing for a long time). How the world of free software overcomes these obstacles will undoubtedly determine its situation several years from now.

FUD (fear, uncertainty, doubt). This is a fairly common technique in the world of information technology, used by free software's competitors to discredit it, with more or less justification and varying degrees of success. In general terms, free software has been fairly immune to these techniques, perhaps owing to its complexity and its different ways of seeping into companies.

Dissolution. Many companies are testing the limits of free software as a model, and in particular are trying to offer their clients models that share some characteristics with free software. The main problem with this type of model is that it generates confusion among clients and developers, who need to read the small print in detail in order to realise that what they are being offered does not have the advantages that free software would give them. The best-known model of this type is Microsoft's Shared Source program.

Lack of knowledge. In many cases, users turn to free software simply because they think it is free of charge, or because they think it is "fashionable". If they do not look deeper, and study in some detail the advantages that free software can offer as a model, they run the risk of not taking full advantage of them. In many cases, the basic assumptions of the world of free software are so different from the traditional ones of the world of proprietary software that a minimum of analysis is required to understand that what is common in one case may be impossible in the other, and vice versa. Such lack of knowledge can only generate dissatisfaction and lost opportunities for any person or organisation approaching free software.

Legal obstacles. This is certainly the main problem that free software will have to deal with in the coming years. Although the legal environment in which free software developed during the 1980s and the first half of the 1990s was not ideal, at least it left enough space for it to grow freely.
Since then, the extension of the scope of patentability to software (which has occurred in many developed countries) and new copyright legislation (limiting software developers' freedom to create) have been raising ever higher barriers to free software's entry into important application segments.

Summary

This chapter has presented the history of free software. The sixties were a period dominated by large computers and by IBM, in which software was distributed together with the hardware, and usually with the source code. In the seventies, software started to be sold separately, and soon proprietary distributions, which did not include source code and did not grant permission to modify or redistribute, became almost the only option.

Interested readers will find in Appendix B a list of some of the most relevant dates in the history of free software.

In the 1970s, work began at AT&T's Bell Labs on developing the Unix operating system, which later gave rise to BSD Unix. Its evolution, in parallel with the birth of the Internet, served as a testing ground for new ways of developing software collaboratively, which later became common in the world of free software.

In 1984, Richard Stallman started work on the GNU project, founding the Free Software Foundation (FSF), writing the GPL licence, and, in general, laying the foundations of free software as we know it today.

In the 1990s, the Internet matured, offering free software communities new channels for communication and distribution. In 1991, Linus Torvalds started to develop a free kernel (Linux), which helped to complete the GNU system, which already had almost all the parts needed to become a complete Unix-like system: a C compiler (GCC), an editor (Emacs), a windowing system (X Window), etc. This is how the GNU/Linux operating systems were born, branching out into many distributions, such as Red Hat Linux and Debian GNU/Linux. Towards the end of the 1990s, these systems were completed with two desktop environments: KDE and GNOME.

In the 2000s, free software came to lead in some sectors (such as web servers, dominated by Apache), and new tools appeared covering a large number of IT requirements.