
Tuesday, April 15, 2008

Bandung Institute of Technology

Bandung Institute of Technology (ITB) (Indonesian: Institut Teknologi Bandung) is a state, coeducational research university located in Bandung, Indonesia. Established in 1920, ITB is the oldest technology-oriented university in Indonesia.

Ceremonial Hall by architect Henri Maclaine-Pont

The University prides itself on its reputation as one of the country's centers of excellence in science, technology, and art,[citation needed] and was considered the top choice among Indonesia's high school students in 2006.[1][2]

Indonesia's first president, Sukarno, earned his degree in civil engineering (concentrating in architecture) there in the 1920s.

The university cultivates professional and social activities by supporting its students' unions, the student government councils that exist in every department. Each students' union has its own distinctly designed jacket that, among other traditions, serves as part of its members' identity. There are also a number of student activity units/clubs supporting ITB students' interests in rounding out their educational experience. It is not uncommon for students and alumni to be identified by the clubs to which they belong (or used to belong) at ITB, in addition to their class year and major.

The university is a member of LAOTSE, an international network of leading universities in Europe and Asia exchanging students and senior scholars.

ITB's march "Mars ITB" and hymn "Hymne ITB" were arranged by a former professor, Prof. Dr. Sudjoko Danoesoebrata.[3]


Tuesday, April 8, 2008

4G

From Wikipedia, the free encyclopedia


4G (also known as beyond 3G), short for fourth-generation communications system, is a term used to describe the next step in wireless communications. A 4G system will be able to provide a comprehensive IP solution where voice, data and streamed multimedia can be delivered to users on an "anytime, anywhere" basis, and at higher data rates than previous generations. There is no formal definition of 4G; however, certain objectives are projected for it.

These objectives include a fully IP-based integrated system, achieved once wired and wireless technologies converge, capable of providing speeds between 100 Mbit/s and 1 Gbit/s both indoors and outdoors, with premium quality and high security. 4G will offer all types of services at an affordable cost.[1]



Objective and approach

Objectives

4G is being developed to accommodate the quality of service (QoS) and rate requirements set by forthcoming applications such as wireless broadband access, Multimedia Messaging Service, video chat, mobile TV, high-definition TV content, Digital Video Broadcasting (DVB), minimal services such as voice and data, and other "anytime, anywhere" streaming services. The 4G working group has defined the following as objectives of the 4G wireless communication standard:

  • A spectrally efficient system (in bits/s/Hz and bits/s/Hz/site),[2]
  • High network capacity: more simultaneous users per cell,[3]
  • A nominal data rate of 100 Mbit/s while the client physically moves at high speeds relative to the station, and 1 Gbit/s while client and station are in relatively fixed positions, as defined by the ITU-R,[1]
  • A data rate of at least 100 Mbit/s between any two points in the world,[1]
  • Smooth handoff across heterogeneous networks,[4]
  • Seamless connectivity and global roaming across multiple networks,[5]
  • High quality of service for next-generation multimedia support (real-time audio, high-speed data, HDTV video content, mobile TV, etc.),[5]
  • Interoperability with existing wireless standards,[6] and
  • An all-IP, packet-switched network.[5]

In summary, the 4G system should dynamically share and utilise network resources to meet the minimal requirements of all 4G-enabled users.
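
To make the spectral-efficiency objective concrete, the sketch below (Python; the 20 dB SNR is an illustrative assumption, not a value from any standard) uses the Shannon capacity bound C = B·log2(1 + SNR) to relate bits/s/Hz to the bandwidth a 100 Mbit/s target would require:

```python
import math

def shannon_spectral_efficiency(snr_db: float) -> float:
    """Upper bound on spectral efficiency (bits/s/Hz) at a given SNR,
    from the Shannon capacity C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return math.log2(1 + snr_linear)

def required_bandwidth_mhz(target_rate_mbps: float, snr_db: float) -> float:
    """Bandwidth (MHz) needed to reach a target rate at the Shannon limit."""
    return target_rate_mbps / shannon_spectral_efficiency(snr_db)

# The 100 Mbit/s mobile target at an assumed 20 dB SNR:
print(round(shannon_spectral_efficiency(20), 2), "bits/s/Hz")   # ~6.66
print(round(required_bandwidth_mhz(100, 20), 1), "MHz")         # ~15.0
```

Real systems fall short of this bound, so practical 4G proposals budget considerably more bandwidth than this idealized figure.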

Approaches

As described by 4G consortia such as WINNER ("Towards Ubiquitous Wireless Access") and the WWRF, a key technology-based approach is summarized as follows, where the Wireless World Initiative New Radio (WINNER) is a consortium working to enhance mobile communication systems.[7][8]

Consideration points

  • Coverage, radio environment, spectrum, services, business models and deployment types, users

Principal technologies

  • Baseband techniques[9]
    • OFDM: To exploit the frequency selective channel property
    • MIMO: To attain ultra high spectral efficiency
    • Turbo principle: To minimize the required SNR at the reception side
  • Adaptive radio interface
  • Modulation, spatial processing including multi-antenna and multi-user MIMO
  • Relaying, including fixed relay networks (FRNs), and the cooperative relaying concept, known as multi-mode protocol

This work introduces a single, ubiquitous radio access system concept, flexible enough to accommodate a variety of beyond-3G wireless systems.

Wireless System Evolution

First generation: Almost all of the systems from this generation were analog systems where voice was considered to be the main traffic. These systems could often be listened to by third parties. Some of the standards are NMT, AMPS, Hicap, CDPD, Mobitex, DataTac, TACS and ETACS.

Second generation: All the standards belonging to this generation are commercially oriented and digital in form. Around 60% of the current market is dominated by European standards. The second-generation standards are GSM, iDEN, D-AMPS, IS-95, PDC, CSD, PHS, GPRS, HSCSD, and WiDEN.

Third generation: To meet the growing demands in network capacity, rates required for high speed data transfer and multimedia applications, 3G standards started evolving. The systems in this standard are essentially a linear enhancement of 2G systems. They are based on two parallel backbone infrastructures, one consisting of circuit switched nodes, and one of packet oriented nodes. The ITU defines a specific set of air interface technologies as third generation, as part of the IMT-2000 initiative. Currently, transition is happening from 2G to 3G systems. As a part of this transition, numerous technologies are being standardized.

Fourth generation: According to the 4G working groups, the infrastructure and the terminals of 4G will have almost all the standards from 2G to 4G implemented. Although legacy systems are in place to support existing users, the infrastructure for 4G will be packet-based only (all-IP). Some proposals suggest having an open platform where new innovations and evolutions can fit. The technologies being considered as pre-4G are the following: WiMax, WiBro, iBurst, 3GPP Long Term Evolution and 3GPP2 Ultra Mobile Broadband.

Components

Access schemes

As the wireless standards evolved, the access techniques used also exhibited increases in efficiency, capacity and scalability. The first-generation wireless standards used plain TDMA and FDMA. In wireless channels, TDMA proved to be less efficient in handling high-data-rate channels, as it requires large guard periods to alleviate the impact of multipath. Similarly, FDMA consumed more bandwidth on guard bands to avoid inter-carrier interference. So in second-generation systems, one set of standards used a combination of FDMA and TDMA while the other set introduced a new access scheme called CDMA. Use of CDMA increased the system capacity and also placed a soft limit on it rather than a hard one. The data rate also increased, as this access scheme is efficient enough to handle the multipath channel. This enabled the third-generation systems to use CDMA as the access scheme (IS-2000, UMTS, HSxPA, 1xEV-DO, TD-CDMA and TD-SCDMA). The only issue with CDMA is that it suffers from poor spectrum flexibility and scalability.

Recently, new access schemes such as Orthogonal FDMA (OFDMA), Single Carrier FDMA (SC-FDMA), Interleaved FDMA (IFDMA) and Multi-carrier CDMA (MC-CDMA) have been gaining importance for the next-generation systems. WiMax uses OFDMA in both the downlink and the uplink. For the next-generation UMTS, OFDMA is being considered for the downlink. By contrast, IFDMA is being considered for the uplink, since OFDMA suffers more from peak-to-average power ratio (PAPR) issues that drive amplifiers into nonlinear operation. IFDMA provides less power fluctuation and thus avoids amplifier issues. Similarly, MC-CDMA is in the proposal for the IEEE 802.20 standard. These access schemes offer efficiencies comparable to older technologies such as CDMA, while also achieving scalability and higher data rates.
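
The core OFDM idea behind these schemes can be sketched in a few lines of NumPy. This is a minimal, noiseless illustration (subcarrier count, prefix length and channel taps are all arbitrary choices for the example, not parameters of any standard): an IFFT plus cyclic prefix turns a frequency-selective multipath channel into independent flat subchannels, each fixed by a one-tap equalizer.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64          # subcarriers
CP = 16         # cyclic-prefix length (must exceed the channel delay spread)

# Random QPSK symbols, one per subcarrier
bits = rng.integers(0, 2, size=(N, 2))
symbols = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# OFDM modulation: IFFT, then prepend the cyclic prefix
time_signal = np.fft.ifft(symbols)
tx = np.concatenate([time_signal[-CP:], time_signal])

# A 3-tap frequency-selective multipath channel
h = np.array([1.0, 0.5, 0.25])
rx = np.convolve(tx, h)[: len(tx)]

# Receiver: strip the prefix, FFT, then one-tap equalization per subcarrier
rx_freq = np.fft.fft(rx[CP:CP + N])
H = np.fft.fft(h, N)            # channel response on each subcarrier
equalized = rx_freq / H

assert np.allclose(equalized, symbols, atol=1e-9)
```

Because the cyclic prefix makes the linear channel act as a circular convolution, each subcarrier sees only a single complex gain, which is exactly the low-complexity equalization property discussed below.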

The other important advantage of the above mentioned access techniques is that they require less complexity for equalization at the receiver. This is an added advantage especially in the MIMO environments since the spatial multiplexing transmission of MIMO systems inherently requires high complexity equalization at the receiver.

In addition to improvements in these multiplexing systems, improved modulation techniques are being used. Whereas earlier standards largely used phase-shift keying, more efficient modulations such as 64QAM are being proposed for use with the 3GPP Long Term Evolution standards.
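
The gain from a higher-order constellation is easy to quantify: an M-ary modulation carries log2(M) bits per symbol, so at a fixed symbol rate 64QAM carries three times the raw bits of QPSK. A small Python check:

```python
import math

def bits_per_symbol(constellation_size: int) -> int:
    """Each symbol of an M-ary constellation carries log2(M) bits."""
    return int(math.log2(constellation_size))

print(bits_per_symbol(2))    # BPSK:  1 bit/symbol
print(bits_per_symbol(4))    # QPSK:  2 bits/symbol
print(bits_per_symbol(64))   # 64QAM: 6 bits/symbol
```

The price of this density is a tighter SNR requirement, which is why adaptive systems fall back to PSK-style modulations in poor channel conditions.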

IPv6

Main articles: Network layer, Internet protocol, and IPv6

Unlike 3G, which is based on two parallel infrastructures consisting of circuit switched and packet switched network nodes respectively, 4G will be based on packet switching only. This will require low-latency data transmission.

By the time that 4G is deployed, the process of IPv4 address exhaustion is expected to be in its final stages. Therefore, in the context of 4G, IPv6 support is essential in order to support a large number of wireless-enabled devices. By increasing the number of IP addresses, IPv6 removes the need for Network Address Translation (NAT), a method of sharing a limited number of addresses among a larger group of devices.

In the context of 4G, IPv6 also enables a number of applications with better multicast, security, and route optimization capabilities. With the available address space and number of addressing bits in IPv6, many innovative coding schemes can be developed for 4G devices and applications that could aid deployment of 4G networks and services.
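
The scale of that address space is worth a quick back-of-envelope check with Python's standard `ipaddress` module (the `2001:db8::` prefix below is the standard documentation range, used purely for illustration):

```python
import ipaddress

ipv4_total = 2 ** 32     # IPv4: 32-bit addresses
ipv6_total = 2 ** 128    # IPv6: 128-bit addresses

print(ipv4_total)                  # 4294967296
print(ipv6_total // ipv4_total)    # 2**96: ~7.9e28 times more addresses

# A single /64 subnet, the usual allocation for one network segment,
# already dwarfs the entire IPv4 address space:
subnet = ipaddress.ip_network("2001:db8::/64")
print(subnet.num_addresses)        # 2**64 = 18446744073709551616
```

With every handset holding one or more globally routable addresses, NAT traversal machinery becomes unnecessary, which simplifies the end-to-end, all-IP design 4G targets.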

Advanced Antenna Systems

Main articles: MIMO and MU-MIMO

The performance of radio communications depends on the antenna system, referred to as smart or intelligent antennas. Recently, multiple-antenna technologies have been emerging to achieve the 4G goals of high-rate, high-reliability, long-range communications. In the early 1990s, to cater to the growing data-rate needs of data communication, many transmission schemes were proposed. One technology, spatial multiplexing, gained importance for its bandwidth conservation and power efficiency. Spatial multiplexing involves deploying multiple antennas at the transmitter and at the receiver. Independent streams can then be transmitted simultaneously from all the antennas, multiplying the data rate by a factor equal to the minimum of the number of transmit and receive antennas. This is called MIMO (a branch of intelligent antenna technology). Apart from this, the reliability of transmitting high-speed data over a fading channel can be improved by using more antennas at the transmitter or at the receiver, which is called transmit or receive diversity. Both transmit/receive diversity and transmit spatial multiplexing are categorized as space-time coding techniques, which do not necessarily require channel knowledge at the transmitter. The other category, closed-loop multiple-antenna technologies, does use channel knowledge at the transmitter.
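
Spatial multiplexing can be illustrated with a minimal NumPy sketch (a hedged toy model: noiseless, flat fading, BPSK, and the receiver is assumed to know the channel matrix H). Two independent streams share the same time and frequency resources and are separated purely in the spatial domain by a zero-forcing detector:

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rx = 2, 4            # parallel streams = min(n_tx, n_rx) = 2

# Independent BPSK symbol streams, one per transmit antenna
s = rng.choice([-1.0, 1.0], size=(n_tx, 100))

# Rayleigh-like flat-fading channel matrix, known at the receiver
H = rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))

y = H @ s                     # each receive antenna sees a mix of both streams

# Zero-forcing detection: the pseudo-inverse separates the spatial streams
s_hat = np.sign((np.linalg.pinv(H) @ y).real)

assert np.array_equal(s_hat, s)   # both streams recovered exactly (no noise)
```

With noise added, zero-forcing amplifies it on badly conditioned channels, which is why the low-complexity per-subcarrier equalization of OFDM-based access schemes is especially valuable in MIMO systems, as noted above.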

Software-Defined Radio (SDR)

SDR is one form of open wireless architecture (OWA). Since 4G is a collection of wireless standards, the final form of a 4G device will incorporate various standards. This can be realized efficiently using SDR technology, which falls under the area of radio convergence.

Developments

The Japanese company NTT DoCoMo has been testing a 4G communication system prototype with 4x4 MIMO called VSF-OFCDM at 100 Mbit/s while moving, and 1 Gbit/s while stationary. NTT DoCoMo recently reached 5 Gbit/s with 12x12 MIMO while moving at 10 km/h,[10] and is planning on releasing the first commercial network in 2010.

Digiweb, an Irish fixed and wireless broadband company, has announced that they have received a mobile communications license from the Irish Telecoms regulator, ComReg. This service will be issued the mobile code 088 in Ireland and will be used for the provision of 4G Mobile communications.[11] [12]

Pervasive networks are an amorphous and presently entirely hypothetical concept where the user can be simultaneously connected to several wireless access technologies and can seamlessly move between them (See handover, IEEE 802.21). These access technologies can be Wi-Fi, UMTS, EDGE, or any other future access technology. Included in this concept is also smart-radio (also known as cognitive radio technology) to efficiently manage spectrum use and transmission power as well as the use of mesh routing protocols to create a pervasive network.

Sprint plans to launch 4G services in trial markets by the end of 2007, with plans to deploy a network that reaches as many as 100 million people in 2008, and has announced a WiMax service called Xohm. In tests in Chicago, the service was clocked at 100 Mbit/s.

Verizon Wireless announced on September 20, 2007 that it plans a joint effort with the Vodafone Group to transition its networks to the 4G standard LTE. The time of this transition has yet to be announced.

The German WiMAX operator Deutsche Breitband Dienste (DBD) has launched WiMAX services (DSLonair) in Magdeburg and Dessau. Subscribers are offered a tariff plan costing 9.95 euros per month for 2 Mbit/s download / 300 kbit/s upload connection speeds and 1.5 GB of monthly traffic. Subscribers are also charged a 16.99 euro one-time fee and 69.90 euros for equipment and installation.[13] DBD received additional national licenses for WiMAX in December 2006 and has already launched services in Berlin, Leipzig and Dresden.

American WiMAX services provider Clearwire made its debut on Nasdaq in New York on March 8, 2007. The IPO was underwritten by Merrill Lynch, Morgan Stanley and JP Morgan. Clearwire sold 24 million shares at a price of $25 per share. This adds $600 million in cash to Clearwire, and gives the company a market valuation of just over $3.9 billion.[14]

Applications

The killer application of 4G is not clear, though the improved bandwidth and data throughput offered by 4G networks should create opportunities for previously impossible products and services. Perhaps the "killer application" is simply always-on mobile Internet access, with no walled garden and a reasonable flat monthly rate. Existing 2.5G/3G/3.5G operator-based services are often expensive and limited in application.

Already at rates of 15-30 Mbit/s, 4G should be able to provide users with streaming high-definition television. At 100 Mbit/s, the content of a DVD-5 (for example, a movie) can be downloaded in roughly six minutes for offline access.
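
A quick back-of-envelope check of that figure (taking DVD-5's nominal 4.7 GB capacity and a sustained 100 Mbit/s link, and ignoring protocol overhead):

```python
# Download time for a full DVD-5 at a sustained 4G data rate.
disc_bytes = 4.7e9           # DVD-5 nominal capacity, 4.7 GB
rate_bps = 100e6             # 100 Mbit/s

seconds = disc_bytes * 8 / rate_bps
print(round(seconds), "s")             # 376 s
print(round(seconds / 60, 1), "min")   # ~6.3 minutes
```

A typical movie does not fill the whole disc, so real downloads would often finish sooner.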

Pre-4G wireless standards

According to a Visant Strategies study there will be multiple competitors in this space:[15]

Fixed WiMax and Mobile WiMax are different systems. As of July 2007, all deployed WiMax is "fixed wireless" and is thus not yet 4G (IMT-Advanced), although it can be seen as one of the 4G standards being considered.

References

  1. ^ a b c Young Kyun, Kim; Prasad, Ramjee. 4G Roadmap and Emerging Communication Technologies. Artech House, pp 12-13. ISBN 1-58053-931-9.
  2. ^ 4G - Beyond 2.5G and 3G Wireless Networks. MobileInfo.com. Retrieved on 2007-03-26.
  3. ^ Jawad Ibrahim (December 2002). 4G Features. Bechtel Telecommunications Technical Journal. Retrieved on 2007-03-26.
  4. ^ Mobility Management Challenges and Issues in 4G Heterogeneous Networks. ACM Proceedings of the first international conference on Integrated internet ad hoc and sensor networks (May 30 - 31, 2006). Retrieved on 2007-03-26.
  5. ^ a b c Werner Mohr (2002). Mobile Communications Beyond 3G in the Global Context. Siemens mobile. Retrieved on 2007-03-26.
  6. ^ Noah Schmitz (March 2005). The Path To 4G Will Take Many Turns. Wireless Systems Design. Retrieved on 2007-03-26.
  7. ^ WINNER - Towards Ubiquitous Wireless Access. WINNER (2007).
  8. ^ WINNER II - Public Deliverable. WINNER II (2006-07).
  9. ^ G. Fettweis, E. Zimmermann, H. Bonneville, W. Schott, K. Gosse, M. de Courville (2004). High Throughput WLAN/WPAN. WWRF.
  10. ^ DoCoMo Achieves 5 Gbit/s Data Speed. NTT DoCoMo Press (9 February 2007).
  11. ^ Press Release: Digiweb Mobile Takes 088
  12. ^ RTÉ News article: Ireland gets new mobile phone provider
  13. ^ Privatkunden Tarife (de). Deutsche Breitband Dienste. Retrieved on 2007-08-30.
  14. ^ WiMAX Day (March 8th, 2007). WiMAX rallies market as Clearwire IPO nets $600 million. WiMAX Spectrum Owners Alliance (WiSOA).
  15. ^ WiMAX Has Company. Wireless Week (1 February 2006). Retrieved on 2007-03-26.

Open source software

From Wikipedia, the free encyclopedia

The logo of the Open Source Initiative

Open source software is computer software for which the human-readable source code is made available under a copyright license (or an arrangement such as the public domain) that meets the Open Source Definition. This permits users to use, change, and improve the software, and to redistribute it in modified or unmodified form. It is often developed in a public, collaborative manner. Open source software is the most prominent example of open source development and is often compared to user-generated content.[1]


History

Main article: Open source movement

The free software movement was launched in 1983. In 1998, a group of individuals advocated that the term free software be replaced by open source software (OSS) as an expression which is less ambiguous and more comfortable for the corporate world.[2] Software developers may want to publish their software with an open source license, so that anybody may also develop the same software or understand how it works. Open source software generally allows anyone to make a new version of the software, port it to new operating systems and processor architectures, share it with others or market it. The aim of open source is to let the product be more understandable, modifiable, duplicable, reliable or simply accessible, while it is still marketable.

The Open Source Definition, notably, presents an open source philosophy, and further defines a boundary on the usage, modification and redistribution of open source software. Software licenses grant rights to users which would otherwise be prohibited by copyright. These include rights on usage, modification and redistribution. Several open source software licenses have qualified within the boundary of the Open Source Definition. The most prominent example is the popular GNU General Public License (GPL). While open source presents a way to broadly make the sources of a product publicly accessible, the open source licenses allow the authors to fine tune such access.

The "open source" label came out of a strategy session held in Palo Alto in reaction to Netscape's January 1998 announcement of a source code release for Navigator (as Mozilla). The group at the session included Todd Anderson, Larry Augustin, John Hall, Sam Ockman, Christine Peterson and Eric S. Raymond. They used the opportunity before the release of Navigator's source code to clarify a potential confusion caused by the ambiguity of the word "free" in English. The open source movement is generally thought to have begun with this strategy session. Many people nevertheless claim that the birth of the Internet in 1969 started the open source movement, while others do not distinguish between the open source and free software movements.

The Free Software Foundation (FSF), started in 1985, intended the word 'free' to mean "free as in free speech" and not "free as in free beer" with emphasis on the positive freedom to distribute rather than a negative freedom from cost. Since a great deal of free software already was (and still is) free of charge, such free software became associated with zero cost, which seemed anti-commercial.

The Open Source Initiative (OSI) was formed in February 1998 by Eric S. Raymond and Bruce Perens. With at least 20 years of evidence from case histories of closed versus open development already provided by the Internet, the OSI presented the open source case to commercial businesses such as Netscape. The OSI hoped that use of the label "open source," a term suggested by Peterson of the Foresight Institute at the strategy session, would eliminate ambiguity, particularly for individuals who perceive "free software" as anti-commercial. They sought to bring a higher profile to the practical benefits of freely available source code, and they wanted to bring major software businesses and other high-tech industries into open source. Perens attempted to register "open source" as a service mark for the OSI, but that attempt was impractical by trademark standards. Meanwhile, thanks to the presentation of Raymond's paper to the upper management at Netscape (Raymond only discovered this when he read the press release, and was called by Netscape CEO Jim Barksdale's PA later in the day), Netscape released its Navigator source code as open source, with favorable results.

Philosophy

In his 1997 essay The Cathedral and the Bazaar[3], open source evangelist Eric S. Raymond suggests a model for developing OSS known as the Bazaar model. Raymond likens the development of software by traditional methodologies to building a cathedral, "carefully crafted by individual wizards or small bands of mages working in splendid isolation"[3]. He suggests that all software should be developed using the bazaar style, which he described as "a great babbling bazaar of differing agendas and approaches."

In the Cathedral model, development takes place in a centralized way. Roles are clearly defined. Roles include people dedicated to designing (the architects), people responsible for managing the project, and people responsible for implementation. Traditional software engineering follows the Cathedral model. Fred P. Brooks in his book The Mythical Man-Month advocates this sort of model. He goes further to say that in order to preserve the architectural integrity of a system, the system design should be done by as few architects as possible.

The Bazaar model, however, is different. In this model, roles are not clearly defined. Gregorio Robles[4] suggests that software developed using the Bazaar model should exhibit the following patterns:

Users should be treated as co-developers
The users are treated like co-developers and so should have access to the source code of the software. Furthermore, users are encouraged to submit additions to the software, code fixes, bug reports, documentation, and so on. Having more co-developers increases the rate at which the software evolves. Linus's law states that, "Given enough eyeballs, all bugs are shallow." This means that if many users view the source code, they will eventually find all bugs and suggest how to fix them. Note that some users have advanced programming skills, and furthermore, each user's machine provides an additional testing environment, offering the ability to find and expose new bugs.
Early Releases
The first version of the software should be released as early as possible so as to increase one's chances of finding co-developers early.
Frequent Integration
New code should be integrated as often as possible so as to avoid the overhead of fixing a large number of bugs at the end of the project life cycle. Some open source projects have nightly builds where integration is done automatically on a daily basis.
Several Versions
There should be at least two versions of the software. There should be a buggier version with more features and a more stable version with fewer features. The buggy version (also called the development version) is for users who want the immediate use of the latest features, and are willing to accept the risk of using code that is not yet thoroughly tested. The users can then act as co-developers, reporting bugs and providing bug fixes. The stable version offers the users fewer bugs and fewer features.
High Modularization
The general structure of the software should be modular allowing for parallel development.
Dynamic decision making structure
There is a need for a decision making structure, whether formal or informal, that makes strategic decisions depending on changing user requirements and other factors. Cf. Extreme programming.

Most well-known OSS products follow the Bazaar model as suggested by Eric Raymond. These include projects such as Linux, Netscape, Apache, the GNU Compiler Collection, and Perl to mention a few.

Licensing

Main article: Open source license

Open source licenses define the privileges and restrictions a licensee must follow in order to use, modify or redistribute the open source software. Open source software includes software with source code in the public domain and software distributed under an open source license.

Examples of open source licenses include Apache License, BSD license, GNU General Public License, GNU Lesser General Public License, MIT License, Eclipse Public License and Mozilla Public License.

The proliferation of open source licenses is one of the few negative aspects of the open source movement because it is often difficult to understand the legal implications of the differences between licenses.

Open source versus closed source

The open source vs. closed source (alternatively called proprietary software) debate is sometimes heated.

The first conflict arises on the economic front: making money through traditional methods, such as the sale of individual copies and patent royalty payments (generally called licensing), is more difficult and in many ways against the very concept of open source software.

Some closed-source advocates see open source software as damaging to the market for commercial software. This is one of the many reasons, as mentioned above, that the term "free software" was replaced with "open source": many company executives could not believe fundamentally in a product that did not participate economically in a free-market or mixed-market economy, the very economy that supports their businesses. In addition, if something goes wrong, who is liable?

The counter to this argument is the use of open source software to fuel a separate product's or service's market, such as:

  • Providing support and installation services, as IT security groups, Linux distributions, and systems companies do
  • Cost avoidance / cost sharing: many developers need a product, so it makes sense to share development costs (as with the X Window System and the Apache web server)

Another major argument concerns software defects and security. This argument applies to all open products, not just open source software.

Since open source software is open, all of its defects and security flaws are easily found. Closed-source advocates argue that this makes it easier for a malicious person to discover security flaws, and further that there is no incentive for an open-source product to be patched. Open-source advocates counter that this also makes it easier for a patch to be produced, and that the closed-source argument amounts to security through obscurity, a form of security that will eventually fail, often without anyone knowing of the failure. Further, the absence of an immediate financial incentive to patch a product does not mean there is no incentive at all; and if a patch matters enough to a user, having the source code means the user can technically patch the problem themselves. These arguments are hard to prove. However, most studies show that open-source software has a higher rate of flaw discovery, quicker flaw discovery, and quicker turnaround on patches.

Open source software versus free software

Critics have said that the term "open source" fosters an ambiguity of a different kind, in that it confuses the mere availability of the source with the freedom to use, modify, and redistribute it. Developers have consequently used the alternative terms free/open source software (FOSS) or free/libre/open source software (FLOSS) to describe open source software which is also free software.

The term "Open Source" was originally intended to be trademarkable; however, the term was deemed too descriptive, so no trademark exists.[5] The OSI would prefer that people treat Open Source as if it were a trademark, and use it only to describe software licensed under an OSI-approved license.[6]

There have been instances where software vendors have labeled proprietary software as “open source” because it interfaces with popular OSS (such as Linux).[citation needed] Open source advocates consider this to be both confusing and incorrect. OSI Certified is a trademark licensed only to people who are distributing software licensed under a license listed on the Open Source Initiative's list.[7]

Open source software and free software are different terms for software which comes with certain rights, or freedoms, for the user. They describe two approaches and philosophies towards free software. Open source and free software (or software libre) both describe software which is free from onerous licensing restrictions. It may be used, copied, studied, modified and redistributed without restriction. Free software is not the same as freeware, software available at zero price.

The definition of open source software was written to be almost identical to the free software definition.[8] There are very few cases of software that is free software but is not open source software, and vice versa. The difference in the terms is where they place the emphasis. “Free software” is defined in terms of giving the user freedom. This reflects the goal of the free software movement. “Open source” highlights that the source code is viewable to all and proponents of the term usually emphasize the quality of the software and how this is caused by the development models which are possible and popular among free and open source software projects.

Free software licenses are not written exclusively by the FSF. The FSF and the OSI both list licenses which meet their respective definitions of free software. Open source software and free software share an almost identical set of licenses.[citation needed] One exception is an early version of the Apple Public Source License, which was accepted by the OSI but rejected by the FSF because it did not allow private modified versions; this restriction was removed in a later version of the license.[citation needed] There are now new versions that are approved by both the OSI and the FSF.

The Open Source Initiative believes that more people will be convinced by the experience of freedom.[citation needed] The FSF believes that more people will be convinced by the concept of freedom. The FSF believes that knowledge of the concept is an essential requirement,[9][8] insists on the use of the term free,[9][8] and separates itself from the open source movement.[9][8] The Open Source Initiative believes that free has three meanings: free as in beer, free as in freedom, and free as in unsellable.[citation needed] The problem with the term "open source" is that it says nothing about the freedom to modify and redistribute, so it is used by people who think that source access without freedom is a sufficient definition. This possibility for misuse is the case for most of the licenses that make up Microsoft's "shared source" initiative.

[edit] Open source versus source-available

Although the OSI definition of "open source software" is widely accepted, a small number of people and organizations use the term to refer to software where the source is available for viewing, but which may not legally be modified or redistributed. Such software is more often referred to as source-available, or as shared source, a term coined by Microsoft in opposition to open source.

Michael Tiemann, president of the OSI, has criticized[10] companies such as SugarCRM for promoting their software as "open source" when in fact it did not have an OSI-approved license. In SugarCRM's case, the software was so-called "badgeware"[11] because its license specified a "badge" that must be displayed in the user interface (SugarCRM has since switched to GPLv3[12]). Another example is Scilab, which calls itself "the open source platform for numerical computation"[13] but has a license[14] that forbids commercial redistribution of modified versions. Because the OSI does not have a registered trademark for the term "open source", its legal ability to prevent such usage of the term is limited, but Tiemann advocates using public opinion from the OSI, customers, and community members to pressure such organizations to change their license or to use a different term.

Other software whose source code is available, but which is not open source, includes the Pine email client and the Microsoft Windows operating system.

[edit] Development tools

In OSS development the participants, who are mostly volunteers, are distributed among different geographic regions, so there is a need for tools that help participants collaborate on source code development. Often these tools are themselves available as OSS.

Revision control systems such as the Concurrent Versions System (CVS) and, later, Subversion (SVN) centrally manage the source code files of a software project and the changes made to those files.
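The core operation such systems automate, recording the difference between two revisions of a file, can be sketched with Python's standard difflib module. The file name, revision labels, and code content below are hypothetical, chosen only for illustration:

```python
import difflib

# Two hypothetical revisions of the same source file.
revision_1 = [
    "def greet(name):\n",
    "    print('Hello, ' + name)\n",
]
revision_2 = [
    "def greet(name):\n",
    '    print("Hello, %s!" % name)\n',
]

# A unified diff is the change format that CVS and Subversion
# store and display for each committed revision of a file.
diff = list(difflib.unified_diff(
    revision_1, revision_2,
    fromfile="greet.py (r1)", tofile="greet.py (r2)",
))
print("".join(diff))
```

Running this prints the removed line prefixed with `-` and the added line prefixed with `+`, the same format produced by `cvs diff` or `svn diff`.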

Utilities that automate testing, compiling, and bug reporting help preserve the stability and support of software projects that have numerous developers but no managers, quality controllers, or technical support staff. Build systems that report compilation errors across different platforms include Tinderbox. Commonly used bug trackers include Bugzilla and GNATS.
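Automated testing of this kind can be sketched with Python's built-in unittest module. The `slugify` function here is a hypothetical project function, not taken from any real codebase:

```python
import unittest

def slugify(title):
    """Hypothetical project function under test."""
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Open Source"), "open-source")

    def test_surrounding_whitespace(self):
        self.assertEqual(slugify("  Free Software  "), "free-software")

# In a volunteer-run project, a Tinderbox-style build farm would run
# a suite like this after every commit and report failures to the
# developers automatically, standing in for a dedicated QA team.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```

A failing assertion in any test is reported with a traceback, which is the raw material a bug tracker such as Bugzilla would then collect.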

Tools such as mailing lists, IRC, and instant messaging provide means of communication among developers. The Web is also a core feature of all of the above systems. Some sites centralize all of these tools in a single software development management system; examples include GNU Savannah, SourceForge, and BountySource.

[edit] Projects and organizations

[edit] Examples

For an extensive list of examples of open source software, see the List of open source software packages.


[edit] References

  1. ^ Verts, William T. (2008-01-13). Open source software. World Book Online Reference Center.
  2. ^ Raymond, Eric S. (1998-02-08). Goodbye, "free software"; hello, "open source". Retrieved on 2007-02-14.
  3. ^ a b Raymond, Eric (2000-09-11). The Cathedral and the Bazaar. Retrieved on 2004-09-19.
  4. ^ Robles, Gregorio (2004). "A Software Engineering approach to Libre Software", in Robert A. Gehring, Bernd Lutterbeck: Open Source Jahrbuch 2004 (PDF), Berlin: Lehmanns Media. Retrieved on 2005-04-20.
  5. ^ Nelson, Russell (2007-03-02). Certification Mark. The Open Source Initiative (OSI). Retrieved on 2007-07-22.
  6. ^ Raymond, Eric S. (1998-11-22). OSI Launch Announcement. The Open Source Initiative (OSI). Retrieved on 2007-07-22.
  7. ^ Nelson, Russell (2006-09-19). Open Source Licenses by Category. The Open Source Initiative (OSI). Retrieved on 2007-07-22.
  8. ^ a b c d Stallman, Richard (2007-06-16). Why “Open Source” misses the point of Free Software. Philosophy of the GNU Project. GNU Project. Retrieved on 2007-07-23.
  9. ^ a b c Stallman, Richard (2007-06-19). Why “Free Software” is better than “Open Source”. Philosophy of the GNU Project. GNU Project. Retrieved on 2007-07-23.
  10. ^ Tiemann, Michael (2007-06-21). Will The Real Open Source CRM Please Stand Up?. Retrieved on 2008-01-04.
  11. ^ Berlind, David (2006-11-21). Are SugarCRM, Socialtext, Zimbra, Scalix and others abusing the term “open source?”. Retrieved on 2008-01-04.
  12. ^ Vance, Ashlee (2007-07-25). SugarCRM trades badgeware for GPL 3. The Register.
  13. ^ The open source platform for numerical computation. Retrieved on 2008-01-04.
  14. ^ SCILAB License. Retrieved on 2008-01-04.
