https://en.wikipedia.org/wiki/AutoCAD
AutoCAD is a 2D and 3D computer-aided design (CAD) software application for desktop, web, and mobile developed by Autodesk. It was first released in December 1982 for the CP/M and IBM PC platforms as a desktop app running on microcomputers with internal graphics controllers. Initially a DOS application, subsequent versions were later released for other platforms including Classic Mac OS (1992), Microsoft Windows (1992), web browsers (2010), iOS (2010), macOS (2010), and Android (2011).
AutoCAD is a general drafting and design application used in industry by architects, project managers, engineers, graphic designers, city planners and other professionals to prepare technical drawings. Since Autodesk discontinued the sale of perpetual licenses in January 2016, commercial versions of AutoCAD have been licensed through a term-based subscription.
History
Before AutoCAD was introduced, most commercial CAD programs ran on mainframe computers or minicomputers, with each CAD operator (user) working at a separate graphics terminal.
Origins
AutoCAD was derived from a program called Interact CAD, begun in 1977 and released in 1979, and also referred to in early Autodesk documents as MicroCAD. It was written by Autodesk cofounder Michael Riddle before the formation of Autodesk (then Marinchip Software Partners).
The first version by Autodesk was demonstrated at the 1982 Comdex and released that December. AutoCAD supported CP/M-80 computers. As Autodesk's flagship product, by March 1986 AutoCAD had become the most ubiquitous CAD program worldwide. The 2022 release marked the 36th major release of AutoCAD for Windows and the 12th consecutive year of AutoCAD for Mac. The native file format of AutoCAD is .dwg. This and, to a lesser extent, its interchange file format DXF, have become de facto, if proprietary, standards for CAD data interoperability, particularly for 2D drawing exchange. AutoCAD has included support for .dwf, a format developed and promoted by Autodesk, for publishing CAD data.
Features
Compatibility with other software
ESRI ArcMap 10 permits export as AutoCAD drawing files. Civil 3D permits export as AutoCAD objects and as LandXML. Third-party file converters exist for specific formats such as Bentley MX GENIO Extension, PISTE Extension (France), ISYBAU (Germany), OKSTRA and Microdrainage (UK). Conversion of .pdf files is also feasible; however, the accuracy of the results may be unpredictable or distorted (for example, jagged edges may appear). Several vendors, such as Cometdocs, provide free online conversions.
Language
AutoCAD and AutoCAD LT are available for English, German, French, Italian, Spanish, Japanese, Korean, Chinese Simplified, Chinese Traditional, Brazilian Portuguese, Russian, Czech, Polish and Hungarian (also through additional language packs). The extent of localization varies from full translation of the product to documentation only. The AutoCAD command set is localized as a part of the software localization.
Extensions
AutoCAD supports a number of APIs for customization and automation. These include AutoLISP, Visual LISP, VBA, .NET and ObjectARX. ObjectARX is a C++ class library, which was also the base for:
products that extend AutoCAD functionality to specific fields
products such as AutoCAD Architecture, AutoCAD Electrical and AutoCAD Civil 3D
third-party AutoCAD-based applications
There are a large number of AutoCAD plugins (add-on applications) available on the application store Autodesk Exchange Apps.
AutoCAD's DXF (Drawing Exchange Format) allows drawing information to be imported and exported.
Vertical integration
Autodesk has also developed a few vertical programs for discipline-specific enhancements such as:
Advance Steel
AutoCAD Architecture
AutoCAD Electrical
AutoCAD Map 3D
AutoCAD Mechanical
AutoCAD MEP
AutoCAD Plant 3D
Autodesk Civil 3D
Since AutoCAD 2019, several of these verticals have been included with an AutoCAD subscription as Industry-Specific Toolsets.
For example, AutoCAD Architecture (formerly Architectural Desktop) permits architectural designers to draw 3D objects, such as walls, doors, and windows, with more intelligent data associated with them rather than simple objects, such as lines and circles. The data can be programmed to represent specific architectural products sold in the construction industry, or extracted into a data file for pricing, materials estimation, and other values related to the objects represented.
Additional tools generate standard 2D drawings, such as elevations and sections, from a 3D architectural model. Similarly, Civil Design, Civil Design 3D, and Civil Design Professional support data-specific objects facilitating easy standard civil engineering calculations and representations.
Softdesk Civil was developed as an AutoCAD add-on by a company in New Hampshire called Softdesk (originally DCA). Softdesk was acquired by Autodesk, and Civil became Land Development Desktop (LDD), later renamed Land Desktop. Civil 3D was later developed and Land Desktop was retired.
File formats
AutoCAD's native file formats are denoted either by a .dwg, .dwt, .dws, or .dxf filename extension.
The primary file format for 2D and 3D drawing files created with AutoCAD is .dwg. While other third-party CAD software applications can create .dwg files, AutoCAD uniquely creates RealDWG files.
Using AutoCAD, any .dwg file may be saved to a derivative format. These derivative formats include:
Drawing Template Files .dwt: New .dwg files are created from a .dwt file. Although the default template file is acad.dwt for AutoCAD and acadlt.dwt for AutoCAD LT, custom .dwt files may be created to include foundational configurations such as drawing units and layers.
Drawing Standards File .dws: Using the CAD Standards feature of AutoCAD, a Drawing Standards File may be associated with any .dwg or .dwt file to enforce graphical standards.
Drawing Interchange Format .dxf: The .dxf format is an ASCII representation of a .dwg file, and is used to transfer data between various applications.
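Because .dxf is a plain-text, tag-based format organized as alternating group-code/value pairs, a minimal file can be produced with ordinary string handling. The Python sketch below writes an entities-only DXF containing a single LINE; it is an illustration only and assumes a reader that accepts minimal, header-less DXF files (the function name and file name are arbitrary).

```python
# Minimal sketch: write an entities-only ASCII DXF file containing one LINE.
# DXF is organized as alternating group-code / value pairs, one per line.

def write_minimal_dxf(path, x1, y1, x2, y2):
    pairs = [
        (0, "SECTION"), (2, "ENTITIES"),  # open the ENTITIES section
        (0, "LINE"), (8, "0"),            # a LINE entity on layer "0"
        (10, x1), (20, y1),               # start point (X, Y)
        (11, x2), (21, y2),               # end point (X, Y)
        (0, "ENDSEC"), (0, "EOF"),        # close the section and the file
    ]
    with open(path, "w") as f:
        for code, value in pairs:
            f.write(f"{code}\n{value}\n")

write_minimal_dxf("line.dxf", 0.0, 0.0, 100.0, 50.0)
```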
Variants
AutoCAD LT
AutoCAD LT is the lower-cost version of AutoCAD, with reduced capabilities, first released in November 1993. Autodesk developed AutoCAD LT to have an entry-level CAD package to compete at the lower price level. Priced at $495, it became the first AutoCAD product priced below $1000. It was sold directly by Autodesk and in computer stores, unlike the full version of AutoCAD, which had to be purchased from official Autodesk dealers. AutoCAD LT 2015 introduced a Desktop Subscription service from $360 per year; as of 2018, three subscription plans were available, from $50 a month to a 3-year, $1170 license.
While there are hundreds of small differences between the full AutoCAD package and AutoCAD LT, there are a few recognized major differences in the software's features:
3D capabilities: AutoCAD LT lacks the ability to create, visualize and render 3D models, as well as support for 3D printing.
Network licensing: AutoCAD LT cannot be used on multiple machines over a network.
Customization: AutoCAD LT does not support customization with LISP, ARX, .NET and VBA.
Management and automation: AutoCAD LT lacks the Sheet Set Manager and Action Recorder tools.
CAD standards: AutoCAD LT lacks the CAD standards management tools.
AutoCAD Mobile and AutoCAD Web
AutoCAD Mobile and AutoCAD Web (formerly AutoCAD WS and AutoCAD 360) is an account-based mobile and web application enabling registered users to view, edit, and share AutoCAD files via mobile devices and the web, using a limited AutoCAD feature set and cloud-stored drawing files. The program, which is an evolution and combination of previous products, uses a freemium business model with a free plan and two paid levels, including various amounts of storage, tools, and online access to drawings. AutoCAD 360 introduced features such as a "Smart Pen" mode and linking to third-party cloud-based storage such as Dropbox. Having evolved from Flash-based software, AutoCAD Web uses HTML5 browser technology available in newer browsers including Firefox and Google Chrome.
AutoCAD WS began with a version for the iPhone and subsequently expanded to include versions for the iPod Touch, iPad, Android phones, and Android tablets. Autodesk released the iOS version in September 2010, following with the Android version on April 20, 2011. The program is available via download at no cost from the App Store (iOS), Google Play (Android) and Amazon Appstore (Android).
In its initial iOS version, AutoCAD WS supported drawing of lines, circles, and other shapes; creation of text and comment boxes; and management of color, layer, and measurements — in both landscape and portrait modes. Version 1.3, released August 17, 2011, added support for unit typing, layer visibility, area measurement and file management. The Android variant includes the iOS feature set along with such unique features as the ability to insert text or captions by voice command as well as manually. Both Android and iOS versions allow the user to save files on-line — or off-line in the absence of an Internet connection.
In 2011, Autodesk announced plans to migrate the majority of its software to "the cloud", starting with the AutoCAD WS mobile application.
According to a 2013 interview with Ilai Rotbaein, an AutoCAD WS product manager for Autodesk, the name AutoCAD WS had no definitive meaning, and was interpreted variously as Autodesk Web Service, White Sheet or Work Space. In 2013, AutoCAD WS was renamed to AutoCAD 360. Later, it was renamed to AutoCAD Web App.
Student versions
AutoCAD is licensed at no cost to students, educators, and educational institutions, with a renewable 12-month license. Licenses acquired before March 25, 2020 were 36-month licenses, with the last renewal possible on March 24, 2020. The student version of AutoCAD is functionally identical to the full commercial version, with one exception: DWG files created or edited by a student version have an internal bit-flag set (the "educational flag"). When such a DWG file is printed by any version of AutoCAD (commercial or student) older than AutoCAD 2014 SP1, or by AutoCAD 2019 and newer, the output includes a plot stamp/banner on all four sides. Objects created in the student version cannot be used commercially. Student-version objects also "infect" a commercial-version DWG file if they are imported in versions older than AutoCAD 2015 or newer than AutoCAD 2018.
Ports
Windows
AutoCAD Release 12 in 1992 was the first version of the software to support the Windows platform, in that case Windows 3.1. After Release 14 in 1997, support for MS-DOS, Unix and Macintosh was dropped, and AutoCAD was supported exclusively on Windows. In general, each new AutoCAD version supports the current Windows version and some older ones. AutoCAD 2016 through 2020 support Windows 7 through Windows 10.
Mac
Autodesk stopped supporting Apple's Macintosh computers in 1994. Over the next several years, no compatible versions for the Mac were released. In 2010, Autodesk announced that it would once again support Apple's Mac OS X software. Most of the features found in the 2012 Windows version can be found in the 2012 Mac version; the main differences are the user interface and layout of the program. The interface is designed so that users who are already familiar with Apple's macOS will find it similar to other Mac applications. Autodesk has also built in various features to take full advantage of Apple's trackpad capabilities, as well as the full-screen mode in Apple's OS X Lion. AutoCAD 2012 for Mac supports both editing and saving files in the DWG format, allowing the files to be compatible with other platforms besides macOS. AutoCAD 2019 for Mac requires OS X El Capitan or later.
AutoCAD LT 2013 was available through the Mac App Store for $899.99. The full-featured version of AutoCAD 2013 for Mac, however, wasn't available through the Mac App Store due to the price limit of $999 set by Apple. AutoCAD 2014 for Mac was available for purchase from Autodesk's web site for $4,195 and AutoCAD LT 2014 for Mac for $1,200, or from an Autodesk authorized reseller. The latest version available for Mac is AutoCAD 2022 as of January 2022.
Version history
See also
Autodesk 3ds Max
Autodesk Maya
Autodesk Revit
AutoShade
AutoSketch
Comparison of computer-aided design software
Design Web Format
Open source CAD software:
LibreCAD
FreeCAD
BRL-CAD
References
Further reading
External links
https://en.wikipedia.org/wiki/Alkene
In organic chemistry, an alkene is a hydrocarbon containing a carbon–carbon double bond. The double bond may be internal or in the terminal position. Terminal alkenes are also known as α-olefins.
The International Union of Pure and Applied Chemistry (IUPAC) recommends using the name "alkene" only for acyclic hydrocarbons with just one double bond; alkadiene, alkatriene, etc., or polyene for acyclic hydrocarbons with two or more double bonds; cycloalkene, cycloalkadiene, etc. for cyclic ones; and "olefin" for the general class – cyclic or acyclic, with one or more double bonds.
Acyclic alkenes, with only one double bond and no other functional groups (also known as mono-enes), form a homologous series of hydrocarbons with the general formula CnH2n, with n being 2 or more (which is two hydrogens fewer than the corresponding alkane). When n is four or more, isomers are possible, distinguished by the position and conformation of the double bond.
Alkenes are generally colorless non-polar compounds, somewhat similar to alkanes but more reactive. The first few members of the series are gases or liquids at room temperature. The simplest alkene, ethylene (C2H4) (or "ethene" in the IUPAC nomenclature), is the organic compound produced on the largest scale industrially.
Aromatic compounds are often drawn as cyclic alkenes; however, their structure and properties are sufficiently distinct that they are not classified as alkenes or olefins. Hydrocarbons with two overlapping double bonds (C=C=C) are called allenes—the simplest such compound is itself called allene—and those with three or more overlapping double bonds (C=C=C=C, C=C=C=C=C, etc.) are called cumulenes.
Structural isomerism
Alkenes having four or more carbon atoms can form diverse structural isomers. Most alkenes are also isomers of cycloalkanes. Acyclic alkene structural isomers with only one double bond follow:
C2H4: ethylene only
C3H6: propylene only
C4H8: 3 isomers: 1-butene, 2-butene, and isobutylene
C5H10: 5 isomers: 1-pentene, 2-pentene, 2-methyl-1-butene, 3-methyl-1-butene, 2-methyl-2-butene
C6H12: 13 isomers: 1-hexene, 2-hexene, 3-hexene, 2-methyl-1-pentene, 3-methyl-1-pentene, 4-methyl-1-pentene, 2-methyl-2-pentene, 3-methyl-2-pentene, 4-methyl-2-pentene, 2,3-dimethyl-1-butene, 3,3-dimethyl-1-butene, 2,3-dimethyl-2-butene, 2-ethyl-1-butene
Many of these molecules exhibit cis–trans isomerism. There may also be chiral carbon atoms, particularly within the larger molecules (from C6H12 onward). The number of potential isomers increases rapidly with additional carbon atoms.
Structure and bonding
Bonding
A carbon–carbon double bond consists of a sigma bond and a pi bond. This double bond is stronger than a single covalent bond (611 kJ/mol for C=C vs. 347 kJ/mol for C–C), but not twice as strong. Double bonds are shorter than single bonds with an average bond length of 1.33 Å (133 pm) vs 1.53 Å for a typical C-C single bond.
Each carbon atom of the double bond uses its three sp2 hybrid orbitals to form sigma bonds to three atoms (the other carbon atom and two hydrogen atoms). The unhybridized 2p atomic orbitals, which lie perpendicular to the plane created by the axes of the three sp2 hybrid orbitals, combine to form the pi bond. This bond lies outside the main C–C axis, with half of the bond on one side of the molecule and a half on the other. With a strength of 65 kcal/mol, the pi bond is significantly weaker than the sigma bond.
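Treating bond energies as additive gives a rough consistency check on these figures (an approximation that ignores changes in hybridization and in the σ framework): \(E_\pi \approx E_{\mathrm{C=C}} - E_{\mathrm{C-C}} = 611 - 347 = 264\ \mathrm{kJ/mol} \approx 63\ \mathrm{kcal/mol}\), close to the roughly 65 kcal/mol quoted for the π bond.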
Rotation about the carbon–carbon double bond is restricted because it incurs an energetic cost to break the alignment of the p orbitals on the two carbon atoms. Consequently cis or trans isomers interconvert so slowly that they can be freely handled at ambient conditions without isomerization. More complex alkenes may be named with the E–Z notation for molecules with three or four different substituents (side groups). For example, of the isomers of butene, the two methyl groups of (Z)-but-2-ene (a.k.a. cis-2-butene) appear on the same side of the double bond, and in (E)-but-2-ene (a.k.a. trans-2-butene) the methyl groups appear on opposite sides. These two isomers of butene have distinct properties.
Shape
As predicted by the VSEPR model of electron pair repulsion, the molecular geometry of alkenes includes bond angles about each carbon atom in a double bond of about 120°. The angle may vary because of steric strain introduced by nonbonded interactions between functional groups attached to the carbon atoms of the double bond. For example, the C–C–C bond angle in propylene is 123.9°.
For bridged alkenes, Bredt's rule states that a double bond cannot occur at the bridgehead of a bridged ring system unless the rings are large enough. Following Fawcett and defining S as the total number of non-bridgehead atoms in the rings, bicyclic systems require S ≥ 7 for stability and tricyclic systems require S ≥ 11.
Isomerism
In organic chemistry, the prefixes cis- and trans- are used to describe the positions of functional groups attached to carbon atoms joined by a double bond. In Latin, cis and trans mean "on this side of" and "on the other side of" respectively. Therefore, if the functional groups are both on the same side of the carbon chain, the bond is said to have the cis- configuration; otherwise (i.e. if the functional groups are on opposite sides of the carbon chain), the bond is said to have the trans- configuration.
For there to be cis- and trans- configurations, there must be a carbon chain, or at least one functional group attached to each carbon must be the same. The E- and Z- configurations can be used instead in the more general case where all four functional groups attached to the carbon atoms of the double bond are different. E- and Z- are abbreviations of the German words entgegen (opposite) and zusammen (together), respectively. In E- and Z-isomerism, each functional group is assigned a priority based on the Cahn–Ingold–Prelog priority rules. If the two groups with higher priority are on the same side of the double bond, the bond is assigned the Z- configuration; otherwise (i.e. if the two groups with higher priority are on opposite sides of the double bond), the bond is assigned the E- configuration. Cis- and trans- configurations do not have a fixed relationship with E- and Z- configurations.
Physical properties
Many of the physical properties of alkenes and alkanes are similar: they are colorless, nonpolar, and combustible. The physical state depends on molecular mass: like the corresponding saturated hydrocarbons, the simplest alkenes (ethylene, propylene, and butene) are gases at room temperature. Linear alkenes of approximately five to sixteen carbon atoms are liquids, and higher alkenes are waxy solids. The melting point of the solids also increases with increase in molecular mass.
Alkenes generally have stronger smells than their corresponding alkanes. Ethylene has a sweet and musty odor. Strained alkenes, in particular, like norbornene and trans-cyclooctene are known to have strong, unpleasant odors, a fact consistent with the stronger π complexes they form with metal ions including copper.
Boiling and melting points
Below is a list of the boiling and melting points of various alkenes with the corresponding alkane and alkyne analogues.
Infrared spectroscopy
Stretching of the C=C bond gives an IR absorption peak at 1670–1600 cm−1, while bending of the C=C bond absorbs in the 1000–650 cm−1 region.
NMR spectroscopy
In 1H NMR spectroscopy, hydrogens bonded directly to the doubly bonded (sp2) carbons give signals at δH 4.5–6.5 ppm. The double bond also deshields the hydrogens attached to the carbons adjacent to the sp2 carbons (allylic hydrogens), which appear at δH 1.6–2.6 ppm. Cis and trans isomers are distinguishable by their different J-coupling constants: cis vicinal hydrogens have coupling constants in the range of 6–14 Hz, whereas trans couplings are 11–18 Hz.
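Because the cis (6–14 Hz) and trans (11–18 Hz) ranges overlap, a single measured 3J value is only sometimes diagnostic on its own. The short Python sketch below makes this explicit; the ranges are those quoted above and the function name is purely illustrative.

```python
# Classify a vinylic 3J(H,H) coupling constant (in Hz) using the ranges above:
# cis ~ 6-14 Hz, trans ~ 11-18 Hz. Values in the overlap are ambiguous.

def classify_vicinal_coupling(j_hz: float) -> str:
    in_cis = 6.0 <= j_hz <= 14.0
    in_trans = 11.0 <= j_hz <= 18.0
    if in_cis and in_trans:
        return "ambiguous (cis and trans ranges overlap)"
    if in_cis:
        return "consistent with cis"
    if in_trans:
        return "consistent with trans"
    return "outside typical vinylic ranges"

for j in (7.5, 12.0, 16.5):
    print(j, "Hz ->", classify_vicinal_coupling(j))
```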
In the 13C NMR spectra of alkenes, the double bond likewise deshields the carbons, shifting them downfield. C=C carbons typically have chemical shifts of about 100–170 ppm.
Combustion
Like most other hydrocarbons, alkenes combust to give carbon dioxide and water.
The combustion of an alkene releases less energy than the combustion of the same molar amount of the saturated hydrocarbon with the same number of carbon atoms.
This trend can be clearly seen in the list of standard enthalpy of combustion of hydrocarbons.
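For example, using commonly tabulated approximate standard enthalpies of combustion, \(\Delta H_c^\circ(\mathrm{C_2H_6}) \approx -1560\ \mathrm{kJ/mol}\) for ethane versus \(\Delta H_c^\circ(\mathrm{C_2H_4}) \approx -1411\ \mathrm{kJ/mol}\) for ethylene, so burning one mole of the alkene releases roughly 150 kJ less than burning one mole of the corresponding alkane.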
Reactions
Alkenes are relatively stable compounds, but are more reactive than alkanes. Most reactions of alkenes involve additions to this pi bond, forming new single bonds. Alkenes serve as a feedstock for the petrochemical industry because they can participate in a wide variety of reactions, prominently polymerization and alkylation. Except for ethylene, alkenes have two sites of reactivity: the carbon–carbon pi-bond and the presence of allylic CH centers. The former dominates but the allylic sites are important too.
Addition to the unsaturated bonds
Hydrogenation involves the addition of H2 resulting in an alkane. The equation of hydrogenation of ethylene to form ethane is:
H2C=CH2 + H2 → H3C−CH3
Hydrogenation reactions usually require catalysts to increase their reaction rate. The total number of hydrogens that can be added to an unsaturated hydrocarbon depends on its degree of unsaturation.
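For a hydrocarbon CxHy, the degree of unsaturation is (2x + 2 − y)/2; for an acyclic alkene or polyene this equals the number of C=C bonds and hence the number of equivalents of H2 consumed on complete hydrogenation. A small Python sketch of this arithmetic (the function name is illustrative):

```python
# Degree of unsaturation of a hydrocarbon CxHy: (2x + 2 - y) / 2.
# For an acyclic alkene or polyene this equals the number of C=C bonds,
# i.e. the number of H2 equivalents consumed on full hydrogenation.

def degree_of_unsaturation(carbons: int, hydrogens: int) -> int:
    dou = (2 * carbons + 2 - hydrogens) // 2
    if dou < 0:
        raise ValueError("impossible hydrocarbon formula")
    return dou

print(degree_of_unsaturation(2, 4))  # ethylene C2H4 -> 1
print(degree_of_unsaturation(4, 6))  # 1,3-butadiene C4H6 -> 2
print(degree_of_unsaturation(2, 6))  # ethane C2H6 -> 0
```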
As with hydrogen, halogens add to double bonds.
H2C=CH2 + Br2 → H2CBr−CH2Br
Halonium ions are intermediates. These reactions do not require catalysts.
The bromine test is used to test for the saturation of hydrocarbons and can also serve as an indication of the degree of unsaturation. The bromine number is defined as the number of grams of bromine that can react with 100 g of product. As with hydrogenation, bromination depends on the number of π bonds; a higher bromine number indicates a higher degree of unsaturation.
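Under that definition, a theoretical bromine number can be estimated from the molar mass and the number of C=C bonds, assuming exactly one Br2 adds across each double bond. A simplified Python illustration (the function name is chosen for this sketch):

```python
# Theoretical bromine number: grams of Br2 consumed per 100 g of sample,
# assuming exactly one Br2 adds across each C=C double bond.

M_BR2 = 159.8  # g/mol, molar mass of Br2

def bromine_number(molar_mass_g_mol: float, n_double_bonds: int) -> float:
    return 100.0 * n_double_bonds * M_BR2 / molar_mass_g_mol

print(round(bromine_number(28.05, 1)))  # ethylene C2H4  -> ~570
print(round(bromine_number(54.09, 2)))  # 1,3-butadiene  -> ~591
```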
The π bonds of alkenes are also susceptible to hydration. The reaction usually involves a strong acid as catalyst. The first step in hydration often involves formation of a carbocation, and the net result of the reaction is an alcohol. The reaction equation for the hydration of ethylene is:
H2C=CH2 + H2O → CH3CH2OH
Hydrohalogenation involves the addition of H−X to unsaturated hydrocarbons. This reaction results in new C−H and C−X σ bonds. The formation of the intermediate carbocation is selective and follows Markovnikov's rule. The hydrohalogenation of an alkene gives a haloalkane. The reaction equation for HBr addition to ethylene is:
H2C=CH2 + HBr → H3C−CH2Br
Cycloaddition
Alkenes add to dienes to give cyclohexenes. This conversion is an example of a Diels–Alder reaction. Such reactions proceed with retention of stereochemistry. The rates are sensitive to electron-withdrawing or electron-donating substituents. When irradiated by UV light, alkenes dimerize to give cyclobutanes. Another example is the Schenck ene reaction, in which singlet oxygen reacts with an allylic structure to give a transposed allyl peroxide:
Oxidation
Alkenes react with percarboxylic acids and even hydrogen peroxide to yield epoxides:
For ethylene, the epoxidation is conducted on a very large scale industrially using oxygen in the presence of silver-based catalysts:
Alkenes react with ozone, leading to the scission of the double bond. The process is called ozonolysis. Often the reaction procedure includes a mild reductant, such as dimethyl sulfide ((CH3)2S):
When treated with a hot, concentrated, acidified solution of potassium permanganate (KMnO4), alkenes are cleaved to form ketones and/or carboxylic acids. The stoichiometry of the reaction is sensitive to conditions. This reaction and the ozonolysis can be used to determine the position of a double bond in an unknown alkene.
The oxidation can be stopped at the vicinal diol rather than full cleavage of the alkene by using osmium tetroxide or other oxidants:
R'CH=CR2 + 1/2 O2 + H2O → R'CH(OH)−C(OH)R2
This reaction is called dihydroxylation.
In the presence of an appropriate photosensitiser, such as methylene blue and light, alkenes can undergo reaction with reactive oxygen species generated by the photosensitiser, such as hydroxyl radicals, singlet oxygen or superoxide ion. Reactions of the excited sensitizer can involve electron or hydrogen transfer, usually with a reducing substrate (Type I reaction) or interaction with oxygen (Type II reaction). These various alternative processes and reactions can be controlled by choice of specific reaction conditions, leading to a wide range of products. A common example is the [4+2]-cycloaddition of singlet oxygen with a diene such as cyclopentadiene to yield an endoperoxide:
Polymerization
Terminal alkenes are precursors to polymers via processes termed polymerization. Some polymerizations are of great economic significance, as they generate the plastics polyethylene and polypropylene. Polymers from alkene are usually referred to as polyolefins although they contain no olefins. Polymerization can proceed via diverse mechanisms. Conjugated dienes such as buta-1,3-diene and isoprene (2-methylbuta-1,3-diene) also produce polymers, one example being natural rubber.
Allylic substitution
The presence of a C=C π bond in unsaturated hydrocarbons weakens the dissociation energy of the allylic C−H bonds. Thus, these groupings are susceptible to free radical substitution at these C-H sites as well as addition reactions at the C=C site. In the presence of radical initiators, allylic C-H bonds can be halogenated. The presence of two C=C bonds flanking one methylene, i.e., doubly allylic, results in particularly weak HC-H bonds. The high reactivity of these situations is the basis for certain free radical reactions, manifested in the chemistry of drying oils.
Metathesis
Alkenes undergo olefin metathesis, which cleaves and interchanges the substituents of the alkene. A related reaction is ethenolysis:
Metal complexation
In transition metal alkene complexes, alkenes serve as ligands for metals. In this case, the π electron density is donated to the metal d orbitals. The stronger the donation is, the stronger the back bonding from the metal d orbital to the π* anti-bonding orbital of the alkene. This effect lowers the bond order of the alkene and increases the C–C bond length. A classic example is Zeise's salt, K[PtCl3(C2H4)]·H2O. These complexes are related to the mechanisms of metal-catalyzed reactions of unsaturated hydrocarbons.
Reaction overview
Synthesis
Industrial methods
Alkenes are produced by hydrocarbon cracking. Raw materials are mostly natural gas condensate components (principally ethane and propane) in the US and Mideast and naphtha in Europe and Asia. Alkanes are broken apart at high temperatures, often in the presence of a zeolite catalyst, to produce a mixture of primarily aliphatic alkenes and lower molecular weight alkanes. The mixture is feedstock- and temperature-dependent, and is separated by fractional distillation. This is mainly used for the manufacture of small alkenes (up to six carbons).
Related to this is catalytic dehydrogenation, where an alkane loses hydrogen at high temperatures to produce a corresponding alkene. This is the reverse of the catalytic hydrogenation of alkenes.
This process is also known as reforming. Both processes are endothermic and are driven towards the alkene at high temperatures by entropy.
Catalytic synthesis of higher α-alkenes (of the type RCH=CH2) can also be achieved by a reaction of ethylene with the organometallic compound triethylaluminium in the presence of nickel, cobalt, or platinum.
Elimination reactions
One of the principal methods for alkene synthesis in the laboratory is the elimination of alkyl halides, alcohols, and similar compounds. Most common is β-elimination via the E2 or E1 mechanism, but α-eliminations are also known.
The E2 mechanism provides a more reliable β-elimination method than E1 for most alkene syntheses. Most E2 eliminations start with an alkyl halide or alkyl sulfonate ester (such as a tosylate or triflate). When an alkyl halide is used, the reaction is called a dehydrohalogenation. For unsymmetrical products, the more substituted alkenes (those with fewer hydrogens attached to the C=C) tend to predominate (see Zaitsev's rule). Two common methods of elimination reactions are dehydrohalogenation of alkyl halides and dehydration of alcohols. A typical example is shown below; note that if possible, the H is anti to the leaving group, even though this leads to the less stable Z-isomer.
Alkenes can be synthesized from alcohols via dehydration, in which case water is lost via the E1 mechanism. For example, the dehydration of ethanol produces ethylene:
CH3CH2OH → H2C=CH2 + H2O
An alcohol may also be converted to a better leaving group (e.g., xanthate), so as to allow a milder syn-elimination such as the Chugaev elimination and the Grieco elimination. Related reactions include eliminations by β-haloethers (the Boord olefin synthesis) and esters (ester pyrolysis).
Alkenes can be prepared indirectly from alkyl amines. The amine or ammonia is not a suitable leaving group, so the amine is first either alkylated (as in the Hofmann elimination) or oxidized to an amine oxide (the Cope reaction) to render a smooth elimination possible. The Cope reaction is a syn-elimination that occurs at or below 150 °C, for example:
The Hofmann elimination is unusual in that the less substituted (non-Zaitsev) alkene is usually the major product.
Alkenes are generated from α-halosulfones in the Ramberg–Bäcklund reaction, via a three-membered ring sulfone intermediate.
Synthesis from carbonyl compounds
Another important method for alkene synthesis involves construction of a new carbon–carbon double bond by coupling of a carbonyl compound (such as an aldehyde or ketone) to a carbanion equivalent. Such reactions are sometimes called olefinations. The most well-known of these methods is the Wittig reaction, but other related methods are known, including the Horner–Wadsworth–Emmons reaction.
The Wittig reaction involves reaction of an aldehyde or ketone with a Wittig reagent (or phosphorane) of the type Ph3P=CHR to produce an alkene and Ph3P=O. The Wittig reagent is itself prepared easily from triphenylphosphine and an alkyl halide. The reaction is quite general and many functional groups are tolerated, even esters, as in this example:
Related to the Wittig reaction is the Peterson olefination, which uses silicon-based reagents in place of the phosphorane. This reaction allows for the selection of E- or Z-products. If an E-product is desired, another alternative is the Julia olefination, which uses the carbanion generated from a phenyl sulfone. The Takai olefination based on an organochromium intermediate also delivers E-products. A titanium compound, Tebbe's reagent, is useful for the synthesis of methylene compounds; in this case, even esters and amides react.
A pair of ketones or aldehydes can be deoxygenated to generate an alkene. Symmetrical alkenes can be prepared from a single aldehyde or ketone coupling with itself, using titanium metal reduction (the McMurry reaction). If different ketones are to be coupled, a more complicated method is required, such as the Barton–Kellogg reaction.
A single ketone can also be converted to the corresponding alkene via its tosylhydrazone, using sodium methoxide (the Bamford–Stevens reaction) or an alkyllithium (the Shapiro reaction).
Synthesis from alkenes
The formation of longer alkenes via the step-wise polymerisation of smaller ones is appealing, as ethylene (the smallest alkene) is both inexpensive and readily available, with hundreds of millions of tonnes produced annually. The Ziegler–Natta process allows for the formation of very long chains, for instance those used for polyethylene. Where shorter chains are wanted, as they are for the production of surfactants, processes incorporating an olefin metathesis step, such as the Shell higher olefin process, are important.
Olefin metathesis is also used commercially for the interconversion of ethylene and 2-butene to propylene. Rhenium- and molybdenum-containing heterogeneous catalysts are used in this process:
CH2=CH2 + CH3CH=CHCH3 → 2 CH2=CHCH3
Transition metal catalyzed hydrovinylation is another important alkene synthesis process starting from alkene itself. It involves the addition of a hydrogen and a vinyl group (or an alkenyl group) across a double bond.
From alkynes
Reduction of alkynes is a useful method for the stereoselective synthesis of disubstituted alkenes. If the cis-alkene is desired, hydrogenation in the presence of Lindlar's catalyst (a heterogeneous catalyst that consists of palladium deposited on calcium carbonate and treated with various forms of lead) is commonly used, though hydroboration followed by hydrolysis provides an alternative approach. Reduction of the alkyne by sodium metal in liquid ammonia gives the trans-alkene.
For the preparation of multisubstituted alkenes, carbometalation of alkynes can give rise to a large variety of alkene derivatives.
Rearrangements and related reactions
Alkenes can be synthesized from other alkenes via rearrangement reactions. Besides olefin metathesis (described above), many pericyclic reactions can be used such as the ene reaction and the Cope rearrangement.
In the Diels–Alder reaction, a cyclohexene derivative is prepared from a diene and a reactive or electron-deficient alkene.
Application
Unsaturated hydrocarbons are widely used to produce plastics, medicines, and other useful materials.
Natural occurrence
Alkenes are pervasive in nature.
Plants are the main natural source of alkenes in the form of terpenes. Many of the most vivid natural pigments are terpenes; e.g. lycopene (the red of tomatoes) and carotene (the orange of carrots). The simplest of all alkenes, ethylene, is a plant hormone, a signaling molecule that influences the ripening of plants.
IUPAC Nomenclature
Although the nomenclature is not followed widely, according to IUPAC, an alkene is an acyclic hydrocarbon with just one double bond between carbon atoms. Olefins comprise a larger collection of cyclic and acyclic alkenes as well as dienes and polyenes.
To form the root of the IUPAC names for straight-chain alkenes, change the -an- infix of the parent to -en-. For example, CH3-CH3 is the alkane ethANe. The name of CH2=CH2 is therefore ethENe.
For straight-chain alkenes with 4 or more carbon atoms, that name does not completely identify the compound. For those cases, and for branched acyclic alkenes, the following rules apply:
Find the longest carbon chain in the molecule. If that chain does not contain the double bond, name the compound according to the alkane naming rules. Otherwise:
Number the carbons in that chain starting from the end that is closest to the double bond.
Define the location k of the double bond as being the number of its first carbon.
Name the side groups (other than hydrogen) according to the appropriate rules.
Define the position of each side group as the number of the chain carbon it is attached to.
Write the position and name of each side group.
Write the name of the alkane with the same chain, replacing the "-ane" suffix with "k-ene".
The position of the double bond is often inserted before the name of the chain (e.g. "2-pentene"), rather than before the suffix ("pent-2-ene").
The positions need not be indicated if they are unique. Note that the double bond may imply a different chain numbering than that used for the corresponding alkane: (H3C)3C–CH2–CH3 is "2,2-dimethylbutane", whereas (H3C)3C–CH=CH2 is "3,3-dimethyl-1-butene".
More complex rules apply for polyenes and cycloalkenes.
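As a toy illustration of the straight-chain rules above, the Python sketch below builds the name of an unbranched mono-ene from the chain length and the double-bond locant; it handles neither substituents nor cis–trans labels, and the root-name table and function name are supplied here for illustration only.

```python
# Name an unbranched acyclic mono-ene from its chain length and the locant
# (number of the first carbon) of its double bond, e.g. pent-2-ene.

ROOTS = {2: "eth", 3: "prop", 4: "but", 5: "pent", 6: "hex",
         7: "hept", 8: "oct", 9: "non", 10: "dec"}

def straight_chain_alkene_name(n_carbons: int, locant: int) -> str:
    if n_carbons not in ROOTS:
        raise ValueError("only chains of 2-10 carbons are handled here")
    if not 1 <= locant <= n_carbons - 1:
        raise ValueError("double-bond locant must lie within the chain")
    root = ROOTS[n_carbons]
    # The locant is omitted when it is unique (ethene, propene).
    if n_carbons <= 3:
        return root + "ene"
    return f"{root}-{locant}-ene"

print(straight_chain_alkene_name(2, 1))  # ethene
print(straight_chain_alkene_name(5, 2))  # pent-2-ene
```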
Cis–trans isomerism
If the double bond of an acyclic mono-ene is not the first bond of the chain, the name as constructed above still does not completely identify the compound, because of cis–trans isomerism. Then one must specify whether the two single C–C bonds adjacent to the double bond are on the same side of its plane, or on opposite sides. For monoalkenes, the configuration is often indicated by the prefixes cis- (from Latin "on this side of") or trans- ("across", "on the other side of") before the name, respectively; as in cis-2-pentene or trans-2-butene.
More generally, cis–trans isomerism will exist if each of the two carbons in the double bond has two different atoms or groups attached to it. Accounting for these cases, IUPAC recommends the more general E–Z notation instead of the cis and trans prefixes. This notation considers the group with the highest CIP priority on each of the two carbons. If these two groups are on opposite sides of the double bond's plane, the configuration is labeled E (from the German entgegen, meaning "opposite"); if they are on the same side, it is labeled Z (from the German zusammen, "together"). This labeling may be taught with the mnemonic "Z means 'on ze zame zide'".
Groups containing C=C double bonds
IUPAC recognizes two names for hydrocarbon groups containing carbon–carbon double bonds, the vinyl group and the allyl group.
See also
Alpha-olefin
Annulene
Aromatic hydrocarbon ("Arene")
Dendralene
Nitroalkene
Radialene
Nomenclature links
Rule A-3. Unsaturated Compounds and Univalent Radicals IUPAC Blue Book.
Rule A-4. Bivalent and Multivalent Radicals IUPAC Blue Book.
Rules A-11.3, A-11.4, A-11.5 Unsaturated monocyclic hydrocarbons and substituents IUPAC Blue Book.
Rule A-23. Hydrogenated Compounds of Fused Polycyclic Hydrocarbons IUPAC Blue Book.
References
https://en.wikipedia.org/wiki/Allenes
In organic chemistry, allenes are organic compounds in which one carbon atom has double bonds with each of its two adjacent carbon atoms (R2C=C=CR2, where R is H or some organyl group). Allenes are classified as cumulated dienes. The parent compound of this class is propadiene (H2C=C=CH2), which is itself also called allene. A group of the structure R2C=C=CR− is called allenyl, where R is H or some alkyl group. Compounds with an allene-type structure but with more than three carbon atoms are members of a larger class of compounds called cumulenes, with cumulated C=C=C bonding.
History
For many years, allenes were viewed as curiosities and thought to be synthetically useless and difficult to prepare and work with. Reportedly, the first synthesis of an allene, glutinic acid, was performed in an attempt to prove the non-existence of this class of compounds. The situation began to change in the 1950s; more than 300 papers on allenes were published in 2012 alone. These compounds are not just interesting intermediates but also synthetically valuable targets in their own right; for example, over 150 natural products are known with an allene or cumulene fragment.
Structure and properties
Geometry
The central carbon atom of allenes forms two sigma bonds and two pi bonds. The central carbon atom is sp-hybridized, and the two terminal carbon atoms are sp2-hybridized. The bond angle formed by the three carbon atoms is 180°, indicating linear geometry for the central carbon atom. The two terminal carbon atoms are planar, and these planes are twisted 90° from each other. The structure can also be viewed as an "extended tetrahedral" with a similar shape to methane, an analogy that is continued into the stereochemical analysis of certain derivative molecules.
Symmetry
The symmetry and isomerism of allenes have long fascinated organic chemists. For allenes with four identical substituents, there exist two twofold axes of rotation through the central carbon atom, inclined at 45° to the CH2 planes at either end of the molecule. The molecule can thus be thought of as a two-bladed propeller. A third twofold axis of rotation passes through the C=C=C bonds, and there is a mirror plane passing through both CH2 planes. Thus this class of molecules belongs to the D2d point group. Because of the symmetry, an unsubstituted allene has no net dipole moment, that is, it is a non-polar molecule.
An allene with two different substituents on each of the two carbon atoms will be chiral because there will no longer be any mirror planes. The chirality of these types of allenes was first predicted in 1875 by Jacobus Henricus van 't Hoff, but not proven experimentally until 1935. Where A has a greater priority than B according to the Cahn–Ingold–Prelog priority rules, the configuration of the axial chirality can be determined by considering the substituents on the front atom followed by the back atom when viewed along the allene axis. For the back atom, only the group of higher priority need be considered.
Chiral allenes have recently been used as building blocks in the construction of organic materials with exceptional chiroptical properties. There are a few examples of drug molecules having an allene system in their structure. Mycomycin, an antibiotic with tuberculostatic properties, is a typical example. This drug exhibits enantiomerism due to the presence of a suitably substituted allene system.
Although the semi-localized textbook σ-π separation model describes the bonding of allene using a pair of localized orthogonal π orbitals, the full molecular orbital description of the bonding is more subtle. The symmetry-correct doubly-degenerate HOMOs of allene (adapted to the D2d point group) can either be represented by a pair of orthogonal MOs or as twisted helical linear combinations of these orthogonal MOs. The symmetry of the system and the degeneracy of these orbitals imply that both descriptions are correct (in the same way that there are infinitely many ways to depict the doubly-degenerate HOMOs and LUMOs of benzene that correspond to different choices of eigenfunctions in a two-dimensional eigenspace). However, this degeneracy is lifted in substituted allenes, and the helical picture becomes the only symmetry-correct description for the HOMO and HOMO–1 of the C2-symmetric 1,3-dimethylallene. This qualitative MO description extends to higher odd-carbon cumulenes (e.g., 1,2,3,4-pentatetraene).
Chemical and spectral properties
Allenes differ considerably from other alkenes in terms of their chemical properties. Compared to isolated and conjugated dienes, they are considerably less stable: comparing the isomeric pentadienes, the allenic 1,2-pentadiene has a heat of formation of 33.6 kcal/mol, compared to 18.1 kcal/mol for (E)-1,3-pentadiene and 25.4 kcal/mol for the isolated 1,4-pentadiene.
The C–H bonds of allenes are considerably weaker and more acidic compared to typical vinylic C–H bonds: the bond dissociation energy is 87.7 kcal/mol (compared to 111 kcal/mol in ethylene), while the gas-phase acidity is 381 kcal/mol (compared to 409 kcal/mol for ethylene), making it slightly more acidic than the propargylic C–H bond of propyne (382 kcal/mol).
The 13C NMR spectrum of allenes is characterized by the signal of the sp-hybridized carbon atom, resonating at a characteristic 200-220 ppm. In contrast, the sp2-hybridized carbon atoms resonate around 80 ppm in a region typical for alkyne and nitrile carbon atoms, while the protons of a CH2 group of a terminal allene resonate at around 4.5 ppm — somewhat upfield of a typical vinylic proton.
Allenes possess a rich cycloaddition chemistry, including both [4+2] and [2+2] modes of addition, as well as undergoing formal cycloaddition processes catalyzed by transition metals. Allenes also serve as substrates for transition metal catalyzed hydrofunctionalization reactions.
Synthesis
Although allenes often require specialized syntheses, the parent allene, propadiene, is produced industrially on a large scale as an equilibrium mixture with methylacetylene:
H2C=C=CH2 ⇌ H3C−C≡CH
This mixture, known as MAPP gas, is commercially available. At 298 K, the ΔG° of this reaction is –1.9 kcal/mol, corresponding to Keq = 24.7.
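As a quick arithmetic check, the quoted equilibrium constant follows from the standard relation \(K_{\mathrm{eq}} = e^{-\Delta G^\circ/RT}\): with \(\Delta G^\circ = -1.9\ \mathrm{kcal/mol}\), \(R = 1.987\ \mathrm{cal\,mol^{-1}\,K^{-1}}\) and \(T = 298\ \mathrm{K}\), \(K_{\mathrm{eq}} = \exp\!\big(1900/(1.987 \times 298)\big) \approx e^{3.2} \approx 25\), consistent with the stated value of 24.7.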
The first allene to be synthesized was penta-2,3-dienedioic acid, which was prepared by Burton and Pechmann in 1887. However, the structure was only correctly identified in 1954.
Laboratory methods for the formation of allenes include:
from geminal dihalocyclopropanes and organolithium compounds (or metallic sodium or magnesium) in the Skattebøl rearrangement (Doering–LaFlamme allene synthesis) via rearrangement of cyclopropylidene carbenes/carbenoids
from reaction of certain terminal alkynes with formaldehyde, copper(I) bromide, and added base (Crabbé–Ma allene synthesis)
from propargylic halides by SN2′ displacement by an organocuprate
from dehydrohalogenation of certain dihalides
from reaction of a triphenylphosphinyl ester with an acid halide, a Wittig reaction accompanied by dehydrohalogenation
from propargylic alcohols via the Myers allene synthesis protocol—a stereospecific process
from metalation of allene or substituted allenes with BuLi and reaction with electrophiles (RX, R3SiX, D2O, etc.)
The chemistry of allenes has been reviewed in a number of books and journal articles. Some key approaches towards allenes are outlined in the following scheme:
One of the older methods is the Skattebøl rearrangement (also called the Doering–Moore–Skattebøl or Doering–LaFlamme rearrangement), in which a gem-dihalocyclopropane 3 is treated with an organolithium compound (or dissolving metal) and the presumed intermediate rearranges into an allene either directly or via carbene-like species. Notably, even strained allenes can be generated by this procedure. Modifications involving leaving groups of different nature are also known. Arguably, the most convenient modern method of allene synthesis is by sigmatropic rearrangement of propargylic substrates. Johnson–Claisen and Ireland–Claisen rearrangements of ketene acetals 4 have been used a number of times to prepare allenic esters and acids. Reactions of vinyl ethers 5 (the Saucy–Marbet rearrangement) give allene aldehydes, while propargylic sulfenates 6 give allene sulfoxides. Allenes can also be prepared by nucleophilic substitution in 9 and 10 (nucleophile Nu− can be a hydride anion), 1,2-elimination from 8, proton transfer in 7, and other, less general, methods.
Use and occurrence
The dominant use of allenes is allene itself, which, in equilibrium with propyne, is a component of MAPP gas.
Research
The reactivity of substituted allenes has been well explored.
The two π-bonds are located at a 90° angle to each other, and thus require a reagent to approach from somewhat different directions. With an appropriate substitution pattern, allenes exhibit axial chirality, as predicted by van 't Hoff as early as 1875. Protonation of allenes gives cations 11 that undergo further transformations. Reactions with soft electrophiles (e.g. Br+) deliver positively charged onium ions 13. Transition-metal-catalysed reactions proceed via allylic intermediates 15 and have attracted significant interest in recent years. Numerous cycloadditions are also known, including [4+2]-, (2+1)-, and [2+2]-variants, which deliver, e.g., 12, 14, and 16, respectively.
Occurrence
Numerous natural products contain the allene functional group. Noteworthy are the pigments fucoxanthin and peridinin. Little is known about the biosynthesis, although it is conjectured that they are often generated from alkyne precursors.
Allenes serve as ligands in organometallic chemistry. A typical complex is Pt(η2-allene)(PPh3)2. Ni(0) reagents catalyze the cyclooligomerization of allene. Using a suitable catalyst (e.g. Wilkinson's catalyst), it is possible to reduce just one of the double bonds of an allene.
Delta convention
Many rings or ring systems are known by semisystematic names that assume a maximum number of noncumulative bonds. To unambiguously specify derivatives that include cumulated bonds (and hence fewer hydrogen atoms than would be expected from the skeleton), a lowercase delta may be used with a subscript indicating the number of cumulated double bonds from that atom, e.g. 8δ2-benzocyclononene. This may be combined with the λ-convention for specifying nonstandard valency states, e.g. 2λ4δ2,5λ4δ2-thieno[3,4-c]thiophene.
See also
Compounds with three or more adjacent carbon–carbon double bonds are called cumulenes.
References
Further reading
Brummond, Kay M. (editor). Allene chemistry (special thematic issue). Beilstein Journal of Organic Chemistry 7: 394–943.
External links
Synthesis of allenes
Jacobus Henricus van 't Hoff
https://en.wikipedia.org/wiki/Astrobiology
Astrobiology is a scientific field within the life and environmental sciences that studies the origins, early evolution, distribution, and future of life in the universe by investigating its deterministic conditions and contingent events. As a discipline, astrobiology is founded on the premise that life may exist beyond Earth.
Research in astrobiology comprises three main areas: the study of habitable environments in the Solar System and beyond, the search for planetary biosignatures of past or present extraterrestrial life, and the study of the origin and early evolution of life on Earth.
The field of astrobiology has its origins in the 20th century with the advent of space exploration and the discovery of exoplanets. Early astrobiology research focused on the search for extraterrestrial life and the study of the potential for life to exist on other planets. In the 1960s and 1970s, NASA began its astrobiology pursuits within the Viking program, which was the first US mission to land on Mars and search for signs of life. This mission, along with other early space exploration missions, laid the foundation for the development of astrobiology as a discipline.
Regarding habitable environments, astrobiology investigates potential locations beyond Earth that could support life, such as Mars, Europa, and exoplanets, through research into the extremophiles populating austere environments on Earth, like volcanic and deep sea environments. Research within this topic is conducted utilising the methodology of the geosciences, especially geobiology, for astrobiological applications.
The search for biosignatures involves the identification of signs of past or present life in the form of organic compounds, isotopic ratios, or microbial fossils. Research within this topic is conducted utilising the methodology of planetary and environmental science, especially atmospheric science, for astrobiological applications, and is often conducted through remote sensing and in situ missions.
Astrobiology also concerns the study of the origin and early evolution of life on Earth to try to understand the conditions that are necessary for life to form on other planets. This research seeks to understand how life emerged from non-living matter and how it evolved to become the diverse array of organisms we see today. Research within this topic is conducted utilising the methodology of paleosciences, especially paleobiology, for astrobiological applications.
Astrobiology is a rapidly developing field with a strong interdisciplinary aspect that holds many challenges and opportunities for scientists. Astrobiology programs and research centres are present in many universities and research institutions around the world, and space agencies like NASA and ESA have dedicated departments and programs for astrobiology research.
Overview
The term astrobiology was first proposed by the Russian astronomer Gavriil Tikhov in 1953. It is etymologically derived from the Greek ἄστρον (astron), "star"; βίος (bios), "life"; and -λογία (-logia), "study". A close synonym is exobiology, from the Greek ἔξω (exo), "external"; βίος, "life"; and -λογία, "study", coined by American molecular biologist Joshua Lederberg; exobiology is considered to have a narrow scope limited to the search for life external to Earth. Another associated term is xenobiology, from the Greek ξένος (xenos), "foreign"; βίος, "life"; and -λογία, "study", coined by American science fiction writer Robert Heinlein in his work The Star Beast; xenobiology is now used in a more specialised sense, referring to "biology based on foreign chemistry", whether of extraterrestrial or terrestrial (typically synthetic) origin.
While the potential for extraterrestrial life, especially intelligent life, has been explored throughout human history within philosophy and narrative, the question is a verifiable hypothesis and thus a valid line of scientific inquiry; planetary scientist David Grinspoon calls it a field of natural philosophy, grounding speculation on the unknown in known scientific theory.
The modern field of astrobiology can be traced back to the 1950s and 1960s with the advent of space exploration, when scientists began to seriously consider the possibility of life on other planets. In 1957, the Soviet Union launched Sputnik 1, the first artificial satellite, which marked the beginning of the Space Age. This event led to an increase in the study of the potential for life on other planets, as scientists began to consider the possibilities opened up by the new technology of space exploration. In 1959, NASA funded its first exobiology project, and in 1960, NASA founded the Exobiology Program, now one of four main elements of NASA's current Astrobiology Program. In 1971, NASA funded Project Cyclops, part of the search for extraterrestrial intelligence, to search radio frequencies of the electromagnetic spectrum for interstellar communications transmitted by extraterrestrial life outside the Solar System. In the 1960s-1970s, NASA established the Viking program, which was the first US mission to land on Mars and search for metabolic signs of present life; the results were inconclusive.
In the 1980s and 1990s, the field began to expand and diversify as new discoveries and technologies emerged. The discovery of microbial life in extreme environments on Earth, such as deep-sea hydrothermal vents, helped to clarify the feasibility of potential life existing in harsh conditions. The development of new techniques for the detection of biosignatures, such as the use of stable isotopes, also played a significant role in the evolution of the field.
The contemporary landscape of astrobiology emerged in the early 21st century, focused on utilising Earth and environmental science for applications within comparable space environments. Missions included ESA's Beagle 2, which failed minutes after landing on Mars; NASA's Phoenix lander, which probed the environment for past and present planetary habitability of microbial life on Mars and researched the history of water there; and NASA's Curiosity rover, currently probing the environment for past and present planetary habitability of microbial life on Mars.
Theoretical foundations
Planetary habitability
Astrobiological research makes a number of simplifying assumptions when studying the necessary components for planetary habitability.
Carbon and Organic Compounds: Carbon is the fourth most abundant element in the universe and the energy required to make or break a bond is at just the appropriate level for building molecules which are not only stable, but also reactive. The fact that carbon atoms bond readily to other carbon atoms allows for the building of extremely long and complex molecules. As such, astrobiological research presumes that the vast majority of life forms in the Milky Way galaxy are based on carbon chemistries, as are all life forms on Earth. However, theoretical astrobiology entertains the potential for other organic molecular bases for life, thus astrobiological research often focuses on identifying environments that have the potential to support life based on the presence of organic compounds.
Liquid water: Liquid water is a common molecule that provides an excellent environment for the formation of complicated carbon-based molecules, and is generally considered necessary for life as we know it to exist. Thus, astrobiological research presumes that extraterrestrial life similarly depends upon access to liquid water, and often focuses on identifying environments that have the potential to support liquid water. Some researchers posit environments of water-ammonia mixtures as possible solvents for hypothetical types of biochemistry.
Environmental Stability: Where organisms adaptively evolve to the conditions of the environments in which they reside, environmental stability is considered necessary for life to exist. This presupposes the necessity of stable temperature, pressure, and radiation levels; as a result, astrobiological research focuses on planets orbiting Sun-like stars and red dwarfs. This is because very large stars have relatively short lifetimes, meaning that life might not have time to emerge on planets orbiting them; very small stars provide so little heat and warmth that only planets in very close orbits around them would not be frozen solid, and in such close orbits these planets would be tidally locked to the star; whereas the long lifetimes of red dwarfs could allow the development of habitable environments on planets with thick atmospheres. This is significant as red dwarfs are extremely common. (See also: Habitability of red dwarf systems).
Energy source: It is assumed that any life elsewhere in the universe would also require an energy source. It was previously assumed that this would necessarily come from a Sun-like star; however, following developments in extremophile research, contemporary astrobiological research often focuses on identifying environments that could support life based on the availability of any energy source, such as volcanic activity on a planet or moon that could provide heat and chemical energy.
These assumptions are based on the current understanding of life on Earth and the conditions under which it can exist. As the understanding of life and of the environments that could host it evolves, these assumptions may change.
Methodology
Astrobiological research concerning the study of habitable environments in our solar system and beyond utilises methodologies within the geosciences. Research within this branch primarily concerns the geobiology of organisms that can survive in extreme environments on Earth, such as volcanic or deep-sea environments, in order to understand the limits of life and the conditions under which life might be able to survive on other planets. This includes, but is not limited to:
Deep-sea extremophiles: Researchers are studying organisms that live in the extreme environments of deep-sea hydrothermal vents and cold seeps. These organisms survive in the absence of sunlight; some withstand high temperatures and pressures, and they use chemical energy instead of sunlight to produce food.
Desert extremophiles: Researchers are studying organisms that can survive in extremely dry, high-temperature conditions, such as in deserts.
Microbes in extreme environments: Researchers are investigating the diversity and activity of microorganisms in environments such as deep mines, subsurface soil, cold glaciers and polar ice, and high-altitude environments.
Research in this branch also concerns the long-term survival of life on Earth, and the possibilities and hazards of life on other planets, including:
Biodiversity and ecosystem resilience: Scientists are studying how the diversity of life and the interactions between different species contribute to the resilience of ecosystems and their ability to recover from disturbances.
Climate change and extinction: Researchers are investigating the impacts of climate change on different species and ecosystems, and how they may lead to extinction or adaptation. This includes the evolution of Earth's climate and geology, and their potential impact on the habitability of the planet in the future, especially for humans.
Human impact on the biosphere: Scientists are studying the ways in which human activities, such as deforestation, pollution, and the introduction of invasive species, are affecting the biosphere and the long-term survival of life on Earth.
Long-term preservation of life: Researchers are exploring ways to preserve samples of life on Earth for long periods of time, such as cryopreservation and genomic preservation, in the event of a catastrophic event that could wipe out most of life on Earth.
Emerging astrobiological research concerning the search for planetary biosignatures of past or present extraterrestrial life utilises methodologies within the planetary sciences. These include:
The study of microbial life in the subsurface of Mars: Scientists are using data from Mars rover missions to study the composition of the subsurface of Mars, searching for biosignatures of past or present microbial life.
The study of subsurface oceans on icy moons: Recent discoveries of subsurface oceans on moons such as Europa and Enceladus have opened up new potentially habitable environments, and thus new targets, in the search for extraterrestrial life. Currently, missions like the Europa Clipper are being planned to search for biosignatures within these environments.
The study of the atmospheres of planets: Scientists are studying the potential for life to exist in the atmospheres of planets, with a focus on the study of the physical and chemical conditions necessary for such life to exist, namely the detection of organic molecules and biosignature gases; for example, the study of the possibility of life in the atmospheres of exoplanets that orbit red dwarfs and the study of the potential for microbial life in the upper atmosphere of Venus.
Telescopes and remote sensing of exoplanets: The discovery of thousands of exoplanets has opened up new opportunities for the search for biosignatures. Scientists are using telescopes such as the James Webb Space Telescope and the Transiting Exoplanet Survey Satellite to search for biosignatures on exoplanets. They are also developing new techniques for the detection of biosignatures, such as the use of remote sensing to search for biosignatures in the atmosphere of exoplanets.
SETI and CETI: Within the search for extraterrestrial intelligence (SETI), scientists use radio and optical telescopes to search for signals from intelligent extraterrestrial civilizations, while the related discipline of communication with extraterrestrial intelligence (CETI) focuses on composing and deciphering messages that could theoretically be understood by another technological civilization. Communication attempts by humans have included broadcasting mathematical languages, pictorial systems such as the Arecibo message, and computational approaches to detecting and deciphering 'natural' language communication. While some high-profile scientists, such as Carl Sagan, have advocated the transmission of messages, theoretical physicist Stephen Hawking warned against it, suggesting that aliens might raid Earth for its resources.
Emerging astrobiological research concerning the study of the origin and early evolution of life on Earth utilises methodologies within the palaeosciences. These include:
The study of the early atmosphere: Researchers are investigating the role of the early atmosphere in providing the right conditions for the emergence of life, such as the presence of gases that could have helped to stabilise the climate and the formation of organic molecules.
The study of the early magnetic field: Researchers are investigating the role of the early magnetic field in protecting the Earth from harmful radiation and helping to stabilise the climate. This research has immense astrobiological implications where the subjects of current astrobiological research like Mars lack such a field.
The study of prebiotic chemistry: Scientists are studying the chemical reactions that could have occurred on the early Earth that led to the formation of the building blocks of life (amino acids, nucleotides, and lipids), and how these molecules could have formed spontaneously under early Earth conditions.
The study of impact events: Scientists are investigating the potential role of impact events, especially meteorite impacts, in the delivery of water and organic molecules to the early Earth.
The study of the primordial soup: Researchers are investigating the conditions and ingredients present on the early Earth, such as water and organic molecules, that could have led to the formation of the first living organisms. This includes the role of water in the formation of the first cells and in catalysing chemical reactions.
The study of the role of minerals: Scientists are investigating the role of minerals like clay in catalysing the formation of organic molecules, thus playing a role in the emergence of life on Earth.
The study of the role of energy and electricity: Scientists are investigating the potential sources of energy and electrical discharge that could have been available on the early Earth, and their role in the formation of organic molecules and thus in the emergence of life.
The study of the early oceans: Scientists are investigating the composition and chemistry of the early oceans and how they may have played a role in the emergence of life, such as through the presence of dissolved minerals that could have helped to catalyse the formation of organic molecules.
The study of hydrothermal vents: Scientists are investigating the potential role of hydrothermal vents in the origin of life, as these environments may have provided the energy and chemical building blocks needed for its emergence.
The study of plate tectonics: Scientists are investigating the role of plate tectonics in creating a diverse range of environments on the early Earth.
The study of the early biosphere: Researchers are investigating the diversity and activity of microorganisms on the early Earth, and the role these early organisms may have played in the subsequent evolution of life.
The study of microbial fossils: Scientists are investigating the presence of microbial fossils in ancient rocks, which can provide clues about the early evolution of life on Earth and the emergence of the first organisms.
Research
The systematic search for possible life outside Earth is a valid multidisciplinary scientific endeavor. However, hypotheses and predictions as to its existence and origin vary widely, and at present, the development of hypotheses firmly grounded in science may be considered astrobiology's most concrete practical application. It has been proposed that viruses are likely to be encountered on other life-bearing planets, and may be present even if there are no biological cells.
Research outcomes
To date, no evidence of extraterrestrial life has been identified. Examination of the Allan Hills 84001 meteorite, which was recovered in Antarctica in 1984 and originated from Mars, is thought by David McKay, as well as a few other scientists, to contain microfossils of extraterrestrial origin; this interpretation is controversial.
Yamato 000593, the second largest meteorite from Mars, was found on Earth in 2000. At a microscopic level, spheres are found in the meteorite that are rich in carbon compared to surrounding areas that lack such spheres. The carbon-rich spheres may have been formed by biotic activity according to some NASA scientists.
On 5 March 2011, Richard B. Hoover, a scientist with the Marshall Space Flight Center, speculated on the finding of alleged microfossils similar to cyanobacteria in CI1 carbonaceous meteorites in the fringe Journal of Cosmology, a story widely reported on by mainstream media. However, NASA formally distanced itself from Hoover's claim. According to American astrophysicist Neil deGrasse Tyson: "At the moment, life on Earth is the only known life in the universe, but there are compelling arguments to suggest we are not alone."
Elements of astrobiology
Astronomy
Most astronomy-related astrobiology research falls into the category of extrasolar planet (exoplanet) detection, the hypothesis being that if life arose on Earth, then it could also arise on other planets with similar characteristics. To that end, a number of instruments designed to detect Earth-sized exoplanets have been considered, most notably NASA's Terrestrial Planet Finder (TPF) and ESA's Darwin programs, both of which have been cancelled. NASA launched the Kepler mission in March 2009, and the French Space Agency launched the COROT space mission in 2006. There are also several less ambitious ground-based efforts underway.
The goal of these missions is not only to detect Earth-sized planets but also to directly detect light from the planet so that it may be studied spectroscopically. By examining planetary spectra, it would be possible to determine the basic composition of an extrasolar planet's atmosphere and/or surface. Given this knowledge, it may be possible to assess the likelihood of life being found on that planet. A NASA research group, the Virtual Planet Laboratory, is using computer modeling to generate a wide variety of virtual planets to see what they would look like if viewed by TPF or Darwin. It is hoped that once these missions come online, their spectra can be cross-checked with these virtual planetary spectra for features that might indicate the presence of life.
An estimate for the number of planets with intelligent communicative extraterrestrial life can be gleaned from the Drake equation, essentially an equation expressing the probability of intelligent life as the product of factors such as the fraction of planets that might be habitable and the fraction of planets on which life might arise:
N = R* × fp × ne × fl × fi × fc × L
where:
N = The number of communicative civilizations
R* = The rate of formation of suitable stars (stars such as the Sun)
fp = The fraction of those stars with planets (current evidence indicates that planetary systems may be common for stars like the Sun)
ne = The number of Earth-sized worlds per planetary system
fl = The fraction of those Earth-sized planets where life actually develops
fi = The fraction of life sites where intelligence develops
fc = The fraction of communicative planets (those on which electromagnetic communications technology develops)
L = The "lifetime" of communicating civilizations
However, whilst the rationale behind the equation is sound, it is unlikely that the equation will be constrained to reasonable limits of error any time soon. The problem with the formula is that it is not used to generate or support hypotheses because it contains factors that can never be verified. The first term, R*, the rate of formation of suitable stars, is generally constrained within a few orders of magnitude. The second and third terms, fp, the fraction of stars with planets, and ne, the number of potentially habitable planets per system, are being evaluated for the star's neighborhood. Drake originally formulated the equation merely as an agenda for discussion at the Green Bank conference, but some applications of the formula have been taken literally and related to simplistic or pseudoscientific arguments. Another associated topic is the Fermi paradox, which suggests that if intelligent life is common in the universe, then there should be obvious signs of it.
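As an illustration of how these factors combine (a minimal numerical sketch; the input values are assumptions chosen for demonstration only, not measured quantities), the equation can be evaluated directly:

```python
def drake_equation(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Return N, the estimated number of communicative civilizations."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative (assumed) inputs: one suitable star forming per year, half of
# stars hosting planets, two Earth-sized worlds per system, and small
# fractions for life, intelligence, and communication technology.
N = drake_equation(R_star=1.0, f_p=0.5, n_e=2.0,
                   f_l=0.33, f_i=0.01, f_c=0.01, L=10_000)
print(f"Estimated communicative civilizations: N = {N:.2f}")  # ~0.33 with these inputs
```

Changing any single factor rescales the estimate proportionally, which is why the uncertainty in the poorly constrained terms dominates the result.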
Another active research area in astrobiology is planetary system formation. It has been suggested that the peculiarities of the Solar System (for example, the presence of Jupiter as a protective shield) may have greatly increased the probability of intelligent life arising on Earth.
Biology
Biology cannot state that a process or phenomenon, by being mathematically possible, has to exist forcibly in an extraterrestrial body. Biologists specify what is speculative and what is not. The discovery of extremophiles, organisms able to survive in extreme environments, became a core research element for astrobiologists, as they are important to understand four areas in the limits of life in planetary context: the potential for panspermia, forward contamination due to human exploration ventures, planetary colonization by humans, and the exploration of extinct and extant extraterrestrial life.
Until the 1970s, life was thought to be entirely dependent on energy from the Sun. Plants on Earth's surface capture energy from sunlight to photosynthesize sugars from carbon dioxide and water, releasing oxygen in the process that is then consumed by oxygen-respiring organisms, passing their energy up the food chain. Even life in the ocean depths, where sunlight cannot reach, was thought to obtain its nourishment either from consuming organic detritus rained down from the surface waters or from eating animals that did. The world's ability to support life was thought to depend on its access to sunlight. However, in 1977, during an exploratory dive to the Galapagos Rift in the deep-sea exploration submersible Alvin, scientists discovered colonies of giant tube worms, clams, crustaceans, mussels, and other assorted creatures clustered around undersea volcanic features known as black smokers. These creatures thrive despite having no access to sunlight, and it was soon discovered that they comprise an entirely independent ecosystem. Although most of these multicellular lifeforms need dissolved oxygen (produced by oxygenic photosynthesis) for their aerobic cellular respiration and thus are not completely independent from sunlight by themselves, the basis for their food chain is a form of bacterium that derives its energy from oxidization of reactive chemicals, such as hydrogen or hydrogen sulfide, that bubble up from the Earth's interior. Other lifeforms entirely decoupled from the energy from sunlight are green sulfur bacteria which are capturing geothermal light for anoxygenic photosynthesis or bacteria running chemolithoautotrophy based on the radioactive decay of uranium. This chemosynthesis revolutionized the study of biology and astrobiology by revealing that life need not be sunlight-dependent; it only requires water and an energy gradient in order to exist.
Biologists have found extremophiles that thrive in ice, boiling water, acid, alkali, the water core of nuclear reactors, salt crystals, toxic waste and in a range of other extreme habitats that were previously thought to be inhospitable for life. This opened up a new avenue in astrobiology by massively expanding the number of possible extraterrestrial habitats. Characterization of these organisms, their environments and their evolutionary pathways, is considered a crucial component to understanding how life might evolve elsewhere in the universe. For example, some organisms able to withstand exposure to the vacuum and radiation of outer space include the lichen fungi Rhizocarpon geographicum and Xanthoria elegans, the bacterium Bacillus safensis, Deinococcus radiodurans, Bacillus subtilis, yeast Saccharomyces cerevisiae, seeds from Arabidopsis thaliana ('mouse-ear cress'), as well as the invertebrate animal Tardigrade. While tardigrades are not considered true extremophiles, they are considered extremotolerant microorganisms that have contributed to the field of astrobiology. Their extreme radiation tolerance and presence of DNA protection proteins may provide answers as to whether life can survive away from the protection of the Earth's atmosphere.
Jupiter's moon, Europa, and Saturn's moon, Enceladus, are now considered the most likely locations for extant extraterrestrial life in the Solar System due to their subsurface water oceans where radiogenic and tidal heating enables liquid water to exist.
The origin of life, known as abiogenesis, distinct from the evolution of life, is another ongoing field of research. Oparin and Haldane postulated that the conditions on the early Earth were conducive to the formation of organic compounds from inorganic elements and thus to the formation of many of the chemicals common to all forms of life we see today. The study of this process, known as prebiotic chemistry, has made some progress, but it is still unclear whether or not life could have formed in such a manner on Earth. The alternative hypothesis of panspermia is that the first elements of life may have formed on another planet with even more favorable conditions (or even in interstellar space, asteroids, etc.) and then have been carried over to Earth.
The cosmic dust permeating the universe contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. Further, a scientist suggested that these compounds may have been related to the development of life on Earth and said that, "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life."
More than 20% of the carbon in the universe may be associated with polycyclic aromatic hydrocarbons (PAHs), possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets. PAHs are subjected to interstellar medium conditions and are transformed through hydrogenation, oxygenation and hydroxylation, to more complex organics—"a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively".
In October 2020, astronomers proposed the idea of detecting life on distant planets by studying the shadows of trees at certain times of the day to find patterns that could be detected through observation of exoplanets.
Rare Earth hypothesis
The Rare Earth hypothesis postulates that multicellular life forms found on Earth may actually be more of a rarity than scientists assume. According to this hypothesis, life on Earth (and more, multi-cellular life) is possible because of a conjunction of the right circumstances (galaxy and location within it, planetary system, star, orbit, planetary size, atmosphere, etc.); and the chance for all those circumstances to repeat elsewhere may be rare. It provides a possible answer to the Fermi paradox which suggests, "If extraterrestrial aliens are common, why aren't they obvious?" It is apparently in opposition to the principle of mediocrity, assumed by famed astronomers Frank Drake, Carl Sagan, and others. The principle of mediocrity suggests that life on Earth is not exceptional, and it is more than likely to be found on innumerable other worlds.
Missions
Research into the environmental limits of life and the workings of extreme ecosystems is ongoing, enabling researchers to better predict what planetary environments might be most likely to harbor life. Missions such as the Phoenix lander, Mars Science Laboratory, ExoMars, Mars 2020 rover to Mars, and the Cassini probe to Saturn's moons aim to further explore the possibilities of life on other planets in the Solar System.
Viking program
The two Viking landers each carried four types of biological experiments to the surface of Mars in the late 1970s. These were the only Mars landers to carry out experiments looking specifically for metabolism by current microbial life on Mars. The landers used a robotic arm to collect soil samples into sealed test containers on the craft. The two landers were identical, so the same tests were carried out at two places on Mars' surface; Viking 1 near the equator and Viking 2 further north. The result was inconclusive, and is still disputed by some scientists.
Norman Horowitz was the chief of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976. Horowitz considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon based life.
Beagle 2
Beagle 2 was an unsuccessful British Mars lander that formed part of the European Space Agency's 2003 Mars Express mission. Its primary purpose was to search for signs of life on Mars, past or present. Although it landed safely, it was unable to correctly deploy its solar panels and telecom antenna.
EXPOSE
EXPOSE is a multi-user facility mounted in 2008 outside the International Space Station dedicated to astrobiology. EXPOSE was developed by the European Space Agency (ESA) for long-term spaceflights that allow exposure of organic chemicals and biological samples to outer space in low Earth orbit.
Mars Science Laboratory
The Mars Science Laboratory (MSL) mission landed the Curiosity rover that is currently in operation on Mars. It was launched 26 November 2011, and landed at Gale Crater on 6 August 2012. Mission objectives are to help assess Mars' habitability and in doing so, determine whether Mars is or has ever been able to support life, collect data for a future human mission, study Martian geology, its climate, and further assess the role that water, an essential ingredient for life as we know it, played in forming minerals on Mars.
Tanpopo
The Tanpopo mission is an orbital astrobiology experiment investigating the potential interplanetary transfer of life, organic compounds, and possible terrestrial particles in low Earth orbit. The purpose is to assess the panspermia hypothesis and the possibility of natural interplanetary transport of microbial life as well as prebiotic organic compounds. Early mission results show evidence that some clumps of microorganisms can survive for at least one year in space. This may support the idea that clumps of microorganisms greater than 0.5 millimeters across could be one way for life to spread from planet to planet.
ExoMars rover
ExoMars is a robotic mission to Mars to search for possible biosignatures of Martian life, past or present. This astrobiological mission is currently under development by the European Space Agency (ESA) in partnership with the Russian Federal Space Agency (Roscosmos); it is planned for a 2022 launch.
Mars 2020
Mars 2020 successfully landed its rover Perseverance in Jezero Crater on 18 February 2021. It will investigate environments on Mars relevant to astrobiology, investigate its surface geological processes and history, including the assessment of its past habitability and potential for preservation of biosignatures and biomolecules within accessible geological materials. The Science Definition Team is proposing the rover collect and package at least 31 samples of rock cores and soil for a later mission to bring back for more definitive analysis in laboratories on Earth. The rover could make measurements and technology demonstrations to help designers of a human expedition understand any hazards posed by Martian dust and demonstrate how to collect carbon dioxide (CO2), which could be a resource for making molecular oxygen (O2) and rocket fuel.
Europa Clipper
Europa Clipper is a mission planned by NASA for a 2025 launch that will conduct detailed reconnaissance of Jupiter's moon Europa and will investigate whether its internal ocean could harbor conditions suitable for life. It will also aid in the selection of future landing sites.
Dragonfly
Dragonfly is a NASA mission scheduled to land on Titan in 2036 to assess its microbial habitability and study its prebiotic chemistry. Dragonfly is a rotorcraft lander that will perform controlled flights between multiple locations on the surface, which allows sampling of diverse regions and geological contexts.
Proposed concepts
Icebreaker Life
Icebreaker Life is a lander mission that was proposed for NASA's Discovery Program for the 2021 launch opportunity, but it was not selected for development. It would have had a stationary lander that would be a near copy of the successful 2008 Phoenix and it would have carried an upgraded astrobiology scientific payload, including a 1-meter-long core drill to sample ice-cemented ground in the northern plains to conduct a search for organic molecules and evidence of current or past life on Mars. One of the key goals of the Icebreaker Life mission is to test the hypothesis that the ice-rich ground in the polar regions has significant concentrations of organics due to protection by the ice from oxidants and radiation.
Journey to Enceladus and Titan
Journey to Enceladus and Titan (JET) is an astrobiology mission concept to assess the habitability potential of Saturn's moons Enceladus and Titan by means of an orbiter.
Enceladus Life Finder
Enceladus Life Finder (ELF) is a proposed astrobiology mission concept for a space probe intended to assess the habitability of the internal aquatic ocean of Enceladus, Saturn's sixth-largest moon.
Life Investigation For Enceladus
Life Investigation For Enceladus (LIFE) is a proposed astrobiology sample-return mission concept. The spacecraft would enter into Saturn orbit and enable multiple flybys through Enceladus' icy plumes to collect icy plume particles and volatiles and return them to Earth on a capsule. The spacecraft may sample Enceladus' plumes, the E ring of Saturn, and the upper atmosphere of Titan.
Oceanus
Oceanus is an orbiter proposed in 2017 for the New Frontiers mission No. 4. It would travel to Saturn's moon Titan to assess its habitability. Its objectives are to reveal Titan's organic chemistry, geology, gravity, and topography, collect 3D reconnaissance data, catalog the organics, and determine where they may interact with liquid water.
Explorer of Enceladus and Titan
Explorer of Enceladus and Titan (E2T) is an orbiter mission concept that would investigate the evolution and habitability of the Saturnian satellites Enceladus and Titan. The mission concept was proposed in 2017 by the European Space Agency.
See also
The Living Cosmos
References
Bibliography
The International Journal of Astrobiology, published by Cambridge University Press, is the forum for practitioners in this interdisciplinary field.
Astrobiology, published by Mary Ann Liebert, Inc., is a peer-reviewed journal that explores the origin, evolution, distribution, and destiny of life in the universe.
Loeb, Avi (2021). Extraterrestrial: The First Sign of Intelligent Life Beyond Earth. Houghton Mifflin Harcourt.
Further reading
D. Goldsmith, T. Owen, The Search For Life in the Universe, Addison-Wesley Publishing Company, 2001 (3rd edition).
Andy Weir's 2021 novel, Project Hail Mary, centers on astrobiology.
External links
Astrobiology.nasa.gov
UK Centre for Astrobiology
Spanish Centro de Astrobiología
Astrobiology Research at The Library of Congress
Astrobiology Survey – An introductory course on astrobiology
Summary - Search For Life Beyond Earth (NASA; 25 June 2021)
Extraterrestrial life
Origin of life
Astronomical sub-disciplines
Branches of biology
Speculative evolution
|
https://en.wikipedia.org/wiki/Aerodynamics
|
Aerodynamics (from Ancient Greek aero (air) + dynamics) is the study of the motion of air, particularly when affected by a solid object, such as an airplane wing. It involves topics covered in the field of fluid dynamics and its subfield of gas dynamics, and is an important domain of study in aeronautics. The term aerodynamics is often used synonymously with gas dynamics, the difference being that "gas dynamics" applies to the study of the motion of all gases, and is not limited to air. The formal study of aerodynamics began in the modern sense in the eighteenth century, although observations of fundamental concepts such as aerodynamic drag were recorded much earlier. Most of the early efforts in aerodynamics were directed toward achieving heavier-than-air flight, which was first demonstrated by Otto Lilienthal in 1891. Since then, the use of aerodynamics through mathematical analysis, empirical approximations, wind tunnel experimentation, and computer simulations has formed a rational basis for the development of heavier-than-air flight and a number of other technologies. Recent work in aerodynamics has focused on issues related to compressible flow, turbulence, and boundary layers and has become increasingly computational in nature.
History
Modern aerodynamics only dates back to the seventeenth century, but aerodynamic forces have been harnessed by humans for thousands of years in sailboats and windmills, and images and stories of flight appear throughout recorded history, such as the Ancient Greek legend of Icarus and Daedalus. Fundamental concepts of continuum, drag, and pressure gradients appear in the work of Aristotle and Archimedes.
In 1726, Sir Isaac Newton became the first person to develop a theory of air resistance, making him one of the first aerodynamicists. Dutch-Swiss mathematician Daniel Bernoulli followed in 1738 with Hydrodynamica in which he described a fundamental relationship between pressure, density, and flow velocity for incompressible flow known today as Bernoulli's principle, which provides one method for calculating aerodynamic lift. In 1757, Leonhard Euler published the more general Euler equations which could be applied to both compressible and incompressible flows. The Euler equations were extended to incorporate the effects of viscosity in the first half of the 1800s, resulting in the Navier–Stokes equations. The Navier–Stokes equations are the most general governing equations of fluid flow but are difficult to solve for the flow around all but the simplest of shapes.
In 1799, Sir George Cayley became the first person to identify the four aerodynamic forces of flight (weight, lift, drag, and thrust), as well as the relationships between them, and in doing so outlined the path toward achieving heavier-than-air flight for the next century. In 1871, Francis Herbert Wenham constructed the first wind tunnel, allowing precise measurements of aerodynamic forces. Drag theories were developed by Jean le Rond d'Alembert, Gustav Kirchhoff, and Lord Rayleigh. In 1889, Charles Renard, a French aeronautical engineer, became the first person to reasonably predict the power needed for sustained flight. Otto Lilienthal, the first person to become highly successful with glider flights, was also the first to propose thin, curved airfoils that would produce high lift and low drag. Building on these developments as well as research carried out in their own wind tunnel, the Wright brothers flew the first powered airplane on December 17, 1903.
During the time of the first flights, Frederick W. Lanchester, Martin Kutta, and Nikolai Zhukovsky independently created theories that connected circulation of a fluid flow to lift. Kutta and Zhukovsky went on to develop a two-dimensional wing theory. Expanding upon the work of Lanchester, Ludwig Prandtl is credited with developing the mathematics behind thin-airfoil and lifting-line theories as well as work with boundary layers.
As aircraft speed increased, designers began to encounter challenges associated with air compressibility at speeds near the speed of sound. The differences in airflow under such conditions led to problems in aircraft control, increased drag due to shock waves, and the threat of structural failure due to aeroelastic flutter. The ratio of the flow speed to the speed of sound was named the Mach number after Ernst Mach, who was one of the first to investigate the properties of supersonic flow. Macquorn Rankine and Pierre Henri Hugoniot independently developed the theory for flow properties before and after a shock wave, while Jakob Ackeret led the initial work of calculating the lift and drag of supersonic airfoils. Theodore von Kármán and Hugh Latimer Dryden introduced the term transonic to describe flow speeds between the critical Mach number and Mach 1 where drag increases rapidly. This rapid increase in drag led aerodynamicists and aviators to disagree on whether supersonic flight was achievable until the sound barrier was broken in 1947 using the Bell X-1 aircraft.
By the time the sound barrier was broken, aerodynamicists' understanding of the subsonic and low supersonic flow had matured. The Cold War prompted the design of an ever-evolving line of high-performance aircraft. Computational fluid dynamics began as an effort to solve for flow properties around complex objects and has rapidly grown to the point where entire aircraft can be designed using computer software, with wind-tunnel tests followed by flight tests to confirm the computer predictions. Understanding of supersonic and hypersonic aerodynamics has matured since the 1960s, and the goals of aerodynamicists have shifted from the behaviour of fluid flow to the engineering of a vehicle such that it interacts predictably with the fluid flow. Designing aircraft for supersonic and hypersonic conditions, as well as the desire to improve the aerodynamic efficiency of current aircraft and propulsion systems, continues to motivate new research in aerodynamics, while work continues to be done on important problems in basic aerodynamic theory related to flow turbulence and the existence and uniqueness of analytical solutions to the Navier–Stokes equations.
Fundamental concepts
Understanding the motion of air around an object (often called a flow field) enables the calculation of forces and moments acting on the object. In many aerodynamics problems, the forces of interest are the fundamental forces of flight: lift, drag, thrust, and weight. Of these, lift and drag are aerodynamic forces, i.e. forces due to air flow over a solid body. Calculation of these quantities is often founded upon the assumption that the flow field behaves as a continuum. Continuum flow fields are characterized by properties such as flow velocity, pressure, density, and temperature, which may be functions of position and time. These properties may be directly or indirectly measured in aerodynamics experiments or calculated starting with the equations for conservation of mass, momentum, and energy in air flows. Density, flow velocity, and an additional property, viscosity, are used to classify flow fields.
Flow classification
Flow velocity is used to classify flows according to speed regime. Subsonic flows are flow fields in which the air speed field is always below the local speed of sound. Transonic flows include both regions of subsonic flow and regions in which the local flow speed is greater than the local speed of sound. Supersonic flows are defined to be flows in which the flow speed is greater than the speed of sound everywhere. A fourth classification, hypersonic flow, refers to flows where the flow speed is much greater than the speed of sound. Aerodynamicists disagree on the precise definition of hypersonic flow.
Compressible flow accounts for varying density within the flow. Subsonic flows are often idealized as incompressible, i.e. the density is assumed to be constant. Transonic and supersonic flows are compressible, and calculations that neglect the changes of density in these flow fields will yield inaccurate results.
Viscosity is associated with the frictional forces in a flow. In some flow fields, viscous effects are very small, and approximate solutions may safely neglect viscous effects. These approximations are called inviscid flows. Flows for which viscosity is not neglected are called viscous flows. Finally, aerodynamic problems may also be classified by the flow environment. External aerodynamics is the study of flow around solid objects of various shapes (e.g. around an airplane wing), while internal aerodynamics is the study of flow through passages inside solid objects (e.g. through a jet engine).
Continuum assumption
Unlike liquids and solids, gases are composed of discrete molecules which occupy only a small fraction of the volume filled by the gas. On a molecular level, flow fields are made up of the collisions of many individual gas molecules with each other and with solid surfaces. However, in most aerodynamics applications, the discrete molecular nature of gases is ignored, and the flow field is assumed to behave as a continuum. This assumption allows fluid properties such as density and flow velocity to be defined everywhere within the flow.
The validity of the continuum assumption is dependent on the density of the gas and the application in question. For the continuum assumption to be valid, the mean free path length must be much smaller than the length scale of the application in question. For example, many aerodynamics applications deal with aircraft flying in atmospheric conditions, where the mean free path length is on the order of micrometers and where the body is orders of magnitude larger. In these cases, the length scale of the aircraft ranges from a few meters to a few tens of meters, which is much larger than the mean free path length. For such applications, the continuum assumption is reasonable. The continuum assumption is less valid for extremely low-density flows, such as those encountered by vehicles at very high altitudes (e.g. 300,000 ft/90 km) or satellites in Low Earth orbit. In those cases, statistical mechanics is a more accurate method of solving the problem than is continuum aerodynamics. The Knudsen number can be used to guide the choice between statistical mechanics and the continuous formulation of aerodynamics.
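As a rough illustration of this criterion (a minimal sketch; the regime thresholds and free-stream values are common approximations assumed for demonstration), the Knudsen number Kn = λ / L can be computed and compared against typical boundaries:

```python
def knudsen_number(mean_free_path_m: float, length_scale_m: float) -> float:
    """Knudsen number Kn = lambda / L."""
    return mean_free_path_m / length_scale_m

def flow_regime(kn: float) -> str:
    # Commonly quoted, approximate boundaries between modelling regimes.
    if kn < 0.01:
        return "continuum flow (Navier-Stokes applicable)"
    elif kn < 0.1:
        return "slip flow"
    elif kn < 10:
        return "transitional flow"
    return "free-molecular flow (statistical mechanics needed)"

# Airliner wing at sea level: mean free path ~68 nm, chord length ~5 m (assumed).
kn_wing = knudsen_number(68e-9, 5.0)
# Satellite in low Earth orbit: mean free path on the order of 1 km, body ~3 m (assumed).
kn_sat = knudsen_number(1_000.0, 3.0)

print(kn_wing, flow_regime(kn_wing))   # ~1e-8 -> continuum
print(kn_sat, flow_regime(kn_sat))     # ~300  -> free-molecular
```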
Conservation laws
The assumption of a fluid continuum allows problems in aerodynamics to be solved using fluid dynamics conservation laws. Three conservation principles are used:
Conservation of mass: Mass is neither created nor destroyed within a flow; the mathematical formulation of this principle is known as the mass continuity equation.
Conservation of momentum: The mathematical formulation of this principle can be considered an application of Newton's Second Law. Momentum within a flow is only changed by external forces, which may include both surface forces, such as viscous (frictional) forces, and body forces, such as weight. The momentum conservation principle may be expressed as either a vector equation or separated into a set of three scalar equations (x, y, z components).
Conservation of energy: The energy conservation equation states that energy is neither created nor destroyed within a flow, and that any addition or subtraction of energy to a volume in the flow is caused by heat transfer, or by work into and out of the region of interest.
Together, these equations are known as the Navier–Stokes equations, although some authors define the term to only include the momentum equation(s). The Navier–Stokes equations have no known analytical solution and are solved in modern aerodynamics using computational techniques. Because computational methods using high-speed computers were not historically available, and because of the high computational cost of solving these complex equations now that they are available, simplifications of the Navier–Stokes equations have been and continue to be employed. The Euler equations are a set of similar conservation equations which neglect viscosity and may be used in cases where the effect of viscosity is expected to be small. Further simplifications lead to Laplace's equation and potential flow theory. Additionally, Bernoulli's equation is a solution in one dimension to both the momentum and energy conservation equations.
The ideal gas law or another such equation of state is often used in conjunction with these equations to form a determined system that allows the solution for the unknown variables.
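As a simple illustration of how these simplified relations are applied (a minimal sketch; the free-stream values are assumed for demonstration), the incompressible Bernoulli equation can be combined with the ideal gas law to relate speed, pressure, and density along a streamline:

```python
# Incompressible Bernoulli relation along a streamline:
#   p1 + 0.5*rho*v1**2 = p2 + 0.5*rho*v2**2
# All numerical values are assumed, illustrative sea-level conditions.
R = 287.05              # J/(kg K), specific gas constant for air
T = 288.15              # K, standard sea-level temperature
p1 = 101_325.0          # Pa, free-stream static pressure
rho = p1 / (R * T)      # ideal gas law closes the system: ~1.225 kg/m^3

v1, v2 = 50.0, 70.0     # m/s: free-stream speed and accelerated local speed
p2 = p1 + 0.5 * rho * (v1**2 - v2**2)

print(f"density = {rho:.3f} kg/m^3")
print(f"pressure drops by {p1 - p2:.0f} Pa where the flow speeds up")  # ~1470 Pa
```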
Branches of aerodynamics
Aerodynamic problems are classified by the flow environment or properties of the flow, including flow speed, compressibility, and viscosity. External aerodynamics is the study of flow around solid objects of various shapes. Evaluating the lift and drag on an airplane or the shock waves that form in front of the nose of a rocket are examples of external aerodynamics. Internal aerodynamics is the study of flow through passages in solid objects. For instance, internal aerodynamics encompasses the study of the airflow through a jet engine or through an air conditioning pipe.
Aerodynamic problems can also be classified according to whether the flow speed is below, near or above the speed of sound. A problem is called subsonic if all the speeds in the problem are less than the speed of sound, transonic if speeds both below and above the speed of sound are present (normally when the characteristic speed is approximately the speed of sound), supersonic when the characteristic flow speed is greater than the speed of sound, and hypersonic when the flow speed is much greater than the speed of sound. Aerodynamicists disagree over the precise definition of hypersonic flow; a rough definition considers flows with Mach numbers above 5 to be hypersonic.
The influence of viscosity on the flow dictates a third classification. Some problems may encounter only very small viscous effects, in which case viscosity can be considered to be negligible. The approximations to these problems are called inviscid flows. Flows for which viscosity cannot be neglected are called viscous flows.
Incompressible aerodynamics
An incompressible flow is a flow in which density is constant in both time and space. Although all real fluids are compressible, a flow is often approximated as incompressible if the density changes cause only small changes to the calculated results. This is more likely to be true when the flow speeds are significantly lower than the speed of sound. Effects of compressibility are more significant at speeds close to or above the speed of sound. The Mach number is used to evaluate whether incompressibility can be assumed; otherwise, the effects of compressibility must be included.
Subsonic flow
Subsonic (or low-speed) aerodynamics describes fluid motion in flows in which the flow speed is much lower than the speed of sound everywhere in the flow. There are several branches of subsonic flow, but one special case arises when the flow is inviscid, incompressible and irrotational. This case is called potential flow and allows the differential equations that describe the flow to be a simplified version of the equations of fluid dynamics, thus making available to the aerodynamicist a range of quick and easy solutions.
In solving a subsonic problem, one decision to be made by the aerodynamicist is whether to incorporate the effects of compressibility. Compressibility is a description of the amount of change of density in the flow. When the effects of compressibility on the solution are small, the assumption that density is constant may be made. The problem is then an incompressible low-speed aerodynamics problem. When the density is allowed to vary, the flow is called compressible. In air, compressibility effects are usually ignored when the Mach number in the flow does not exceed 0.3 (about 335 feet (102 m) per second or 228 miles (366 km) per hour at 60 °F (16 °C)). Above Mach 0.3, the problem flow should be described using compressible aerodynamics.
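As a quick check of this rule of thumb (a minimal sketch; the flight speed and air temperature are assumed for illustration), the speed of sound a = sqrt(γRT) and the corresponding Mach number can be computed directly:

```python
import math

gamma = 1.4      # ratio of specific heats for air
R = 287.05       # J/(kg K), specific gas constant for air
T = 288.7        # K, about 60 degrees F (assumed ambient temperature)

a = math.sqrt(gamma * R * T)   # local speed of sound, roughly 340 m/s
v = 102.0                      # m/s, assumed flight speed (about 335 ft/s)
mach = v / a

print(f"speed of sound = {a:.0f} m/s, Mach = {mach:.2f}")
if mach <= 0.3:
    print("compressibility effects usually ignored")
else:
    print("compressible-flow treatment recommended")
```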
Compressible aerodynamics
According to the theory of aerodynamics, a flow is considered to be compressible if the density changes along a streamline. This means that – unlike incompressible flow – changes in density are considered. In general, this is the case where the Mach number in part or all of the flow exceeds 0.3. The Mach 0.3 value is rather arbitrary, but it is used because gas flows with a Mach number below that value demonstrate changes in density of less than 5%. Furthermore, that maximum 5% density change occurs at the stagnation point (the point on the object where flow speed is zero), while the density changes around the rest of the object will be significantly lower. Transonic, supersonic, and hypersonic flows are all compressible flows.
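The figure of roughly 5% follows from the isentropic stagnation-density relation ρ0/ρ = (1 + ((γ - 1)/2)·M²)^(1/(γ - 1)); the short sketch below evaluates it at Mach 0.3, assuming γ = 1.4 for air treated as a calorically perfect gas:

```python
gamma = 1.4   # ratio of specific heats for air (calorically perfect gas assumed)
M = 0.3       # free-stream Mach number at the usual incompressibility threshold

# Isentropic relation between stagnation density and free-stream density.
density_ratio = (1 + 0.5 * (gamma - 1) * M**2) ** (1 / (gamma - 1))
change_percent = (density_ratio - 1) * 100
print(f"density change at the stagnation point: {change_percent:.1f}%")  # about 4.6%
```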
Transonic flow
The term transonic refers to a range of flow velocities just below and above the local speed of sound (generally taken as Mach 0.8–1.2). It is defined as the range of speeds between the critical Mach number, when some parts of the airflow over an aircraft become supersonic, and a higher speed, typically near Mach 1.2, when all of the airflow is supersonic. Between these speeds, some of the airflow is supersonic, while some is not.
Supersonic flow
Supersonic aerodynamic problems are those involving flow speeds greater than the speed of sound. Calculating the lift on the Concorde during cruise can be an example of a supersonic aerodynamic problem.
Supersonic flow behaves very differently from subsonic flow. Fluids react to differences in pressure; pressure changes are how a fluid is "told" to respond to its environment. Therefore, since sound is, in fact, an infinitesimal pressure difference propagating through a fluid, the speed of sound in that fluid can be considered the fastest speed that "information" can travel in the flow. This difference most obviously manifests itself in the case of a fluid striking an object. In front of that object, the fluid builds up a stagnation pressure as impact with the object brings the moving fluid to rest. In fluid traveling at subsonic speed, this pressure disturbance can propagate upstream, changing the flow pattern ahead of the object and giving the impression that the fluid "knows" the object is there by seemingly adjusting its movement and is flowing around it. In a supersonic flow, however, the pressure disturbance cannot propagate upstream. Thus, when the fluid finally reaches the object it strikes it and the fluid is forced to change its properties – temperature, density, pressure, and Mach number—in an extremely violent and irreversible fashion called a shock wave. The presence of shock waves, along with the compressibility effects of high-flow velocity (see Reynolds number) fluids, is the central difference between the supersonic and subsonic aerodynamics regimes.
Hypersonic flow
In aerodynamics, hypersonic speeds are speeds that are highly supersonic. In the 1970s, the term generally came to refer to speeds of Mach 5 (5 times the speed of sound) and above. The hypersonic regime is a subset of the supersonic regime. Hypersonic flow is characterized by high temperature flow behind a shock wave, viscous interaction, and chemical dissociation of gas.
Associated terminology
The incompressible and compressible flow regimes produce many associated phenomena, such as boundary layers and turbulence.
Boundary layers
The concept of a boundary layer is important in many problems in aerodynamics. The viscosity and fluid friction in the air are approximated as being significant only in this thin layer. This assumption makes the description of such aerodynamics much more tractable mathematically.
Turbulence
In aerodynamics, turbulence is characterized by chaotic property changes in the flow. These include low momentum diffusion, high momentum convection, and rapid variation of pressure and flow velocity in space and time. Flow that is not turbulent is called laminar flow.
Aerodynamics in other fields
Engineering design
Aerodynamics is a significant element of vehicle design, including road cars and trucks where the main goal is to reduce the vehicle drag coefficient, and racing cars, where in addition to reducing drag the goal is also to increase the overall level of downforce. Aerodynamics is also important in the prediction of forces and moments acting on sailing vessels. It is used in the design of mechanical components such as hard drive heads. Structural engineers resort to aerodynamics, and particularly aeroelasticity, when calculating wind loads in the design of large buildings, bridges, and wind turbines.
The aerodynamics of internal passages is important in heating/ventilation, gas piping, and in automotive engines where detailed flow patterns strongly affect the performance of the engine.
Environmental design
Urban aerodynamics are studied by town planners and designers seeking to improve amenity in outdoor spaces, or in creating urban microclimates to reduce the effects of urban pollution. The field of environmental aerodynamics describes ways in which atmospheric circulation and flight mechanics affect ecosystems.
Aerodynamic equations are used in numerical weather prediction.
Ball-control in sports
Sports in which aerodynamics are of crucial importance include soccer, table tennis, cricket, baseball, and golf, in which most players can control the trajectory of the ball using the "Magnus effect".
See also
Aeronautics
Aerostatics
Aviation
Insect flight – how bugs fly
List of aerospace engineering topics
List of engineering topics
Nose cone design
Fluid dynamics
Computational fluid dynamics
References
Further reading
General aerodynamics
Subsonic aerodynamics
Obert, Ed (2009). Delft. About practical aerodynamics in industry and the effects on design of aircraft.
Transonic aerodynamics
Supersonic aerodynamics
Hypersonic aerodynamics
History of aerodynamics
Aerodynamics related to engineering
Ground vehicles
Fixed-wing aircraft
Helicopters
Missiles
Model aircraft
Related branches of aerodynamics
Aerothermodynamics
Aeroelasticity
Boundary layers
Turbulence
External links
NASA Beginner's Guide to Aerodynamics
Aerodynamics for Students
Aerodynamics for Pilots
Aerodynamics and Race Car Tuning
Aerodynamic Related Projects
eFluids Bicycle Aerodynamics
Application of Aerodynamics in Formula One (F1)
Aerodynamics in Car Racing
Aerodynamics of Birds
NASA Aerodynamics Index
Dynamics
Energy in transport
|
https://en.wikipedia.org/wiki/Ash
|
Ash or ashes are the solid remnants of fires. Specifically, ash refers to all non-aqueous, non-gaseous residues that remain after something burns. In analytical chemistry, to analyse the mineral and metal content of chemical samples, ash is the non-gaseous, non-liquid residue after complete combustion.
Ashes as the end product of incomplete combustion are mostly mineral, but usually still contain an amount of combustible organic or other oxidizable residues. The best-known type of ash is wood ash, as a product of wood combustion in campfires, fireplaces, etc. The darker the wood ashes, the higher the content of remaining charcoal from incomplete combustion. Ashes are of different types: some contain natural compounds that make soil fertile, while others contain chemical compounds that can be toxic but may break down in soil through chemical changes and microorganism activity.
Like soap, ash is also a disinfecting agent (alkaline). The World Health Organization recommends ash or sand as an alternative for handwashing when soap is not available.
Natural occurrence
Ash occurs naturally from any fire that burns vegetation, and may disperse in the soil to fertilise it, or clump under it for long enough to carbonise into coal.
Specific types
Wood ash
Products of coal combustion
Bottom ash
Fly ash
Cigarette or cigar ash
Incinerator bottom ash, a form of ash produced in incinerators
Volcanic ash, ash consisting of fragmented glass, rock, and mineral particles ejected during a volcanic eruption
Cremation ashes
Cremation ashes, also called cremated remains or "cremains," are the bodily remains left from cremation. They often take the form of a grey powder resembling coarse sand. While often referred to as ashes, the remains primarily consist of powdered bone fragments due to the cremation process, which eliminates the body's organic materials. People often store these ashes in containers like urns, although they are also sometimes buried or scattered in specific locations.
See also
Ash (analytical chemistry)
Cinereous, consisting of ashes, ash-colored or ash-like
Potash, a term for many useful potassium salts that traditionally derived from plant ashes, but today are typically mined from underground deposits
Coal, a combustible carbon-rich rock; buried plant matter and ash can carbonise into coal over long periods
Carbon, a basic component of ashes
Charcoal, the carbon residue left after heating wood, used mainly as a traditional fuel
References
Combustion
|
https://en.wikipedia.org/wiki/Antiderivative
|
In calculus, an antiderivative, inverse derivative, primitive function, primitive integral or indefinite integral of a function f is a differentiable function F whose derivative is equal to the original function f. This can be stated symbolically as F' = f. The process of solving for antiderivatives is called antidifferentiation (or indefinite integration), and its opposite operation is called differentiation, which is the process of finding a derivative. Antiderivatives are often denoted by capital Roman letters such as F and G.
Antiderivatives are related to definite integrals through the second fundamental theorem of calculus: the definite integral of a function over a closed interval where the function is Riemann integrable is equal to the difference between the values of an antiderivative evaluated at the endpoints of the interval.
In physics, antiderivatives arise in the context of rectilinear motion (e.g., in explaining the relationship between position, velocity and acceleration). The discrete equivalent of the notion of antiderivative is antidifference.
Examples
The function F(x) = x^3/3 is an antiderivative of f(x) = x^2, since the derivative of x^3/3 is x^2. And since the derivative of a constant is zero, x^2 will have an infinite number of antiderivatives, such as x^3/3, x^3/3 + 1, x^3/3 - 2, etc. Thus, all the antiderivatives of x^2 can be obtained by changing the value of c in F(x) = x^3/3 + c, where c is an arbitrary constant known as the constant of integration. Essentially, the graphs of antiderivatives of a given function are vertical translations of each other, with each graph's vertical location depending upon the value of c.
More generally, the power function f(x) = x^n has antiderivative F(x) = x^(n+1)/(n+1) + c if n ≠ -1, and F(x) = ln|x| + c if n = -1.
In physics, the integration of acceleration yields velocity plus a constant. The constant is the initial velocity term that would be lost upon taking the derivative of velocity, because the derivative of a constant term is zero. This same pattern applies to further integrations and derivatives of motion (position, velocity, acceleration, and so on). Thus, integration produces the relations of acceleration, velocity and displacement:
∫ a dt = v + C
∫ v dt = s + C
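A brief symbolic computation illustrates this chain of antiderivatives for the simple case of constant acceleration (a minimal sketch using the SymPy library; the symbol names are illustrative):

```python
import sympy as sp

t, a0, v0, s0 = sp.symbols('t a0 v0 s0')

a = a0                               # constant acceleration
v = sp.integrate(a, t) + v0          # velocity: a0*t + v0
s = sp.integrate(v, t) + s0          # position: a0*t**2/2 + v0*t + s0 (up to term order)

assert sp.simplify(sp.diff(v, t) - a) == 0
assert sp.simplify(sp.diff(s, t) - v) == 0
print(v, '|', s)
```

Differentiating each result recovers the previous quantity, with the added constants v0 and s0 playing the role of the constants of integration.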
Uses and properties
Antiderivatives can be used to compute definite integrals, using the fundamental theorem of calculus: if F is an antiderivative of the integrable function f over the interval [a, b], then:
∫[a,b] f(x) dx = F(b) - F(a)
Because of this, each of the infinitely many antiderivatives of a given function f may be called the "indefinite integral" of f and written using the integral symbol with no bounds:
∫ f(x) dx
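For example, taking f(x) = x^2 with antiderivative F(x) = x^3/3 gives both a definite and an indefinite integral:

```latex
\int_{1}^{2} x^{2}\,dx = F(2) - F(1) = \frac{2^{3}}{3} - \frac{1^{3}}{3} = \frac{7}{3},
\qquad
\int x^{2}\,dx = \frac{x^{3}}{3} + c .
```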
If F is an antiderivative of f, and the function f is defined on some interval, then every other antiderivative G of f differs from F by a constant: there exists a number c such that G(x) = F(x) + c for all x. Here c is called the constant of integration. If the domain of F is a disjoint union of two or more (open) intervals, then a different constant of integration may be chosen for each of the intervals. For instance
F(x) = -1/x + c1 for x < 0, and F(x) = -1/x + c2 for x > 0,
is the most general antiderivative of f(x) = 1/x^2 on its natural domain (-∞, 0) ∪ (0, ∞).
Every continuous function f has an antiderivative, and one antiderivative F is given by the definite integral of f with variable upper boundary: F(x) = ∫_a^x f(t) dt,
for any a in the domain of f. Varying the lower boundary produces other antiderivatives, but not necessarily all possible antiderivatives. This is another formulation of the fundamental theorem of calculus.
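A numerical sketch of this construction (assuming the SciPy library; the choices f(x) = cos x and a = 0 are only for illustration) builds F by quadrature and checks that its derivative matches f:

```python
# Illustrative sketch: constructing an antiderivative F(x) as the integral of f
# from a to x by numerical quadrature, then checking F'(x) ~ f(x) with a
# central finite difference. f(x) = cos(x) and a = 0 are arbitrary choices.
import math
from scipy.integrate import quad

def f(t):
    return math.cos(t)

a = 0.0

def F(x):
    value, _error = quad(f, a, x)   # definite integral with variable upper boundary
    return value

x0, h = 1.2, 1e-6
numerical_derivative = (F(x0 + h) - F(x0 - h)) / (2 * h)
print(numerical_derivative, f(x0))   # both approximately cos(1.2) ~ 0.362
```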
There are many functions whose antiderivatives, even though they exist, cannot be expressed in terms of elementary functions (like polynomials, exponential functions, logarithms, trigonometric functions, inverse trigonometric functions and their combinations). Examples of these are
the error function ∫ e^(-x^2) dx
the Fresnel function ∫ sin(x^2) dx
the sine integral ∫ (sin x)/x dx
the logarithmic integral function ∫ dx/ln(x), and
sophomore's dream ∫ x^x dx
For a more detailed discussion, see also Differential Galois theory.
Techniques of integration
Finding antiderivatives of elementary functions is often considerably harder than finding their derivatives (indeed, there is no pre-defined method for computing indefinite integrals). For some elementary functions, it is impossible to find an antiderivative in terms of other elementary functions. To learn more, see elementary functions and nonelementary integral.
There exist many properties and techniques for finding antiderivatives. These include, among others:
The linearity of integration (which breaks complicated integrals into simpler ones)
Integration by substitution, often combined with trigonometric identities or the natural logarithm
The inverse chain rule method (a special case of integration by substitution)
Integration by parts (to integrate products of functions)
Inverse function integration (a formula that expresses the antiderivative of the inverse f^(-1) of an invertible and continuous function f, in terms of f^(-1) and the antiderivative of f).
The method of partial fractions in integration (which allows us to integrate all rational functions—fractions of two polynomials)
The Risch algorithm
Additional techniques for multiple integrations (see for instance double integrals, polar coordinates, the Jacobian and the Stokes' theorem)
Numerical integration (a technique for approximating a definite integral when no elementary antiderivative exists, as in the case of e^(-x^2))
Algebraic manipulation of integrand (so that other integration techniques, such as integration by substitution, may be used)
Cauchy formula for repeated integration (to calculate the n-times antiderivative of a function)
Computer algebra systems can be used to automate some or all of the work involved in the symbolic techniques above, which is particularly useful when the algebraic manipulations involved are very complex or lengthy. Integrals which have already been derived can be looked up in a table of integrals.
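For instance, SymPy (used here only as an illustration of such a system) applies several of the techniques above automatically and falls back to special functions such as erf when no elementary antiderivative exists:

```python
# Illustrative use of a computer algebra system (SymPy) for symbolic integration.
import sympy as sp

x = sp.symbols('x')

print(sp.integrate(1 / (x**2 - 1), x))   # partial fractions: log(x - 1)/2 - log(x + 1)/2
print(sp.integrate(x * sp.exp(x), x))    # integration by parts: (x - 1)*exp(x)
print(sp.integrate(sp.exp(-x**2), x))    # no elementary antiderivative: sqrt(pi)*erf(x)/2
```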
Of non-continuous functions
Non-continuous functions can have antiderivatives. While there are still open questions in this area, it is known that:
Some highly pathological functions with large sets of discontinuities may nevertheless have antiderivatives.
In some cases, the antiderivatives of such pathological functions may be found by Riemann integration, while in other cases these functions are not Riemann integrable.
Assuming that the domains of the functions are open intervals:
A necessary, but not sufficient, condition for a function f to have an antiderivative is that f have the intermediate value property. That is, if [a, b] is a subinterval of the domain of f and y is any real number between f(a) and f(b), then there exists a c between a and b such that f(c) = y. This is a consequence of Darboux's theorem.
The set of discontinuities of f must be a meagre set. This set must also be an F-sigma set (since the set of discontinuities of any function must be of this type). Moreover, for any meagre F-sigma set, one can construct some function f having an antiderivative, which has the given set as its set of discontinuities.
If f has an antiderivative, is bounded on closed finite subintervals of the domain and has a set of discontinuities of Lebesgue measure 0, then an antiderivative may be found by integration in the sense of Lebesgue. In fact, using more powerful integrals like the Henstock–Kurzweil integral, every function for which an antiderivative exists is integrable, and its general integral coincides with its antiderivative.
If f has an antiderivative F on a closed interval [a, b], then for any choice of partition a = x_0 < x_1 < ... < x_n = b, if one chooses sample points x_i* in [x_{i-1}, x_i] as specified by the mean value theorem, then the corresponding Riemann sum telescopes to the value F(b) - F(a). However, if f is unbounded, or if f is bounded but the set of discontinuities of f has positive Lebesgue measure, a different choice of sample points may give a significantly different value for the Riemann sum, no matter how fine the partition. See Example 4 below.
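A small numerical sketch of this telescoping behaviour, using the arbitrary illustrative choice f(x) = x^2 with antiderivative F(x) = x^3/3 on [0, 2]: on each subinterval the mean value theorem supplies a sample point x* with f(x*) equal to the slope (F(x_i) - F(x_{i-1}))/(x_i - x_{i-1}), so the Riemann sum collapses exactly to F(2) - F(0).

```python
# Illustrative sketch: with sample points chosen via the mean value theorem,
# the Riemann sum for f = F' telescopes exactly to F(b) - F(a).
# Here f(x) = x**2 and F(x) = x**3/3 on [0, 2]; these choices are illustrative only.
def f(x): return x**2
def F(x): return x**3 / 3

a, b, n = 0.0, 2.0, 10
xs = [a + i * (b - a) / n for i in range(n + 1)]

riemann_sum = 0.0
for left, right in zip(xs, xs[1:]):
    slope = (F(right) - F(left)) / (right - left)   # equals f(x*) at the MVT sample point
    riemann_sum += slope * (right - left)           # contributes exactly F(right) - F(left)

print(riemann_sum, F(b) - F(a))   # both 8/3, up to floating-point rounding
```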
Some examples
Basic formulae
If d/dx f(x) = g(x), then ∫ g(x) dx = f(x) + C.
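A few standard instances of this rule, listed here only as an illustration (the original article's full table of basic formulae is not reproduced):

```latex
\int x^n \, dx = \frac{x^{n+1}}{n+1} + C \quad (n \neq -1), \qquad
\int \frac{1}{x} \, dx = \ln|x| + C, \qquad
\int e^x \, dx = e^x + C, \qquad
\int \cos x \, dx = \sin x + C.
```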
See also
Antiderivative (complex analysis)
Formal antiderivative
Jackson integral
Lists of integrals
Symbolic integration
Area
Notes
References
Further reading
Introduction to Classical Real Analysis, by Karl R. Stromberg; Wadsworth, 1981
Historical Essay On Continuity Of Derivatives by Dave L. Renfro
External links
Wolfram Integrator — Free online symbolic integration with Mathematica
Function Calculator from WIMS
Integral at HyperPhysics
Antiderivatives and indefinite integrals at the Khan Academy
Integral calculator at Symbolab
The Antiderivative at MIT
Introduction to Integrals at SparkNotes
Antiderivatives at Harvey Mudd College
Integral calculus
Linear operators in calculus
|
https://en.wikipedia.org/wiki/Advertising
|
Advertising is the practice and techniques employed to bring attention to a product or service. Advertising aims to put a product or service in the spotlight in hopes of drawing attention to it from consumers. It is typically used to promote a specific good or service, but there is a wide range of uses, the most common being the commercial advertisement.
Commercial advertisements often seek to generate increased consumption of their products or services through "branding", which associates a product name or image with certain qualities in the minds of consumers. On the other hand, ads that intend to elicit an immediate sale are known as direct-response advertising. Non-commercial entities that advertise more than consumer products or services include political parties, interest groups, religious organizations and governmental agencies. Non-profit organizations may use free modes of persuasion, such as a public service announcement. Advertising may also help to reassure employees or shareholders that a company is viable or successful.
In the 19th century, soap businesses were among the first to employ large-scale advertising campaigns. Thomas J. Barratt was hired by Pears to be its brand manager—the first of its kind—and in addition to creating slogans and images he recruited West End stage actress and socialite Lillie Langtry to become the poster-girl for Pears, making her the first celebrity to endorse a commercial product. Modern advertising originated with the techniques introduced with tobacco advertising in the 1920s, most significantly with the campaigns of Edward Bernays, considered the founder of modern, "Madison Avenue" advertising.
Worldwide spending on advertising in 2015 amounted to an estimated . Advertising's projected distribution for 2017 was 40.4% on TV, 33.3% on digital, 9% on newspapers, 6.9% on magazines, 5.8% on outdoor and 4.3% on radio. Internationally, the largest ("Big Five") advertising agency groups are Omnicom, WPP, Publicis, Interpublic, and Dentsu.
In Latin, advertere means "to turn towards".
History
Egyptians used papyrus to make sales messages and wall posters. Commercial messages and political campaign displays have been found in the ruins of Pompeii and ancient Arabia. Lost and found advertising on papyrus was common in ancient Greece and ancient Rome. Wall or rock painting for commercial advertising is another manifestation of an ancient advertising form, which is present to this day in many parts of Asia, Africa, and South America. The tradition of wall painting can be traced back to Indian rock art paintings that date back to 4000 BC.
In ancient China, the earliest advertising known was oral, as recorded in the Classic of Poetry (11th to 7th centuries BC) of bamboo flutes played to sell confectionery. Advertisement usually took the form of calligraphic signboards and inked papers. A copper printing plate dating back to the Song dynasty, used to print posters in the form of a square sheet of paper with a rabbit logo and the texts "Jinan Liu's Fine Needle Shop" and "We buy high-quality steel rods and make fine-quality needles, to be ready for use at home in no time" written above and below it, is considered the world's earliest identified printed advertising medium.
In Europe, as the towns and cities of the Middle Ages began to grow, and the general population was unable to read, instead of signs that read "cobbler", "miller", "tailor", or "blacksmith", images associated with their trade would be used such as a boot, a suit, a hat, a clock, a diamond, a horseshoe, a candle or even a bag of flour. Fruits and vegetables were sold in the city square from the backs of carts and wagons and their proprietors used street callers (town criers) to announce their whereabouts. The first compilation of such advertisements was gathered in "Les Crieries de Paris", a thirteenth-century poem by Guillaume de la Villeneuve.
18th-19th century: Newspaper Advertising
In the 18th century advertisements started to appear in weekly newspapers in England. These early print advertisements were used mainly to promote books and newspapers, which became increasingly affordable with advances in the printing press; and medicines, which were increasingly sought after. However, false advertising and so-called "quack" advertisements became a problem, which ushered in the regulation of advertising content.
In the United States, newspapers grew quickly in the first few decades of the 19th century, in part due to advertising. By 1822, the United States had more newspaper readers than any other country. About half of the content of these newspapers consisted of advertising, usually local advertising, with half of the daily newspapers in the 1810s using the word "advertiser" in their name.
In June 1836, French newspaper La Presse was the first to include paid advertising in its pages, allowing it to lower its price, extend its readership and increase its profitability and the formula was soon copied by all titles. Around 1840, Volney B. Palmer established the roots of the modern day advertising agency in Philadelphia. In 1842 Palmer bought large amounts of space in various newspapers at a discounted rate then resold the space at higher rates to advertisers. The actual ad – the copy, layout, and artwork – was still prepared by the company wishing to advertise; in effect, Palmer was a space broker. The situation changed when the first full-service advertising agency of N.W. Ayer & Son was founded in 1869 in Philadelphia. Ayer & Son offered to plan, create, and execute complete advertising campaigns for its customers. By 1900 the advertising agency had become the focal point of creative planning, and advertising was firmly established as a profession.
Around the same time, in France, Charles-Louis Havas extended the services of his news agency, Havas to include advertisement brokerage, making it the first French group to organize. At first, agencies were brokers for advertisement space in newspapers.
Late 19th century: Modern Advertising
Thomas J. Barratt of London has been called "the father of modern advertising". Working for the Pears soap company, Barratt created an effective advertising campaign for the company products, which involved the use of targeted slogans, images and phrases. One of his slogans, "Good morning. Have you used Pears' soap?" was famous in its day and into the 20th century. In 1882, Barratt recruited English actress and socialite Lillie Langtry to become the poster-girl for Pears, making her the first celebrity to endorse a commercial product.
Becoming the company's brand manager in 1865, listed as the first of its kind by the Guinness Book of Records, Barratt introduced many of the crucial ideas that lie behind successful advertising and these were widely circulated in his day. He constantly stressed the importance of a strong and exclusive brand image for Pears and of emphasizing the product's availability through saturation campaigns. He also understood the importance of constantly reevaluating the market for changing tastes and mores, stating in 1907 that "tastes change, fashions change, and the advertiser has to change with them. An idea that was effective a generation ago would fall flat, stale, and unprofitable if presented to the public today. Not that the idea of today is always better than the older idea, but it is different – it hits the present taste."
Enhanced advertising revenues were one effect of the Industrial Revolution in Britain. Thanks to the revolution and the consumers it created, by the mid-19th century biscuits and chocolate had become products for the masses, and British biscuit manufacturers were among the first to introduce branding to distinguish grocery products. One of the world's first global brands, Huntley & Palmers biscuits were sold in 172 countries in 1900, and their global reach was reflected in their advertisements.
20th century
As a result of massive industrialization, advertising increased dramatically in the United States. In 1919 it was 2.5 percent of gross domestic product (GDP) in the US, and it averaged 2.2 percent of GDP between then and at least 2007, though it may have declined dramatically since the Great Recession.
Industry could not benefit from its increased productivity without a substantial increase in consumer spending. This contributed to the development of mass marketing designed to influence the population's economic behavior on a larger scale. In the 1910s and 1920s, advertisers in the U.S. adopted the doctrine that human instincts could be targeted and harnessed – "sublimated" into the desire to purchase commodities. Edward Bernays, a nephew of Sigmund Freud, became associated with the method and is sometimes called the founder of modern advertising and public relations. Bernays argued that selling products by appealing to the rational minds of customers (the main method used prior to Bernays) was much less effective than selling products based on the unconscious desires that he felt were the true motivators of human action. "Sex sells" became a controversial issue, with techniques for titillating and enlarging the audience posing a challenge to conventional morality.
In the 1920s, under Secretary of Commerce Herbert Hoover, the American government promoted advertising. Hoover himself delivered an address to the Associated Advertising Clubs of the World in 1925 called "Advertising Is a Vital Force in Our National Life." In October 1929, the head of the U.S. Bureau of Foreign and Domestic Commerce, Julius Klein, stated "Advertising is the key to world prosperity." This was part of the "unparalleled" collaboration between business and government in the 1920s, according to a 1933 European economic journal.
The tobacco companies became major advertisers in order to sell packaged cigarettes. The tobacco companies pioneered the new advertising techniques when they hired Bernays to create positive associations with tobacco smoking.
Advertising was also used as a vehicle for cultural assimilation, encouraging workers to exchange their traditional habits and community structure in favor of a shared "modern" lifestyle. An important tool for influencing immigrant workers was the American Association of Foreign Language Newspapers (AAFLN). The AAFLN was primarily an advertising agency but also gained heavily centralized control over much of the immigrant press.
At the turn of the 20th century, advertising was one of the few career choices for women. Since women were responsible for most household purchasing, advertisers and agencies recognized the value of women's insight during the creative process. In fact, the first American advertisement to use a sexual sell was created by a woman – for a soap product. Although tame by today's standards, the advertisement featured a couple with the message "A skin you love to touch".
In the 1920s, psychologists Walter D. Scott and John B. Watson contributed applied psychological theory to the field of advertising. Scott said, "Man has been called the reasoning animal but he could with greater truthfulness be called the creature of suggestion. He is reasonable, but he is to a greater extent suggestible". He demonstrated this through his advertising technique of a direct command to the consumer.
Radio from the 1920s
In the early 1920s, the first radio stations were established by radio equipment manufacturers, followed by non-profit organizations such as schools, clubs and civic groups who also set up their own stations. Retailer and consumer goods manufacturers quickly recognized radio's potential to reach consumers in their home and soon adopted advertising techniques that would allow their messages to stand out; slogans, mascots, and jingles began to appear on radio in the 1920s and early television in the 1930s.
The rise of mass media communications allowed manufacturers of branded goods to bypass retailers by advertising directly to consumers. This was a major paradigm shift which forced manufacturers to focus on the brand and stimulated the need for superior insights into consumer purchasing, consumption and usage behaviour; their needs, wants and aspirations. The earliest radio drama series were sponsored by soap manufacturers and the genre became known as a soap opera. Before long, radio station owners realized they could increase advertising revenue by selling 'air-time' in small time allocations which could be sold to multiple businesses. By the 1930s, these advertising spots, as the packets of time became known, were being sold by the station's geographical sales representatives, ushering in an era of national radio advertising.
By the 1940s, manufacturers began to recognize the way in which consumers were developing personal relationships with their brands in a social/psychological/anthropological sense. Advertisers began to use motivational research and consumer research to gather insights into consumer purchasing. Strong branded campaigns for Chrysler and Exxon/Esso, using insights drawn from research methods in psychology and cultural anthropology, led to some of the most enduring campaigns of the 20th century.
Commercial television in the 1950s
In the early 1950s, the DuMont Television Network began the modern practice of selling advertisement time to multiple sponsors. Previously, DuMont had trouble finding sponsors for many of their programs and compensated by selling smaller blocks of advertising time to several businesses. This eventually became the standard for the commercial television industry in the United States. However, it was still a common practice to have single sponsor shows, such as The United States Steel Hour. In some instances the sponsors exercised great control over the content of the show – up to and including having one's advertising agency actually writing the show. The single sponsor model is much less prevalent now, a notable exception being the Hallmark Hall of Fame.
Cable television from the 1980s
The late 1980s and early 1990s saw the introduction of cable television and particularly MTV. Pioneering the concept of the music video, MTV ushered in a new type of advertising: the consumer tunes in for the advertising message, rather than it being a by-product or afterthought. As cable and satellite television became increasingly prevalent, specialty channels emerged, including channels entirely devoted to advertising, such as QVC, Home Shopping Network, and ShopTV Canada.
Internet from the 1990s
With the advent of the ad server, online advertising grew, contributing to the "dot-com" boom of the 1990s. Entire corporations operated solely on advertising revenue, offering everything from coupons to free Internet access. At the turn of the 21st century, some websites, including the search engine Google, changed online advertising by personalizing ads based on web browsing behavior. This has led to other similar efforts and an increase in interactive advertising.
The share of advertising spending relative to GDP has changed little across large changes in media since 1925. In 1925, the main advertising media in America were newspapers, magazines, signs on streetcars, and outdoor posters. Advertising spending as a share of GDP was about 2.9 percent. By 1998, television and radio had become major advertising media; by 2017, the balance between broadcast and online advertising had shifted, with online spending exceeding broadcast. Nonetheless, advertising spending as a share of GDP was slightly lower – about 2.4 percent.
Guerrilla marketing involves unusual approaches such as staged encounters in public places, giveaways of products such as cars that are covered with brand messages, and interactive advertising where the viewer can respond to become part of the advertising message. The unpredictability of this type of advertising is intended to catch consumers' attention and prompt them to buy the product or idea. This reflects an increasing trend of interactive and "embedded" ads, such as via product placement, having consumers vote through text messages, and various campaigns utilizing social network services such as Facebook or Twitter.
The advertising business model has also been adapted in recent years. In media for equity, advertising is not sold, but provided to start-up companies in return for equity. If the company grows and is sold, the media companies receive cash for their shares.
Domain name registrants (usually those who register and renew domains as an investment) sometimes "park" their domains and allow advertising companies to place ads on their sites in return for per-click payments. These ads are typically driven by pay per click search engines like Google or Yahoo, but ads can sometimes be placed directly on targeted domain names through a domain lease or by making contact with the registrant of a domain name that describes a product. Domain name registrants are generally easy to identify through WHOIS records that are publicly available at registrar websites.
Classification
Advertising may be categorized in a variety of ways, including by style, target audience, geographic scope, medium, or purpose. For example, in print advertising, classification by style can include display advertising (ads with design elements sold by size) vs. classified advertising (ads without design elements sold by the word or line). Advertising may be local, national or global. An ad campaign may be directed toward consumers or to businesses. The purpose of an ad may be to raise awareness (brand advertising), or to elicit an immediate sale (direct response advertising). The term above the line (ATL) is used for advertising involving mass media; more targeted forms of advertising and promotion are referred to as below the line (BTL). The two terms date back to 1954 when Procter & Gamble began paying their advertising agencies differently from other promotional agencies. In the 2010s, as advertising technology developed, a new term, through the line (TTL) began to come into use, referring to integrated advertising campaigns.
Traditional media
Virtually any medium can be used for advertising. Commercial advertising media can include wall paintings, billboards, street furniture components, printed flyers and rack cards, radio, cinema and television adverts, web banners, mobile telephone screens, shopping carts, web popups, skywriting, bus stop benches, human billboards and forehead advertising, magazines, newspapers, town criers, sides of buses, banners attached to or sides of airplanes ("logojets"), in-flight advertisements on seatback tray tables or overhead storage bins, taxicab doors, roof mounts and passenger screens, musical stage shows, subway platforms and trains, elastic bands on disposable diapers, doors of bathroom stalls, stickers on apples in supermarkets, shopping cart handles (grabertising), the opening section of streaming audio and video, posters, and the backs of event tickets and supermarket receipts. Any situation in which an "identified" sponsor pays to deliver their message through a medium is advertising.
Television
Television advertising is one of the most expensive types of advertising; networks charge large amounts for commercial airtime during popular events. The annual Super Bowl football game in the United States is known as the most prominent advertising event on television – with an audience of over 108 million and studies showing that 50% of those only tuned in to see the advertisements. During the 2014 edition of this game, the average thirty-second ad cost US$4 million, and $8 million was charged for a 60-second spot. Virtual advertisements may be inserted into regular programming through computer graphics. They are typically inserted into otherwise blank backdrops or used to replace local billboards that are not relevant to the remote broadcast audience. Virtual billboards may be inserted into the background where none exist in real life. This technique is especially used in televised sporting events. Virtual product placement is also possible. An infomercial is a long-format television commercial, typically five minutes or longer. The name blends the words "information" and "commercial". The main objective in an infomercial is to create an impulse purchase, so that the target sees the presentation and then immediately buys the product through the advertised toll-free telephone number or website. Infomercials describe and often demonstrate products, and commonly have testimonials from customers and industry professionals.
Radio
Radio advertisements are broadcast as radio waves to the air from a transmitter to an antenna and thus to a receiving device. Airtime is purchased from a station or network in exchange for airing the commercials. While radio has the limitation of being restricted to sound, proponents of radio advertising often cite this as an advantage. Radio is an expanding medium that can be found on air, and also online. According to Arbitron, radio has approximately 241.6 million weekly listeners, or more than 93 percent of the U.S. population.
Online
Online advertising is a form of promotion that uses the Internet and World Wide Web for the express purpose of delivering marketing messages to attract customers. Online ads are delivered by an ad server. Examples of online advertising include contextual ads that appear on search engine results pages, banner ads, pay-per-click text ads, rich media ads, social network advertising, online classified advertising, advertising networks and e-mail marketing, including e-mail spam. A newer form of online advertising is native ads; they go in a website's news feed and are supposed to improve user experience by being less intrusive. However, some people argue this practice is deceptive.
Domain names
Domain name advertising is most commonly done through pay per click web search engines; however, advertisers often lease space directly on domain names that generically describe their products. When an Internet user visits a website by typing a domain name directly into their web browser, this is known as "direct navigation", or "type in" web traffic. Although many Internet users search for ideas and products using search engines and mobile phones, a large number of users around the world still use the address bar. They will type a keyword into the address bar such as "geraniums" and add ".com" to the end of it. Sometimes they will do the same with ".org" or a country-code top-level domain (TLD, such as ".co.uk" for the United Kingdom or ".ca" for Canada). When Internet users type in a generic keyword and add .com or another top-level domain (TLD) ending, it produces a targeted sales lead. Domain name advertising was originally developed by Oingo (later known as Applied Semantics), one of Google's early acquisitions.
Product placement is when a product or brand is embedded in entertainment and media. For example, in a film, the main character may use an item of a particular brand, as in the movie Minority Report, where Tom Cruise's character John Anderton owns a phone with the Nokia logo clearly written in the top corner, or his watch engraved with the Bulgari logo. Another example of advertising in film is in I, Robot, where the main character played by Will Smith mentions his Converse shoes several times, calling them "classics", because the film is set far in the future. I, Robot and Spaceballs also showcase futuristic cars with the Audi and Mercedes-Benz logos clearly displayed on the front of the vehicles. Cadillac chose to advertise in the movie The Matrix Reloaded, which as a result contained many scenes in which Cadillac cars were used. Similarly, product placement for Omega Watches, Ford, VAIO, BMW and Aston Martin cars is featured in recent James Bond films, most notably Casino Royale. In Fantastic Four: Rise of the Silver Surfer, the main transport vehicle shows a large Dodge logo on the front. Blade Runner includes some of the most obvious product placement; the whole film stops to show a Coca-Cola billboard.
Print
Print advertising describes advertising in a printed medium such as a newspaper, magazine, or trade journal. This encompasses everything from media with a very broad readership base, such as a major national newspaper or magazine, to more narrowly targeted media such as local newspapers and trade journals on very specialized topics. One form of print advertising is classified advertising, which allows private individuals or companies to purchase a small, narrowly targeted ad paid by the word or line. Another form of print advertising is the display ad, which is generally a larger ad with design elements that typically run in an article section of a newspaper.
Outdoor
Billboards, also known as hoardings in some parts of the world, are large structures located in public places which display advertisements to passing pedestrians and motorists. Most often, they are located on main roads with a large amount of passing motor and pedestrian traffic; however, they can be placed in any location with large numbers of viewers, such as on mass transit vehicles and in stations, in shopping malls or office buildings, and in stadiums. The form known as street advertising first came to prominence in the UK through Street Advertising Services, which creates outdoor advertising on street furniture and pavements, working with formats such as Reverse Graffiti, air dancers and 3D pavement advertising to get brand messages out into public spaces. Sheltered outdoor advertising combines outdoor with indoor advertisement by placing large mobile structures (tents) in public places on a temporary basis. The large outer advertising space aims to exert a strong pull on the observer; the product is promoted indoors, where the creative decor can intensify the impression. Mobile billboards are generally vehicle-mounted billboards or digital screens. These can be on dedicated vehicles built solely for carrying advertisements along routes preselected by clients, or they can be specially equipped cargo trucks or, in some cases, large banners strewn from planes. The billboards are often lighted; some are backlit, and others employ spotlights. Some billboard displays are static, while others change; for example, continuously or periodically rotating among a set of advertisements. Mobile displays are used for various situations in metropolitan areas throughout the world, including: target advertising, one-day and long-term campaigns, conventions, sporting events, store openings and similar promotional events, and big advertisements from smaller companies.
Point-of-sale
In-store advertising is any advertisement placed in a retail store. It includes placement of a product in visible locations in a store, such as at eye level, at the ends of aisles and near checkout counters (a.k.a. POP – point of purchase display), eye-catching displays promoting a specific product, and advertisements in such places as shopping carts and in-store video displays.
Novelties
Advertising printed on small tangible items such as coffee mugs, T-shirts, pens, bags, and such is known as novelty advertising. Some printers specialize in printing novelty items, which can then be distributed directly by the advertiser, or items may be distributed as part of a cross-promotion, such as ads on fast food containers.
Celebrity endorsements
Advertising in which a celebrity endorses a product or brand leverages the celebrity's power, fame, money and popularity to gain recognition for products or to promote specific stores or products. Advertisers often advertise their products, for example, when celebrities share their favorite products or wear clothes by specific brands or designers. Celebrities are often involved in advertising campaigns such as television or print adverts to advertise specific or general products. The use of celebrities to endorse a brand can have its downsides, however; one mistake by a celebrity can be detrimental to the public relations of a brand. For example, after swimmer Michael Phelps won eight gold medals at the 2008 Olympic Games in Beijing, China, his contract with Kellogg's was terminated, as Kellogg's did not want to associate with him after he was photographed smoking marijuana. Celebrities such as Britney Spears have advertised for multiple products including Pepsi, Candies from Kohl's, Twister, NASCAR, and Toyota.
Aerial
Aerial advertising uses aircraft, balloons or airships to create or display advertising media. Skywriting is a notable example.
New media approaches
A newer advertising approach is known as advanced advertising, which is data-driven advertising that uses large quantities of data, precise measuring tools and precise targeting. Advanced advertising also makes it easier for companies which sell ad-space to attribute customer purchases to the ads they display or broadcast.
Increasingly, other media are overtaking many of the "traditional" media such as television, radio and newspaper because of a shift toward the usage of the Internet for news and music as well as devices like digital video recorders (DVRs) such as TiVo.
Online advertising began with unsolicited bulk e-mail advertising known as "e-mail spam". Spam has been a problem for e-mail users since 1978. As new online communication channels became available, advertising followed. The first banner ad appeared on the World Wide Web in 1994. Prices of Web-based advertising space are dependent on the "relevance" of the surrounding web content and the traffic that the website receives.
In online display advertising, display ads generate awareness quickly. Unlike search, which requires someone to be aware of a need, display advertising can drive awareness of something new, without requiring previous knowledge. Display also works well for direct response: it is used not only for generating awareness but also for direct response campaigns that link to a landing page with a clear 'call to action'.
As the mobile phone became a new mass medium in 1998 when the first paid downloadable content appeared on mobile phones in Finland, mobile advertising followed, also first launched in Finland in 2000. By 2007 the value of mobile advertising had reached $2 billion and providers such as Admob delivered billions of mobile ads.
More advanced mobile ads include banner ads, coupons, Multimedia Messaging Service picture and video messages, advergames and various engagement marketing campaigns. A particular feature driving mobile ads is the 2D barcode, which replaces the need to do any typing of web addresses, and uses the camera feature of modern phones to gain immediate access to web content. 83 percent of Japanese mobile phone users already are active users of 2D barcodes.
Some companies have proposed placing messages or corporate logos on the side of booster rockets and the International Space Station.
Unpaid advertising (also called "publicity advertising"), can include personal recommendations ("bring a friend", "sell it"), spreading buzz, or achieving the feat of equating a brand with a common noun (in the United States, "Xerox" = "photocopier", "Kleenex" = tissue, "Vaseline" = petroleum jelly, "Hoover" = vacuum cleaner, and "Band-Aid" = adhesive bandage). However, some companies oppose the use of their brand name to label an object. Equating a brand with a common noun also risks turning that brand into a generic trademark – turning it into a generic term which means that its legal protection as a trademark is lost.
Early in its life, The CW aired short programming breaks called "Content Wraps", to advertise one company's product during an entire commercial break. The CW pioneered "content wraps" and some products featured were Herbal Essences, Crest, Guitar Hero II, CoverGirl, and Toyota.
A new promotion concept has appeared, "ARvertising", advertising on augmented reality technology.
Controversy exists on the effectiveness of subliminal advertising (see mind control), and the pervasiveness of mass messages (propaganda).
Rise in new media
With the Internet came many new advertising opportunities. Pop-up, Flash, banner, pop-under, advergaming, and email advertisements (all of which are often unwanted or spam in the case of email) are now commonplace. Particularly since the rise of "entertaining" advertising, some people may like an advertisement enough to wish to watch it later or show a friend. In general, the advertising community has not yet made this easy, although some have used the Internet to widely distribute their ads to anyone willing to see or hear them. In the last three quarters of 2009, mobile and Internet advertising grew by 18% and 9% respectively, while older media advertising saw declines: −10.1% (TV), −11.7% (radio), −14.8% (magazines) and −18.7% (newspapers). Between 2008 and 2014, U.S. newspapers lost more than half their print advertising revenue.
Niche marketing
Another significant trend regarding future of advertising is the growing importance of the niche market using niche or targeted ads. Also brought about by the Internet and the theory of the long tail, advertisers will have an increasing ability to reach specific audiences. In the past, the most efficient way to deliver a message was to blanket the largest mass market audience possible. However, usage tracking, customer profiles and the growing popularity of niche content brought about by everything from blogs to social networking sites, provide advertisers with audiences that are smaller but much better defined, leading to ads that are more relevant to viewers and more effective for companies' marketing products. Among others, Comcast Spotlight is one such advertiser employing this method in their video on demand menus. These advertisements are targeted to a specific group and can be viewed by anyone wishing to find out more about a particular business or practice, from their home. This causes the viewer to become proactive and actually choose what advertisements they want to view.
Niche marketing could also be helped by bringing the issue of colour into advertisements. Different colours play major roles in marketing strategies; for example, the colour blue can promote a sense of calmness and security, which is why many social networks such as Facebook use blue in their logos.
Google AdSense is an example of niche marketing. Google calculates the primary purpose of a website and adjusts ads accordingly; it uses keywords on the page (or even in emails) to find the general ideas of topics discussed and places ads that will most likely be clicked on by viewers of the email account or website visitors.
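As a purely illustrative sketch of the general idea of contextual keyword matching (this toy matcher is not Google's actual AdSense algorithm, and all names and data in it are hypothetical):

```python
# Toy sketch of contextual ad matching by keyword overlap.
# This is NOT Google's AdSense algorithm; it only illustrates the general idea
# of matching ads to the topics found on a page. All data here is hypothetical.
from collections import Counter

ads = {
    "garden_centre": {"flowers", "geraniums", "soil", "planting"},
    "car_dealer": {"cars", "financing", "test", "drive"},
}

def pick_ad(page_text: str) -> str:
    words = Counter(page_text.lower().split())
    # Score each ad by how often its keywords appear on the page.
    scores = {name: sum(words[k] for k in keywords) for name, keywords in ads.items()}
    return max(scores, key=scores.get)

print(pick_ad("Tips for planting geraniums and keeping flowers healthy"))  # garden_centre
```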
Crowdsourcing
The concept of crowdsourcing has given rise to the trend of user-generated advertisements. User-generated ads are created by people, as opposed to an advertising agency or the company itself, often resulting from brand-sponsored advertising competitions. For the 2007 Super Bowl, the Frito-Lay division of PepsiCo held the "Crash the Super Bowl" contest, allowing people to create their own Doritos commercials. Chevrolet held a similar competition for their Tahoe line of SUVs. Due to the success of the Doritos user-generated ads in the 2007 Super Bowl, Frito-Lay relaunched the competition for the 2009 and 2010 Super Bowl. The resulting ads were among the most-watched and most-liked Super Bowl ads. In fact, the winning ad that aired in the 2009 Super Bowl was ranked by the USA Today Super Bowl Ad Meter as the top ad for the year, while the winning ads that aired in the 2010 Super Bowl were found by Nielsen's BuzzMetrics to be the "most buzzed-about". Another example of companies using crowdsourcing successfully is the beverage company Jones Soda, which encourages consumers to participate in the label design themselves.
This trend has given rise to several online platforms that host user-generated advertising competitions on behalf of a company. Founded in 2007, Zooppa has launched ad competitions for brands such as Google, Nike, Hershey's, General Mills, Microsoft, NBC Universal, Zinio, and Mini Cooper. Crowdsourcing remains controversial, as the long-term impact on the advertising industry is still unclear.
Globalization
Advertising has gone through five major stages of development: domestic, export, international, multi-national, and global. For global advertisers, there are four, potentially competing, business objectives that must be balanced when developing worldwide advertising: building a brand while speaking with one voice, developing economies of scale in the creative process, maximising local effectiveness of ads, and increasing the company's speed of implementation. Born from the evolutionary stages of global marketing are the three primary and fundamentally different approaches to the development of global advertising executions: exporting executions, producing local executions, and importing ideas that travel.
Advertising research is key to determining the success of an ad in any country or region. The ability to identify which elements and/or moments of an ad contribute to its success is how economies of scale are maximized. Once one knows what works in an ad, that idea or ideas can be imported by any other market. Market research measures, such as Flow of Attention, Flow of Emotion and branding moments provide insight into what is working in an ad in any country or region because the measures are based on the visual, not verbal, elements of the ad.
Foreign public messaging
Foreign governments, particularly those that own marketable commercial products or services, often promote their interests and positions through the advertising of those goods because the target audience is not only largely unaware of the forum as a vehicle for foreign messaging but also willing to receive the message while in a mental state of absorbing information from advertisements during television commercial breaks, while reading a periodical, or while passing by billboards in public spaces. A prime example of this messaging technique is advertising campaigns to promote international travel. While advertising foreign destinations and services may stem from the typical goal of increasing revenue by drawing more tourism, some travel campaigns carry the additional or alternative intended purpose of promoting good sentiments or improving existing ones among the target audience towards a given nation or region. It is common for advertising promoting foreign countries to be produced and distributed by the tourism ministries of those countries, so these ads often carry political statements and/or depictions of the foreign government's desired international public perception. Additionally, a wide range of foreign airlines and travel-related services which advertise separately from the destinations, themselves, are owned by their respective governments; examples include, though are not limited to, the Emirates airline (Dubai), Singapore Airlines (Singapore), Qatar Airways (Qatar), China Airlines (Taiwan/Republic of China), and Air China (People's Republic of China). By depicting their destinations, airlines, and other services in a favorable and pleasant light, countries market themselves to populations abroad in a manner that could mitigate prior public impressions.
Diversification
In the realm of advertising agencies, continued industry diversification has seen observers note that "big global clients don't need big global agencies any more". This is reflected by the growth of non-traditional agencies in various global markets, such as Canadian business TAXI and SMART in Australia and has been referred to as "a revolution in the ad world".
New technology
The ability to record shows on digital video recorders (such as TiVo) allows viewers to record programs for later viewing, enabling them to fast forward through commercials. Additionally, as more seasons of television programs are offered for sale as pre-recorded box sets, fewer people watch the shows on TV. However, because these sets are sold, the company still receives additional profits from them.
To counter this effect, a variety of strategies have been employed. Many advertisers have opted for product placement on TV shows like Survivor. Other strategies include integrating advertising with internet-connected electronic program guides (EPGs), advertising on companion devices (like smartphones and tablets) during the show, and creating mobile apps for TV programs. Additionally, some brands have opted for social television sponsorship.
The emerging technology of drone displays has recently been used for advertising purposes.
Education
In recent years there have been several media literacy initiatives, and more specifically concerning advertising, that seek to empower citizens in the face of media advertising campaigns.
Advertising education has become popular with bachelor, master and doctorate degrees becoming available in the emphasis. A surge in advertising interest is typically attributed to the strong relationship advertising plays in cultural and technological changes, such as the advance of online social networking. A unique model for teaching advertising is the student-run advertising agency, where advertising students create campaigns for real companies. Organizations such as the American Advertising Federation establish companies with students to create these campaigns.
Purposes
Advertising is at the forefront of delivering the proper message to customers and prospective customers. The purpose of advertising is to inform consumers about a company's products, convince customers that its services or products are the best, enhance the image of the company, point out and create a need for products or services, demonstrate new uses for established products, announce new products and programs, reinforce the salespeople's individual messages, draw customers to the business, and hold existing customers.
Sales promotions and brand loyalty
Sales promotions are another way to advertise. Sales promotions serve a double purpose: they are used to gather information about what type of customers one draws in and where they are, and to jump-start sales. Sales promotions include things like contests and games, sweepstakes, product giveaways, samples, coupons, loyalty programs, and discounts. The ultimate goal of sales promotions is to stimulate potential customers to action.
Criticisms
While advertising can be seen as necessary for economic growth, it is not without social costs. Unsolicited commercial e-mail and other forms of spam have become so prevalent as to have become a major nuisance to users of these services, as well as being a financial burden on internet service providers. Advertising is increasingly invading public spaces, such as schools, which some critics argue is a form of child exploitation. This increasing difficulty in limiting exposure to specific audiences can result in negative backlash for advertisers. In tandem with these criticisms, the advertising industry has seen low approval rates in surveys and negative cultural portrayals.
One of the most controversial criticisms of advertising in the present day is the predominance of advertising of foods high in sugar, fat, and salt specifically to children. Critics claim that food advertisements targeting children are exploitative and are not sufficiently balanced with proper nutritional education to help children understand the consequences of their food choices. Additionally, children may not understand that they are being sold something, and are therefore more impressionable. Michelle Obama has criticized large food companies for advertising unhealthy foods largely towards children and has requested that food companies either limit their advertising to children or advertise foods that are more in line with dietary guidelines. Other criticisms include the changes that such advertisements bring about in society, as well as deceptive advertisements aired and published by corporations. The cosmetics and health industries are among those most frequently criticized for exploitative and misleading advertising.
A 2021 study found that for more than 80% of brands, advertising had a negative return on investment. Unsolicited ads have been criticized as attention theft.
Regulation
There have been increasing efforts to protect the public interest by regulating the content and the influence of advertising. Some examples include restrictions for advertising alcohol, tobacco or gambling imposed in many countries, as well as the bans around advertising to children, which exist in parts of Europe. Advertising regulation focuses heavily on the veracity of the claims and as such, there are often tighter restrictions placed around advertisements for food and healthcare products.
The advertising industries within some countries rely less on laws and more on systems of self-regulation. Advertisers and the media agree on a code of advertising standards that they attempt to uphold. The general aim of such codes is to ensure that any advertising is 'legal, decent, honest and truthful'. Some self-regulatory organizations are funded by the industry, but remain independent, with the intent of upholding the standards or codes like the Advertising Standards Authority in the UK.
In the UK, most forms of outdoor advertising, such as the display of billboards, are regulated by the UK Town and Country Planning system. Currently, the display of an advertisement without consent from the Planning Authority is a criminal offense liable to a fine of £2,500 per offense. In the US, many communities believe that many forms of outdoor advertising blight the public realm. As long ago as the 1960s in the US, there were attempts to ban billboard advertising in the open countryside. Cities such as São Paulo have introduced an outright ban, with London also having specific legislation to control unlawful displays.
Some governments restrict the languages that can be used in advertisements, but advertisers may employ tricks to try avoiding them. In France for instance, advertisers sometimes print English words in bold and French translations in fine print to deal with Article 120 of the 1994 Toubon Law limiting the use of English.
The advertising of pricing information is another topic of concern for governments. In the United States for instance, it is common for businesses to only mention the existence and amount of applicable taxes at a later stage of a transaction. In Canada and New Zealand, taxes can be listed as separate items, as long as they are quoted up-front. In most other countries, the advertised price must include all applicable taxes, enabling customers to easily know how much it will cost them.
Theory
Hierarchy-of-effects models
Various competing models of hierarchies of effects attempt to provide a theoretical underpinning to advertising practice.
The model of Clow and Baack clarifies the objectives of an advertising campaign and for each individual advertisement. The model postulates six steps a buyer moves through when making a purchase:
Awareness
Knowledge
Liking
Preference
Conviction
Purchase
Means-end theory suggests that an advertisement should contain a message or means that leads the consumer to a desired end-state.
Leverage points aim to move the consumer from understanding a product's benefits to linking those benefits with personal values.
Marketing mix
The marketing mix was proposed by professor E. Jerome McCarthy in the 1960s. It consists of four basic elements called the "four Ps". Product is the first P representing the actual product. Price represents the process of determining the value of a product. Place represents the variables of getting the product to the consumer such as distribution channels, market coverage and movement organization. The last P stands for Promotion which is the process of reaching the target market and convincing them to buy the product.
In the 1990s, the concept of four Cs was introduced as a more customer-driven replacement of four P's. There are two theories based on four Cs: Lauterborn's four Cs (consumer, cost, communication, convenience)
and Shimizu's four Cs (commodity, cost, communication, channel) in the 7Cs Compass Model (Co-marketing). Communications can include advertising, sales promotion, public relations, publicity, personal selling, corporate identity, internal communication, SNS, and MIS.
Research
Advertising research is a specialized form of research that works to improve the effectiveness and efficiency of advertising. It entails numerous forms of research which employ different methodologies. Advertising research includes pre-testing (also known as copy testing) and post-testing of ads and/or campaigns.
Pre-testing includes a wide range of qualitative and quantitative techniques, including: focus groups, in-depth target audience interviews (one-on-one interviews), small-scale quantitative studies and physiological measurement. The goal of these investigations is to better understand how different groups respond to various messages and visual prompts, thereby providing an assessment of how well the advertisement meets its communications goals.
Post-testing employs many of the same techniques as pre-testing, usually with a focus on understanding the change in awareness or attitude attributable to the advertisement. With the emergence of digital advertising technologies, many firms have begun to continuously post-test ads using real-time data. This may take the form of A/B split-testing or multivariate testing.
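As an illustration of the kind of comparison an A/B split-test involves, the following sketch applies a two-proportion z-test to two ad variants (the impression and click counts are made up, and real post-testing involves more careful experimental design):

```python
# Illustrative A/B split-test comparison of two ad variants using a
# two-proportion z-test. The impression and click counts are hypothetical.
from statistics import NormalDist
from math import sqrt

clicks_a, impressions_a = 420, 10000   # variant A
clicks_b, impressions_b = 480, 10000   # variant B

p_a = clicks_a / impressions_a
p_b = clicks_b / impressions_b
p_pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)

standard_error = sqrt(p_pooled * (1 - p_pooled) * (1 / impressions_a + 1 / impressions_b))
z = (p_b - p_a) / standard_error
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided test

print(f"z = {z:.2f}, p-value = {p_value:.3f}")
```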
Continuous ad tracking and the Communicus System are competing examples of post-testing advertising research types.
Semiotics
Meanings between consumers and marketers depict signs and symbols that are encoded in everyday objects. Semiotics is the study of signs and how they are interpreted. Advertising has many hidden signs and meanings within brand names, logos, package designs, print advertisements, and television advertisements. Semiotics aims to study and interpret the message being conveyed in (for example) advertisements. Logos and advertisements can be interpreted at two levels – known as the surface level and the underlying level. The surface level uses signs creatively to create an image or personality for a product. These signs can be images, words, fonts, colors, or slogans. The underlying level is made up of hidden meanings. The combination of images, words, colors, and slogans must be interpreted by the audience or consumer. The "key to advertising analysis" is the signifier and the signified. The signifier is the object and the signified is the mental concept. A product has a signifier and a signified. The signifier is the color, brand name, logo design, and technology. The signified has two meanings known as denotative and connotative. The denotative meaning is the meaning of the product. A television's denotative meaning might be that it is high definition. The connotative meaning is the product's deep and hidden meaning. A connotative meaning of a television would be that it is top-of-the-line.
Apple's commercials used a black silhouette of a person the same age as Apple's target market. They placed the silhouette in front of a blue screen so that the picture behind the silhouette could be constantly changing. However, the one thing that stays the same in these ads is that there is music in the background and the silhouette is listening to that music on a white iPod through white headphones. Through advertising, the white color on a set of earphones now signifies that the music device is an iPod. White has come to signify almost all of Apple's products.
The semiotics of gender has a key influence on the way in which signs are interpreted. When considering gender roles in advertising, individuals are influenced by three categories of factors. First, certain characteristics of the stimuli may enhance or decrease the elaboration of the message (if the product is perceived as feminine or masculine). Second, the characteristics of individuals can affect attention and elaboration of the message (traditional or non-traditional gender role orientation). Lastly, situational factors may influence the elaboration of the message.
There are two types of marketing communication claims: objective and subjective. Objective claims stem from the extent to which the claim associates the brand with a tangible product or service feature. For instance, a camera may have auto-focus features. Subjective claims convey emotional, subjective impressions of intangible aspects of a product or service. They are non-physical features of a product or service that cannot be directly perceived, as they have no physical reality, for instance "the brochure has a beautiful design". Males tend to respond better to objective marketing-communications claims while females tend to respond better to subjective marketing-communications claims.
Voiceovers are commonly used in advertising. Most voiceovers are done by men, with figures of up to 94% having been reported. There have been more female voiceovers in recent years, but mainly for food, household products, and feminine-care products.
Gender effects on comprehension
According to a 1977 study by David Statt, females process information comprehensively, while males process information through heuristic devices such as procedures, methods or strategies for solving problems, which could have an effect on how they interpret advertising. According to this study, men prefer to have available and apparent cues to interpret the message, whereas females engage in more creative, associative, imagery-laced interpretation. Later research by a Danish team found that advertising attempts to persuade men to improve their appearance or performance, whereas its approach to women aims at transformation toward an impossible ideal of female presentation. In his article "The Objectification of Women in Advertising", Paul Suggett discusses the negative impact that these women in advertisements, who are too perfect to be real, have on women, as well as men, in real life. Advertising's manipulation of women's aspiration to these ideal types, as portrayed in film, in erotic art, in advertising, on stage, within music videos and through other media exposures, requires at least a conditioned rejection of female reality and thereby takes on a highly ideological cast. Studies show that these expectations of women and young girls negatively affect their views about their bodies and appearances. These advertisements are directed towards men. Not everyone agrees: one critic viewed this monologic, gender-specific interpretation of advertising as excessively skewed and politicized. Some companies, such as Dove and Aerie, are creating commercials that portray more natural women, with less post-production manipulation, so that more women and young girls are able to relate to them.
More recent research by Martin (2003) reveals that males and females differ in how they react to advertising depending on their mood at the time of exposure to the ads and on the affective tone of the advertising. When feeling sad, males prefer happy ads to boost their mood. In contrast, females prefer happy ads when they are feeling happy. The television programs in which ads are embedded influence a viewer's mood state. Susan Wojcicki, author of the article "Ads that Empower Women don't just Break Stereotypes—They're also Effective", discusses how advertising to women has changed since the first Barbie commercial, in which a little girl tells the doll that she wants to be just like her. Little girls grow up watching advertisements of scantily clad women advertising everything from trucks to burgers, and Wojcicki states that this shows girls that they are either arm candy or eye candy.
Alternatives
Other approaches to revenue include donations, paid subscriptions, microtransactions, and data monetization. Websites and applications are "ad-free" when they do not use advertisements at all for revenue. For example, the online encyclopaedia Wikipedia provides free content by receiving funding from charitable donations.
"Fathers" of advertising
Late 1700s – Benjamin Franklin (1706–1790) – "father of advertising in America"
Late 1800s – Thomas J. Barratt (1841–1914) of London – called "the father of modern advertising" by T.F.G. Coates
Early 1900s – J. Henry ("Slogan") Smythe, Jr of Philadelphia – "world's best known slogan writer"
Early 1900s – Albert Lasker (1880–1952) – the "father of modern advertising"; defined advertising as "salesmanship in print, driven by a reason why"
Mid-1900s – David Ogilvy (1911–1999) – advertising tycoon, founder of Ogilvy & Mather, known as the "father of advertising"
Influential thinkers in advertising theory and practice
N. W. Ayer & Son – probably the first advertising agency to use mass media (i.e. telegraph) in a promotional campaign
Claude C. Hopkins (1866–1932) – popularised the use of test campaigns, especially coupons in direct mail, to track the efficiency of marketing spend
Ernest Dichter (1907–1991) – developed the field of motivational research, used extensively in advertising
E. St. Elmo Lewis (1872–1948) – developed the first hierarchy of effects model (AIDA) used in sales and advertising
Arthur Nielsen (1897–1980) – founded the A. C. Nielsen Company, one of the earliest international market research firms, and developed ratings for radio & TV
David Ogilvy (1911–1999) – pioneered the positioning concept and advocated the use of brand image in advertising
Charles Coolidge Parlin (1872–1942) – regarded as the pioneer of the use of marketing research in advertising
Rosser Reeves (1910–1984) – developed the concept of the unique selling proposition (USP) and advocated the use of repetition in advertising
Al Ries (1926–2022) – advertising executive, author and credited with coining the term "positioning" in the late 1960s
Daniel Starch (1883–1979) – developed the Starch score method of measuring print media effectiveness (still in use)
J Walter Thompson – one of the earliest advertising agencies
See also
Advertisements in schools
Advertorial
Annoyance factor
Bibliography of advertising
Branded content
Commercial speech
Comparative advertising
Conquesting
Copywriting
Demo mode
Direct-to-consumer advertising
Family in advertising
Graphic design
Gross rating point
History of Advertising Trust
Informative advertising
Integrated marketing communications
List of advertising awards
Local advertising
Market overhang
Media planning
Meta-advertising
Mobile marketing
Performance-based advertising
Promotional mix
Senior media creative
Shock advertising
Viral marketing
World Federation of Advertisers
References
Notes
Further reading
Arens, William, and Michael Weigold. Contemporary Advertising: And Integrated Marketing Communications (2012)
Belch, George E., and Michael A. Belch. Advertising and Promotion: An Integrated Marketing Communications Perspective (10th ed. 2014)
Biocca, Frank. Television and Political Advertising: Volume I: Psychological Processes (Routledge, 2013)
Chandra, Ambarish, and Ulrich Kaiser. "Targeted advertising in magazine markets and the advent of the internet." Management Science 60.7 (2014) pp: 1829–1843.
Chen, Yongmin, and Chuan He. "Paid placement: Advertising and search on the internet*." The Economic Journal 121#556 (2011): F309–F328. online
Johnson-Cartee, Karen S., and Gary Copeland. Negative political advertising: Coming of age (2013)
McAllister, Matthew P. and Emily West, eds. The Routledge Companion to Advertising and Promotional Culture (2013)
McFall, Elizabeth Rose Advertising: a cultural economy (2004), cultural and sociological approaches to advertising
Moriarty, Sandra, and Nancy Mitchell. Advertising & IMC: Principles and Practice (10th ed. 2014)
Okorie, Nelson. The Principles of Advertising: concepts and trends in advertising (2011)
Reichert, Tom, and Jacqueline Lambiase, eds. Sex in advertising: Perspectives on the erotic appeal (Routledge, 2014)
Sheehan, Kim Bartel. Controversies in contemporary advertising (Sage Publications, 2013)
Vestergaard, Torben and Schrøder, Kim. The Language of Advertising. Oxford: Basil Blackwell, 1985.
Splendora, Anthony. "Discourse", a Review of Vestergaard and Schrøder, The Language of Advertising in Language in Society Vol. 15, No. 4 (Dec., 1986), pp. 445–449
History
Brandt, Allan. The Cigarette Century (2009)
Crawford, Robert. But Wait, There's More!: A History of Australian Advertising, 1900–2000 (2008)
Ewen, Stuart. Captains of Consciousness: Advertising and the Social Roots of Consumer Culture. New York: McGraw-Hill, 1976.
Fox, Stephen R. The mirror makers: A history of American advertising and its creators (University of Illinois Press, 1984)
Friedman, Walter A. Birth of a Salesman (Harvard University Press, 2005), In the United States
Jacobson, Lisa. Raising consumers: Children and the American mass market in the early twentieth century (Columbia University Press, 2013)
Jamieson, Kathleen Hall. Packaging the presidency: A history and criticism of presidential campaign advertising (Oxford University Press, 1996)
Laird, Pamela Walker. Advertising progress: American business and the rise of consumer marketing (Johns Hopkins University Press, 2001.)
Lears, Jackson. Fables of abundance: A cultural history of advertising in America (1995)
Liguori, Maria Chiara. "North and South: Advertising Prosperity in the Italian Economic Boom Years." Advertising & Society Review (2015) 15#4
Meyers, Cynthia B. A Word from Our Sponsor: Admen, Advertising, and the Golden Age of Radio (2014)
Mazzarella, William. Shoveling smoke: Advertising and globalization in contemporary India (Duke University Press, 2003)
Moriarty, Sandra, et al. Advertising: Principles and practice (Pearson Australia, 2014), Australian perspectives
Nevett, Terence R. Advertising in Britain: a history (1982)
Oram, Hugh. The advertising book: The history of advertising in Ireland (MOL Books, 1986)
Presbrey, Frank. "The history and development of advertising." Advertising & Society Review (2000) 1#1 online
Saunders, Thomas J. "Selling under the Swastika: Advertising and Commercial Culture in Nazi Germany." German History (2014): ghu058.
Short, John Phillip. "Advertising Empire: Race and Visual Culture in Imperial Germany." Enterprise and Society (2014): khu013.
Sivulka, Juliann. Soap, sex, and cigarettes: A cultural history of American advertising (Cengage Learning, 2011)
Spring, Dawn. "The Globalization of American Advertising and Brand Management: A Brief History of the J. Walter Thompson Company, Proctor and Gamble, and US Foreign Policy." Global Studies Journal (2013). 5#4
Stephenson, Harry Edward, and Carlton McNaught. The Story of Advertising in Canada: A Chronicle of Fifty Years (Ryerson Press, 1940)
Tungate, Mark. Adland: a global history of advertising (Kogan Page Publishers, 2007.)
West, Darrell M. Air Wars: Television Advertising and Social Media in Election Campaigns, 1952–2012 (Sage, 2013)
External links
Hartman Center for Sales, Advertising & Marketing History at Duke University
Duke University Libraries Digital Collections:
Ad*Access, over 7,000 U.S. and Canadian advertisements, dated 1911–1955, includes World War II propaganda.
Emergence of Advertising in America, 9,000 advertising items and publications dating from 1850 to 1940, illustrating the rise of consumer culture and the birth of a professionalized advertising industry in the United States.
AdViews, vintage television commercials
ROAD 2.0, 30,000 outdoor advertising images
Medicine & Madison Avenue, documents advertising of medical and pharmaceutical products
Art & Copy, a 2009 documentary film about the advertising industry
https://en.wikipedia.org/wiki/AI-complete
In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems, assuming intelligence is computational, is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI. To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.
AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem.
Currently, AI-complete problems cannot be solved with modern computer technology alone, but would also require human computation. This property could be useful, for example, to test for the presence of humans as CAPTCHAs aim to do, and for computer security to circumvent brute-force attacks.
History
The term was coined by Fanya Montalvo by analogy with NP-complete and NP-hard in complexity theory, which formally describes the most famous class of difficult problems. Early uses of the term are in Erik Mueller's 1987 PhD dissertation and in Eric Raymond's 1991 Jargon File.
AI-complete problems
AI-complete problems are hypothesized to include:
AI peer review (composite natural language understanding, automated reasoning, automated theorem proving, formalized logic expert system)
Bongard problems
Computer vision (and subproblems such as object recognition)
Natural language understanding (and subproblems such as text mining, machine translation, and word-sense disambiguation)
Autonomous driving
Dealing with unexpected circumstances while solving any real world problem, whether it's navigation or planning or even the kind of reasoning done by expert systems.
Machine translation
To translate accurately, a machine must be able to understand the text. It must be able to follow the author's argument, so it must have some ability to reason. It must have extensive world knowledge so that it knows what is being discussed — it must at least be familiar with all the same commonsense facts that the average human translator knows. Some of this knowledge is in the form of facts that can be explicitly represented, but some knowledge is unconscious and closely tied to the human body: for example, the machine may need to understand how an ocean makes one feel to accurately translate a specific metaphor in the text. It must also model the authors' goals, intentions, and emotional states to accurately reproduce them in a new language. In short, the machine is required to have a wide variety of human intellectual skills, including reason, commonsense knowledge and the intuitions that underlie motion and manipulation, perception, and social intelligence. Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.
Software brittleness
Current AI systems can solve very simple and/or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempt to "scale up" their systems to handle more complicated, real-world situations, the programs tend to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they fail as unexpected circumstances outside of their original problem context begin to appear. When human beings are dealing with new situations in the world, they are helped immensely by the fact that they know what to expect: they know what all things around them are, why they are there, what they are likely to do and so on. They can recognize unusual situations and adjust accordingly. A machine without strong AI has no other skills to fall back on.
DeepMind published a work in May 2022 in which they trained a single model to do several things at the same time. The model, named Gato, can "play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens."
Formalization
Computational complexity theory deals with the relative computational difficulty of computable functions. By definition, it does not cover problems whose solution is unknown or has not been characterised formally. Since many AI problems have no formalisation yet, conventional complexity theory does not allow the definition of AI-completeness.
To address this problem, a complexity theory for AI has been proposed. It is based on a model of computation that splits the computational burden between a computer and a human: one part is solved by computer and the other part solved by human. This is formalised by a human-assisted Turing machine. The formalisation defines algorithm complexity, problem complexity and reducibility which in turn allows equivalence classes to be defined.
The complexity of executing an algorithm with a human-assisted Turing machine is given by a pair ⟨Φ_H, Φ_M⟩, where the first element, Φ_H, represents the complexity of the human's part and the second element, Φ_M, is the complexity of the machine's part.
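As a loose, informal illustration of this split (not the formal model itself), the sketch below lets a machine handle the cases it is confident about and defers the rest to a human "oracle", reporting the two workloads separately in the spirit of the ⟨human, machine⟩ pair. All names, data, and the confidence threshold are hypothetical.

```python
# Toy illustration of a human-assisted computation: the machine part handles
# easy cases, the human oracle handles the rest, and the two costs are tracked
# separately, loosely mirroring the <human, machine> complexity pair.
def classify_images(images, human_label):
    """`human_label` stands in for the human oracle: a callable labelling one image.

    Returns (labels, human_calls, machine_steps)."""
    labels, human_calls, machine_steps = [], 0, 0
    for img in images:
        machine_steps += 1
        if img["confidence"] >= 0.95:          # machine is confident: no human needed
            labels.append(img["machine_guess"])
        else:                                  # defer the hard case to the human
            human_calls += 1
            labels.append(human_label(img))
    return labels, human_calls, machine_steps

# Hypothetical data and oracle.
imgs = [{"machine_guess": "cat", "confidence": 0.99},
        {"machine_guess": "dog", "confidence": 0.40}]
print(classify_images(imgs, human_label=lambda img: "dog"))
```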
Results
The complexity of solving the following problems with a human-assisted Turing machine is:
Optical character recognition for printed text:
Turing test:
for an n-sentence conversation where the oracle remembers the conversation history (persistent oracle):
for an n-sentence conversation where the conversation history must be retransmitted:
for an n-sentence conversation where the conversation history must be retransmitted and the person takes linear time to read the query:
ESP game:
Image labelling (based on the Arthur–Merlin protocol):
Image classification: human only: , and with less reliance on the human: .
See also
ASR-complete
List of unsolved problems in computer science
Synthetic intelligence
Practopoiesis
References
https://en.wikipedia.org/wiki/Archaeoastronomy
Archaeoastronomy (also spelled archeoastronomy) is the interdisciplinary or multidisciplinary study of how people in the past "have understood the phenomena in the sky, how they used these phenomena and what role the sky played in their cultures". Clive Ruggles argues it is misleading to consider archaeoastronomy to be the study of ancient astronomy, as modern astronomy is a scientific discipline, while archaeoastronomy considers symbolically rich cultural interpretations of phenomena in the sky by other cultures. It is often twinned with ethnoastronomy, the anthropological study of skywatching in contemporary societies. Archaeoastronomy is also closely associated with historical astronomy, the use of historical records of heavenly events to answer astronomical problems and the history of astronomy, which uses written records to evaluate past astronomical practice.
Archaeoastronomy uses a variety of methods to uncover evidence of past practices including archaeology, anthropology, astronomy, statistics and probability, and history. Because these methods are diverse and use data from such different sources, integrating them into a coherent argument has been a long-term difficulty for archaeoastronomers. Archaeoastronomy fills complementary niches in landscape archaeology and cognitive archaeology. Material evidence and its connection to the sky can reveal how a wider landscape can be integrated into beliefs about the cycles of nature, such as Mayan astronomy and its relationship with agriculture. Other examples which have brought together ideas of cognition and landscape include studies of the cosmic order embedded in the roads of settlements.
Archaeoastronomy can be applied to all cultures and all time periods. The meanings of the sky vary from culture to culture; nevertheless there are scientific methods which can be applied across cultures when examining ancient beliefs. It is perhaps the need to balance the social and scientific aspects of archaeoastronomy which led Clive Ruggles to describe it as "a field with academic work of high quality at one end but uncontrolled speculation bordering on lunacy at the other".
History
Two hundred years before John Michell wrote on the subject, there were no archaeoastronomers and there were no professional archaeologists, but there were astronomers and antiquarians. Some of their works are considered precursors of archaeoastronomy; antiquarians interpreted the astronomical orientation of the ruins that dotted the English countryside, as William Stukeley did of Stonehenge in 1740, while John Aubrey in 1678 and Henry Chauncy in 1700 sought similar astronomical principles underlying the orientation of churches. Late in the nineteenth century astronomers such as Richard Proctor and Charles Piazzi Smyth investigated the astronomical orientations of the pyramids.
The term archaeoastronomy was advanced by Elizabeth Chesley Baity (following the suggestion of Euan MacKie) in 1973, but as a topic of study it may be much older, depending on how archaeoastronomy is defined. Clive Ruggles says that Heinrich Nissen, working in the mid-nineteenth century was arguably the first archaeoastronomer. Rolf Sinclair says that Norman Lockyer, working in the late 19th and early 20th centuries, could be called the 'father of archaeoastronomy'. Euan MacKie would place the origin even later, stating: "...the genesis and modern flowering of archaeoastronomy must surely lie in the work of Alexander Thom in Britain between the 1930s and the 1970s".
In the 1960s the work of the engineer Alexander Thom and that of the astronomer Gerald Hawkins, who proposed that Stonehenge was a Neolithic computer, inspired new interest in the astronomical features of ancient sites. The claims of Hawkins were largely dismissed, but this was not the case for Alexander Thom's work, whose surveys of megalithic sites led him to hypothesize a widespread practice of accurate astronomy in the British Isles. Euan MacKie, recognizing that Thom's theories needed to be tested, excavated at the Kintraw standing stone site in Argyllshire in 1970 and 1971 to check whether the latter's prediction of an observation platform on the hill slope above the stone was correct. There was an artificial platform there, and this apparent verification of Thom's long alignment hypothesis (Kintraw was diagnosed as an accurate winter solstice site) led him to check Thom's geometrical theories at the Cultoon stone circle in Islay, also with a positive result. MacKie therefore broadly accepted Thom's conclusions and published new prehistories of Britain. In contrast, a re-evaluation of Thom's fieldwork by Clive Ruggles argued that Thom's claims of high-accuracy astronomy were not fully supported by the evidence. Nevertheless, Thom's legacy remains strong. Edwin C. Krupp wrote in 1979, "Almost singlehandedly he has established the standards for archaeo-astronomical fieldwork and interpretation, and his amazing results have stirred controversy during the last three decades." His influence endures, and the practice of statistical testing of data remains one of the methods of archaeoastronomy.
The approach in the New World, where anthropologists began to consider more fully the role of astronomy in Amerindian civilizations, was markedly different. They had access to sources that the prehistory of Europe lacks such as ethnographies and the historical records of the early colonizers. Following the pioneering example of Anthony Aveni, this allowed New World archaeoastronomers to make claims for motives which in the Old World would have been mere speculation. The concentration on historical data led to some claims of high accuracy that were comparatively weak when compared to the statistically led investigations in Europe.
This came to a head at a meeting sponsored by the International Astronomical Union (IAU) in Oxford in 1981. The methodologies and research questions of the participants were considered so different that the conference proceedings were published as two volumes. Nevertheless, the conference was considered a success in bringing researchers together, and Oxford conferences have continued every four or five years at locations around the world. The subsequent conferences have resulted in a move to more interdisciplinary approaches, with researchers aiming to draw on the contextuality of archaeological research. This broadly describes the state of archaeoastronomy today: rather than merely establishing the existence of ancient astronomies, archaeoastronomers seek to explain why people would have an interest in the night sky.
Relations to other disciplines
Archaeoastronomy has long been seen as an interdisciplinary field that uses written and unwritten evidence to study the astronomies of other cultures. As such, it can be seen as connecting other disciplinary approaches for investigating ancient astronomy: astroarchaeology (an obsolete term for studies that draw astronomical information from the alignments of ancient architecture and landscapes), history of astronomy (which deals primarily with the written textual evidence), and ethnoastronomy (which draws on the ethnohistorical record and contemporary ethnographic studies).
Reflecting Archaeoastronomy's development as an interdisciplinary subject, research in the field is conducted by investigators trained in a wide range of disciplines. Authors of recent doctoral dissertations have described their work as concerned with the fields of archaeology and cultural anthropology; with various fields of history including the history of specific regions and periods, the history of science and the history of religion; and with the relation of astronomy to art, literature and religion. Only rarely did they describe their work as astronomical, and then only as a secondary category.
Both practicing archaeoastronomers and observers of the discipline approach it from different perspectives. Some researchers relate archaeoastronomy to the history of science, either as it relates to a culture's observations of nature and the conceptual framework they devised to impose an order on those observations, or as it relates to the political motives which drove particular historical actors to deploy certain astronomical concepts or techniques. Art historian Richard Poss took a more flexible approach, maintaining that the astronomical rock art of the North American Southwest should be read employing "the hermeneutic traditions of western art history and art criticism". Astronomers, however, raise different questions, seeking to provide their students with identifiable precursors of their discipline, and are especially concerned with the important question of how to confirm that specific sites are, indeed, intentionally astronomical.
The reactions of professional archaeologists to archaeoastronomy have been decidedly mixed. Some expressed incomprehension or even hostility, varying from a rejection by the archaeological mainstream of what they saw as an archaeoastronomical fringe to an incomprehension between the cultural focus of archaeologists and the quantitative focus of early archaeoastronomers. Yet archaeologists have increasingly come to incorporate many of the insights from archaeoastronomy into archaeology textbooks and, as mentioned above, some students wrote archaeology dissertations on archaeoastronomical topics.
Since archaeoastronomers disagree so widely on the characterization of the discipline, they even dispute its name. All three major international scholarly associations relate archaeoastronomy to the study of culture, using the term Astronomy in Culture or a translation. Michael Hoskin sees an important part of the discipline as fact-collecting, rather than theorizing, and proposed to label this aspect of the discipline Archaeotopography. Ruggles and Saunders proposed Cultural Astronomy as a unifying term for the various methods of studying folk astronomies. Others have argued that astronomy is an inaccurate term, since what are being studied are cosmologies, and people who object to the use of logos have suggested adopting the Spanish cosmovisión.
When debates polarise between techniques, the methods are often referred to by a colour code, based on the colours of the bindings of the two volumes from the first Oxford Conference, where the approaches were first distinguished. Green (Old World) archaeoastronomers rely heavily on statistics and are sometimes accused of missing the cultural context of what is a social practice. Brown (New World) archaeoastronomers in contrast have abundant ethnographic and historical evidence and have been described as 'cavalier' on matters of measurement and statistical analysis. Finding a way to integrate various approaches has been a subject of much discussion since the early 1990s.
Methodology
There is no one way to do archaeoastronomy. The divisions between archaeoastronomers tend not to be between the physical scientists and the social scientists. Instead, it tends to depend on the location and/or kind of data available to the researcher. In the Old World, there is little data but the sites themselves; in the New World, the sites were supplemented by ethnographic and historic data. The effects of the isolated development of archaeoastronomy in different places can still often be seen in research today. Research methods can be classified as falling into one of two approaches, though more recent projects often use techniques from both categories.
Green archaeoastronomy
Green archaeoastronomy is named after the cover of the book Archaeoastronomy in the Old World. It is based primarily on statistics and is particularly apt for prehistoric sites where the social evidence is relatively scant compared to the historic period. The basic methods were developed by Alexander Thom during his extensive surveys of British megalithic sites.
Thom wished to examine whether or not prehistoric peoples used high-accuracy astronomy. He believed that by using horizon astronomy, observers could make estimates of dates in the year to a specific day. The observation required finding a place where on a specific date the Sun set into a notch on the horizon. A common theme is a mountain that blocked the Sun, but on the right day would allow the tiniest fraction to re-emerge on the other side for a 'double sunset'. At such a site, the sunset on the day before the summer solstice and the sunset at the summer solstice itself would look different, with only the solstice sunset producing the double sunset.
To test this idea he surveyed hundreds of stone rows and circles. Any individual alignment could indicate a direction by chance, but he planned to show that together the distribution of alignments was non-random, showing that there was an astronomical intent to the orientation of at least some of the alignments. His results indicated the existence of eight, sixteen, or perhaps even thirty-two approximately equal divisions of the year. The two solstices, the two equinoxes and the four cross-quarter days (days halfway between a solstice and an equinox) were associated with the medieval Celtic calendar. While not all these conclusions have been accepted, his work has had an enduring influence on archaeoastronomy, especially in Europe.
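The statistical core of this approach can be illustrated with a small simulation: given declinations measured at a group of sites and a set of candidate astronomical targets, how often would randomly oriented monuments fit the targets as well as the real ones do? The sketch below is a generic Monte Carlo illustration under simplified assumptions (uniform random declinations and a fixed tolerance); it is not a reconstruction of Thom's own procedure, and the survey values are invented.

```python
# Monte Carlo sketch of testing whether alignments cluster on astronomical targets.
# Simplification: random sites are drawn uniformly in declination; a fuller analysis
# would randomize azimuths and convert them to declinations site by site.
import random

TARGETS = [-23.9, 23.9, 0.0, -28.6, 28.6, -18.3, 18.3]  # solstices, equinox, lunar limits (deg)
TOLERANCE = 1.0                                          # assumed measurement tolerance (deg)

def hits(declinations):
    """Count alignments falling within TOLERANCE of any target declination."""
    return sum(any(abs(d - t) <= TOLERANCE for t in TARGETS) for d in declinations)

def p_value(declinations, trials=10_000, dec_range=(-35.0, 35.0)):
    """Chance of randomly oriented sites matching the targets at least as well."""
    observed = hits(declinations)
    at_least = 0
    for _ in range(trials):
        simulated = [random.uniform(*dec_range) for _ in range(len(declinations))]
        if hits(simulated) >= observed:
            at_least += 1
    return observed, at_least / trials

# Hypothetical survey of eight alignments.
print(p_value([23.8, -24.1, 0.3, 17.9, 12.5, -5.0, 24.0, -23.5]))
```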
Euan MacKie has supported Thom's analysis, to which he added an archaeological context by comparing Neolithic Britain to the Mayan civilization to argue for a stratified society in this period. To test his ideas he conducted a couple of excavations at proposed prehistoric observatories in Scotland. Kintraw is a site notable for its four-meter high standing stone. Thom proposed that this was a foresight to a point on the distant horizon between Beinn Shianaidh and Beinn o'Chaolias on Jura. This, Thom argued, was a notch on the horizon where a double sunset would occur at midwinter. However, from ground level, this sunset would be obscured by a ridge in the landscape, and the viewer would need to be raised by two meters: another observation platform was needed. This was identified across a gorge where a platform was formed from small stones. The lack of artifacts caused concern for some archaeologists and the petrofabric analysis was inconclusive, but further research at Maes Howe and on the Bush Barrow Lozenge led MacKie to conclude that while the term 'science' may be anachronistic, Thom was broadly correct upon the subject of high-accuracy alignments.
In contrast Clive Ruggles has argued that there are problems with the selection of data in Thom's surveys. Others have noted that the accuracy of horizon astronomy is limited by variations in refraction near the horizon. A deeper criticism of Green archaeoastronomy is that while it can answer whether there was likely to be an interest in astronomy in past times, its lack of a social element means that it struggles to answer why people would be interested, which makes it of limited use to people asking questions about the society of the past. Keith Kintigh wrote: "To put it bluntly, in many cases it doesn't matter much to the progress of anthropology whether a particular archaeoastronomical claim is right or wrong because the information doesn't inform the current interpretive questions." Nonetheless, the study of alignments remains a staple of archaeoastronomical research, especially in Europe.
Brown archaeoastronomy
In contrast to the largely alignment-oriented statistically led methods of green archaeoastronomy, brown archaeoastronomy has been identified as being closer to the history of astronomy or to cultural history, insofar as it draws on historical and ethnographic records to enrich its understanding of early astronomies and their relations to calendars and ritual. The many records of native customs and beliefs made by Spanish chroniclers and ethnographic researchers means that brown archaeoastronomy is often associated with studies of astronomy in the Americas.
One famous site where historical records have been used to interpret sites is Chichen Itza. Rather than analyzing the site and seeing which targets appear popular, archaeoastronomers have instead examined the ethnographic records to see what features of the sky were important to the Mayans and then sought archaeological correlates. One example which could have been overlooked without historical records is the Mayan interest in the planet Venus. This interest is attested to by the Dresden codex which contains tables with information about Venus's appearances in the sky. These cycles would have been of astrological and ritual significance as Venus was associated with Quetzalcoatl or Xolotl. Associations of architectural features with settings of Venus can be found in Chichen Itza, Uxmal, and probably some other Mesoamerican sites.
The Temple of the Warriors bears iconography depicting feathered serpents associated with Quetzalcoatl or Kukulcan. This means that the building's alignment towards the place on the horizon where Venus first appears in the evening sky (when it coincides with the rainy season) may be meaningful. However, since both the date and the azimuth of this event change continuously, a solar interpretation of this orientation is much more likely.
Aveni claims that the Caracol is another building at Chichen Itza associated with the planet Venus, in the form of Kukulcan, and with the rainy season. This is a building with a circular tower and doors facing the cardinal directions. The base faces the most northerly setting of Venus. Additionally, the pillars of a stylobate on the building's upper platform were painted black and red, colours associated with Venus as an evening and morning star. However, the windows in the tower seem to have been little more than slots, making them poor at letting light in but providing a suitable place from which to view out. In their discussion of the credibility of archaeoastronomical sites, Cotte and Ruggles considered that the interpretation of the Caracol as an observatory site was debated among specialists, meeting the second of their four levels of site credibility.
Aveni states that one of the strengths of the brown methodology is that it can explore astronomies invisible to statistical analysis, and offers the astronomy of the Incas as another example. The empire of the Incas was conceptually divided using ceques, radial routes emanating from the capital at Cusco. Thus there are alignments in all directions, which would suggest there is little of astronomical significance. However, ethnohistorical records show that the various directions do have cosmological and astronomical significance, with various points in the landscape being significant at different times of the year. In eastern Asia archaeoastronomy has developed from the history of astronomy, and much archaeoastronomy is searching for material correlates of the historical record. This is due to the rich historical record of astronomical phenomena which, in China, stretches back into the Han dynasty, in the second century BC.
A criticism of this method is that it can be statistically weak. Schaefer in particular has questioned how robust the claimed alignments in the Caracol are. Because of the wide variety of evidence, which can include artefacts as well as sites, there is no one way to practice archaeoastronomy. Despite this it is accepted that archaeoastronomy is not a discipline that sits in isolation. Because archaeoastronomy is an interdisciplinary field, whatever is being investigated should make sense both archaeologically and astronomically. Studies are more likely to be considered sound if they use theoretical tools found in archaeology like analogy and homology and if they can demonstrate an understanding of accuracy and precision found in astronomy. Both quantitative analyses and interpretations based on ethnographic analogies and other contextual evidence have recently been applied in systematic studies of architectural orientations in the Maya area and in other parts of Mesoamerica.
Source materials
Because archaeoastronomy is about the many and various ways people interacted with the sky, there are a diverse range of sources giving information about astronomical practices.
Alignments
A common source of data for archaeoastronomy is the study of alignments. This is based on the assumption that the axis of alignment of an archaeological site is meaningfully oriented towards an astronomical target. Brown archaeoastronomers may justify this assumption through reading historical or ethnographic sources, while green archaeoastronomers tend to prove that alignments are unlikely to be selected by chance, usually by demonstrating common patterns of alignment at multiple sites.
An alignment is calculated by measuring the azimuth, the angle from north, of the structure and the altitude of the horizon it faces. The azimuth is usually measured using a theodolite or a compass. A compass is easier to use, though the deviation of the Earth's magnetic field from true north, known as its magnetic declination, must be taken into account. Compasses are also unreliable in areas prone to magnetic interference, such as sites being supported by scaffolding. Additionally, a compass can only measure the azimuth to a precision of half a degree.
A theodolite can be considerably more accurate if used correctly, but it is also considerably more difficult to use correctly. There is no inherent way to align a theodolite with North and so the scale has to be calibrated using astronomical observation, usually the position of the Sun. Because the position of celestial bodies changes with the time of day due to the Earth's rotation, the time of these calibration observations must be accurately known, or else there will be a systematic error in the measurements. Horizon altitudes can be measured with a theodolite or a clinometer.
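For the compass case, the correction described above is simple arithmetic once the local magnetic declination for the survey date is known (for example, from a geomagnetic model). The sketch below shows the bookkeeping; the readings are hypothetical.

```python
# Convert a compass (magnetic) bearing to a true azimuth.
# Convention: magnetic declination is positive when magnetic north lies east of
# true north, so it is added to the compass reading.
def true_azimuth(compass_bearing_deg, magnetic_declination_deg):
    return (compass_bearing_deg + magnetic_declination_deg) % 360.0

# Hypothetical reading: compass shows 118.5 deg where the declination is 3.2 deg west.
print(true_azimuth(118.5, -3.2))  # 115.3 deg true
```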
Artifacts
For artifacts such as the Sky Disc of Nebra, alleged to be a Bronze Age artefact depicting the cosmos, the analysis would be similar to typical post-excavation analysis as used in other sub-disciplines in archaeology. An artefact is examined and attempts are made to draw analogies with historical or ethnographical records of other peoples. The more parallels that can be found, the more likely an explanation is to be accepted by other archaeologists.
A more mundane example is the presence of astrological symbols found on some shoes and sandals from the Roman Empire. The use of shoes and sandals is well known, but Carol van Driel-Murray has proposed that astrological symbols etched onto sandals gave the footwear spiritual or medicinal meanings. This is supported through citation of other known uses of astrological symbols and their connection to medical practice and with the historical records of the time.
Another well-known artefact with an astronomical use is the Antikythera mechanism. In this case analysis of the artefact, and reference to the description of similar devices described by Cicero, would indicate a plausible use for the device. The argument is bolstered by the presence of symbols on the mechanism, allowing the disc to be read.
Art and inscriptions
Art and inscriptions may not be confined to artefacts, but also appear painted or inscribed on an archaeological site. Sometimes inscriptions are helpful enough to give instructions for a site's use. For example, a Greek inscription on a stele (from Itanos) has been translated as: "Patron set this up for Zeus Epopsios. Winter solstice. Should anyone wish to know: off 'the little pig' and the stele the sun turns." From Mesoamerica come Mayan and Aztec codices. These are folding books made from amatl, processed tree bark, on which are glyphs in Mayan or Aztec script. The Dresden codex contains information regarding the Venus cycle, confirming its importance to the Mayans.
More problematic are those cases where the movement of the Sun at different times and seasons causes light and shadow interactions with petroglyphs. A widely known example is the Sun Dagger of Fajada Butte at which a glint of sunlight passes over a spiral petroglyph. The location of a dagger of light on the petroglyph varies throughout the year. At the summer solstice a dagger can be seen through the heart of the spiral; at the winter solstice two daggers appear to either side of it. It is proposed that this petroglyph was created to mark these events. Recent studies have identified many similar sites in the US Southwest and Northwestern Mexico. It has been argued that the number of solstitial markers at these sites provides statistical evidence that they were intended to mark the solstices. The Sun Dagger site on Fajada Butte in Chaco Canyon, New Mexico, stands out for its explicit light markings that record all the key events of both the solar and lunar cycles: summer solstice, winter solstice, equinox, and the major and minor lunar standstills of the Moon's 18.6 year cycle. In addition at two other sites on Fajada Butte, there are five light markings on petroglyphs recording the summer and winter solstices, equinox and solar noon. Numerous buildings and interbuilding alignments of the great houses of Chaco Canyon and outlying areas are oriented to the same solar and lunar directions that are marked at the Sun Dagger site.
If no ethnographic or historical data are found which can support this assertion, then acceptance of the idea relies upon whether or not there are enough petroglyph sites in North America that such a correlation could occur by chance. It is helpful when petroglyphs are associated with existing peoples. This allows ethnoastronomers to question informants as to the meaning of such symbols.
Ethnographies
As well as the materials left by peoples themselves, there are also the reports of others who have encountered them. The historical records of the Conquistadores are a rich source of information about the pre-Columbian Americans. Ethnographers also provide material about many other peoples.
Aveni uses the importance of zenith passages as an example of the importance of ethnography. For peoples living between the tropics of Cancer and Capricorn there are two days of the year when the noon Sun passes directly overhead and casts no shadow. In parts of Mesoamerica this was considered a significant day as it would herald the arrival of rains, and so play a part in the cycle of agriculture. This knowledge is still considered important amongst Mayan Indians living in Central America today. The ethnographic records suggested to archaeoastronomers that this day may have been important to the ancient Mayans. There are also shafts known as 'zenith tubes', found at places like Monte Albán and Xochicalco, which illuminate subterranean rooms when the Sun passes overhead. It is only through the ethnography that we can speculate that the timing of the illumination was considered important in Mayan society. Alignments to the sunrise and sunset on the day of the zenith passage have been claimed to exist at several sites. However, it has been shown that, since there are very few orientations that can be related to these phenomena, they likely have different explanations.
Ethnographies also caution against over-interpretation of sites. At a site in Chaco Canyon can be found a pictograph with a star, crescent and hand. It has been argued by some astronomers that this is a record of the 1054 supernova. However, recent re-examinations of related 'supernova petroglyphs' raise questions about such sites in general. Cotte and Ruggles used the supernova petroglyph as an example of a completely refuted site, and anthropological evidence suggests other interpretations. The Zuni people, who claim a strong ancestral affiliation with Chaco, marked their sun-watching station with a crescent, star, hand and sundisc, similar to those found at the Chaco site.
Ethnoastronomy is also an important field outside of the Americas. For example, anthropological work with Aboriginal Australians is producing much information about their Indigenous astronomies and about their interaction with the modern world.
Recreating the ancient sky
Once the researcher has data to test, it is often necessary to attempt to recreate ancient sky conditions to place the data in its historical environment.
Declination
To calculate what astronomical features a structure faced, a coordinate system is needed. The stars provide such a system. On a clear night the stars can be observed spinning around the celestial pole. This point is at +90° of declination when observing the North Celestial Pole, or −90° when observing the South Celestial Pole. The concentric circles the stars trace out are lines of celestial latitude, known as declination. The arc connecting the points on the horizon due east and due west (if the horizon is flat), and all points midway between the Celestial Poles, is the Celestial Equator, which has a declination of 0°. The visible declinations vary depending on where you are on the globe. Only an observer on the North Pole of Earth would be unable to see any stars from the Southern Celestial Hemisphere at night. Once a declination has been found for the point on the horizon that a building faces, it is then possible to say whether a specific body can be seen in that direction.
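The conversion from a measured orientation to a declination can be written compactly using the standard spherical-astronomy relation sin δ = sin φ · sin h + cos φ · cos h · cos A, where A is the true azimuth measured from north, h the horizon altitude and φ the observer's latitude. The sketch below applies it directly; it ignores atmospheric refraction, which a real survey would correct for, and the example values are hypothetical.

```python
# Declination of the point where a sightline meets the horizon,
# from true azimuth (deg from north), horizon altitude and observer latitude.
from math import radians, degrees, sin, cos, asin

def declination(azimuth_deg, horizon_alt_deg, latitude_deg):
    A, h, phi = map(radians, (azimuth_deg, horizon_alt_deg, latitude_deg))
    return degrees(asin(sin(phi) * sin(h) + cos(phi) * cos(h) * cos(A)))

# Hypothetical alignment: azimuth 139 deg, horizon altitude 1.5 deg, latitude 56 N.
print(round(declination(139.0, 1.5, 56.0), 1))  # about -23.6 deg, close to the midwinter Sun
```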
Solar positioning
While the stars are fixed to their declinations the Sun is not. The rising point of the Sun varies throughout the year. It swings between two limits marked by the solstices a bit like a pendulum, slowing as it reaches the extremes, but passing rapidly through the midpoint. If an archaeoastronomer can calculate from the azimuth and horizon height that a site was built to view a declination of +23.5° then he or she need not wait until 21 June to confirm the site does indeed face the summer solstice. For more information see History of solar observation.
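As a rough guide to how the Sun's declination swings between those limits over the year, the sketch below uses a common first-order approximation (accurate to about a degree, and ignoring the slow change of the obliquity over the millennia); it is illustrative rather than a tool for precise work.

```python
# Approximate solar declination (deg) for a given day of the year (1-365).
from math import cos, pi

def solar_declination(day_of_year):
    return -23.44 * cos(2 * pi * (day_of_year + 10) / 365)

for day, label in [(172, "June solstice"), (355, "December solstice"), (80, "March equinox")]:
    print(f"{label}: {solar_declination(day):+.1f} deg")
```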
Lunar positioning
The Moon's appearance is considerably more complex. Its motion, like that of the Sun, is between two limits, known as lunistices rather than solstices. However, its travel between lunistices is considerably faster. It takes a sidereal month to complete its cycle rather than the year-long trek of the Sun. This is further complicated as the lunistices marking the limits of the Moon's movement move on an 18.6-year cycle. For slightly over nine years the extreme limits of the Moon are outside the range of sunrise. For the remaining half of the cycle the Moon never exceeds the limits of the range of sunrise. However, much lunar observation was concerned with the phase of the Moon. The cycle from one New Moon to the next runs on an entirely different cycle, the synodic month. Thus when examining sites for lunar significance the data can appear sparse due to the extremely variable nature of the Moon. See Moon for more details.
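The declination limits of that 18.6-year cycle follow from two well-known angles, the obliquity of the ecliptic (about 23.4°) and the inclination of the Moon's orbit (about 5.1°); the sketch below simply combines them, ignoring smaller perturbations.

```python
# Approximate lunar standstill limits: the Moon's orbital inclination adds to or
# subtracts from the obliquity of the ecliptic.
OBLIQUITY = 23.44          # deg
LUNAR_INCLINATION = 5.14   # deg

major_limit = OBLIQUITY + LUNAR_INCLINATION  # reached at a major standstill
minor_limit = OBLIQUITY - LUNAR_INCLINATION  # reached at a minor standstill
print(f"major standstill: +/-{major_limit:.1f} deg, minor standstill: +/-{minor_limit:.1f} deg")
```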
Stellar positioning
Finally there is often a need to correct for the apparent movement of the stars. On the timescale of human civilisation the stars have largely maintained the same position relative to each other. Each night they appear to rotate around the celestial poles due to the Earth's rotation about its axis. However, the Earth spins rather like a spinning top: not only does the Earth rotate, it wobbles. The Earth's axis takes around 25,800 years to complete one full wobble. The effect for the archaeoastronomer is that stars did not rise over the horizon in the past in the same places as they do today. Nor did the stars rotate around Polaris as they do now. In the case of the Egyptian pyramids, it has been shown that they were aligned towards Thuban, a faint star in the constellation of Draco. The effect can be substantial over relatively short lengths of time, historically speaking. For instance a person born on 25 December in Roman times would have been born with the Sun in the constellation Capricorn. In the modern period a person born on the same date would have the Sun in Sagittarius due to the precession of the equinoxes.
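The size of the effect is easy to estimate from the roughly 25,800-year period quoted above: the equinoxes drift along the ecliptic by about one degree every 72 years. The back-of-the-envelope sketch below shows why roughly two millennia are enough to move the 25 December Sun from Capricorn into Sagittarius.

```python
# Approximate drift of the equinoxes along the ecliptic due to precession.
PRECESSION_PERIOD_YEARS = 25_800

def precession_shift_deg(years):
    return 360.0 * years / PRECESSION_PERIOD_YEARS

# Roughly 2,000 years since Roman times: close to one 30-degree zodiacal sign.
print(round(precession_shift_deg(2000), 1))  # about 27.9 deg
```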
Transient phenomena
Additionally there are often transient phenomena, events which do not happen on an annual cycle. Most predictable are events like eclipses. In the case of solar eclipses these can be used to date events in the past. A solar eclipse mentioned by Herodotus enables us to date a battle between the Medes and the Lydians, which was broken off following the eclipse, to 28 May 585 BC. Other easily calculated events are supernovae whose remains are visible to astronomers and therefore their positions and magnitude can be accurately calculated.
Some comets are predictable, most famously Halley's Comet. Yet as a class of object they remain unpredictable and can appear at any time. Some have extremely lengthy orbital periods which means their past appearances and returns cannot be predicted. Others may have only ever passed through the Solar System once and so are inherently unpredictable.
Meteor showers should be predictable, but some meteors are cometary debris and so require calculations of orbits which are currently impossible to complete. Other events noted by ancients include aurorae, sun dogs and rainbows all of which are as impossible to predict as the ancient weather, but nevertheless may have been considered important phenomena.
Major topics of archaeoastronomical research
The use of calendars
A common justification for the need for astronomy is the need to develop an accurate calendar for agricultural reasons. Ancient texts like Hesiod's Works and Days, an ancient farming manual, would appear to partially confirm this: astronomical observations are used in combination with ecological signs, such as bird migrations, to determine the seasons. Ethnoastronomical studies of the Hopi of the southwestern United States indicate that they carefully observed the rising and setting positions of the Sun to determine the proper times to plant crops. However, ethnoastronomical work with the Mursi of Ethiopia shows that their luni-solar calendar was somewhat haphazard, indicating the limits of astronomical calendars in some societies. All the same, calendars appear to be an almost universal phenomenon in societies as they provide tools for the regulation of communal activities.
One such example is the Tzolk'in calendar of 260 days. Together with the 365-day year, it was used in pre-Columbian Mesoamerica, forming part of a comprehensive calendrical system, which combined a series of astronomical observations and ritual cycles. Archaeoastronomical studies throughout Mesoamerica have shown that the orientations of most structures refer to the Sun and were used in combination with the 260-day cycle for scheduling agricultural activities and the accompanying rituals. The distribution of dates and intervals marked by orientations of monumental ceremonial complexes in the area along the southern Gulf Coast in Mexico, dated to about 1100 to 700 BCE, represents the earliest evidence of the use of this cycle.
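The 260-day count and the 365-day year run in step only over their least common multiple, the roughly 52-year period known as the Calendar Round; the short sketch below shows the arithmetic.

```python
# How long until a given pairing of 260-day and 365-day dates recurs.
from math import gcd

TZOLKIN = 260
HAAB = 365

calendar_round_days = TZOLKIN * HAAB // gcd(TZOLKIN, HAAB)
print(calendar_round_days)                  # 18,980 days
print(round(calendar_round_days / 365, 1))  # 52.0 years of 365 days
```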
Other peculiar calendars include ancient Greek calendars. These were nominally lunar, starting with the New Moon. In reality the calendar could pause or skip days, with confused citizens inscribing dates by both the civic calendar and ton theoi, by the moon. The lack of any universal calendar for ancient Greece suggests that coordination of panhellenic events such as games or rituals could be difficult and that astronomical symbolism may have been used as a politically neutral form of timekeeping. Orientation measurements in Greek temples and Byzantine churches have been associated with the deity's name day, festivities, and special events.
Myth and cosmology
Another motive for studying the sky is to understand and explain the universe. In these cultures myth was a tool for achieving this, and the explanations, while not reflecting the standards of modern science, are cosmologies.
The Incas arranged their empire to demonstrate their cosmology. The capital, Cusco, was at the centre of the empire and connected to it by means of ceques, conceptually straight lines radiating out from the centre. These ceques connected the centre of the empire to the four suyus, which were regions defined by their direction from Cusco. The notion of a quartered cosmos is common across the Andes. Gary Urton, who has conducted fieldwork in the Andean villagers of Misminay, has connected this quartering with the appearance of the Milky Way in the night sky. In one season it will bisect the sky and in another bisect it in a perpendicular fashion.
The importance of observing cosmological factors is also seen on the other side of the world. The Forbidden City in Beijing is laid out to follow cosmic order, though rather than observing four directions, the Chinese system was composed of five: North, South, East, West and Centre. The Forbidden City occupied the centre of ancient Beijing. One approaches the Emperor from the south, thus placing him in front of the circumpolar stars. This creates the situation of the heavens revolving around the person of the Emperor. Chinese cosmology is now better known through its export as feng shui.
There is also much information about how the universe was thought to work stored in the mythology of the constellations. The Barasana of the Amazon plan part of their annual cycle based on observation of the stars. When their constellation of the Caterpillar-Jaguar (roughly equivalent to the modern Scorpius) falls they prepare to catch the pupating caterpillars of the forest as they fall from the trees. The caterpillars provide food at a season when other foods are scarce.
A better-known source of constellation myths is the texts of the Greeks and Romans. The origin of their constellations remains a matter of vigorous and occasionally fractious debate.
The loss of one of the sisters, Merope, in some Greek myths may reflect an astronomical event wherein one of the stars in the Pleiades disappeared from view by the naked eye.
Giorgio de Santillana, professor of the History of Science in the School of Humanities at the Massachusetts Institute of Technology, along with Hertha von Dechend believed that the old mythological stories handed down from antiquity were not random fictitious tales but were accurate depictions of celestial cosmology clothed in tales to aid their oral transmission. The chaos, monsters and violence in ancient myths are representative of the forces that shape each age. They believed that ancient myths are the remains of preliterate astronomy that became lost with the rise of the Greco-Roman civilization. Santillana and von Dechend in their book Hamlet's Mill, An Essay on Myth and the Frame of Time (1969) clearly state that ancient myths have no historical or factual basis other than a cosmological one encoding astronomical phenomena, especially the precession of the equinoxes. Santillana and von Dechend's approach is not widely accepted.
Displays of power
By including celestial motifs in clothing it becomes possible for the wearer to claim that their power on Earth is drawn from above. It has been said that the Shield of Achilles described by Homer is also a catalogue of constellations. In North America shields depicted in Comanche petroglyphs appear to include Venus symbolism.
Solsticial alignments also can be seen as displays of power. When viewed from a ceremonial plaza on the Island of the Sun (the mythical origin place of the Sun) in Lake Titicaca, the Sun was seen to rise at the June solstice between two towers on a nearby ridge. The sacred part of the island was separated from the remainder of it by a stone wall and ethnographic records indicate that access to the sacred space was restricted to members of the Inca ruling elite. Ordinary pilgrims stood on a platform outside the ceremonial area to see the solstice Sun rise between the towers.
In Egypt the temple of Amun-Re at Karnak has been the subject of much study. Evaluation of the site, taking into account the change over time of the obliquity of the ecliptic, shows that the Great Temple was aligned on the rising of the midwinter Sun. The length of the corridor down which sunlight would travel would have limited illumination at other times of the year.
In a later period the Serapeum of Alexandria was also said to have contained a solar alignment so that, on a specific sunrise, a shaft of light would pass across the lips of the statue of Serapis thus symbolising the Sun saluting the god.
Major sites of archaeoastronomical interest
Clive Ruggles and Michel Cotte recently edited a book on heritage sites of astronomy and archaeoastronomy which discussed a worldwide sample of astronomical and archaeoastronomical sites and provided criteria for the classification of archaeoastronomical sites.
Newgrange
Newgrange is a passage tomb in the Republic of Ireland dating from around 3,300 to 2,900 BC. For a few days around the winter solstice, light shines along the central passageway into the heart of the tomb. What makes this notable is not that light shines in the passageway, but that it does not do so through the main entrance. Instead it enters via a hollow box above the main doorway, discovered by Michael O'Kelly. It is this roofbox which strongly indicates that the tomb was built with an astronomical aspect in mind. In their discussion of the credibility of archaeoastronomical sites, Cotte and Ruggles gave Newgrange as an example of a Generally accepted site, the highest of their four levels of credibility.
Egypt
Since the first modern measurements of the precise cardinal orientations of the pyramids by Flinders Petrie, various astronomical methods have been proposed for the original establishment of these orientations. It was recently proposed that this was done by observing the positions of two stars in the Plough / Big Dipper which was known to Egyptians as the thigh. It is thought that a vertical alignment between these two stars checked with a plumb bob was used to ascertain where north lay. The deviations from true north using this model reflect the accepted dates of construction.
Some have argued that the pyramids were laid out as a map of the three stars in the belt of Orion, although this theory has been criticized by reputable astronomers. The site was instead probably governed by a spectacular hierophany which occurs at the summer solstice, when the Sun, viewed from the Sphinx terrace, forms—together with the two giant pyramids—the symbol Akhet, which was also the name of the Great Pyramid. Further, the south east corners of all the three pyramids align towards the temple of Heliopolis, as first discovered by the Egyptologist Mark Lehner.
The astronomical ceiling of the tomb of Senenmut (BC) contains the Celestial Diagram depicting circumpolar constellations in the form of discs. Each disc is divided into 24 sections suggesting a 24-hour time period. Constellations are portrayed as sacred deities of Egypt. The observation of lunar cycles is also evident.
El Castillo
El Castillo, also known as Kukulcán's Pyramid, is a Mesoamerican step-pyramid built at the centre of the Maya city of Chichen Itza in Mexico. Several architectural features have suggested astronomical elements. Each of the stairways built into the sides of the pyramid has 91 steps. Along with the extra one for the platform at the top, this totals 365 steps, possibly one for each day of the year (365.25 days) or the number of lunar orbits in 10,000 rotations (365.01).
A visually striking effect is seen every March and September as an unusual shadow occurs around the equinoxes. Light and shadow phenomena have been proposed to explain a possible architectural hierophany involving the sun at Chichén Itzá in a Maya Toltec structure dating to about 1000 CE. A shadow appears to descend the west balustrade of the northern stairway. The visual effect is of a serpent descending the stairway, with its head at the base in light. Additionally the western face points to sunset around 25 May, traditionally the date of transition from the dry to the rainy season. The intended alignment was, however, likely incorporated in the northern (main) facade of the temple, as it corresponds to sunsets on May 20 and July 24, recorded also by the central axis of Castillo at Tulum. The two dates are separated by 65 and 300 days, and it has been shown that the solar orientations in Mesoamerica regularly correspond to dates separated by calendrically significant intervals (multiples of 13 and 20 days): 65 = 5 × 13 and 300 = 15 × 20, and together the two intervals complete the 365-day year. In their discussion of the credibility of archaeoastronomical sites, Cotte and Ruggles used the "equinox hierophany" at Chichén Itzá as an example of an Unproven site, the third of their four levels of credibility.
Stonehenge
Many astronomical alignments have been claimed for Stonehenge, a complex of megaliths and earthworks in the Salisbury Plain of England. The most famous of these is the midsummer alignment, where the Sun rises over the Heel Stone. However, this interpretation has been challenged by some archaeologists who argue that the midwinter alignment, where the viewer is outside Stonehenge and sees the Sun setting in the henge, is the more significant alignment, and the midsummer alignment may be a coincidence due to local topography. In their discussion of the credibility of archaeoastronomical sites, Cotte and Ruggles gave Stonehenge as an example of a Generally accepted site, the highest of their four levels of credibility.
As well as solar alignments, there are proposed lunar alignments. The four station stones mark out a rectangle. The short sides point towards the midsummer sunrise and midwinter sunset. The long sides, if viewed towards the south-east, face the most southerly rising of the Moon. Aveni notes that these lunar alignments have never gained the acceptance that the solar alignments have received. The azimuth of the Heel Stone is one-seventh of a circle (about 51.4°), matching the latitude of Avebury, while the present-day summer solstice sunrise azimuth no longer equals the direction it had in the era of construction.
Maeshowe
This is an architecturally outstanding Neolithic chambered tomb on the mainland of Orkney, Scotland, probably dating to the early 3rd millennium BC, where the setting Sun at midwinter shines down the entrance passage into the central chamber (see Newgrange). In the 1990s further investigations were carried out to discover whether this was an accurate or an approximate solar alignment. Several new aspects of the site were discovered. In the first place the entrance passage faces the hills of the island of Hoy, about 10 miles away. Secondly, it consists of two straight lengths, angled at a few degrees to each other. Thirdly, the outer part is aligned towards the midwinter sunset position on a level horizon just to the left of Ward Hill on Hoy. Fourthly, the inner part points directly at the Barnhouse standing stone about 400 m away and then to the right end of the summit of Ward Hill, just before it dips down to the notch between it and Cuilags to the right. This indicated line points to sunset on the first sixteenth of the solar year (according to A. Thom) before and after the winter solstice, and the notch at the base of the right slope of the hill is at the same declination. Fifthly, a similar 'double sunset' phenomenon is seen at the right end of Cuilags, also on Hoy; here the date is the first eighth of the year before and after the winter solstice, at the beginning of November and February respectively, the Old Celtic festivals of Samhain and Imbolc. This alignment is not indicated by an artificial structure but gains plausibility from the other two indicated lines. Maeshowe is thus an extremely sophisticated calendar site which must have been positioned carefully in order to use the horizon foresights in the ways described.
Uxmal
Uxmal is a Mayan city in the Puuc Hills of Yucatán Peninsula, Mexico. The Governor's Palace at Uxmal is often used as an exemplar of why it is important to combine ethnographic and alignment data. The palace is aligned with an azimuth of 118° on the pyramid of Cehtzuc. This alignment corresponds approximately to the southernmost rising and, with a much greater precision, to the northernmost setting of Venus; both phenomena occur once every eight years. By itself this would not be sufficient to argue for a meaningful connection between the two events. The palace has to be aligned in one direction or another and why should the rising of Venus be any more important than the rising of the Sun, Moon, other planets, Sirius et cetera? The answer given is that not only does the palace point towards significant points of Venus, it is also covered in glyphs which stand for Venus and Mayan zodiacal constellations. Moreover, the great northerly extremes of Venus always occur in late April or early May, coinciding with the onset of the rainy season. The Venus glyphs placed in the cheeks of the Maya rain god Chac, most likely referring to the concomitance of these phenomena, support the west-working orientation scheme.
Chaco Canyon
In Chaco Canyon, the center of the ancient Pueblo culture in the American Southwest, numerous solar and lunar light markings and architectural and road alignments have been documented. These findings date to the 1977 discovery of the Sun Dagger site by Anna Sofaer. Three large stone slabs leaning against a cliff channel light and shadow markings onto two spiral petroglyphs on the cliff wall, marking the solstices, equinoxes and the lunar standstills of the 18.6 year cycle of the moon. Subsequent research by the Solstice Project and others demonstrated that numerous building and interbuilding alignments of the great houses of Chaco Canyon are oriented to solar, lunar and cardinal directions. In addition, research shows that the Great North Road, a thirty-five mile engineered "road", was constructed not for utilitarian purposes but rather to connect the ceremonial center of Chaco Canyon with the direction north.
Lascaux Cave
In recent years, new research has suggested that the Lascaux cave paintings in France may incorporate prehistoric star charts. Michael Rappenglueck of the University of Munich argues that some of the non-figurative dot clusters and dots within some of the figurative images correlate with the constellations of Taurus, the Pleiades and the grouping known as the "Summer Triangle". Based on her own study of the astronomical significance of Bronze Age petroglyphs in the Vallée des Merveilles and her extensive survey of other prehistoric cave painting sites in the region—most of which appear to have been selected because the interiors are illuminated by the setting Sun on the day of the winter solstice—French researcher Chantal Jègues-Wolkiewiez has further proposed that the gallery of figurative images in the Great Hall represents an extensive star map and that key points on major figures in the group correspond to stars in the main constellations as they appeared in the Paleolithic. Applying phylogenetics to myths of the Cosmic Hunt, Julien d'Huy suggested that the Palaeolithic version of this story could be the following: there is an animal that is a horned herbivore, especially an elk; one human pursues this ungulate; the hunt relocates to or reaches the sky; the animal is alive when it is transformed into a constellation; it forms the Big Dipper. This story may be represented in the famous Lascaux shaft 'scene'.
Fringe archaeoastronomy
Archaeoastronomy owes something of this poor reputation among scholars to its occasional misuse to advance a range of pseudo-historical accounts. During the 1930s, Otto S. Reuter compiled a study entitled Germanische Himmelskunde, or "Teutonic Skylore". The astronomical orientations of ancient monuments claimed by Reuter and his followers would place the ancient Germanic peoples ahead of the Ancient Near East in the field of astronomy, demonstrating the intellectual superiority of the "Aryans" (Indo-Europeans) over the Semites.
More recently Gallagher, Pyle, and Fell interpreted inscriptions in West Virginia as a description in Celtic Ogham alphabet of the supposed winter solstitial marker at the site. The controversial translation was supposedly validated by a problematic archaeoastronomical indication in which the winter solstice Sun shone on an inscription of the Sun at the site. Subsequent analyses criticized its cultural inappropriateness, as well as its linguistic and archaeoastronomical claims, to describe it as an example of "cult archaeology".
Archaeoastronomy is sometimes related to the fringe discipline of Archaeocryptography, when its followers attempt to find underlying mathematical orders beneath the proportions, size, and placement of archaeoastronomical sites such as Stonehenge and the Pyramid of Kukulcán at Chichen Itza.
India
Since the 19th century, numerous scholars have sought to use archaeoastronomical calculations to demonstrate the antiquity of Ancient Indian Vedic culture, computing the dates of astronomical observations ambiguously described in ancient poetry to as early as 4000 BC. David Pingree, a historian of Indian astronomy, condemned "the scholars who perpetrate wild theories of prehistoric science and call themselves archaeoastronomers."
Organisations and publications
There are currently three academic organisations for scholars of archaeoastronomy. ISAAC – the International Society for Archaeoastronomy and Astronomy in Culture – was founded in 1995 and now sponsors the Oxford conferences and Archaeoastronomy – the Journal of Astronomy in Culture. SEAC – La Société Européenne pour l'Astronomie dans la Culture – is slightly older; it was created in 1992. SEAC holds annual conferences in Europe and publishes refereed conference proceedings on an annual basis. There is also SIAC – La Sociedad Interamericana de Astronomía en la Cultura – primarily a Latin American organisation, which was founded in 2003. In 2009, the Society for Cultural Astronomy in the American Southwest (SCAAS) was founded, a regional organisation focusing on the astronomies of the native peoples of the Southwestern United States; it has since held seven meetings and workshops. Two new organisations focused on regional archaeoastronomy were founded in 2013: ASIA – the Australian Society for Indigenous Astronomy in Australia and SMART – the Society of Māori Astronomy Research and Traditions in New Zealand. Additionally, in 2017, the Romanian Society for Cultural Astronomy was founded. It holds an annual international conference and has published the first monograph on archaeo- and ethnoastronomy in Romania (2019).
Additionally, the Journal for the History of Astronomy publishes many archaeoastronomical papers. For twenty-seven volumes (from 1979 to 2002) it published an annual supplement, Archaeoastronomy. The Journal of Astronomical History and Heritage (National Astronomical Research Institute of Thailand), Culture & Cosmos (University of Wales, UK) and Mediterranean Archaeology and Archaeometry (University of the Aegean, Greece) also publish papers on archaeoastronomy.
Various national archaeoastronomical projects have been undertaken. Among them is the program at the Tata Institute of Fundamental Research named "Archaeo Astronomy in Indian Context" that has made interesting findings in this field.
See also
References
Citations
Bibliography
Šprajc, Ivan (2015). Governor's Palace at Uxmal. In: Handbook of Archaeoastronomy and Ethnoastronomy, ed. by Clive L. N. Ruggles, New York: Springer, pp. 773–81
Šprajc, Ivan, and Pedro Francisco Sánchez Nava (2013). Astronomía en la arquitectura de Chichén Itzá: una reevaluación. Estudios de Cultura Maya XLI: 31–60.
Further reading
External links
Astronomy before History - A chapter from The Cambridge Concise History of Astronomy, Michael Hoskin ed., 1999.
Clive Ruggles: images, bibliography, software, and synopsis of his course at the University of Leicester.
Traditions of the Sun – NASA and others exploring the world's ancient observatories.
Ancient Observatories: Timeless Knowledge NASA Poster on ancient (and modern) observatories.
Astronomy is the most ancient of the sciences. (About Kazakh folk astronomy)
Ancient astronomy
Astronomical sub-disciplines
Archaeological sub-disciplines
Traditional knowledge
|
https://en.wikipedia.org/wiki/Ammeter
|
An ammeter (abbreviation of Ampere meter) is an instrument used to measure the current in a circuit. Electric currents are measured in amperes (A), hence the name. For direct measurement, the ammeter is connected in series with the circuit in which the current is to be measured. An ammeter usually has low resistance so that it does not cause a significant voltage drop in the circuit being measured.
Instruments used to measure smaller currents, in the milliampere or microampere range, are designated as milliammeters or microammeters. Early ammeters were laboratory instruments that relied on the Earth's magnetic field for operation. By the late 19th century, improved instruments were designed which could be mounted in any position and allowed accurate measurements in electric power systems. An ammeter is generally represented by the letter 'A' in a circuit diagram.
History
The relation between electric current, magnetic fields and physical forces was first noted by Hans Christian Ørsted in 1820, who observed that a compass needle was deflected from pointing north when a current flowed in an adjacent wire. The tangent galvanometer was used to measure currents using this effect, where the restoring force returning the pointer to the zero position was provided by the Earth's magnetic field. This made these instruments usable only when aligned with the Earth's field. Sensitivity of the instrument was increased by using additional turns of wire to multiply the effect – such instruments were called "multipliers".
The word rheoscope, as a detector of electrical currents, was coined by Sir Charles Wheatstone about 1840 but is no longer used to describe electrical instruments. The word's makeup is similar to that of rheostat (also coined by Wheatstone), which was a device used to adjust the current in a circuit. Rheostat is a historical term for a variable resistance, though unlike rheoscope it may still be encountered.
Types
Some instruments are panel meters, meant to be mounted on some sort of control panel. Of these, the flat, horizontal or vertical type is often called an edgewise meter.
Moving-coil
The D'Arsonval galvanometer is a moving coil ammeter. It uses magnetic deflection, where current passing through a coil placed in the magnetic field of a permanent magnet causes the coil to move. The modern form of this instrument was developed by Edward Weston, and uses two spiral springs to provide the restoring force. The uniform air gap between the iron core and the permanent magnet poles makes the deflection of the meter linearly proportional to current. These meters have linear scales. Basic meter movements can have full-scale deflection for currents from about 25 microamperes to 10 milliamperes.
Because the magnetic field is polarised, the meter needle acts in opposite directions for each direction of current. A DC ammeter is thus sensitive to the polarity with which it is connected; most are marked with a positive terminal, but some have centre-zero mechanisms and can display currents in either direction. A moving-coil meter indicates the average (mean) of a varying current through it, which is zero for AC. For this reason, moving-coil meters are only usable directly for DC, not AC.
This type of meter movement is extremely common for both ammeters and other meters derived from them, such as voltmeters and ohmmeters.
Moving magnet
Moving magnet ammeters operate on essentially the same principle as moving coil, except that the coil is mounted in the meter case, and a permanent magnet moves the needle. Moving magnet ammeters are able to carry larger currents than moving coil instruments, often several tens of amperes, because the coil can be made of thicker wire and the current does not have to be carried by the hairsprings. Indeed, some ammeters of this type do not have hairsprings at all, instead using a fixed permanent magnet to provide the restoring force.
Electrodynamic
An electrodynamic ammeter uses an electromagnet instead of the permanent magnet of the d'Arsonval movement. This instrument can respond to both alternating and direct current and also indicates true RMS for AC. See Wattmeter for an alternative use for this instrument.
Moving-iron
Moving iron ammeters use a piece of iron which moves when acted upon by the electromagnetic force of a fixed coil of wire. The moving-iron meter was invented by Austrian engineer Friedrich Drexler in 1884.
This type of meter responds to both direct and alternating currents (as opposed to the moving-coil ammeter, which works on direct current only). The iron element consists of a moving vane attached to a pointer, and a fixed vane, surrounded by a coil. As alternating or direct current flows through the coil and induces a magnetic field in both vanes, the vanes repel each other and the moving vane deflects against the restoring force provided by fine helical springs. The deflection of a moving iron meter is proportional to the square of the current. Consequently, such meters would normally have a nonlinear scale, but the iron parts are usually modified in shape to make the scale fairly linear over most of its range. Moving iron instruments indicate the RMS value of any AC waveform applied. Moving iron ammeters are commonly used to measure current in industrial frequency AC circuits.
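Because the deflection follows the square of the instantaneous current, the needle effectively averages i², and the square root of that average is by definition the RMS value. A minimal numerical sketch of this reasoning, using an arbitrary example waveform rather than the characteristics of any particular meter:

```python
import numpy as np

# Minimal sketch: why a square-law (moving-iron) movement indicates RMS.
# The deflection tracks the time-average of i(t)^2; its square root is the RMS value.
# The waveform amplitude below is an arbitrary illustrative choice.
I_peak = 2.0                                        # amperes
t = np.linspace(0.0, 1.0, 10_000, endpoint=False)   # one period of a 1 Hz sine
i = I_peak * np.sin(2 * np.pi * t)

mean_square = np.mean(i**2)          # what a square-law movement effectively averages
rms_from_meter = np.sqrt(mean_square)
rms_expected = I_peak / np.sqrt(2)   # textbook RMS of a sine wave

print(f"indicated RMS ~ {rms_from_meter:.4f} A, expected {rms_expected:.4f} A")
```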
Hot-wire
In a hot-wire ammeter, a current passes through a wire which expands as it heats. Although these instruments have slow response time and low accuracy, they were sometimes used in measuring radio-frequency current.
These also measure true RMS for an applied AC.
Digital
In much the same way as the analogue ammeter formed the basis for a wide variety of derived meters, including voltmeters, the basic mechanism for a digital meter is a digital voltmeter mechanism, and other types of meter are built around this.
Digital ammeter designs use a shunt resistor to produce a calibrated voltage proportional to the current flowing. This voltage is then measured by a digital voltmeter, through use of an analog-to-digital converter (ADC); the digital display is calibrated to display the current through the shunt. Such instruments are often calibrated to indicate the RMS value for a sine wave only, but many designs will indicate true RMS within limitations of the wave crest factor.
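A minimal sketch of the conversion chain just described, with a hypothetical shunt resistance, ADC resolution and reference voltage chosen purely for illustration rather than taken from any specific instrument:

```python
def adc_code_to_current(code: int,
                        n_bits: int = 12,        # assumed ADC resolution
                        v_ref: float = 0.256,    # volts, assumed ADC full-scale input
                        r_shunt: float = 0.010): # ohms, assumed shunt resistance
    """Convert a raw ADC reading of the shunt voltage into a current.

    The shunt drops V = I * r_shunt, the ADC digitises that voltage, and the
    display scales it back to amperes.  All values here are illustrative.
    """
    v_shunt = (code / (2**n_bits - 1)) * v_ref   # reconstructed shunt voltage
    return v_shunt / r_shunt                     # Ohm's law: I = V / R

# Example: a mid-scale reading on the assumed 12-bit converter.
print(f"{adc_code_to_current(2048):.2f} A")
```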
Integrating
There is also a range of devices referred to as integrating ammeters.
In these ammeters the current is summed over time, giving as a result the product of current and time, which is proportional to the electrical charge transferred with that current. These can be used for metering energy (the charge needs to be multiplied by the voltage to give energy) or for estimating the charge of a battery or capacitor.
Picoammeter
A picoammeter, or pico ammeter, measures very low electric current, usually from the picoampere range at the lower end to the milliampere range at the upper end. Picoammeters are used where the current being measured is below the limits of sensitivity of other devices, such as multimeters.
Most picoammeters use a "virtual short" technique and have several different measurement ranges that must be switched between to cover multiple decades of measurement. Other modern picoammeters use log compression and a "current sink" method that eliminates range switching and associated voltage spikes.
Special design and usage considerations, such as special insulators and driven shields, must be observed in order to reduce leakage currents which may otherwise swamp the measurement. Triaxial cable is often used for probe connections.
Application
Ammeters must be connected in series with the circuit to be measured. For relatively small currents (up to a few amperes), an ammeter may pass the whole of the circuit current. For larger direct currents, a shunt resistor carries most of the circuit current and a small, accurately-known fraction of the current passes through the meter movement. For alternating current circuits, a current transformer may be used to provide a convenient small current to drive an instrument, such as 1 or 5 amperes, while the primary current to be measured is much larger (up to thousands of amperes). The use of a shunt or current transformer also allows convenient location of the indicating meter without the need to run heavy circuit conductors up to the point of observation. In the case of alternating current, the use of a current transformer also isolates the meter from the high voltage of the primary circuit. A shunt provides no such isolation for a direct-current ammeter, but where high voltages are used it may be possible to place the ammeter in the "return" side of the circuit which may be at low potential with respect to earth.
Ammeters must not be connected directly across a voltage source since their internal resistance is very low and excess current would flow. Ammeters are designed for a low voltage drop across their terminals, much less than one volt; the extra circuit losses produced by the ammeter are called its "burden" on the measured circuit.
Ordinary Weston-type meter movements can measure only milliamperes at most, because the springs and practical coils can carry only limited currents. To measure larger currents, a resistor called a shunt is placed in parallel with the meter. The resistance of shunts is in the integer to fractional milliohm range. Nearly all of the current flows through the shunt, and only a small fraction flows through the meter. This allows the meter to measure large currents. Traditionally, the meter used with a shunt has a small full-scale voltage drop, so shunts are designed to produce the same small voltage drop when carrying their full rated current.
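A small sketch of the shunt calculation implied above: at full scale the shunt must drop the same voltage as the movement while carrying the remainder of the measured current. The movement parameters are illustrative only (they match the 50 mV, 1000 ohm example used for the Ayrton shunt below):

```python
def shunt_resistance(i_range: float, i_fsd: float, r_meter: float) -> float:
    """Resistance of a parallel shunt so that a movement with full-scale
    current i_fsd and coil resistance r_meter reads full scale at i_range.

    The shunt sees the same voltage as the movement (i_fsd * r_meter) while
    carrying the rest of the current (i_range - i_fsd).
    """
    v_fsd = i_fsd * r_meter
    return v_fsd / (i_range - i_fsd)

# Illustrative values only: a 50 uA, 1000 ohm movement extended to read 10 A.
print(f"{shunt_resistance(10.0, 50e-6, 1000.0) * 1000:.3f} milliohm")
```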
To make a multi-range ammeter, a selector switch can be used to connect one of a number of shunts across the meter. It must be a make-before-break switch to avoid damaging current surges through the meter movement when switching ranges.
A better arrangement is the Ayrton shunt or universal shunt, invented by William E. Ayrton, which does not require a make-before-break switch. It also avoids any inaccuracy because of contact resistance. Assuming, for example, a movement with a full-scale voltage of 50 mV and desired current ranges of 10 mA, 100 mA, and 1 A, the resistance values would be: R1 = 4.5 ohms, R2 = 0.45 ohm, R3 = 0.05 ohm. And if the movement resistance is 1000 ohms, for example, R1 must be adjusted to 4.525 ohms.
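The quoted figures can be reproduced with a short sketch: the simplified design divides the movement's full-scale voltage by each range current, and the adjustment of R1 follows from putting the whole shunt in parallel with the 1000 ohm movement on the most sensitive range. This is a back-of-the-envelope check, not a design procedure for a real instrument:

```python
# Simplified design step: tap resistance = full-scale voltage / range current.
v_fs = 0.050                  # volts, movement full-scale voltage from the example
ranges = [0.010, 0.100, 1.0]  # amperes: 10 mA, 100 mA, 1 A

taps = [v_fs / i for i in ranges]    # 5.0, 0.5, 0.05 ohm
r3 = taps[2]                         # 0.05 ohm
r2 = taps[1] - taps[2]               # 0.45 ohm
r1 = taps[0] - taps[1]               # 4.5 ohm
print(round(r1, 3), round(r2, 3), round(r3, 3))

# Correction for a movement of finite resistance (1000 ohm -> 50 uA full scale):
# on the 10 mA range the whole shunt is in parallel with the movement, so
# (i_range - i_m) * r_total = i_m * r_meter.
i_m, r_meter = 50e-6, 1000.0
r_total = i_m * r_meter / (ranges[0] - i_m)   # about 5.025 ohm
r1_adjusted = r_total - (r2 + r3)             # about 4.525 ohm, as stated above
print(round(r1_adjusted, 3))
```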
Switched shunts are rarely used for currents above 10 amperes.
Zero-center ammeters are used for applications requiring current to be measured with both polarities, common in scientific and industrial equipment. Zero-center ammeters are also commonly placed in series with a battery. In this application, the charging of the battery deflects the needle to one side of the scale (commonly, the right side) and the discharging of the battery deflects the needle to the other side. A special type of zero-center ammeter for testing high currents in cars and trucks has a pivoted bar magnet that moves the pointer, and a fixed bar magnet to keep the pointer centered with no current. The magnetic field around the wire carrying current to be measured deflects the moving magnet.
Since the ammeter shunt has a very low resistance, mistakenly wiring the ammeter in parallel with a voltage source will cause a short circuit, at best blowing a fuse, possibly damaging the instrument and wiring, and exposing an observer to injury.
In AC circuits, a current transformer can be used to convert the large current in the main circuit into a smaller current more suited to a meter. Some designs of transformer are able to directly convert the magnetic field around a conductor into a small, standardized AC current at full rated current that can be easily read by a meter. In a similar way, accurate AC/DC non-contact ammeters have been constructed using Hall effect magnetic field sensors. A portable hand-held clamp-on ammeter is a common tool for maintenance of industrial and commercial electrical equipment, which is temporarily clipped over a wire to measure current. Some recent types have a parallel pair of magnetically soft probes that are placed on either side of the conductor.
See also
Clamp meter
Class of accuracy in electrical measurements
Electric circuit
Electrical measurements
Electrical current#Measurement
Electronics
List of electronics topics
Measurement category
Multimeter
Ohmmeter
Rheoscope
Voltmeter
Notes
References
External links
— from Lessons in Electric Circuits series main page
Electrical meters
Electronic test equipment
Flow meters
|
https://en.wikipedia.org/wiki/Amoxicillin
|
Amoxicillin is an antibiotic medication belonging to the aminopenicillin class of the penicillin family. The drug is used to treat bacterial infections such as middle ear infection, strep throat, pneumonia, skin infections, odontogenic infections, and urinary tract infections. It is taken by mouth, or less commonly by injection.
Common adverse effects include nausea and rash. It may also increase the risk of yeast infections and, when used in combination with clavulanic acid, diarrhea. It should not be used in those who are allergic to penicillin. While usable in those with kidney problems, the dose may need to be decreased. Its use in pregnancy and breastfeeding does not appear to be harmful. Amoxicillin is in the β-lactam family of antibiotics.
Amoxicillin was discovered in 1958 and came into medical use in 1972. Amoxil was approved for medical use in the United States in 1974, and in the United Kingdom in 1977. It is on the World Health Organization's (WHO) List of Essential Medicines. It is one of the most commonly prescribed antibiotics in children. Amoxicillin is available as a generic medication. In 2020, it was the 40th most commonly prescribed medication in the United States, with more than 15 million prescriptions.
Medical uses
Amoxicillin is used in the treatment of a number of infections, including acute otitis media, streptococcal pharyngitis, pneumonia, skin infections, urinary tract infections, Salmonella infections, Lyme disease, and chlamydia infections.
Acute otitis media
Children with acute otitis media who are younger than six months of age are generally treated with amoxicillin or other antibiotics. Although most children with acute otitis media who are older than two years old do not benefit from treatment with amoxicillin or other antibiotics, such treatment may be helpful in children younger than two years old with acute otitis media that is bilateral or accompanied by ear drainage. In the past, amoxicillin was dosed three times daily when used to treat acute otitis media, which resulted in missed doses in routine ambulatory practice. There is now evidence that two times daily dosing or once daily dosing has similar effectiveness.
Respiratory infections
Amoxicillin and amoxicillin-clavulanate have been recommended by guidelines as the drug of choice for bacterial sinusitis and other respiratory infections. Most sinusitis infections are caused by viruses, for which amoxicillin and amoxicillin-clavulanate are ineffective, and the small benefit gained by amoxicillin may be overridden by the adverse effects.
Amoxicillin is recommended as the preferred first-line treatment for community-acquired pneumonia in adults by the National Institute for Health and Care Excellence, either alone (mild to moderate severity disease) or in combination with a macrolide. The World Health Organization (WHO) recommends amoxicillin as first-line treatment for pneumonia that is not "severe". Amoxicillin is used in post-exposure inhalation of anthrax to prevent disease progression and for prophylaxis.
H. pylori
It is effective as one part of a multi-drug regimen for treatment of stomach infections of Helicobacter pylori. It is typically combined with a proton-pump inhibitor (such as omeprazole) and a macrolide antibiotic (such as clarithromycin); other drug combinations are also effective.
Lyme borreliosis
Amoxicillin is effective for treatment of early cutaneous Lyme borreliosis; the effectiveness and safety of oral amoxicillin is neither better nor worse than common alternatively-used antibiotics.
Odontogenic infections
Amoxicillin is used to treat odontogenic infections, infections of the tongue, lips, and other oral tissues. It may be prescribed following a tooth extraction, particularly in those with compromised immune systems.
Skin infections
Amoxicillin is occasionally used for the treatment of skin infections, such as acne vulgaris. It is often an effective treatment for cases of acne vulgaris that have responded poorly to other antibiotics, such as doxycycline and minocycline.
Infections in infants in resource-limited settings
Amoxicillin is recommended by the World Health Organization for the treatment of infants with signs and symptoms of pneumonia in resource-limited situations when the parents are unable or unwilling to accept hospitalization of the child. Amoxicillin in combination with gentamicin is recommended for the treatment of infants with signs of other severe infections when hospitalization is not an option.
Prevention of bacterial endocarditis
It is also used to prevent bacterial endocarditis in high-risk people having dental work done, to prevent Streptococcus pneumoniae and other encapsulated bacterial infections in those without spleens, such as people with sickle-cell disease, and for both the prevention and the treatment of anthrax. The United Kingdom recommends against its use for infectious endocarditis prophylaxis. These recommendations do not appear to have changed the rates of infection for infectious endocarditis.
Combination treatment
Amoxicillin is susceptible to degradation by β-lactamase-producing bacteria, which are resistant to most β-lactam antibiotics, such as penicillin. For this reason, it may be combined with clavulanic acid, a β-lactamase inhibitor. This drug combination is commonly called co-amoxiclav.
Spectrum of activity
It is a moderate-spectrum, bacteriolytic, β-lactam antibiotic in the aminopenicillin family used to treat susceptible Gram-positive and Gram-negative bacteria. It is usually the drug of choice within the class because it is better-absorbed, following oral administration, than other β-lactam antibiotics.
In general, Streptococcus, Bacillus subtilis, Enterococcus, Haemophilus, Helicobacter, and Moraxella are susceptible to amoxicillin, whereas Citrobacter, Klebsiella and Pseudomonas aeruginosa are resistant to it. Some E. coli and most clinical strains of Staphylococcus aureus have developed resistance to amoxicillin to varying degrees.
Adverse effects
Adverse effects are similar to those for other β-lactam antibiotics, including nausea, vomiting, rashes, and antibiotic-associated colitis. Loose bowel movements (diarrhea) may also occur. Rarer adverse effects include mental changes, lightheadedness, insomnia, confusion, anxiety, sensitivity to lights and sounds, and unclear thinking. Immediate medical care is required upon the first signs of these adverse effects.
The onset of an allergic reaction to amoxicillin can be very sudden and intense; emergency medical attention must be sought as quickly as possible. The initial phase of such a reaction often starts with a change in mental state, skin rash with intense itching (often beginning in fingertips and around groin area and rapidly spreading), and sensations of fever, nausea, and vomiting. Any other symptoms that seem even remotely suspicious must be taken very seriously. However, more mild allergy symptoms, such as a rash, can occur at any time during treatment, even up to a week after treatment has ceased. For some people allergic to amoxicillin, the adverse effects can be fatal due to anaphylaxis.
Use of the amoxicillin/clavulanic acid combination for more than one week has caused a drug-induced immunoallergic-type hepatitis in some patients. Young children having ingested acute overdoses of amoxicillin manifested lethargy, vomiting, and renal dysfunction.
There is poor reporting of adverse effects of amoxicillin from clinical trials. For this reason, the severity and frequency of adverse effects from amoxicillin is probably higher than reported from clinical trials.
Nonallergic rash
Between 3 and 10% of children taking amoxicillin (or ampicillin) show a late-developing (>72 hours after beginning medication and having never taken penicillin-like medication previously) rash, which is sometimes referred to as the "amoxicillin rash". The rash can also occur in adults and may rarely be a component of the DRESS syndrome.
The rash is described as maculopapular or morbilliform (measles-like; therefore, in medical literature, it is called an "amoxicillin-induced morbilliform rash"). It starts on the trunk and can spread from there. This rash is unlikely to be a true allergic reaction and is not a contraindication for future amoxicillin usage, nor should the current regimen necessarily be stopped. However, this common amoxicillin rash and a dangerous allergic reaction cannot easily be distinguished by inexperienced persons, so a healthcare professional is often required to distinguish between the two.
A nonallergic amoxicillin rash may also be an indicator of infectious mononucleosis. Some studies indicate about 80–90% of patients with acute Epstein–Barr virus infection treated with amoxicillin or ampicillin develop such a rash.
Interactions
Amoxicillin may interact with these drugs:
Anticoagulants (dabigatran, warfarin).
Methotrexate (chemotherapy and immunosuppressant).
Typhoid, Cholera and BCG vaccines.
Probenecid reduces renal excretion and increases blood levels of amoxicillin.
Oral contraceptives potentially become less effective.
Allopurinol (gout treatment).
Mycophenolate (immunosuppressant)
Pharmacology
Amoxicillin (α-amino-p-hydroxybenzyl penicillin) is a semisynthetic derivative of penicillin with a structure similar to ampicillin but with better absorption when taken by mouth, thus yielding higher concentrations in blood and in urine. Amoxicillin diffuses easily into tissues and body fluids. It will cross the placenta and is excreted into breastmilk in small quantities. It is metabolized by the liver and excreted into the urine. It has an onset of 30 minutes and a half-life of 3.7 hours in newborns and 1.4 hours in adults.
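As an illustration only of what these half-lives imply, assuming simple first-order (one-compartment) elimination and not intended as any form of dosing guidance, the fraction of a dose remaining after a given time can be sketched as:

```python
# Illustrative only (assumes simple first-order elimination); not dosing guidance.
def fraction_remaining(hours: float, half_life_h: float) -> float:
    """Fraction of drug remaining after `hours`, given a half-life in hours."""
    return 0.5 ** (hours / half_life_h)

for label, t_half in [("adult, half-life 1.4 h", 1.4), ("newborn, half-life 3.7 h", 3.7)]:
    print(f"{label}: {fraction_remaining(8.0, t_half) * 100:.1f}% left after 8 h")
```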
Amoxicillin is a bactericidal compound: it attaches to the cell wall of susceptible bacteria and results in their death. It is effective against streptococci, pneumococci, enterococci, Haemophilus influenzae, Escherichia coli, Proteus mirabilis, Neisseria meningitidis, Neisseria gonorrhoeae, Shigella, Chlamydia trachomatis, Salmonella, Borrelia burgdorferi, and Helicobacter pylori. As a derivative of ampicillin, amoxicillin is a member of the penicillin family and, like penicillins, is a β-lactam antibiotic. It inhibits cross-linkage between the linear peptidoglycan polymer chains that make up a major component of the bacterial cell wall.
It has two ionizable groups in the physiological range (the amino group in alpha-position to the amide carbonyl group and the carboxyl group).
History
Amoxicillin was one of several semisynthetic derivatives of 6-aminopenicillanic acid (6-APA) developed by the Beecham Group in the 1960s. It was invented by Anthony Alfred Walter Long and John Herbert Charles Nayler, two British scientists. It became available in 1972 and was the second aminopenicillin to reach the market (after ampicillin in 1961). Co-amoxiclav became available in 1981.
Society and culture
Economics
Amoxicillin is relatively inexpensive. In 2022, a survey of 8 generic antibiotics commonly prescribed in the United States found their average cost to be about $42.67, while amoxicillin was sold for $12.14 on average.
Modes of delivery
Pharmaceutical manufacturers make amoxicillin in trihydrate form, for oral use available as capsules, regular, chewable and dispersible tablets, syrup and pediatric suspension for oral use, and as the sodium salt for intravenous administration.
An extended-release formulation is available. The intravenous form of amoxicillin is not sold in the United States. When an intravenous aminopenicillin is required in the United States, ampicillin is typically used. When there is an adequate response to ampicillin, the course of antibiotic therapy may often be completed with oral amoxicillin.
Research with mice indicated successful delivery using intraperitoneally injected amoxicillin-bearing microparticles.
Names
"Amoxicillin" is the International Nonproprietary Name (INN), British Approved Name (BAN), and United States Adopted Name (USAN), while "amoxycillin" is the Australian Approved Name (AAN).
Amoxicillin is one of the semisynthetic penicillins discovered by former pharmaceutical company Beecham Group. The patent for amoxicillin has expired, thus amoxicillin and co-amoxiclav preparations are marketed under various brand names across the world.
Veterinary uses
Amoxicillin is also sometimes used as an antibiotic for animals. The use of amoxicillin for animals intended for human consumption (chickens, cattle, and swine for example) has been approved.
References
Further reading
External links
Carboxylic acids
Enantiopure drugs
GSK plc brands
Lyme disease
Penicillins
Phenethylamines
Phenols
World Health Organization essential medicines
|
https://en.wikipedia.org/wiki/ArgoUML
|
ArgoUML is a UML diagramming application written in Java and released under the open source Eclipse Public License. By virtue of being a Java application, it is available on any platform supported by Java SE.
History
ArgoUML was originally developed at UC Irvine by Jason E. Robbins, leading to his Ph.D. It was an open source project hosted by Tigris.org and moved in 2019 to GitHub. The ArgoUML project included more than 19,000 registered users and over 150 developers.
In 2003, ArgoUML won the Software Development Magazine's annual Readers' Choice Award in the “Design and Analysis Tools” category.
ArgoUML development has suffered from lack of manpower. For example, Undo has been a perpetually requested feature since 2003 but has not been implemented yet.
Features
According to the official feature list, ArgoUML is capable of the following:
All 9 UML 1.4 diagrams are supported.
Closely follows the UML standard.
Platform independent – Java 1.5+ and C++.
Click and Go! with Java Web Start (no setup required, starts from your web browser).
Standard UML 1.4 Metamodel.
XMI support.
Export diagrams as GIF, PNG, PS, EPS, PGML and SVG.
Available in ten languages: EN, EN-GB, DE, ES, IT, RU, FR, NB, PT, ZH.
Advanced diagram editing and zoom.
Built-in design critics provide unobtrusive review of design and suggestions for improvements.
Extensible modules interface.
OCL support.
Forward engineering (code generation supports C++ and C#, Java, PHP 4, PHP 5, Ruby and, with less mature modules, Ada, Delphi and SQL).
Reverse engineering / JAR/class file import.
Weaknesses
ArgoUML does not yet completely implement the UML standard.
Partial undo feature (working for graphics edits).
Java Web Start launching may no longer work reliably. See Java Web Start.
See also
List of UML tools
MetaCASE tool
References
External links
Java platform software
Free UML tools
1999 software
Software using the Eclipse license
|
https://en.wikipedia.org/wiki/Alkali
|
In chemistry, an alkali is a basic, ionic salt of an alkali metal or an alkaline earth metal. An alkali can also be defined as a base that dissolves in water. A solution of a soluble base has a pH greater than 7.0. The adjective alkaline, and less often, alkalescent, is commonly used in English as a synonym for basic, especially for bases soluble in water. This broad use of the term is likely to have come about because alkalis were the first bases known to obey the Arrhenius definition of a base, and they are still among the most common bases.
Etymology
The word "alkali" is derived from Arabic al qalīy (or alkali), meaning the calcined ashes (see calcination), referring to the original source of alkaline substances. A water-extract of burned plant ashes, called potash and composed mostly of potassium carbonate, was mildly basic. After heating this substance with calcium hydroxide (slaked lime), a far more strongly basic substance known as caustic potash (potassium hydroxide) was produced. Caustic potash was traditionally used in conjunction with animal fats to produce soft soaps, one of the caustic processes that rendered soaps from fats in the process of saponification, one known since antiquity. Plant potash lent the name to the element potassium, which was first derived from caustic potash, and also gave potassium its chemical symbol K (from the German name Kalium), which ultimately derived from alkali.
Common properties of alkalis and bases
Alkalis are all Arrhenius bases, ones which form hydroxide ions (OH−) when dissolved in water. Common properties of alkaline aqueous solutions include:
Moderately concentrated solutions (over 10⁻³ M) have a pH of 10 or greater; a short worked sketch after this list illustrates the arithmetic. This means that they will turn phenolphthalein from colorless to pink.
Concentrated solutions are caustic (causing chemical burns).
Alkaline solutions are slippery or soapy to the touch, due to the saponification of the fatty substances on the surface of the skin.
Alkalis are normally water-soluble, although some like barium carbonate are only soluble when reacting with an acidic aqueous solution.
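A short worked sketch of the pH figure quoted in the first item above, assuming a fully dissociated (Arrhenius) strong base in water at 25 °C, where pH + pOH = 14:

```python
import math

# Assumes a fully dissociated strong base in water at 25 C, so pH + pOH = 14.
def ph_from_hydroxide(conc_oh: float) -> float:
    """pH of a solution with hydroxide concentration conc_oh (mol/L)."""
    poh = -math.log10(conc_oh)
    return 14.0 - poh

print(ph_from_hydroxide(1e-3))  # 11.0 -> consistent with "pH of 10 or greater"
```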
Difference between alkali and base
The terms "base" and "alkali" are often used interchangeably, particularly outside the context of chemistry and chemical engineering.
There are various, more specific definitions for the concept of an alkali. Alkalis are usually defined as a subset of the bases. One of two subsets is commonly chosen.
A basic salt of an alkali metal or alkaline earth metal (this includes Mg(OH)2 (magnesium hydroxide) but excludes NH3 (ammonia)).
Any base that is soluble in water and forms hydroxide ions or the solution of a base in water. (This includes both Mg(OH)2 and NH3, which forms NH4OH.)
The second subset of bases is also called an "Arrhenius base".
Alkali salts
Alkali salts are soluble hydroxides of alkali metals and alkaline earth metals, of which common examples are:
Sodium hydroxide (NaOH) – often called "caustic soda"
Potassium hydroxide (KOH) – commonly called "caustic potash"
Lye – generic term for either of two previous salts or their mixture
Calcium hydroxide (Ca(OH)2) – saturated solution known as "limewater"
Magnesium hydroxide (Mg(OH)2) – an atypical alkali since it has low solubility in water (although the dissolved portion is considered a strong base due to complete dissociation of its ions)
Alkaline soil
Soils with pH values that are higher than 7.3 are usually defined as being alkaline. These soils can occur naturally, due to the presence of alkali salts. Although many plants do prefer slightly basic soil (including vegetables like cabbage and fodder like buffalo grass), most plants prefer a mildly acidic soil (with pHs between 6.0 and 6.8), and alkaline soils can cause problems.
Alkali lakes
In alkali lakes (also called soda lakes), evaporation concentrates the naturally occurring carbonate salts, giving rise to an alkalic and often saline lake.
Examples of alkali lakes:
Alkali Lake, Lake County, Oregon
Baldwin Lake, San Bernardino County, California
Bear Lake on the Utah–Idaho border
Lake Magadi in Kenya
Lake Turkana in Kenya
Mono Lake, near Owens Valley in California
Redberry Lake, Saskatchewan
Summer Lake, Lake County, Oregon
Tramping Lake, Saskatchewan
See also
Alkali metals
Alkaline earth metals
Base (chemistry)
References
Inorganic chemistry
|
https://en.wikipedia.org/wiki/Anemometer
|
In meteorology, an anemometer is a device that measures wind speed and direction. It is a common instrument used in weather stations. The earliest known description of an anemometer was by Italian architect and author Leon Battista Alberti (1404–1472) in 1450.
History
The anemometer has changed little since its development in the 15th century. Alberti is said to have invented it around 1450. In the ensuing centuries numerous others, including Robert Hooke (1635–1703), developed their own versions, with some mistakenly credited as its inventor. In 1846, Thomas Romney Robinson (1792–1882) improved the design by using four hemispherical cups and mechanical wheels. In 1926, Canadian meteorologist John Patterson (1872–1956) developed a three-cup anemometer, which was improved by Brevoort and Joiner in 1935. In 1991, Derek Weston added the ability to measure wind direction. In 1994, Andreas Pflitsch developed the sonic anemometer.
Velocity anemometers
Cup anemometers
A simple type of anemometer was invented in 1845 by Rev Dr John Thomas Romney Robinson of Armagh Observatory. It consisted of four hemispherical cups on horizontal arms mounted on a vertical shaft. The air flow past the cups in any horizontal direction turned the shaft at a rate roughly proportional to the wind's speed. Therefore, counting the shaft's revolutions over a set time interval produced a value proportional to the average wind speed for a wide range of speeds. This type of instrument is also called a rotational anemometer.
With a four-cup anemometer, the wind always has the hollow of one cup presented to it, and is blowing on the back of the opposing cup. Since a hollow hemisphere has a drag coefficient of 0.38 on the spherical side and 1.42 on the hollow side, more force is generated on the cup presenting its hollow side to the wind. Because of this asymmetrical force, torque is generated on the anemometer's axis, causing it to spin.
Theoretically, the anemometer's speed of rotation should be proportional to the wind speed because the force produced on an object is proportional to the speed of the gas or fluid flowing past it. However, in practice, other factors influence the rotational speed, including turbulence produced by the apparatus, increasing drag in opposition to the torque produced by the cups and support arms, and friction on the mount point. When Robinson first designed his anemometer, he asserted that the cups moved one-third of the speed of the wind, unaffected by cup size or arm length. This was apparently confirmed by some early independent experiments, but it was incorrect. Instead, the ratio of the speed of the wind and that of the cups, the anemometer factor, depends on the dimensions of the cups and arms, and can have a value between two and a little over three. Once the error was discovered, all previous experiments involving anemometers had to be repeated.
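The order of magnitude of the anemometer factor can be seen from a deliberately simplified model that balances the drag on a single pair of cups, using the two drag coefficients quoted above and ignoring friction and the cups at right angles to the wind. It is a back-of-the-envelope estimate, not the calibration of any real instrument:

```python
import math

# Back-of-the-envelope cup-wheel equilibrium: the cup showing its hollow side
# (drag coefficient 1.42) retreats from the wind at cup speed u, while the
# opposing convex cup (0.38) advances into it.  Steady rotation (ignoring
# friction) requires the two drag torques to balance:
#     1.42 * (v - u)^2 = 0.38 * (v + u)^2
c_hollow, c_convex = 1.42, 0.38
k = math.sqrt(c_convex / c_hollow)

# Solve (v - u) = k * (v + u) for the ratio v/u, i.e. the "anemometer factor".
factor = (1 + k) / (1 - k)
print(f"anemometer factor ~ {factor:.2f}")   # about 3.1, within the range quoted above
```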
The three-cup anemometer developed by Canadian John Patterson in 1926, and subsequent cup improvements by Brevoort & Joiner of the United States in 1935, led to a cupwheel design with a nearly linear response and an error of less than 3% up to its rated maximum wind speed. Patterson found that each cup produced maximum torque when it was at 45° to the wind flow. The three-cup anemometer also had a more constant torque and responded more quickly to gusts than the four-cup anemometer.
The three-cup anemometer was further modified by Australian Dr. Derek Weston in 1991 to also measure wind direction. He added a tag to one cup, causing the cupwheel speed to increase and decrease as the tag moved alternately with and against the wind. Wind direction is calculated from these cyclical changes in speed, while wind speed is determined from the average cupwheel speed.
Three-cup anemometers are currently the industry standard for wind resource assessment studies and practice.
Vane anemometers
One of the other forms of mechanical velocity anemometer is the vane anemometer. It may be described as a windmill or a propeller anemometer. Unlike the Robinson anemometer, whose axis of rotation is vertical, the vane anemometer must have its axis parallel to the direction of the wind and is therefore horizontal. Furthermore, since the wind varies in direction and the axis has to follow its changes, a wind vane or some other contrivance to fulfill the same purpose must be employed.
A vane anemometer thus combines a propeller and a tail on the same axis to obtain accurate and precise wind speed and direction measurements from the same instrument. The speed of the fan is measured by a rev counter and converted to a windspeed by an electronic chip. Hence, volumetric flow rate may be calculated if the cross-sectional area is known.
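The volumetric-flow step mentioned above is simply the measured speed multiplied by the duct's cross-sectional area; a minimal sketch with assumed example dimensions:

```python
import math

# Volumetric flow from a vane-anemometer reading: Q = v * A.
# The duct diameter and measured speed below are assumed example values.
duct_diameter_m = 0.30          # 300 mm circular duct (assumed)
air_speed_m_s = 4.2             # average speed indicated by the vane anemometer

area_m2 = math.pi * (duct_diameter_m / 2) ** 2
flow_m3_s = air_speed_m_s * area_m2
print(f"{flow_m3_s:.3f} m^3/s  ({flow_m3_s * 3600:.0f} m^3/h)")
```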
In cases where the direction of the air motion is always the same, as in ventilating shafts of mines and buildings, wind vanes known as air meters are employed, and give satisfactory results.
Hot-wire anemometers
Hot wire anemometers use a fine wire (on the order of several micrometres) electrically heated to some temperature above the ambient. Air flowing past the wire cools the wire. As the electrical resistance of most metals is dependent upon the temperature of the metal (tungsten is a popular choice for hot-wires), a relationship can be obtained between the resistance of the wire and the speed of the air. In most cases, they cannot be used to measure the direction of the airflow, unless coupled with a wind vane.
Several ways of implementing this exist, and hot-wire devices can be further classified as CCA (constant current anemometer), CVA (constant voltage anemometer) and CTA (constant-temperature anemometer). The voltage output from these anemometers is thus the result of some sort of circuit within the device trying to maintain the specific variable (current, voltage or temperature) constant, following Ohm's law.
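In practice such a circuit is calibrated against known velocities, and one commonly used calibration form is King's law, which relates the bridge voltage to the flow speed. The coefficients in the sketch below are hypothetical placeholders standing in for real calibration data:

```python
# Hedged sketch of a common hot-wire calibration form (King's law):
#     E^2 = A + B * U**n
# where E is the bridge voltage and U the flow speed.  A, B and n below are
# hypothetical placeholder values; real ones come from calibrating the probe.
A, B, n = 1.6, 0.9, 0.45

def velocity_from_voltage(e_volts: float) -> float:
    """Invert the King's-law fit to recover flow speed from bridge voltage."""
    return ((e_volts**2 - A) / B) ** (1.0 / n)

for e in (1.4, 1.8, 2.2):
    print(f"E = {e:.1f} V  ->  U ~ {velocity_from_voltage(e):.2f} m/s")
```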
Additionally, PWM (pulse-width modulation) anemometers are also used, wherein the velocity is inferred by the time length of a repeating pulse of current that brings the wire up to a specified resistance and then stops until a threshold "floor" is reached, at which time the pulse is sent again.
Hot-wire anemometers, while extremely delicate, have extremely high frequency-response and fine spatial resolution compared to other measurement methods, and as such are almost universally employed for the detailed study of turbulent flows, or any flow in which rapid velocity fluctuations are of interest.
An industrial version of the fine-wire anemometer is the thermal flow meter, which follows the same concept, but uses two pins or strings to monitor the variation in temperature. The strings contain fine wires, but encasing the wires makes them much more durable and capable of accurately measuring air, gas, and emissions flow in pipes, ducts, and stacks. Industrial applications often contain dirt that will damage the classic hot-wire anemometer.
Laser Doppler anemometers
In laser Doppler velocimetry, laser Doppler anemometers use a beam of light from a laser that is divided into two beams, with one propagated out of the anemometer. Particulates (or deliberately introduced seed material) flowing along with air molecules near where the beam exits reflect, or backscatter, the light back into a detector, where it is measured relative to the original laser beam. The motion of the particles produces a Doppler shift in the laser light, which is used to calculate the speed of the particles, and therefore of the air around the anemometer.
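For motion directly along the beam, the backscattered light is shifted in frequency by approximately twice the velocity divided by the wavelength. A minimal sketch of that relation, with an assumed example shift at the detector:

```python
# Backscatter Doppler relation used in laser Doppler anemometry: for a particle
# moving with velocity component v along the beam, the frequency shift of the
# backscattered light is approximately f_D = 2 * v / wavelength.
# The HeNe wavelength is standard; the measured shift below is an assumed example.
wavelength_m = 632.8e-9        # helium-neon laser line
measured_shift_hz = 15.8e6     # example Doppler shift from the detector

velocity_m_s = measured_shift_hz * wavelength_m / 2.0
print(f"air speed along the beam ~ {velocity_m_s:.2f} m/s")
```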
Ultrasonic anemometers
Ultrasonic anemometers, first developed in the 1950s, use ultrasonic sound waves to measure wind velocity. They measure wind speed based on the time of flight of sonic pulses between pairs of transducers. Measurements from pairs of transducers can be combined to yield a measurement of velocity in 1-, 2-, or 3-dimensional flow. The spatial resolution is given by the path length between transducers, which is typically 10 to 20 cm. Ultrasonic anemometers can take measurements with very fine temporal resolution, 20 Hz or better, which makes them well suited for turbulence measurements. The lack of moving parts makes them appropriate for long-term use in exposed automated weather stations and weather buoys where the accuracy and reliability of traditional cup-and-vane anemometers are adversely affected by salty air or dust. Their main disadvantage is the distortion of the air flow by the structure supporting the transducers, which requires a correction based upon wind tunnel measurements to minimize the effect. An international standard for this process, ISO 16622 Meteorology—Ultrasonic anemometers/thermometers—Acceptance test methods for mean wind measurements is in general circulation. Another disadvantage is lower accuracy due to precipitation, where rain drops may vary the speed of sound.
Since the speed of sound varies with temperature, and is virtually stable with pressure change, ultrasonic anemometers are also used as thermometers.
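A minimal sketch of the time-of-flight principle for a single transducer pair is given below; it also shows how the same transit times yield a sonic temperature under the dry-air approximation c² = γRT. The path length and the example transit times are illustrative assumptions, and humidity effects are ignored.

```python
import math

# Minimal sketch of the time-of-flight principle used by ultrasonic anemometers.
# A single transducer pair a distance L apart fires pulses in both directions;
# the along-path wind component and the speed of sound follow from the two
# transit times. Temperature is then estimated from the dry-air relation
# c^2 = gamma * R * T (humidity effects are ignored here).

GAMMA = 1.4          # ratio of specific heats for air
R_DRY = 287.05       # specific gas constant of dry air, J/(kg K)

def wind_and_temperature(L, t_forward, t_reverse):
    """Return (along-path wind speed in m/s, sonic temperature in K) for path length L (m)."""
    v = 0.5 * L * (1.0 / t_forward - 1.0 / t_reverse)   # wind component along the path
    c = 0.5 * L * (1.0 / t_forward + 1.0 / t_reverse)   # speed of sound
    T = c ** 2 / (GAMMA * R_DRY)                        # sonic (virtual) temperature
    return v, T

if __name__ == "__main__":
    # Illustrative numbers only: 0.15 m path, ~5 m/s tailwind, ~20 degC air.
    L, c_true, v_true = 0.15, 343.0, 5.0
    t_fwd, t_rev = L / (c_true + v_true), L / (c_true - v_true)
    v, T = wind_and_temperature(L, t_fwd, t_rev)
    print(f"wind {v:.2f} m/s, sonic temperature {T - 273.15:.1f} degC")
```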
Two-dimensional (wind speed and wind direction) sonic anemometers are used in applications such as weather stations, ship navigation, aviation, weather buoys and wind turbines. Monitoring wind turbines usually requires a refresh rate of wind speed measurements of 3 Hz, easily achieved by sonic anemometers. Three-dimensional sonic anemometers are widely used to measure gas emissions and ecosystem fluxes using the eddy covariance method when used with fast-response infrared gas analyzers or laser-based analyzers.
Two-dimensional wind sensors are of two types:
Two ultrasonic paths: These sensors have four arms. The disadvantage of this type of sensor is that when the wind comes in the direction of an ultrasonic path, the arms disturb the airflow, reducing the accuracy of the resulting measurement.
Three ultrasonic paths: These sensors have three arms. They provide one redundant path, which improves the sensor's accuracy and reduces aerodynamic disturbance.
Acoustic resonance anemometers
Acoustic resonance anemometers are a more recent variant of sonic anemometer. The technology was invented by Savvas Kapartis and patented in 1999. Whereas conventional sonic anemometers rely on time of flight measurement, acoustic resonance sensors use resonating acoustic (ultrasonic) waves within a small purpose-built cavity in order to perform their measurement.
Built into the cavity is an array of ultrasonic transducers, which are used to create the separate standing-wave patterns at ultrasonic frequencies. As wind passes through the cavity, a change in the wave's property occurs (phase shift). By measuring the amount of phase shift in the received signals by each transducer, and then by mathematically processing the data, the sensor is able to provide an accurate horizontal measurement of wind speed and direction.
Because acoustic resonance technology enables measurement within a small cavity, these sensors are typically smaller than other ultrasonic sensors. The small size of acoustic resonance anemometers makes them physically strong and easy to heat, and therefore resistant to icing. This combination of features means that they achieve high levels of data availability and are well suited to wind turbine control and to other uses that require small robust sensors, such as battlefield meteorology. One issue with this sensor type is measurement accuracy when compared to a calibrated mechanical sensor. For many end uses, this weakness is compensated for by the sensor's longevity and the fact that it does not require recalibration once installed.
Ping-pong ball anemometers
A common anemometer for basic use is constructed from a ping-pong ball attached to a string. When the wind blows horizontally, it presses on and moves the ball; because ping-pong balls are very lightweight, they move easily in light winds. Measuring the angle between the string-ball apparatus and the vertical gives an estimate of the wind speed.
This type of anemometer is mostly used for middle-school level instruction, where students often build the device themselves, but a similar device was also flown on the Phoenix Mars Lander.
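A rough way to turn the deflection angle into a wind speed is to balance the horizontal drag on the ball against its weight. The sketch below does this for a standard 2.7 g, 40 mm table-tennis ball with an assumed sphere drag coefficient, so it is an order-of-magnitude estimate rather than a calibrated conversion.

```python
import math

# Rough drag-balance model for the string-and-ball anemometer described above:
# the horizontal drag on the ball, 0.5*rho*v^2*Cd*A, is balanced against its weight,
# so tan(angle) = drag / (m*g). The drag coefficient is an assumed value (~0.5 for
# a smooth sphere), so this is an order-of-magnitude estimate, not a calibration.

RHO_AIR = 1.225          # kg/m^3 at sea level
G = 9.81                 # m/s^2
MASS = 0.0027            # kg, standard table-tennis ball
DIAMETER = 0.040         # m
CD = 0.5                 # assumed drag coefficient for a sphere

def wind_speed_from_angle(angle_deg):
    """Estimate wind speed (m/s) from the string's deflection angle from vertical."""
    area = math.pi * (DIAMETER / 2) ** 2
    drag = MASS * G * math.tan(math.radians(angle_deg))
    return math.sqrt(2.0 * drag / (RHO_AIR * CD * area))

if __name__ == "__main__":
    for angle in (10, 30, 45, 60):
        print(f"{angle:2d} deg -> about {wind_speed_from_angle(angle):4.1f} m/s")
```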
Pressure anemometers
The first designs of anemometers that measure the pressure were divided into plate and tube classes.
Plate anemometers
These are the first modern anemometers. They consist of a flat plate suspended from the top so that the wind deflects the plate. In 1450, the Italian artist and architect Leon Battista Alberti invented the first mechanical anemometer; in 1664 it was re-invented by Robert Hooke (who is often mistakenly considered the inventor of the first anemometer). Later versions of this form consisted of a flat plate, either square or circular, which is kept normal to the wind by a wind vane. The pressure of the wind on its face is balanced by a spring. The compression of the spring determines the actual force which the wind is exerting on the plate, and this is either read off on a suitable gauge, or on a recorder. Instruments of this kind do not respond to light winds, are inaccurate for high wind readings, and are slow at responding to variable winds. Plate anemometers have been used to trigger high wind alarms on bridges.
Tube anemometers
James Lind's anemometer of 1775 consisted of a vertically mounted glass U tube containing a liquid manometer (pressure gauge), with one end bent out in a horizontal direction to face the wind flow and the other vertical end capped. Though the Lind was not the first it was the most practical and best known anemometer of this type. If the wind blows into the mouth of a tube it causes an increase of pressure on one side of the manometer. The wind over the open end of a vertical tube causes little change in pressure on the other side of the manometer. The resulting elevation difference in the two legs of the U tube is an indication of the wind speed. However, an accurate measurement requires that the wind speed be directly into the open end of the tube; small departures from the true direction of the wind causes large variations in the reading.
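The conversion from the liquid-column height difference to a wind speed follows from equating the wind's dynamic pressure to the hydrostatic pressure of the displaced liquid. The following sketch assumes a water-filled manometer and ideal, loss-free behaviour, so it illustrates the principle rather than any particular instrument's calibration.

```python
import math

# Minimal sketch of the physics behind a U-tube anemometer of Lind's type: the wind's
# dynamic pressure (0.5 * rho_air * v^2) raises the liquid column by a height
# difference dh, so dp = rho_liquid * g * dh and v = sqrt(2 * dp / rho_air).
# This idealization ignores tube losses and assumes the mouth faces the wind squarely.

RHO_AIR = 1.225       # kg/m^3
RHO_WATER = 1000.0    # kg/m^3 (liquid assumed to fill the manometer)
G = 9.81              # m/s^2

def wind_speed_from_column(dh_mm):
    """Wind speed (m/s) from a water-column height difference in millimetres."""
    dp = RHO_WATER * G * (dh_mm / 1000.0)       # pressure difference in Pa
    return math.sqrt(2.0 * dp / RHO_AIR)

if __name__ == "__main__":
    for dh in (0.5, 2.0, 10.0):
        print(f"dh = {dh:4.1f} mm of water -> v ~ {wind_speed_from_column(dh):5.1f} m/s")
```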
The successful metal pressure tube anemometer of William Henry Dines in 1892 utilized the same pressure difference between the open mouth of a straight tube facing the wind and a ring of small holes in a vertical tube which is closed at the upper end. Both are mounted at the same height. The pressure differences on which the action depends are very small, and special means are required to register them. The recorder consists of a float in a sealed chamber partially filled with water. The pipe from the straight tube is connected to the top of the sealed chamber and the pipe from the small tubes is directed into the bottom inside the float. Since the pressure difference determines the vertical position of the float this is a measure of the wind speed.
The great advantage of the tube anemometer lies in the fact that the exposed part can be mounted on a high pole, and requires no oiling or attention for years; and the registering part can be placed in any convenient position. Two connecting tubes are required. It might appear at first sight as though one connection would serve, but the differences in pressure on which these instruments depend are so minute, that the pressure of the air in the room where the recording part is placed has to be considered. Thus if the instrument depends on the pressure or suction effect alone, and this pressure or suction is measured against the air pressure in an ordinary room, in which the doors and windows are carefully closed and a newspaper is then burnt up the chimney, an effect may be produced equal to a wind of 10 mi/h (16 km/h); and the opening of a window in rough weather, or the opening of a door, may entirely alter the registration.
While the Dines anemometer had an error of only about 1%, it did not respond very well to low winds due to the poor response of the flat plate vane required to turn the head into the wind. In 1918 an aerodynamic vane with eight times the torque of the flat plate overcame this problem.
Pitot tube static anemometers
Modern tube anemometers use the same principle as the Dines anemometer but with a different design. The implementation uses a pitot-static tube, a pitot tube with two ports, pitot and static, that is normally used in measuring the airspeed of aircraft. The pitot port measures the dynamic pressure at the open mouth of a tube with a pointed head facing the wind, and the static port measures the static pressure from small holes along the side of that tube. The pitot tube is connected to a tail so that the tube's head always faces the wind. Additionally, the tube is heated to prevent rime ice forming on it. Two lines run from the tube down to a device that measures the difference in pressure between them. The measurement devices can be manometers, pressure transducers, or analog chart recorders.
Effect of density on measurements
In the tube anemometer the dynamic pressure is actually being measured, although the scale is usually graduated as a velocity scale. If the actual air density differs from the calibration value, due to differing temperature, elevation or barometric pressure, a correction is required to obtain the actual wind speed. Approximately 1.5% (1.6% above 6,000 feet) should be added to the velocity recorded by a tube anemometer for each 1000 ft (5% for each kilometer) above sea-level.
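A sketch of this correction in code is given below. Because the instrument really measures dynamic pressure, the true speed scales as the square root of the density ratio; the air density at altitude is taken here from a simple standard-atmosphere model, which is an approximation and not part of the text, but it reproduces roughly the 5% per kilometre figure quoted above.

```python
import math

# Hedged sketch of the density correction described above: a tube anemometer measures
# dynamic pressure 0.5*rho*v^2, so if the air density rho differs from the calibration
# density rho_cal, the true speed is v_indicated * sqrt(rho_cal / rho).
# Air density at altitude is taken from a simple ISA-like model (an approximation).

RHO_SEA_LEVEL = 1.225   # kg/m^3, assumed calibration density

def air_density(altitude_m, sea_level_temp_c=15.0):
    """Approximate dry-air density (kg/m^3) from a standard-atmosphere model."""
    T0 = sea_level_temp_c + 273.15
    L, g, R = 0.0065, 9.80665, 287.05          # lapse rate, gravity, gas constant
    T = T0 - L * altitude_m
    p = 101325.0 * (T / T0) ** (g / (R * L))
    return p / (R * T)

def true_wind_speed(v_indicated, altitude_m):
    """Correct an indicated speed for the lower air density at altitude."""
    return v_indicated * math.sqrt(RHO_SEA_LEVEL / air_density(altitude_m))

if __name__ == "__main__":
    for alt in (0, 1000, 2000, 3000):          # metres
        v = true_wind_speed(10.0, alt)
        print(f"{alt:4d} m: indicated 10.0 m/s -> about {v:5.2f} m/s "
              f"({(v / 10.0 - 1) * 100:4.1f}% correction)")
```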
Effect of icing
At airports, it is essential to have accurate wind data under all conditions, including freezing precipitation. Anemometry is also required in monitoring and controlling the operation of wind turbines, which in cold environments are prone to in-cloud icing. Icing alters the aerodynamics of an anemometer and may entirely block it from operating. Therefore, anemometers used in these applications must be internally heated. Both cup anemometers and sonic anemometers are presently available with heated versions.
Instrument location
In order for wind speeds to be comparable from location to location, the effect of the terrain needs to be considered, especially in regard to height. Other considerations are the presence of trees, and both natural canyons and artificial canyons (urban buildings). The standard anemometer height in open rural terrain is 10 meters.
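Where a sensor cannot be mounted at the standard height, one common engineering approximation (not described in the text, and only valid for near-neutral conditions over fairly uniform terrain) is to scale the reading with a power-law wind profile; the exponent below is an assumed value for open rural terrain.

```python
# Power-law wind profile, v(z2) = v(z1) * (z2 / z1) ** alpha, used here as a rough way
# to refer a measurement to the 10 m standard height. alpha ~ 0.14 is an assumed value
# for open rural terrain; rougher terrain needs a larger exponent.

def adjust_to_standard_height(v_measured, z_measured, z_standard=10.0, alpha=0.14):
    """Scale a wind speed measured at height z_measured (m) to the 10 m standard height."""
    return v_measured * (z_standard / z_measured) ** alpha

if __name__ == "__main__":
    # e.g. a sensor at 3 m reading 6.0 m/s
    print(f"estimated 10 m wind: {adjust_to_standard_height(6.0, 3.0):.2f} m/s")
```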
See also
Air flow meter
Anemoi, for the ancient origin of the name of this technology
Anemoscope, ancient device for measuring or predicting wind direction or weather
Automated airport weather station
Night of the Big Wind
Particle image velocimetry
Savonius wind turbine
Wind power forecasting
Wind run
Windsock, a simple high-visibility indicator of approximate wind speed and direction
Notes
References
Meteorological Instruments, W.E. Knowles Middleton and Athelstan F. Spilhaus, Third Edition revised, University of Toronto Press, Toronto, 1953
Invention of the Meteorological Instruments, W. E. Knowles Middleton, The Johns Hopkins Press, Baltimore, 1969
External links
Description of the development and the construction of an ultrasonic anemometer
Animation Showing Sonic Principle of Operation (Time of Flight Theory) – Gill Instruments
Collection of historical anemometer
Principle of Operation: Acoustic Resonance measurement – FT Technologies
Thermopedia, "Anemometers (laser doppler)"
Thermopedia, "Anemometers (pulsed thermal)"
Thermopedia, "Anemometers (vane)"
The Rotorvane Anemometer. Measuring both wind speed and direction using a tagged three-cup sensor
Italian inventions
Measuring instruments
Meteorological instrumentation and equipment
Navigational equipment
Wind power
15th-century inventions
https://en.wikipedia.org/wiki/Arcturus
Arcturus is the brightest star in the northern constellation of Boötes. With an apparent visual magnitude of −0.05, it is the fourth-brightest star in the night sky, and the brightest in the northern celestial hemisphere. The name Arcturus originated from ancient Greece; it was then cataloged as α Boötis by Johann Bayer in 1603, which is Latinized to Alpha Boötis. Arcturus forms one corner of the Spring Triangle asterism.
Located relatively close at 36.7 light-years from the Sun, Arcturus is a single red giant of spectral type K1.5III—an aging star around 7.1 billion years old that has used up its core hydrogen and evolved off the main sequence. It is about the same mass as the Sun, but has expanded to 25 times its size and is around 170 times as luminous. Its diameter is 35 million kilometres. Thus far no companion has been detected.
Nomenclature
The traditional name Arcturus is Latinised from the ancient Greek Ἀρκτοῦρος (Arktouros) and means "Guardian of the Bear", ultimately from ἄρκτος (arktos), "bear" and οὖρος (ouros), "watcher, guardian".
The designation of Arcturus as α Boötis (Latinised to Alpha Boötis) was made by Johann Bayer in 1603. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Arcturus for α Boötis.
Observation
With an apparent visual magnitude of −0.05, Arcturus is the brightest star in the northern celestial hemisphere and the fourth-brightest star in the night sky, after Sirius (−1.46 apparent magnitude), Canopus (−0.72) and α Centauri (combined magnitude of −0.27). However, α Centauri AB is a binary star, whose components are both fainter than Arcturus. This makes Arcturus the third-brightest individual star, just ahead of α Centauri A (officially named Rigil Kentaurus). The French mathematician and astronomer Jean-Baptiste Morin observed Arcturus in the daytime with a telescope in 1635, a first for any star other than the Sun and supernovae. Arcturus has been seen at or just before sunset with the naked eye.
Arcturus is visible from both of Earth's hemispheres as it is located 19° north of the celestial equator. The star culminates at midnight on 27 April and at 9 p.m. on 10 June, and is visible during the late northern spring or the southern autumn. From the northern hemisphere, an easy way to find Arcturus is to follow the arc of the handle of the Big Dipper (or Plough in the UK); continuing along this path leads to Spica: "Arc to Arcturus, then spike (or speed on) to Spica". Together with the bright stars Spica and Denebola (or Regulus, depending on the source), Arcturus is part of the Spring Triangle asterism. With Cor Caroli, these four stars form the Great Diamond asterism.
Ptolemy described Arcturus as subrufa ("slightly red"): it has a B-V color index of +1.23, roughly midway between Pollux (B-V +1.00) and Aldebaran (B-V +1.54).
η Boötis, or Muphrid, is only 3.3 light-years distant from Arcturus; seen from Arcturus, Muphrid would have a visual magnitude of −2.5, about as bright as Jupiter at its brightest from Earth, while an observer in the Muphrid system would see Arcturus at magnitude −5.0, slightly brighter than Venus as seen from Earth, but with an orangish color.
Physical characteristics
Based upon an annual parallax shift of 88.83 milliarcseconds as measured by the Hipparcos satellite, Arcturus is about 36.7 light-years (11.26 parsecs) from the Sun. The parallax margin of error is 0.54 milliarcseconds, translating to a distance margin of error of about ±0.2 light-years. Because of its proximity, Arcturus has a high proper motion, two arcseconds a year, greater than any first magnitude star other than α Centauri.
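The quoted distance follows directly from the parallax through the relation d [parsecs] = 1/p [arcseconds]; the short check below reproduces the 36.7 light-year figure and its small uncertainty from the Hipparcos numbers.

```python
# Distance from trigonometric parallax: d [pc] = 1 / p [arcsec].
# Using the Hipparcos parallax of 88.83 +/- 0.54 mas quoted above.

LY_PER_PARSEC = 3.26156

def parallax_to_distance(parallax_mas):
    """Return (distance in parsecs, distance in light-years) for a parallax in milliarcseconds."""
    d_pc = 1000.0 / parallax_mas
    return d_pc, d_pc * LY_PER_PARSEC

if __name__ == "__main__":
    p, dp = 88.83, 0.54                      # milliarcseconds
    d_pc, d_ly = parallax_to_distance(p)
    near = parallax_to_distance(p + dp)[1]   # larger parallax -> nearer
    far = parallax_to_distance(p - dp)[1]    # smaller parallax -> farther
    print(f"{d_pc:.2f} pc = {d_ly:.1f} ly  (range {near:.1f}-{far:.1f} ly)")
```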
Arcturus is moving rapidly relative to the Sun, and is now almost at its closest point to the Sun. Closest approach will happen in about 4,000 years, when the star will be a few hundredths of a light-year closer to Earth than it is today. (In antiquity, Arcturus was closer to the centre of the constellation.) Arcturus is thought to be an old-disk star, and appears to be moving with a group of 52 other such stars, known as the Arcturus stream.
With an absolute magnitude of −0.30, Arcturus is, together with Vega and Sirius, one of the most luminous stars in the Sun's neighborhood. It is about 110 times brighter than the Sun in visible light wavelengths, but this underestimates its strength as much of the light it gives off is in the infrared; total (bolometric) power output is about 180 times that of the Sun. With a near-infrared J band magnitude of −2.2, only Betelgeuse (−2.9) and R Doradus (−2.6) are brighter. The lower output in visible light is due to a lower efficacy as the star has a lower surface temperature than the Sun.
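The "about 110 times" figure can be checked from the absolute magnitude alone, since a magnitude difference Δm corresponds to a luminosity ratio of 10^(0.4 Δm). The Sun's absolute visual magnitude (about +4.83) is an assumed input, not a value from the text.

```python
# Visual luminosity ratio from absolute magnitudes: L1/L2 = 10**(0.4 * (M2 - M1)).
# The Sun's absolute visual magnitude (~ +4.83) is an assumed reference value.

M_SUN_V = 4.83        # absolute visual magnitude of the Sun (assumed)

def luminosity_ratio(abs_mag, reference_mag=M_SUN_V):
    """How many times more luminous a star of absolute magnitude abs_mag is than the reference."""
    return 10 ** (0.4 * (reference_mag - abs_mag))

if __name__ == "__main__":
    # Prints roughly 113, consistent with the "about 110 times" figure quoted above.
    print(f"Arcturus (M_V = -0.30): about {luminosity_ratio(-0.30):.0f} times the Sun in visible light")
```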
As a single star, the mass of Arcturus cannot be measured directly, but models suggest it is slightly greater than that of the Sun; evolutionary matching to the observed physical parameters and the oxygen isotope ratio for a first dredge-up star both point to a mass close to, or slightly above, one solar mass. Given the star's evolutionary state, it is expected to have undergone significant mass loss in the past. The star displays magnetic activity that is heating the coronal structures, and it undergoes a solar-type magnetic cycle with a duration that is probably less than 14 years. A weak magnetic field has been detected in the photosphere with a strength of around half a gauss. The magnetic activity appears to lie along four latitudes and is rotationally modulated.
Arcturus is estimated to be around 6 to 8.5 billion years old, but there is some uncertainty about its evolutionary status. Based upon the color characteristics of Arcturus, it is currently ascending the red-giant branch and will continue to do so until it accumulates a large enough degenerate helium core to ignite the helium flash. It has likely exhausted the hydrogen from its core and is now in its active hydrogen shell burning phase. However, Charbonnel et al. (1998) placed it slightly above the horizontal branch, and suggested it has already completed the helium flash stage.
Spectrum
Arcturus has evolved off the main sequence to the red giant branch, reaching an early K-type stellar classification. It is frequently assigned the spectral type of K0III, but in 1989 was used as the spectral standard for type K1.5III Fe−0.5, with the suffix notation indicating a mild underabundance of iron compared to typical stars of its type. As the brightest K-type giant in the sky, it has been the subject of multiple atlases with coverage from the ultraviolet to infrared.
The spectrum shows a dramatic transition from emission lines in the ultraviolet to atomic absorption lines in the visible range and molecular absorption lines in the infrared. This is due to the optical depth of the atmosphere varying with wavelength. The spectrum shows very strong absorption in some molecular lines that are not produced in the photosphere but in a surrounding shell. Examination of carbon monoxide lines shows the molecular component of the atmosphere extending outward to 2–3 times the radius of the star, with the chromospheric wind steeply accelerating to 35–40 km/s in this region.
Astronomers term "metals" those elements with higher atomic numbers than helium. The atmosphere of Arcturus has an enrichment of alpha elements relative to iron but only about a third of solar metallicity. Arcturus is possibly a Population II star.
Oscillations
As one of the brightest stars in the sky, Arcturus has been the subject of a number of studies in the emerging field of asteroseismology. Belmonte and colleagues carried out a radial velocity (Doppler shift of spectral lines) study of the star in April and May 1988, which showed variability with a frequency of the order of a few microhertz (μHz), the highest peak corresponding to 4.3 μHz (2.7 days) with an amplitude of 60 m s−1, with a frequency separation of c. 5 μHz. They suggested that the most plausible explanation for the variability of Arcturus is stellar oscillations.
Asteroseismological measurements allow direct calculation of the mass and radius. This form of modelling is still relatively inaccurate, but it provides a useful check on other models.
Possible planetary system
Hipparcos satellite astrometry suggested that Arcturus is a binary star, with the companion about twenty times dimmer than the primary and orbiting close enough to be at the very limits of humans' current ability to make it out. Recent results remain inconclusive, but do support the marginal Hipparcos detection of a binary companion.
In 1993, radial velocity measurements of Aldebaran, Arcturus and Pollux showed that Arcturus exhibited a long-period radial velocity oscillation, which could be interpreted as a substellar companion. This substellar object would be nearly 12 times the mass of Jupiter and be located roughly at the same orbital distance from Arcturus as the Earth is from the Sun, at 1.1 astronomical units. However, all three stars surveyed showed similar oscillations yielding similar companion masses, and the authors concluded that the variation was likely to be intrinsic to the star rather than due to the gravitational effect of a companion. So far no substellar companion has been confirmed.
Mythology
One astronomical tradition associates Arcturus with the mythology around Arcas, who was about to shoot and kill his own mother Callisto who had been transformed into a bear. Zeus averted their imminent tragic fate by transforming the boy into the constellation Boötes, called Arctophylax "bear guardian" by the Greeks, and his mother into Ursa Major (Greek: Arctos "the bear"). The account is given in Hyginus's Astronomy.
Aratus in his Phaenomena said that the star Arcturus lay below the belt of Arctophylax, and according to Ptolemy in the Almagest it lay between his thighs.
An alternative lore associates the name with the legend of Icarius, who gave the gift of wine to other men, but was murdered by them, because they had had no experience with intoxication and mistook the wine for poison. It is said that this Icarius became Arcturus, while his dog, Maira, became Canicula (Procyon), although "Arcturus" here may be used in the sense of the constellation rather than the star.
Cultural significance
As one of the brightest stars in the sky, Arcturus has been significant to observers since antiquity.
In ancient Mesopotamia, it was linked to the god Enlil, and also known as Shudun, "yoke", or SHU-PA of unknown derivation in the Three Stars Each Babylonian star catalogues and later MUL.APIN around 1100 BC.
The star appears in ancient Greek astronomical literature, e.g. Hesiod's Works and Days, circa 700 BC, as well as in Hipparchus's and Ptolemy's star catalogues. The folk etymology connecting the star name with the bears (Greek: ἄρκτος, arktos) was probably invented much later. The name fell out of use in favour of Arabic names until it was revived in the Renaissance.
In Arabic, Arcturus is one of two stars called al-simāk "the uplifted ones" (the other is Spica). Arcturus is specified as السماك الرامح as-simāk ar-rāmiħ "the uplifted one of the lancer". The term Al Simak Al Ramih appeared in the Al Achsasi Al Mouakket catalogue (translated into Latin as Al Simak Lanceator). This has been variously romanized in the past, leading to obsolete variants such as Aramec and Azimech. For example, the name Alramih is used in Geoffrey Chaucer's A Treatise on the Astrolabe (1391). Another Arabic name is Haris-el-sema, from حارس السماء ħāris al-samā' "the keeper of heaven", or حارس الشمال ħāris al-shamāl "the keeper of the north".
In Indian astronomy, Arcturus is called Swati or Svati (Devanagari स्वाति, Transliteration IAST svāti, svātī́), possibly 'su' + 'ati' ("great goer", in reference to its remoteness) meaning very beneficent. It has been referred to as "the real pearl" in Bhartṛhari's kāvyas.
In Chinese astronomy, Arcturus is called Da Jiao (大角), because it is the brightest star in the Chinese constellation called Jiao Xiu (角宿). Later it became a part of another constellation, Kang Xiu (亢宿).
The Wotjobaluk Koori people of southeastern Australia knew Arcturus as Marpean-kurrk, mother of Djuit (Antares) and another star in Boötes, Weet-kurrk (Muphrid). Its appearance in the north signified the arrival of the larvae of the wood ant (a food item) in spring. The beginning of summer was marked by the star's setting with the Sun in the west and the disappearance of the larvae. The people of Milingimbi Island in Arnhem Land saw Arcturus and Muphrid as man and woman, and took the appearance of Arcturus at sunrise as a sign to go and harvest rakia or spikerush. The Weilwan of northern New South Wales knew Arcturus as Guembila "red".
Prehistoric Polynesian navigators knew Arcturus as Hōkūleʻa, the "Star of Joy". Arcturus is the zenith star of the Hawaiian Islands. Using Hōkūleʻa and other stars, the Polynesians launched their double-hulled canoes from Tahiti and the Marquesas Islands. Traveling east and north they eventually crossed the equator and reached the latitude at which Arcturus would appear directly overhead in the summer night sky. Knowing they had arrived at the exact latitude of the island chain, they sailed due west on the trade winds to landfall. If Hōkūleʻa could be kept directly overhead, they landed on the southeastern shores of the Big Island of Hawaii. For a return trip to Tahiti the navigators could use Sirius, the zenith star of that island. Since 1976, the Polynesian Voyaging Society's Hōkūleʻa has crossed the Pacific Ocean many times under navigators who have incorporated this wayfinding technique in their non-instrument navigation.
Arcturus had several other names that described its significance to indigenous Polynesians. In the Society Islands, Arcturus, called Ana-tahua-taata-metua-te-tupu-mavae ("a pillar to stand by"), was one of the ten "pillars of the sky", bright stars that represented the ten heavens of the Tahitian afterlife. In Hawaii, the pattern of Boötes was called Hoku-iwa, meaning "stars of the frigatebird". This constellation marked the path for Hawaiʻiloa on his return to Hawaii from the South Pacific Ocean. The Hawaiians called Arcturus Hoku-leʻa. It was equated to the Tuamotuan constellation Te Kiva, meaning "frigatebird", which could either represent the figure of Boötes or just Arcturus. However, Arcturus may instead be the Tuamotuan star called Turu. The Hawaiian name for Arcturus as a single star was likely Hoku-leʻa, which means "star of gladness", or "clear star". In the Marquesas Islands, Arcturus was probably called Tau-tou and was the star that ruled the month approximating January. The Māori and Moriori called it Tautoru, a variant of the Marquesan name and a name shared with Orion's Belt.
In Inuit astronomy, Arcturus is called the Old Man (Uttuqalualuk in Inuit languages) and The First Ones (Sivulliik in Inuit languages).
The Miꞌkmaq of eastern Canada saw Arcturus as Kookoogwéss, the owl.
Early-20th-century Armenian scientist Nazaret Daghavarian theorized that the star commonly referred to in Armenian folklore as Gutani astgh (Armenian: Գութանի աստղ; lit. star of the plow) was in fact Arcturus, as the constellation of Boötes was called "Ezogh" (Armenian: Եզող; lit. the person who is plowing) by Armenians.
In popular culture
In Ancient Rome, the star's celestial activity was supposed to portend tempestuous weather, and a personification of the star acts as narrator of the prologue to Plautus' comedy Rudens (circa 211 BC).
The Kāraṇḍavyūha Sūtra, compiled at the end of the 4th century or beginning of the 5th century, names one of Avalokiteśvara's meditative absorptions as "The face of Arcturus".
One of the possible etymologies offered for the name "Arthur" assumes that it is derived from "Arcturus" and that the late 5th to early 6th-century figure on whom the myth of King Arthur is based was originally named for the star.
In the Middle Ages, Arcturus was considered a Behenian fixed star and attributed to the stone Jasper and the plantain herb. Cornelius Agrippa listed its kabbalistic sign under the alternate name Alchameth.
Arcturus's light was employed in the mechanism used to open the 1933 Chicago World's Fair. The star was chosen as it was thought that light from Arcturus had started its journey at about the time of the previous Chicago World's Fair in 1893 (at 36.7 light-years away, the light actually started in 1896).
At the height of the American Civil War, President Abraham Lincoln observed Arcturus through a 9.6-inch refractor telescope when he visited the Naval Observatory in Washington, DC, in August, 1863.
References
Further reading
External links
SolStation.com entry
K-type giants
Suspected variables
Hypothetical planetary systems
Arcturus moving group
Boötes
Bootis, Alpha
BD+19 2777
Bootis, 16
0541
124897
069673
5340
TIC objects
https://en.wikipedia.org/wiki/Antares
Antares is the brightest star in the constellation of Scorpius. It has the Bayer designation α Scorpii, which is Latinised to Alpha Scorpii. Often referred to as "the heart of the scorpion", Antares is flanked by σ Scorpii and τ Scorpii near the center of the constellation. Distinctly reddish when viewed with the naked eye, Antares is a slow irregular variable star that ranges in brightness from an apparent visual magnitude of +0.6 down to +1.6. It is on average the fifteenth-brightest star in the night sky. Antares is the brightest and most evolved stellar member of the Scorpius–Centaurus association, the nearest OB association to the Sun. It is located about 550 light-years from Earth at the rim of the Upper Scorpius subgroup, and is illuminating the Rho Ophiuchi cloud complex in its foreground.
Classified as spectral type M1.5Iab-Ib, Antares is a red supergiant, a large evolved massive star and one of the largest stars visible to the naked eye. Its exact size remains uncertain, but if placed at the center of the Solar System, it would extend out to somewhere between the orbits of Mars and Jupiter. Its mass is calculated to be around 12 times that of the Sun. Antares appears as a single star when viewed with the naked eye, but it is actually a binary star system, with its two components called α Scorpii A and α Scorpii B. The brighter of the pair is the red supergiant, while the fainter is a hot main sequence star of magnitude 5.5. They have a projected separation of about 529 astronomical units.
Its traditional name Antares derives from the Ancient Greek Ἀντάρης (Antarēs), meaning "rival to Ares" ("opponent to Mars"), due to the similarity of its reddish hue to the appearance of the planet Mars.
Nomenclature
α Scorpii (Latinised to Alpha Scorpii) is the star's Bayer designation. Antares has the Flamsteed designation 21 Scorpii, as well as catalogue designations such as HR 6134 in the Bright Star Catalogue and HD 148478 in the Henry Draper Catalogue. As a prominent infrared source, it appears in the Two Micron All-Sky Survey catalogue as 2MASS J16292443-2625549 and the Infrared Astronomical Satellite (IRAS) Sky Survey Atlas catalogue as IRAS 16262–2619. It is also catalogued as a double star WDS J16294-2626 and CCDM J16294-2626. Antares is a variable star and is listed in the General Catalogue of Variable Stars, but as a Bayer-designated star it does not have a separate variable star designation.
Its traditional name Antares derives from the Ancient Greek Ἀντάρης (Antarēs), meaning "rival to Ares" ("opponent to Mars"), due to the similarity of its reddish hue to the appearance of the planet Mars. The comparison of Antares with Mars may have originated with early Mesopotamian astronomers, but this is now considered outdated speculation, because the name of this star in Mesopotamian astronomy was always "heart of the Scorpion" and it was associated with the goddess Lisin. Some scholars have speculated that the star may have been named after Antar, or Antarah ibn Shaddad, the Arab warrior-hero celebrated in the pre-Islamic poems Mu'allaqat. However, the name "Antares" is already attested in Greek culture, e.g. in Ptolemy's Almagest and Tetrabiblos. In 2016, the International Astronomical Union organised a Working Group on Star Names (WGSN) to catalog and standardise proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Antares for the star α Scorpii A. It is now so entered in the IAU Catalog of Star Names.
Observation
Antares is visible all night around May 31 of each year, when the star is at opposition to the Sun. Antares then rises at dusk and sets at dawn as seen at the equator.
For two to three weeks on either side of November 30, Antares is not visible in the night sky from mid-northern latitudes, because it is near conjunction with the Sun. In higher northern latitudes, Antares is only visible low in the south in summertime. Higher than 64° northern latitude, the star does not rise at all.
Antares is easier to see from the southern hemisphere due to its southerly declination. The star is circumpolar over virtually all of Antarctica, since nearly the whole continent lies south of latitude 64° S.
History
Radial velocity variations were observed in the spectrum of Antares in the early 20th century and attempts were made to derive spectroscopic orbits. It became apparent that the small variations could not be due to orbital motion, and were actually caused by pulsation of the star's atmosphere. Even in 1928, it was calculated that the size of the star must vary by about 20%.
Antares was first reported to have a companion star by Johann Tobias Bürg during an occultation on April 13, 1819, although this was not widely accepted and dismissed as a possible atmospheric effect. It was then observed by Scottish astronomer James William Grant FRSE while in India on 23 July 1844. It was rediscovered by Ormsby M. Mitchel in 1846, and measured by William Rutter Dawes in April 1847.
In 1952, Antares was reported to vary in brightness. A photographic magnitude range from 3.00 to 3.16 was described. The brightness has been monitored by the American Association of Variable Star Observers since 1945, and it has been classified as an LC slow irregular variable star, whose apparent magnitude slowly varies between extremes of +0.6 and +1.6, although usually near magnitude +1.0. There is no obvious periodicity, but statistical analyses have suggested periods of around 1,733 days. No separate long secondary period has been detected, although it has been suggested that primary periods longer than a thousand days are analogous to long secondary periods.
Research published in 2018 demonstrated that Ngarrindjeri Aboriginal people from South Australia observed the variability of Antares and incorporated it into their oral traditions as Waiyungari (meaning 'red man').
Occultations and conjunctions
Antares is 4.57 degrees south of the ecliptic, one of four first magnitude stars within 6° of the ecliptic (the others are Spica, Regulus and Aldebaran), so it can be occulted by the Moon. The occultation of 31 July 2009 was visible in much of southern Asia and the Middle East. Every year around December 2 the Sun passes 5° north of Antares. Lunar occultations of Antares are fairly common, depending on the 18.6-year cycle of the lunar nodes. The last cycle ended in 2010 and the next begins in 2023.
Antares can also be occulted by the planets, e.g. Venus, but these events are rare. The last occultation of Antares by Venus took place on September 17, 525 BC; the next one will be on November 17, 2400. Other planets have been calculated not to have occulted Antares over the last millennium, nor will they in the next millennium, as most planets stay near the ecliptic and pass north of Antares. Venus will pass extremely close to Antares on October 19, 2117, and every eight years thereafter through October 29, 2157 it will pass south of the star.
Illumination of Rho Ophiuchi cloud complex
Antares is the brightest and most evolved stellar member of the Scorpius–Centaurus association, the nearest OB association to the Sun. It is a member of the Upper Scorpius subgroup of the association, which contains thousands of stars with a mean age of 11 million years. Antares is located about 550 light-years from Earth at the rim of the Upper Scorpius subgroup, and is illuminating the Rho Ophiuchi cloud complex in its foreground. The illuminated cloud is sometimes referred to as the Antares Nebula or is otherwise identified as VdB 107.
Stellar system
α Scorpii is a double star that is thought to form a binary system. The best calculated orbit for the stars is still considered to be unreliable. It describes an almost circular orbit seen nearly edge-on, with a period of 1,218 years. Other recent estimates of the period have ranged from 880 years for a calculated orbit, to 2,562 years for a simple Kepler's law estimate.
Early measurements of the pair, made in 1847–49 and in 1848, gave somewhat discordant separations, while more modern observations consistently give similar values. The variations in the separation are often interpreted as evidence of orbital motion, but are more likely to be simply observational inaccuracies with very little true relative motion between the two components.
The pair have a projected separation of about 529 astronomical units (AU) (≈ 80 billion km) at the estimated distance of Antares, giving a minimum value for the distance between them. Spectroscopic examination of the energy states in the outflow of matter from the companion star suggests that the companion lies roughly 220 AU beyond the primary (about 33 billion km).
Antares
Antares is a red supergiant star with a stellar classification of M1.5Iab-Ib, and is indicated to be a spectral standard for that class. Due to the nature of the star, the derived parallax measurements have large errors, so that the true distance of Antares is only approximately 550 light-years (about 170 parsecs) from the Sun.
The brightness of Antares at visual wavelengths is about 10,000 times that of the Sun, but because the star radiates a considerable part of its energy in the infrared part of the spectrum, the true bolometric luminosity is around 100,000 times that of the Sun. There is a large margin of error assigned to values for the bolometric luminosity, typically 30% or more, and there is also considerable variation between the values published by different authors, for example between those published in 2012 and 2013.
The mass of the star has been calculated to be around 12 times that of the Sun. Comparison of the effective temperature and luminosity of Antares to theoretical evolutionary tracks for massive stars suggests a progenitor mass somewhat greater than its present mass and an age of roughly 11 to 15 million years (Myr). Massive stars like Antares are expected to explode as supernovae.
Like most cool supergiants, Antares's size has much uncertainty due to the tenuous and translucent nature of the extended outer regions of the star. Defining an effective temperature is difficult due to spectral lines being generated at different depths in the atmosphere, and linear measurements produce different results depending on the wavelength observed. In addition, Antares appears to pulsate, varying its radius by 19%. It also varies in temperature by 150 K, lagging 70 days behind radial velocity changes which are likely to be caused by the pulsations.
The diameter of Antares can be measured most accurately using interferometry or by observing lunar occultation events. An apparent diameter from occultations of 41.3 ± 0.1 milliarcseconds has been published. Interferometry allows synthesis of a view of the stellar disc, which is then represented as a limb-darkened disk surrounded by an extended atmosphere; slightly different limb-darkened disk diameters were measured in 2009 and in 2010. The linear radius of the star can be calculated from its angular diameter and distance. However, the distance to Antares is not known with the same accuracy as modern measurements of its diameter.
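For illustration, the sketch below converts the quoted 41.3 mas occultation diameter into a linear radius using the small-angle relation. The distance used (about 170 pc, roughly 550 light-years) is an assumed round value, since the parallax carries large errors, so the result is indicative only.

```python
# Small-angle conversion from angular diameter to linear radius:
# size [AU] = theta [arcsec] * distance [pc]. The 41.3 mas diameter is quoted above;
# the ~170 pc distance is an assumed value, so the output is only indicative.

AU_KM = 1.495979e8       # kilometres per astronomical unit
R_SUN_KM = 6.957e5       # solar radius in kilometres

def linear_radius(theta_mas, distance_pc):
    """Return (radius in AU, radius in solar radii) for an angular diameter in milliarcseconds."""
    diameter_au = (theta_mas / 1000.0) * distance_pc
    radius_au = diameter_au / 2.0
    return radius_au, radius_au * AU_KM / R_SUN_KM

if __name__ == "__main__":
    r_au, r_sun = linear_radius(41.3, 170.0)     # assumed distance of ~170 pc
    print(f"radius ~ {r_au:.1f} AU ~ {r_sun:.0f} solar radii")
```

A radius of roughly 3.5 AU is consistent with the statement in the lead that the star would extend to somewhere between the orbits of Mars and Jupiter.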
The Hipparcos satellite's trigonometric parallax leads to a radius of roughly 680 times that of the Sun. Older radius estimates exceeding this value were derived from older measurements of the diameter, but those measurements are likely to have been affected by asymmetry of the atmosphere and the narrow range of infrared wavelengths observed; Antares has an extended shell which radiates strongly at those particular wavelengths. Despite its large size compared to the Sun, Antares is dwarfed by even larger red supergiants, such as VY Canis Majoris or VV Cephei A and Mu Cephei.
Antares, like the similarly sized red supergiant Betelgeuse in the constellation Orion, will almost certainly explode as a supernova in the astronomically near future. For a few months, the Antares supernova could be as bright as the full moon and be visible in daytime.
Antares B
Antares B is a magnitude 5.5 blue-white main-sequence star of spectral type B2.5V; it also has numerous unusual spectral lines suggesting it has been polluted by matter ejected by Antares. It is assumed to be a relatively normal early-B main-sequence star, several times as massive as the Sun and with a surface temperature far higher than that of the primary.
Antares B is normally difficult to see in small telescopes due to glare from Antares, but can sometimes be seen in larger apertures. It is often described as green, but this is probably either a contrast effect, or the result of the mixing of light from the two stars when they are seen together through a telescope and are too close to be completely resolved. Antares B can sometimes be observed with a small telescope for a few seconds during lunar occultations while Antares is hidden by the Moon. It then appears a profound blue or bluish-green color, in contrast to the orange-red Antares.
Etymology and mythology
In the Babylonian star catalogues dating from at least 1100 BCE, Antares was called GABA GIR.TAB, "the Breast of the Scorpion". In MUL.APIN, which dates between 1100 and 700 BC, it is one of the stars of Ea in the southern sky and denotes the breast of the Scorpion goddess Ishhara. Later names that translate as "the Heart of the Scorpion" include Calbalakrab, from the Arabic قَلْبُ ٱلْعَقْرَبِ Qalb al-ʿAqrab. This had been directly translated from the Ancient Greek Καρδία Σκορπίου Kardia Skorpiou; Cor Scorpii was a calque of the Greek name rendered in Latin.
In ancient Mesopotamia, Antares may have been known by various names: Urbat, Bilu-sha-ziri ("the Lord of the Seed"), Kak-shisa ("the Creator of Prosperity"), Dar Lugal ("The King"), Masu Sar ("the Hero and the King"), and Kakkab Bir ("the Vermilion Star"). In ancient Egypt, Antares represented the scorpion goddess Serket (and was the symbol of Isis in the pyramidal ceremonies). It was called "the red one of the prow".
In Persia Antares was known as Satevis, one of the four "royal stars". In India, it, together with σ Scorpii and τ Scorpii, formed Jyeshthā (the eldest or biggest, probably referring to its huge size), one of the nakshatra (Hindu lunar mansions).
The ancient Chinese called Antares 心宿二 (Xīnxiù'èr, "second star of the Heart"), because it was the second star of the mansion Xin (心). It was the national star of the Shang Dynasty, and it was sometimes referred to as Dàhuǒ (大火, "the Great Fire") because of its reddish appearance.
The Māori people of New Zealand call Antares Rēhua, and regard it as the chief of all the stars. Rēhua is father of Puanga/Puaka (Rigel), an important star in the calculation of the Māori calendar. The Wotjobaluk Koori people of Victoria, Australia, knew Antares as Djuit, son of Marpean-kurrk (Arcturus); the stars on each side represented his wives. The Kulin Kooris saw Antares (Balayang) as the brother of Bunjil (Altair).
In culture
Antares appears in the flag of Brazil, which displays 27 stars, each representing a federated unit of Brazil. Antares represents the state of Piauí.
The 1995 Oldsmobile Antares concept car is named after the star.
References
Further reading
External links
Best Ever Image of a Star’s Surface and Atmosphere – First map of motion of material on a star other than the Sun
M-type supergiants
B-type main-sequence stars
Binary stars
Slow irregular variables
Upper Scorpius
Scorpius
6134
Scorpii, Alpha
CD-26 11359
Scorpii, 21
148478 9
080763
TIC objects
Population I stars
https://en.wikipedia.org/wiki/Altair
Altair is the brightest star in the constellation of Aquila and the twelfth-brightest star in the night sky. It has the Bayer designation Alpha Aquilae, which is Latinised from α Aquilae and abbreviated Alpha Aql or α Aql. Altair is an A-type main-sequence star with an apparent visual magnitude of 0.77 and is one of the vertices of the Summer Triangle asterism; the other two vertices are marked by Deneb and Vega. It is located at a distance of 16.7 light-years from the Sun. Altair is currently in the G-cloud, a nearby interstellar cloud, an accumulation of gas and dust.
Altair rotates rapidly, with a velocity at the equator of approximately 286 km/s. This is a significant fraction of the star's estimated breakup speed of 400 km/s. A study with the Palomar Testbed Interferometer revealed that Altair is not spherical, but is flattened at the poles due to its high rate of rotation. Other interferometric studies with multiple telescopes, operating in the infrared, have imaged and confirmed this phenomenon.
Nomenclature
α Aquilae (Latinised to Alpha Aquilae) is the star's Bayer designation. The traditional name Altair has been used since medieval times. It is an abbreviation of the Arabic phrase Al-Nisr Al-Ṭa'ir, "the flying eagle".
In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Altair for this star. It is now so entered in the IAU Catalog of Star Names.
Physical characteristics
Along with β Aquilae and γ Aquilae, Altair forms the well-known line of stars sometimes referred to as the Family of Aquila or Shaft of Aquila.
Altair is a type-A main-sequence star with about 1.8 times the mass of the Sun and 11 times its luminosity. It is thought to be a young star close to the zero age main sequence at about 100 million years old, although previous estimates gave an age closer to one billion years old. Altair rotates rapidly, with a rotational period of under eight hours; for comparison, the equator of the Sun makes a complete rotation in a little more than 25 days, but Altair's rotation is similar to, and slightly faster than, those of Jupiter and Saturn. Like those two planets, its rapid rotation causes the star to be oblate; its equatorial diameter is over 20 percent greater than its polar diameter.
Satellite measurements made in 1999 with the Wide Field Infrared Explorer showed that the brightness of Altair fluctuates slightly, varying by just a few thousandths of a magnitude with several different periods less than 2 hours. As a result, it was identified in 2005 as a Delta Scuti variable star. Its light curve can be approximated by adding together a number of sine waves, with periods that range between 0.8 and 1.5 hours. It is a weak source of coronal X-ray emission, with the most active sources of emission being located near the star's equator. This activity may be due to convection cells forming at the cooler equator.
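A toy version of that "sum of sine waves" model is sketched below; the individual periods and millimagnitude amplitudes are placeholders chosen within the ranges quoted in the text, not the actual frequencies identified for Altair.

```python
import math

# Toy multi-mode light-curve model: the brightness deviation is a sum of sine waves.
# Periods (0.8-1.5 h) and millimagnitude amplitudes are illustrative placeholders
# within the quoted ranges, not Altair's measured pulsation frequencies.

MODES = [            # (amplitude in magnitudes, period in hours, phase in radians)
    (0.0015, 1.50, 0.0),
    (0.0010, 1.10, 1.3),
    (0.0007, 0.85, 2.1),
]

def delta_magnitude(t_hours):
    """Brightness deviation (magnitudes) of the toy multi-mode model at time t."""
    return sum(a * math.sin(2 * math.pi * t_hours / p + ph) for a, p, ph in MODES)

if __name__ == "__main__":
    for step in range(0, 13):            # one sample every half hour for six hours
        t_h = step * 0.5
        print(f"t = {t_h:4.1f} h  delta_m = {delta_magnitude(t_h):+.4f}")
```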
Rotational effects
The angular diameter of Altair was measured interferometrically by R. Hanbury Brown and his co-workers at Narrabri Observatory in the 1960s. They found a diameter of 3 milliarcseconds. Although Hanbury Brown et al. realized that Altair would be rotationally flattened, they had insufficient data to experimentally observe its oblateness. Later, using infrared interferometric measurements made by the Palomar Testbed Interferometer in 1999 and 2000, Altair was found to be flattened. This work was published by G. T. van Belle, David R. Ciardi and their co-authors in 2001.
Theory predicts that, owing to Altair's rapid rotation, its surface gravity and effective temperature should be lower at the equator, making the equator less luminous than the poles. This phenomenon, known as gravity darkening or the von Zeipel effect, was confirmed for Altair by measurements made by the Navy Precision Optical Interferometer in 2001, and analyzed by Ohishi et al. (2004) and Peterson et al. (2006). Also, A. Domiciano de Souza et al. (2005) verified gravity darkening using the measurements made by the Palomar and Navy interferometers, together with new measurements made by the VINCI instrument at the VLTI.
Altair is one of the few stars for which a direct image has been obtained. In 2006 and 2007, J. D. Monnier and his coworkers produced an image of Altair's surface from 2006 infrared observations made with the MIRC instrument on the CHARA array interferometer; this was the first time the surface of any main-sequence star, apart from the Sun, had been imaged. The false-color image was published in 2007. The equatorial radius of the star was estimated to be 2.03 solar radii, and the polar radius 1.63 solar radii—a 25% increase of the stellar radius from pole to equator. The polar axis is inclined by about 60° to the line of sight from the Earth.
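The 25% figure follows directly from the published equatorial and polar radii; a one-line check:

```python
# Check of the quoted 25% equatorial excess, using the published radii
# from the CHARA/MIRC imaging: 2.03 (equatorial) and 1.63 (polar) solar radii.

r_equatorial, r_polar = 2.03, 1.63           # solar radii

excess = r_equatorial / r_polar - 1.0        # fractional excess of equatorial over polar radius
flattening = 1.0 - r_polar / r_equatorial    # oblateness f = 1 - Rp/Re

print(f"equatorial radius exceeds polar by {excess * 100:.0f}%  (flattening f = {flattening:.2f})")
```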
Etymology, mythology and culture
The term Al Nesr Al Tair appeared in Al Achsasi al Mouakket's catalogue, which was translated into Latin as Vultur Volans. This name was applied by the Arabs to the asterism of Altair, β Aquilae and γ Aquilae and probably goes back to the ancient Babylonians and Sumerians, who called Altair "the eagle star". The spelling Atair has also been used. Medieval astrolabes of England and Western Europe depicted Altair and Vega as birds.
The Koori people of Victoria also knew Altair as Bunjil, the wedge-tailed eagle, and β and γ Aquilae are his two wives the black swans. The people of the Murray River knew the star as Totyerguil. The Murray River was formed when Totyerguil the hunter speared Otjout, a giant Murray cod, who, when wounded, churned a channel across southern Australia before entering the sky as the constellation Delphinus.
In Chinese belief, the asterism consisting of Altair, β Aquilae and γ Aquilae is known as Hé Gǔ (河鼓; lit. "river drum"). The Chinese name for Altair is thus Hé Gǔ èr (河鼓二; lit. "river drum two", meaning the "second star of the drum at the river"). However, Altair is better known by its other names: Qiān Niú Xīng (牽牛星 / 牵牛星) or Niú Láng Xīng (牛郎星), translated as the cowherd star. These names are an allusion to a love story, The Cowherd and the Weaver Girl, in which Niulang (represented by Altair) and his two children (represented by β Aquilae and γ Aquilae) are separated from respectively their wife and mother Zhinu (represented by Vega) by the Milky Way. They are only permitted to meet once a year, when magpies form a bridge to allow them to cross the Milky Way.
The people of Micronesia called Altair Mai-lapa, meaning "big/old breadfruit", while the Māori people called this star Poutu-te-rangi, meaning "pillar of heaven".
In Western astrology, the star was ill-omened, portending danger from reptiles.
This star is one of the asterisms used by Bugis sailors for navigation, called bintoéng timoro, meaning "eastern star".
A group of Japanese scientists sent a radio signal to Altair in 1983 with the hopes of contacting extraterrestrial life.
NASA announced Altair as the name of the Lunar Surface Access Module (LSAM) on December 13, 2007. The Russian-made Beriev Be-200 Altair seaplane is also named after the star.
Visual companions
The bright primary star has the multiple star designation WDS 19508+0852A and has several faint visual companion stars, WDS 19508+0852B, C, D, E, F and G. All are much more distant than Altair and not physically associated.
See also
Lists of stars
List of brightest stars
List of nearest bright stars
Historical brightest stars
List of most luminous stars
Notes
References
External links
Star with Midriff Bulge Eyed by Astronomers, JPL press release, July 25, 2001.
Spectrum of Altair
Imaging the Surface of Altair, University of Michigan news release detailing the CHARA array direct imaging of the stellar surface in 2007.
PIA04204: Altair, NASA. Image of Altair from the Palomar Testbed Interferometer.
Altair, SolStation.
Secrets of Sun-like star probed, BBC News, June 1, 2007.
Astronomers Capture First Images of the Surface Features of Altair , Astromart.com
Image of Altair from Aladin.
Aquila (constellation)
A-type main-sequence stars
4
Aquilae, 53
Aquilae, Alpha
187642
097649
7557
Delta Scuti variables
Altair
BD+08 4236
G-Cloud
Astronomical objects known since antiquity
0768
TIC objects
https://en.wikipedia.org/wiki/Asymptote
In analytic geometry, an asymptote () of a curve is a line such that the distance between the curve and the line approaches zero as one or both of the x or y coordinates tends to infinity. In projective geometry and related contexts, an asymptote of a curve is a line which is tangent to the curve at a point at infinity.
The word asymptote is derived from the Greek ἀσύμπτωτος (asumptōtos) which means "not falling together", from ἀ priv. + σύν "together" + πτωτ-ός "fallen". The term was introduced by Apollonius of Perga in his work on conic sections, but in contrast to its modern meaning, he used it to mean any line that does not intersect the given curve.
There are three kinds of asymptotes: horizontal, vertical and oblique. For curves given by the graph of a function y = ƒ(x), horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to +∞ or −∞. Vertical asymptotes are vertical lines near which the function grows without bound. An oblique asymptote has a slope that is non-zero but finite, such that the graph of the function approaches it as x tends to +∞ or −∞.
More generally, one curve is a curvilinear asymptote of another (as opposed to a linear asymptote) if the distance between the two curves tends to zero as they tend to infinity, although the term asymptote by itself is usually reserved for linear asymptotes.
Asymptotes convey information about the behavior of curves in the large, and determining the asymptotes of a function is an important step in sketching its graph. The study of asymptotes of functions, construed in a broad sense, forms a part of the subject of asymptotic analysis.
Introduction
The idea that a curve may come arbitrarily close to a line without actually becoming the same may seem to counter everyday experience. The representations of a line and a curve as marks on a piece of paper or as pixels on a computer screen have a positive width. So if they were to be extended far enough they would seem to merge, at least as far as the eye could discern. But these are physical representations of the corresponding mathematical entities; the line and the curve are idealized concepts whose width is 0 (see Line). Therefore, the understanding of the idea of an asymptote requires an effort of reason rather than experience.
Consider the graph of the function ƒ(x) = 1/x. The coordinates of the points on the curve are of the form (x, 1/x) where x is a number other than 0. For example, the graph contains the points (1, 1), (2, 0.5), (5, 0.2), (10, 0.1), ... As the values of x become larger and larger, say 100, 1,000, 10,000 ..., putting them far to the right of the illustration, the corresponding values of 1/x, .01, .001, .0001, ..., become infinitesimal relative to the scale shown. But no matter how large x becomes, its reciprocal 1/x is never 0, so the curve never actually touches the x-axis. Similarly, as the values of x become smaller and smaller, say .01, .001, .0001, ..., making them infinitesimal relative to the scale shown, the corresponding values of 1/x, 100, 1,000, 10,000 ..., become larger and larger. So the curve extends farther and farther upward as it comes closer and closer to the y-axis. Thus, both the x-axis and the y-axis are asymptotes of the curve. These ideas are part of the basis of the concept of a limit in mathematics, and this connection is explained more fully below.
Asymptotes of functions
The asymptotes most commonly encountered in the study of calculus are of curves of the form y = ƒ(x). These can be computed using limits and classified into horizontal, vertical and oblique asymptotes depending on their orientation. Horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to +∞ or −∞. As the name indicates they are parallel to the x-axis. Vertical asymptotes are vertical lines (perpendicular to the x-axis) near which the function grows without bound. Oblique asymptotes are diagonal lines such that the difference between the curve and the line approaches 0 as x tends to +∞ or −∞.
Vertical asymptotes
The line x = a is a vertical asymptote of the graph of the function y = ƒ(x) if at least one of the following statements is true:
ƒ(x) → +∞ or −∞ as x → a−, or ƒ(x) → +∞ or −∞ as x → a+,
where x → a− means that x approaches the value a from the left (through lesser values), and x → a+ means that x approaches a from the right.
For example, if ƒ(x) = x/(x–1), the numerator approaches 1 and the denominator approaches 0 as x approaches 1. So ƒ(x) → +∞ as x approaches 1 from the right and ƒ(x) → −∞ as x approaches 1 from the left,
and the curve has a vertical asymptote x = 1.
The function ƒ(x) may or may not be defined at a, and its precise value at the point x = a does not affect the asymptote. For example, the function defined by ƒ(x) = 1/x for x > 0 and ƒ(x) = 5 for x ≤ 0
has a limit of +∞ as x → 0+, so ƒ(x) has the vertical asymptote x = 0, even though ƒ(0) = 5. The graph of this function does intersect the vertical asymptote once, at (0, 5). It is impossible for the graph of a function to intersect a vertical asymptote (or a vertical line in general) in more than one point. Moreover, if a function is continuous at each point where it is defined, its graph cannot intersect any vertical asymptote.
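The one-sided limits that define a vertical asymptote can be checked with a computer algebra system. The following is a minimal Python sketch, assuming the SymPy library is available (an assumption of this example, not something used in the article), applied to ƒ(x) = x/(x − 1) from the example above.

# A minimal sketch (assuming SymPy is installed) that checks the
# one-sided limits of f(x) = x/(x - 1) as x approaches 1.
from sympy import symbols, limit
x = symbols('x')
f = x / (x - 1)
left = limit(f, x, 1, dir='-')    # limit as x -> 1 from the left
right = limit(f, x, 1, dir='+')   # limit as x -> 1 from the right
print(left, right)                # -oo oo, so x = 1 is a vertical asymptote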
A common example of a vertical asymptote is the case of a rational function at a point x such that the denominator is zero and the numerator is non-zero.
If a function has a vertical asymptote, then it is not necessarily true that the derivative of the function has a vertical asymptote at the same place. An example is
ƒ(x) = 1/x + sin(1/x), at x = 0.
This function has a vertical asymptote at x = 0 because
ƒ(x) → +∞ as x → 0+
and
ƒ(x) → −∞ as x → 0−.
The derivative of ƒ is the function
ƒ′(x) = −(1 + cos(1/x))/x².
For the sequence of points
xₙ = (−1)ⁿ/((2n + 1)π), for n = 0, 1, 2, ...,
that approaches x = 0 both from the left and from the right, the values ƒ′(xₙ) are constantly 0. Therefore, both one-sided limits of ƒ′ at x = 0 can be neither +∞ nor −∞. Hence ƒ′(x) does not have a vertical asymptote at x = 0.
Horizontal asymptotes
Horizontal asymptotes are horizontal lines that the graph of the function approaches as x → +∞ or x → −∞. The horizontal line y = c is a horizontal asymptote of the function y = ƒ(x) if
ƒ(x) → c as x → −∞, or ƒ(x) → c as x → +∞.
In the first case, ƒ(x) has y = c as asymptote when x tends to −∞, and in the second ƒ(x) has y = c as an asymptote as x tends to +∞.
For example, the arctangent function satisfies
arctan(x) → −π/2 as x → −∞
and
arctan(x) → π/2 as x → +∞.
So the line y = −π/2 is a horizontal asymptote for the arctangent when x tends to −∞, and y = π/2 is a horizontal asymptote for the arctangent when x tends to +∞.
Functions may lack horizontal asymptotes on either or both sides, or may have one horizontal asymptote that is the same in both directions. For example, the function ƒ(x) = 1/(x² + 1) has a horizontal asymptote at y = 0 when x tends both to −∞ and to +∞ because, respectively, 1/(x² + 1) → 0 as x → −∞ and as x → +∞.
Other common functions that have one or two horizontal asymptotes include ƒ(x) = 1/x (whose graph is a hyperbola), the Gaussian function exp(−x²), the error function, and the logistic function.
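These limits are straightforward to verify with a computer algebra system. The sketch below is illustrative Python, assuming the SymPy library, and computes the limits at −∞ and +∞ for two of the examples above.

# A minimal sketch (assuming SymPy) that finds horizontal asymptotes
# by taking limits at -infinity and +infinity.
from sympy import symbols, limit, atan, oo
x = symbols('x')
for f in (atan(x), 1 / (x**2 + 1)):
    print(f, limit(f, x, -oo), limit(f, x, oo))
# atan(x) gives -pi/2 and pi/2; 1/(x**2 + 1) gives 0 and 0.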
Oblique asymptotes
When a linear asymptote is not parallel to the x- or y-axis, it is called an oblique asymptote or slant asymptote. A function ƒ(x) is asymptotic to the straight line y = mx + n (m ≠ 0) if
ƒ(x) − (mx + n) → 0 as x → +∞, or ƒ(x) − (mx + n) → 0 as x → −∞.
In the first case the line is an oblique asymptote of ƒ(x) when x tends to +∞, and in the second case the line is an oblique asymptote of ƒ(x) when x tends to −∞.
An example is ƒ(x) = x + 1/x, which has the oblique asymptote y = x (that is, m = 1, n = 0), as seen in the limits ƒ(x) − x = 1/x → 0 as x → +∞ and as x → −∞.
Elementary methods for identifying asymptotes
The asymptotes of many elementary functions can be found without the explicit use of limits (although the derivations of such methods typically use limits).
General computation of oblique asymptotes for functions
The oblique asymptote, for the function f(x), will be given by the equation y = mx + n. The value for m is computed first and is given by
m = lim_{x→a} ƒ(x)/x,
where a is either −∞ or +∞ depending on the case being studied. It is good practice to treat the two cases separately. If this limit doesn't exist then there is no oblique asymptote in that direction.
Having m, the value for n can be computed by
n = lim_{x→a} (ƒ(x) − mx),
where a should be the same value used before. If this limit fails to exist then there is no oblique asymptote in that direction, even should the limit defining m exist. Otherwise y = mx + n is the oblique asymptote of ƒ(x) as x tends to a.
For example, the function ƒ(x) = (2x² + 3x + 1)/x has
m = lim_{x→+∞} ƒ(x)/x = lim_{x→+∞} (2x² + 3x + 1)/x² = 2
and then
n = lim_{x→+∞} (ƒ(x) − 2x) = lim_{x→+∞} (3x + 1)/x = 3,
so that y = 2x + 3 is the asymptote of ƒ(x) when x tends to +∞.
The function ƒ(x) = ln x has
m = lim_{x→+∞} (ln x)/x = 0
and then
n = lim_{x→+∞} (ln x − 0·x) = lim_{x→+∞} ln x, which does not exist.
So ln x does not have an asymptote when x tends to +∞.
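The two-step computation of m and n can be automated. The following is an illustrative Python sketch, assuming the SymPy library, that reproduces both examples above: (2x² + 3x + 1)/x, which has an oblique asymptote, and ln x, which does not.

# A minimal sketch (assuming SymPy) of the two-step test:
# m = lim f(x)/x and n = lim (f(x) - m*x) as x -> +oo.
from sympy import symbols, limit, log, oo
x = symbols('x')

def oblique_asymptote(f):
    m = limit(f / x, x, oo)
    if not m.is_finite:
        return None                      # no oblique asymptote in this direction
    n = limit(f - m * x, x, oo)
    if not n.is_finite:
        return None                      # m exists but n does not
    return m, n

print(oblique_asymptote((2*x**2 + 3*x + 1) / x))   # (2, 3): asymptote y = 2x + 3
print(oblique_asymptote(log(x)))                   # None: ln x has no asymptote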
Asymptotes for rational functions
A rational function has at most one horizontal asymptote or oblique (slant) asymptote, and possibly many vertical asymptotes.
The degree of the numerator and the degree of the denominator determine whether or not there are any horizontal or oblique asymptotes. Writing deg(numerator) for the degree of the numerator and deg(denominator) for the degree of the denominator, the cases are as follows: if deg(numerator) < deg(denominator), the x-axis (y = 0) is a horizontal asymptote; if deg(numerator) = deg(denominator), the horizontal asymptote is the line y equal to the ratio of the leading coefficients; if deg(numerator) = deg(denominator) + 1, there is an oblique asymptote; and if deg(numerator) > deg(denominator) + 1, there is no horizontal or oblique asymptote (the curve instead has a polynomial, curvilinear asymptote).
The vertical asymptotes occur only when the denominator is zero (if both the numerator and denominator are zero, the multiplicities of the zero are compared). For example, ƒ(x) = (x − 2)/(x(x − 1)(x − 2)) has vertical asymptotes at x = 0 and x = 1, but not at x = 2, because the factor x − 2 cancels.
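The degree comparison can be carried out mechanically. The sketch below is illustrative Python, assuming the SymPy library, that classifies the horizontal or oblique asymptote of a rational function from the degrees and leading coefficients of its numerator and denominator.

# A minimal sketch (assuming SymPy) of the degree rules for rational functions.
from sympy import symbols, together, fraction, degree, LC
x = symbols('x')

def classify(f):
    num, den = fraction(together(f))            # numerator and denominator
    dn, dd = degree(num, x), degree(den, x)
    if dn < dd:
        return "horizontal asymptote y = 0"
    if dn == dd:
        return "horizontal asymptote y = " + str(LC(num, x) / LC(den, x))
    if dn == dd + 1:
        return "oblique asymptote (found by polynomial division)"
    return "no horizontal or oblique asymptote"

print(classify((3*x**2 + 1) / (x**2 + 5)))      # horizontal asymptote y = 3
print(classify((x**2 + x + 1) / (x + 1)))       # oblique asymptote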
Oblique asymptotes of rational functions
When the numerator of a rational function has degree exactly one greater than the denominator, the function has an oblique (slant) asymptote. The asymptote is the polynomial part of the quotient obtained when the numerator is divided by the denominator: the division yields a linear term plus a remainder. For example, consider the function
ƒ(x) = (x² + x + 1)/(x + 1) = x + 1/(x + 1).
As the value of x increases, ƒ approaches the asymptote y = x. This is because the other term, 1/(x + 1), approaches 0.
If the degree of the numerator is more than 1 larger than the degree of the denominator, and the denominator does not divide the numerator, there will be a nonzero remainder that goes to zero as x increases, but the quotient will not be linear, and the function does not have an oblique asymptote.
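Polynomial long division makes the slant asymptote explicit. A minimal sketch, assuming the SymPy library, divides the numerator by the denominator of the example above and reads off the quotient and remainder.

# A minimal sketch (assuming SymPy): the quotient of the polynomial division
# is the slant asymptote, and the remainder term vanishes as x grows.
from sympy import symbols, div
x = symbols('x')
quotient, remainder = div(x**2 + x + 1, x + 1, x)
print(quotient)    # x   -> the slant asymptote is y = x
print(remainder)   # 1   -> the leftover term 1/(x + 1) tends to 0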
Transformations of known functions
If a known function has an asymptote (such as y = 0 for f(x) = e^x), then its translations also have an asymptote.
If x=a is a vertical asymptote of f(x), then x=a+h is a vertical asymptote of f(x-h)
If y=c is a horizontal asymptote of f(x), then y=c+k is a horizontal asymptote of f(x)+k
If a known function has an asymptote, then a scaling of the function also has an asymptote.
If y=ax+b is an asymptote of f(x), then y=cax+cb is an asymptote of cf(x)
For example, f(x) = e^(x−1) + 2 has horizontal asymptote y = 0 + 2 = 2, and no vertical or oblique asymptotes.
General definition
Let A : (a, b) → R² be a parametric plane curve, in coordinates A(t) = (x(t), y(t)). Suppose that the curve tends to infinity, that is, the distance √(x(t)² + y(t)²) from A(t) to the origin tends to infinity as t → b.
A line ℓ is an asymptote of A if the distance from the point A(t) to ℓ tends to zero as t → b. From the definition, only open curves that have some infinite branch can have an asymptote. No closed curve can have an asymptote.
For example, the upper right branch of the curve y = 1/x can be defined parametrically as x = t, y = 1/t (where t > 0). First, x → ∞ as t → ∞ and the distance from the curve to the x-axis is 1/t which approaches 0 as t → ∞. Therefore, the x-axis is an asymptote of the curve. Also, y → ∞ as t → 0 from the right, and the distance between the curve and the y-axis is t which approaches 0 as t → 0. So the y-axis is also an asymptote. A similar argument shows that the lower left branch of the curve also has the same two lines as asymptotes.
Although the definition here uses a parameterization of the curve, the notion of asymptote does not depend on the parameterization. In fact, if the equation of the line is ax + by + c = 0, then the distance from the point A(t) = (x(t), y(t)) to the line is given by
|a·x(t) + b·y(t) + c| / √(a² + b²);
if γ(t) is a change of parameterization then the distance becomes
|a·x(γ(t)) + b·y(γ(t)) + c| / √(a² + b²),
which tends to zero simultaneously with the previous expression.
An important case is when the curve is the graph of a real function (a function of one real variable and returning real values). The graph of the function y = ƒ(x) is the set of points of the plane with coordinates (x, ƒ(x)). For this, a parameterization is t ↦ (t, ƒ(t)).
This parameterization is to be considered over the open intervals (a,b), where a can be −∞ and b can be +∞.
An asymptote can be either vertical or non-vertical (oblique or horizontal). In the first case its equation is x = c, for some real number c. The non-vertical case has equation y = mx + n, where m and n are real numbers. All three types of asymptotes can be present at the same time in specific examples. Unlike asymptotes for curves that are graphs of functions, a general curve may have more than two non-vertical asymptotes, and may cross its vertical asymptotes more than once.
Curvilinear asymptotes
Let A : (a, b) → R² be a parametric plane curve, in coordinates A(t) = (x(t), y(t)), and B be another (unparameterized) curve. Suppose, as before, that the curve A tends to infinity. The curve B is a curvilinear asymptote of A if the shortest distance from the point A(t) to a point on B tends to zero as t → b. Sometimes B is simply referred to as an asymptote of A, when there is no risk of confusion with linear asymptotes.
For example, the function
ƒ(x) = (x³ + 2x² + 3x + 4)/x
has a curvilinear asymptote y = x² + 2x + 3, which is known as a parabolic asymptote because it is a parabola rather than a straight line.
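The defining property, namely that the distance between the curve and the parabola tends to zero, can be checked directly. The sketch below is illustrative Python, assuming the SymPy library and the example function given above.

# A minimal sketch (assuming SymPy): the difference between the curve and its
# parabolic asymptote tends to 0 as x tends to +oo and to -oo.
from sympy import symbols, limit, simplify, oo
x = symbols('x')
f = (x**3 + 2*x**2 + 3*x + 4) / x
parabola = x**2 + 2*x + 3
print(simplify(f - parabola))          # 4/x
print(limit(f - parabola, x, oo))      # 0
print(limit(f - parabola, x, -oo))     # 0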
Asymptotes and curve sketching
Asymptotes are used in procedures of curve sketching. An asymptote serves as a guide line to show the behavior of the curve towards infinity. In order to get better approximations of the curve, curvilinear asymptotes have also been used although the term asymptotic curve seems to be preferred.
Algebraic curves
The asymptotes of an algebraic curve in the affine plane are the lines that are tangent to the projectivized curve through a point at infinity. For example, one may identify the asymptotes to the unit hyperbola in this manner. Asymptotes are often considered only for real curves, although they also make sense when defined in this way for curves over an arbitrary field.
A plane curve of degree n intersects its asymptote at most at n−2 other points, by Bézout's theorem, as the intersection at infinity is of multiplicity at least two. For a conic, there are a pair of lines that do not intersect the conic at any complex point: these are the two asymptotes of the conic.
A plane algebraic curve is defined by an equation of the form P(x,y) = 0 where P is a polynomial of degree n,
P(x, y) = Pn(x, y) + Pn−1(x, y) + ⋯ + P1(x, y) + P0,
where Pk is homogeneous of degree k. Vanishing of the linear factors of the highest degree term Pn defines the asymptotes of the curve: setting Q = Pn, if ax − by is a linear factor of Pn, then the line
Q′x(b, a)·x + Q′y(b, a)·y + Pn−1(b, a) = 0
is an asymptote if Q′x(b, a) and Q′y(b, a) (the partial derivatives of Q evaluated at (b, a)) are not both zero. If Q′x(b, a) = Q′y(b, a) = 0 and Pn−1(b, a) ≠ 0, there is no asymptote, but the curve has a branch that looks like a branch of a parabola. Such a branch is called a parabolic branch, even when it does not have any parabola that is a curvilinear asymptote. If Q′x(b, a) = Q′y(b, a) = Pn−1(b, a) = 0, the curve has a singular point at infinity, which may have several asymptotes or parabolic branches.
Over the complex numbers, Pn splits into linear factors, each of which defines an asymptote (or several for multiple factors). Over the reals, Pn splits into factors that are linear or quadratic. Only the linear factors correspond to infinite (real) branches of the curve, but if a linear factor has multiplicity greater than one, the curve may have several asymptotes or parabolic branches. It may also occur that such a multiple linear factor corresponds to two complex conjugate branches, and does not correspond to any infinite branch of the real curve. For example, the curve x⁴ + y² − 1 = 0 has no real points outside the square |x| ≤ 1, |y| ≤ 1, but its highest order term gives the linear factor x with multiplicity 4, leading to the unique asymptote x = 0.
Asymptotic cone
The hyperbola
x²/a² − y²/b² = 1
has the two asymptotes
y = (b/a)x and y = −(b/a)x.
The equation for the union of these two lines is
x²/a² − y²/b² = 0.
Similarly, the hyperboloid
x²/a² + y²/b² − z²/c² = 1
is said to have the asymptotic cone
x²/a² + y²/b² − z²/c² = 0.
The distance between the hyperboloid and cone approaches 0 as the distance from the origin approaches infinity.
More generally, consider a surface that has an implicit equation
Pd(x, y, z) + Pd−2(x, y, z) + ⋯ + P1(x, y, z) + P0 = 0,
where the Pi are homogeneous polynomials of degree i and Pd−1 = 0. Then the equation Pd(x, y, z) = 0 defines a cone which is centered at the origin. It is called an asymptotic cone, because the distance to the cone of a point of the surface tends to zero when the point on the surface tends to infinity.
See also
Big O notation
References
General references
Specific references
External links
Hyperboloid and Asymptotic Cone, string surface model, 1872 from the Science Museum
Mathematical analysis
Analytic geometry
|
https://en.wikipedia.org/wiki/Arithmetic
|
Arithmetic is an elementary part of mathematics that consists of the study of the properties of the traditional operations on numbers—addition, subtraction, multiplication, division, exponentiation, and extraction of roots. In the 19th century, Italian mathematician Giuseppe Peano formalized arithmetic with his Peano axioms, which are highly important to the field of mathematical logic today.
History
The prehistory of arithmetic is limited to a small number of artifacts that may indicate the conception of addition and subtraction; the best-known is the Ishango bone from central Africa, dating from somewhere between 20,000 and 18,000 BC, although its interpretation is disputed.
The earliest written records indicate the Egyptians and Babylonians used all the elementary arithmetic operations: addition, subtraction, multiplication, and division, as early as 2000 BC. These artifacts do not always reveal the specific process used for solving problems, but the characteristics of the particular numeral system strongly influence the complexity of the methods. The hieroglyphic system for Egyptian numerals, like the later Roman numerals, descended from tally marks used for counting. In both cases, this origin resulted in values that used a decimal base but did not include positional notation. Complex calculations with Roman numerals required the assistance of a counting board (or the Roman abacus) to obtain the results.
Early number systems that included positional notation were not decimal; these include the sexagesimal (base 60) system for Babylonian numerals and the vigesimal (base 20) system that defined Maya numerals. Because of the place-value concept, the ability to reuse the same digits for different values contributed to simpler and more efficient methods of calculation.
The continuous historical development of modern arithmetic starts with the Hellenistic period of ancient Greece; it originated much later than the Babylonian and Egyptian examples. Prior to the works of Euclid around 300 BC, Greek studies in mathematics overlapped with philosophical and mystical beliefs. Nicomachus is an example of this viewpoint, using the earlier Pythagorean approach to numbers and their relationships to each other in his work, Introduction to Arithmetic.
Greek numerals were used by Archimedes, Diophantus, and others in a positional notation not very different from modern notation. The ancient Greeks lacked a symbol for zero until the Hellenistic period, and they used three separate sets of symbols as digits: one set for the units place, one for the tens place, and one for the hundreds. For the thousands place, they would reuse the symbols for the units place, and so on. Their addition algorithm was identical to the modern method, and their multiplication algorithm was only slightly different. Their long division algorithm was the same, and the digit-by-digit square root algorithm, popularly used as recently as the 20th century, was known to Archimedes (who may have invented it). He preferred it to Hero's method of successive approximation because, once computed, a digit does not change, and the square roots of perfect squares, such as 7485696, terminate immediately as 2736. For numbers with a fractional part, such as 546.934, they used negative powers of 60 instead of negative powers of 10 for the fractional part 0.934.
The ancient Chinese had advanced arithmetic studies dating from the Shang Dynasty and continuing through the Tang Dynasty, from basic numbers to advanced algebra. The ancient Chinese used a positional notation similar to that of the Greeks. Since they also lacked a symbol for zero, they had one set of symbols for the units place and a second set for the tens place. For the hundreds place, they then reused the symbols for the units place, and so on. Their symbols were based on ancient counting rods. The exact time when the Chinese started calculating with positional representation is unknown, though it is known that the adoption started before 400 BC. The ancient Chinese were the first to meaningfully discover, understand, and apply negative numbers. This is explained in the Nine Chapters on the Mathematical Art (Jiuzhang Suanshu), which was written by Liu Hui and dates back to the 2nd century BC.
The gradual development of the Hindu–Arabic numeral system independently devised the place-value concept and positional notation, which combined the simpler methods for computations with a decimal base and the use of a digit representing 0. This allowed the system to consistently represent both large and small integers—an approach that eventually replaced all other systems. In the early 6th century AD, the Indian mathematician Aryabhata incorporated an existing version of this system into his work and experimented with different notations. In the 7th century, Brahmagupta established the use of 0 as a separate number and determined the results for multiplication, division, addition, and subtraction of zero and all other numbers—except for the result of division by zero. His contemporary, the Syriac bishop Severus Sebokht (650 AD) said, "Indians possess a method of calculation that no word can praise enough. Their rational system of mathematics, or of their method of calculation. I mean the system using nine symbols." The Arabs also learned this new method and called it hesab.
Although the Codex Vigilanus described an early form of Arabic numerals (omitting 0) by 976 AD, Leonardo of Pisa (Fibonacci) was primarily responsible for spreading their use throughout Europe after the publication of his book in 1202. He wrote, "The method of the Indians (Latin Modus Indorum) surpasses any known method to compute. It's a marvelous method. They do their computations using nine figures and symbol zero".
In the Middle Ages, arithmetic was one of the seven liberal arts taught in universities.
The flourishing of algebra in the medieval Islamic world, and also in Renaissance Europe, was an outgrowth of the enormous simplification of computation through decimal notation.
Various types of tools have been invented and widely used to assist in numeric calculations. Before the Renaissance, these were various types of abaci. More recent examples include slide rules, nomograms and mechanical calculators, such as Pascal's calculator. At present, they have been supplanted by electronic calculators and computers.
Arithmetic operations
The basic arithmetic operations are addition, subtraction, multiplication and division, although arithmetic also includes more advanced operations, such as manipulations of percentages, square roots, exponentiation, logarithmic functions, and even trigonometric functions, in the same vein as logarithms (prosthaphaeresis). Arithmetic expressions must be evaluated according to the intended sequence of operations. There are several methods to specify this: the most common, together with infix notation, is to use parentheses explicitly and rely on precedence rules; alternatively, a prefix or postfix notation uniquely fixes the order of execution by itself. Any set of objects upon which all four arithmetic operations (except division by zero) can be performed, and where these four operations obey the usual laws (including distributivity), is called a field.
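Postfix (reverse Polish) notation needs neither parentheses nor precedence rules, because the order of execution can be read directly from the expression. The following is a small illustrative Python evaluator (not taken from any cited source) for postfix expressions over the four basic operations.

# An illustrative evaluator for postfix (reverse Polish) notation.
# "3 4 2 * +" means 3 + (4 * 2); no parentheses or precedence rules are needed.
def eval_postfix(expression):
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for token in expression.split():
        if token in ops:
            b = stack.pop()              # operands come off the stack in reverse order
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()

print(eval_postfix("3 4 2 * +"))         # 11.0, i.e. 3 + (4 * 2)
print(eval_postfix("3 4 + 2 *"))         # 14.0, i.e. (3 + 4) * 2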
Addition
Addition, denoted by the symbol +, is the most basic operation of arithmetic. In its simple form, addition combines two numbers, the addends or terms, into a single number, the sum of the numbers (such as 2 + 2 = 4 or 3 + 5 = 8).
Adding finitely many numbers can be viewed as repeated simple addition; this procedure is known as summation, a term also used to denote the definition for "adding infinitely many numbers" in an infinite series. Repeated addition of the number 1 is the most basic form of counting; the result of adding 1 is usually called the successor of the original number.
Addition is commutative and associative, so the order in which finitely many terms are added does not matter.
The number 0 has the property that, when added to any number, it yields that same number; so, it is the identity element of addition, or the additive identity.
For every number x, there is a number denoted −x, called the opposite of x, such that x + (−x) = 0 and (−x) + x = 0. So, the opposite of x is the inverse of x with respect to addition, or the additive inverse of x. For example, the opposite of 7 is −7, since 7 + (−7) = 0.
Addition can also be interpreted geometrically, as in the following example.
If we have two sticks of lengths 2 and 5, then, if the sticks are aligned one after the other, the length of the combined stick becomes 7, since 2 + 5 = 7.
Subtraction
Subtraction, denoted by the symbol −, is the inverse operation to addition. Subtraction finds the difference between two numbers, the minuend minus the subtrahend: D = M − S. Resorting to the previously established addition, this is to say that the difference is the number D that, when added to the subtrahend S, results in the minuend M: D + S = M.
For positive arguments M and S it holds that:
If the minuend is larger than the subtrahend, the difference is positive.
If the minuend is smaller than the subtrahend, the difference is negative.
In any case, if minuend and subtrahend are equal, the difference D = 0.
Subtraction is neither commutative nor associative. For that reason, the construction of this inverse operation in modern algebra is often discarded in favor of introducing the concept of inverse elements (as sketched above under Addition), where subtraction is regarded as adding the additive inverse of the subtrahend to the minuend, that is, M − S = M + (−S). The immediate price of discarding the binary operation of subtraction is the introduction of the (trivial) unary operation, delivering the additive inverse for any given number, and losing the immediate access to the notion of difference, which is potentially misleading when negative arguments are involved.
For any representation of numbers, there are methods for calculating results, and some of them are particularly advantageous in that a procedure that exists for one operation can, with small alterations, be exploited for others as well. For example, digital computers can reuse existing adding-circuitry and save additional circuits for implementing a subtraction, by employing the method of two's complement for representing the additive inverses, which is extremely easy to implement in hardware (negation). The trade-off is the halving of the number range for a fixed word length.
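For a fixed word length, the two's complement of a number is obtained by inverting its bits and adding one, so a subtraction a − b can reuse the adder as a + (two's complement of b). Below is a minimal illustrative Python sketch for an assumed 8-bit word.

# A minimal sketch of subtraction via two's complement on an assumed 8-bit word.
BITS = 8
MASK = (1 << BITS) - 1               # 0xFF for 8 bits

def twos_complement(n):
    return ((~n) + 1) & MASK         # invert the bits, add one, keep 8 bits

def subtract(a, b):
    # Reuses addition: a - b is computed as a + (-b) modulo 2**BITS.
    return (a + twos_complement(b)) & MASK

print(subtract(200, 55))             # 145
print(twos_complement(1))            # 255, the 8-bit pattern for -1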
A formerly widespread method to achieve a correct change amount, knowing the due and given amounts, is the counting up method, which does not explicitly generate the value of the difference. Suppose an amount P is given in order to pay the required amount Q, with P greater than Q. Rather than explicitly performing the subtraction P − Q = C and counting out that amount C in change, money is counted out starting with the successor of Q, and continuing in the steps of the currency, until P is reached. Although the amount counted out must equal the result of the subtraction P − Q, the subtraction was never really done and the value of P − Q is not supplied by this method.
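The counting-up method can be imitated in code: change is counted out from the amount due up to the amount given, without ever forming the difference explicitly. The sketch below is illustrative Python with assumed coin denominations in cents.

# An illustrative sketch of the counting-up method for making change.
DENOMINATIONS = [100, 25, 10, 5, 1]      # assumed coin values, in cents

def count_up_change(due, given):
    coins = []
    current = due
    while current < given:
        # Hand out the largest coin that does not overshoot the amount given.
        coin = next(c for c in DENOMINATIONS if current + c <= given)
        coins.append(coin)
        current += coin
    return coins

print(count_up_change(due=263, given=500))   # [100, 100, 25, 10, 1, 1]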
Multiplication
Multiplication, denoted by the symbols × or ·, is the second basic operation of arithmetic. Multiplication also combines two numbers into a single number, the product. The two original numbers are called the multiplier and the multiplicand; usually both are simply called factors.
Multiplication may be viewed as a scaling operation. If the numbers are imagined as lying in a line, multiplication by a number greater than 1, say x, is the same as stretching everything away from 0 uniformly, in such a way that the number 1 itself is stretched to where x was. Similarly, multiplying by a number less than 1 can be imagined as squeezing towards 0, in such a way that 1 goes to the multiplicand.
Another view on multiplication of integer numbers (extendable to rationals but not very accessible for real numbers) is by considering it as repeated addition. For example, 3 × 4 corresponds to either adding 3 times a 4, or 4 times a 3, giving the same result. There are differing opinions on the value of these paradigms in mathematics education.
Multiplication is commutative and associative; further, it is distributive over addition and subtraction. The multiplicative identity is 1, since multiplying any number by 1 yields that same number. The multiplicative inverse for any number except 0 is the reciprocal of this number, because multiplying the reciprocal of any number by the number itself yields the multiplicative identity 1. 0 is the only number without a multiplicative inverse, and the result of multiplying any number and 0 is again 0. One says that 0 is not contained in the multiplicative group of the numbers.
The product of a and b is written as a × b or a·b. It can also be written by simple juxtaposition: ab. In computer programming languages and software packages (in which one can only use characters normally found on a keyboard), it is often written with an asterisk: a * b.
Algorithms implementing the operation of multiplication for various representations of numbers are by far more costly and laborious than those for addition. Those accessible for manual computation either rely on breaking down the factors to single place values and applying repeated addition, or on employing tables or slide rules, thereby mapping multiplication to addition and vice versa. These methods are outdated and are gradually replaced by mobile devices. Computers use diverse sophisticated and highly optimized algorithms, to implement multiplication and division for the various number formats supported in their system.
Division
Division, denoted by the symbols ÷ or /, is essentially the inverse operation to multiplication. Division finds the quotient of two numbers, the dividend divided by the divisor. Under common rules, dividend divided by zero is undefined. For distinct positive numbers, if the dividend is larger than the divisor, the quotient is greater than 1, otherwise it is less than or equal to 1 (a similar rule applies for negative numbers). The quotient multiplied by the divisor always yields the dividend.
Division is neither commutative nor associative. So, as explained above for subtraction, the construction of the division in modern algebra is discarded in favor of constructing the inverse elements with respect to multiplication, as introduced under Multiplication. Hence division is the multiplication of the dividend with the reciprocal of the divisor as factors, that is, a ÷ b = a × 1/b.
Within the natural numbers, there is also a different but related notion called Euclidean division, which outputs two numbers after "dividing" a natural N (numerator) by a natural D (denominator): first a natural Q (quotient), and second a natural R (remainder) such that N = D × Q + R and 0 ≤ R < D.
In some contexts, including computer programming and advanced arithmetic, division is extended with another output for the remainder. This is often treated as a separate operation, the modulo operation, denoted by the symbol % or the word mod, though sometimes as a second output of one "divmod" operation. In either case, modular arithmetic has a variety of use cases. Different implementations of division (floored, truncated, Euclidean, etc.) correspond with different implementations of modulus.
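In Python, for example, the built-in divmod function returns the quotient and the remainder in a single call, and the different sign conventions show how floored and truncated division lead to different modulus results. A short illustrative sketch:

# A short sketch of quotient-and-remainder output and modulo conventions.
q, r = divmod(17, 5)
print(q, r)                  # 3 2, since 17 = 5*3 + 2

# Python's // and % are floored: the remainder takes the sign of the divisor.
print(-17 // 5, -17 % 5)     # -4 3, since -17 = 5*(-4) + 3

# Truncated division (as in C) gives quotient -3 and remainder -2 instead.
import math
q_trunc = math.trunc(-17 / 5)
print(q_trunc, -17 - 5 * q_trunc)   # -3 -2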
Fundamental theorem of arithmetic
The fundamental theorem of arithmetic states that any integer greater than 1 has a unique prime factorization (a representation of a number as the product of prime factors), excluding the order of the factors. For example, 252 only has one prime factorization:
252 = 2² × 3² × 7
Euclid's Elements first introduced this theorem, and gave a partial proof (which is called Euclid's lemma). The fundamental theorem of arithmetic was first proven by Carl Friedrich Gauss.
The fundamental theorem of arithmetic is one of the reasons why 1 is not considered a prime number. Other reasons include the sieve of Eratosthenes, and the definition of a prime number itself (a natural number greater than 1 that cannot be formed by multiplying two smaller natural numbers.).
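A prime factorization can be computed by trial division. The following is a simple illustrative Python sketch (a library routine such as SymPy's factorint would give the same result), applied to the example 252.

# An illustrative trial-division factorization; 252 = 2^2 * 3^2 * 7.
def prime_factorization(n):
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                          # whatever is left over is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(prime_factorization(252))        # {2: 2, 3: 2, 7: 1}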
Decimal arithmetic
Decimal notation refers exclusively, in common use, to the written numeral system employing Arabic numerals as the digits for a radix 10 ("decimal") positional notation; however, any numeral system based on powers of 10, e.g., Greek, Cyrillic, Roman, or Chinese numerals may conceptually be described as "decimal notation" or "decimal representation".
Modern methods for four fundamental operations (addition, subtraction, multiplication and division) were first devised by Brahmagupta of India. This was known during medieval Europe as "Modus Indorum" or Method of the Indians. Positional notation (also known as "place-value notation") refers to the representation or encoding of numbers using the same symbol for the different orders of magnitude (e.g., the "ones place", "tens place", "hundreds place") and, with a radix point, using those same symbols to represent fractions (e.g., the "tenths place", "hundredths place"). For example, 507.36 denotes 5 hundreds (10²), plus 0 tens (10¹), plus 7 units (10⁰), plus 3 tenths (10⁻¹) plus 6 hundredths (10⁻²).
The concept of 0 as a number comparable to the other basic digits is essential to this notation, as is the concept of 0's use as a placeholder, and as is the definition of multiplication and addition with 0. The use of 0 as a placeholder and, therefore, the use of a positional notation is first attested to in the Jain text from India entitled the Lokavibhâga, dated 458 AD and it was only in the early 13th century that these concepts, transmitted via the scholarship of the Arabic world, were introduced into Europe by Fibonacci using the Hindu–Arabic numeral system.
Algorism comprises all of the rules for performing arithmetic computations using this type of written numeral. For example, addition produces the sum of two arbitrary numbers. The result is calculated by the repeated addition of single digits from each number that occupies the same position, proceeding from right to left. An addition table with ten rows and ten columns displays all possible values for each sum. If an individual sum exceeds the value 9, the result is represented with two digits. The rightmost digit is the value for the current position, and the result for the subsequent addition of the digits to the left increases by the value of the second (leftmost) digit, which is always one (if not zero). This adjustment is termed a carry of the value 1.
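The right-to-left procedure with a carry of 1 translates directly into code. The following illustrative Python sketch adds two numbers given as digit strings, using only single-digit additions and a carry.

# An illustrative sketch of written addition: right to left, one column at a time,
# carrying 1 whenever a column sum exceeds 9.
def add_decimal_strings(a, b):
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry = 0
    digits = []
    for da, db in zip(reversed(a), reversed(b)):   # proceed from right to left
        column = int(da) + int(db) + carry
        digits.append(str(column % 10))            # rightmost digit of the column sum
        carry = column // 10                       # the carry is always 0 or 1
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_decimal_strings("964", "87"))            # 1051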
The process for multiplying two arbitrary numbers is similar to the process for addition. A multiplication table with ten rows and ten columns lists the results for each pair of digits. If an individual product of a pair of digits exceeds 9, the carry adjustment increases the result of any subsequent multiplication from digits to the left by a value equal to the second (leftmost) digit, which is any value from 1 to 8 (since 9 × 9 = 81). Additional steps define the final result.
Similar techniques exist for subtraction and division.
The creation of a correct process for multiplication relies on the relationship between values of adjacent digits. The value for any single digit in a numeral depends on its position. Also, each position to the left represents a value ten times larger than the position to the right. In mathematical terms, the exponent for the radix (base) of 10 increases by 1 (to the left) or decreases by 1 (to the right). Therefore, the value for any arbitrary digit is multiplied by a value of the form 10ⁿ with integer n. The list of values corresponding to all possible positions for a single digit is written as {..., 10³, 10², 10¹, 10⁰, 10⁻¹, 10⁻², 10⁻³, ...}.
Repeated multiplication of any value in this list by 10 produces another value in the list. In mathematical terminology, this characteristic is defined as closure, and the previous list is described as closed under multiplication. It is the basis for correctly finding the results of multiplication using the previous technique. This outcome is one example of the uses of number theory.
Compound unit arithmetic
Compound unit arithmetic is the application of arithmetic operations to mixed radix quantities such as feet and inches; gallons and pints; pounds, shillings and pence; and so on. Before decimal-based systems of money and units of measure, compound unit arithmetic was widely used in commerce and industry.
Basic arithmetic operations
The techniques used in compound unit arithmetic were developed over many centuries and are well documented in many textbooks in many different languages. In addition to the basic arithmetic functions encountered in decimal arithmetic, compound unit arithmetic employs three more functions:
Reduction, in which a compound quantity is reduced to a single quantity—for example, conversion of a distance expressed in yards, feet and inches to one expressed in inches.
Expansion, the inverse function to reduction, is the conversion of a quantity that is expressed as a single unit of measure to a compound unit, such as expanding 24 oz to 1 lb 8 oz.
Normalization is the conversion of a set of compound units to a standard form—for example, rewriting "1 ft 13 in" as "2 ft 1 in".
Knowledge of the relationship between the various units of measure, their multiples and their submultiples forms an essential part of compound unit arithmetic.
Principles of compound unit arithmetic
There are two basic approaches to compound unit arithmetic:
The reduction–expansion method, where all the compound unit variables are reduced to single unit variables, the calculation performed and the result expanded back to compound units. This approach is suited for automated calculations. A typical example is the handling of time by Microsoft Excel where all time intervals are processed internally as days and decimal fractions of a day.
The ongoing normalization method, in which each unit is treated separately and the problem is continuously normalized as the solution develops. This approach, which is widely described in classical texts, is best suited for manual calculations. An example of the ongoing normalization method as applied to addition is described below.
The addition operation is carried out from right to left; in this case, pence are processed first, then shillings followed by pounds. The numbers below the "answer line" are intermediate results.
The total in the pence column is 25. Since there are 12 pennies in a shilling, 25 is divided by 12 to give 2 with a remainder of 1. The value "1" is then written to the answer row and the value "2" carried forward to the shillings column. This operation is repeated using the values in the shillings column, with the additional step of adding the value that was carried forward from the pennies column. The intermediate total is divided by 20 as there are 20 shillings in a pound. The pound column is then processed, but as pounds are the largest unit that is being considered, no values are carried forward from the pounds column.
For the sake of simplicity, the example chosen did not have farthings.
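The same carrying idea, with 12 pence to the shilling and 20 shillings to the pound, can be written as a short routine. The sketch below is illustrative Python; the amounts used are made up for the example.

# An illustrative sketch of ongoing normalization for pounds, shillings and pence:
# 12 pence = 1 shilling, 20 shillings = 1 pound.
def add_lsd(amounts):
    pounds = shillings = pence = 0
    for l, s, d in amounts:
        pounds, shillings, pence = pounds + l, shillings + s, pence + d
    shillings += pence // 12           # carry from the pence column
    pence %= 12
    pounds += shillings // 20          # carry from the shillings column
    shillings %= 20
    return pounds, shillings, pence

# Hypothetical amounts: 3 pounds 7s 9d plus 1 pound 15s 10d gives 5 pounds 3s 7d.
print(add_lsd([(3, 7, 9), (1, 15, 10)]))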
Operations in practice
During the 19th and 20th centuries various aids were developed to aid the manipulation of compound units, particularly in commercial applications. The most common aids were mechanical tills which were adapted in countries such as the United Kingdom to accommodate pounds, shillings, pence and farthings, and ready reckoners, which are books aimed at traders that catalogued the results of various routine calculations such as the percentages or multiples of various sums of money. One typical booklet that ran to 150 pages tabulated multiples "from one to ten thousand at the various prices from one farthing to one pound".
The cumbersome nature of compound unit arithmetic has been recognized for many years—in 1586, the Flemish mathematician Simon Stevin published a small pamphlet called De Thiende ("the tenth") in which he declared the universal introduction of decimal coinage, measures, and weights to be merely a question of time. In the modern era, many conversion programs, such as that included in the Microsoft Windows 7 operating system calculator, display compound units in a reduced decimal format rather than using an expanded format (e.g., "2.5 ft" is displayed rather than "2 ft 6 in").
Number theory
Until the 19th century, number theory was a synonym of "arithmetic". The addressed problems were directly related to the basic operations and concerned primality, divisibility, and the solution of equations in integers, such as Fermat's Last Theorem. It appeared that most of these problems, although very elementary to state, are very difficult and may not be solved without very deep mathematics involving concepts and methods from many other branches of mathematics. This led to new branches of number theory such as analytic number theory, algebraic number theory, Diophantine geometry and arithmetic algebraic geometry. Wiles' proof of Fermat's Last Theorem is a typical example of the necessity of sophisticated methods, which go far beyond the classical methods of arithmetic, for solving problems that can be stated in elementary arithmetic.
Arithmetic in education
Primary education in mathematics often places a strong focus on algorithms for the arithmetic of natural numbers, integers, fractions, and decimals (using the decimal place-value system). This study is sometimes known as algorism.
The difficulty and unmotivated appearance of these algorithms has long led educators to question this curriculum, advocating the early teaching of more central and intuitive mathematical ideas. One notable movement in this direction was the New Math of the 1960s and 1970s, which attempted to teach arithmetic in the spirit of axiomatic development from set theory, an echo of the prevailing trend in higher mathematics.
Also, arithmetic was used by Islamic Scholars in order to teach application of the rulings related to Zakat and Irth. This was done in a book entitled The Best of Arithmetic by Abd-al-Fattah-al-Dumyati. The book begins with the foundations of mathematics and proceeds to its application in the later chapters.
See also
Lists of mathematics topics
Outline of arithmetic
Slide rule
Related topics
Addition of natural numbers
Additive inverse
Arithmetic coding
Arithmetic mean
Arithmetic number
Arithmetic progression
Arithmetic properties
Associativity
Commutativity
Distributivity
Elementary arithmetic
Finite field arithmetic
Geometric progression
Integer
List of important publications in mathematics
Lunar arithmetic
Mental calculation
Number line
Plant arithmetic
Notes
References
External links
MathWorld article about arithmetic
The New Student's Reference Work/Arithmetic (historical)
The Great Calculation According to the Indians, of Maximus Planudes – an early Western work on arithmetic at Convergence
Mathematics education
|
https://en.wikipedia.org/wiki/Afterglow
|
An afterglow in meteorology consists of several atmospheric optical phenomena, with a general definition as a broad arch of whitish or pinkish sunlight in the twilight sky, consisting of the bright segment and the purple light. The purple light mainly occurs when the Sun is 2–6° below the horizon, from civil to nautical twilight, while the bright segment lasts until the end of nautical twilight. Afterglow is often discussed in connection with volcanic eruptions, in which case its purple light is treated as a distinct volcanic purple light; in such occurrences it is light scattered by fine particulates, like dust, suspended in the atmosphere. In the case of alpenglow, which is similar to the Belt of Venus, afterglow refers in general to the golden-red glowing light from the sunset and sunrise reflected in the sky, and in particular to its last stage, when the purple light is reflected. The opposite of an afterglow is a foreglow, which occurs before sunrise.
Around civil twilight, during the golden hour, the sunlight that reaches Earth is dominated by its low-energy, low-frequency red component.
During this part of civil twilight, after sunset or before sunrise, the red sunlight remains visible through scattering by particles in the air. Backscattering, possibly after reflection off clouds or high snowfields in mountain regions, furthermore creates a reddish to pinkish light. The high-energy, high-frequency components of light towards blue are scattered out more broadly, producing the broader blue light of nautical twilight before or after the reddish light of civil twilight; in combination with the reddish light, this produces the purple light. This period in which blue dominates is referred to as the blue hour and is, like the golden hour, widely treasured by photographers and painters.
After the 1883 eruption of the volcano Krakatoa, a remarkable series of red sunsets appeared worldwide. An enormous amount of exceedingly fine dust was blown to a great height by the volcano's explosion, and then globally diffused by the high atmospheric winds. Edvard Munch's painting The Scream possibly depicts an afterglow during this period.
See also
Airglow
Belt of Venus
Earth's shadow
Gegenschein
Red sky at morning
Sunset
References
External links
Atmospheric optical phenomena
|
https://en.wikipedia.org/wiki/Amygdalin
|
Amygdalin (from Ancient Greek: 'almond') is a naturally occurring chemical compound found in many plants, most notably in the seeds (kernels) of apricots, bitter almonds, apples, peaches, cherries and plums, and in the roots of manioc.
Amygdalin is classified as a cyanogenic glycoside, because each amygdalin molecule includes a nitrile group, which can be released as the toxic cyanide anion by the action of a beta-glucosidase. Eating amygdalin will cause it to release cyanide in the human body, and may lead to cyanide poisoning.
Since the early 1950s, both amygdalin and a chemical derivative named laetrile have been promoted as alternative cancer treatments, often under the misnomer vitamin B17 (neither amygdalin nor laetrile is a vitamin). Scientific study has found them to not only be clinically ineffective in treating cancer, but also potentially toxic or lethal when taken by mouth due to cyanide poisoning. The promotion of laetrile to treat cancer has been described in the medical literature as a canonical example of quackery, and as "the slickest, most sophisticated, and certainly the most remunerative cancer quack promotion in medical history".
Chemistry
Amygdalin is a cyanogenic glycoside derived from the aromatic amino acid phenylalanine. Amygdalin and prunasin are common among plants of the family Rosaceae, particularly the genus Prunus, Poaceae (grasses), Fabaceae (legumes), and in other food plants, including flaxseed and manioc. Within these plants, amygdalin and the enzymes necessary to hydrolyze it are stored in separate locations, and only mix as a result of tissue damage. This provides a natural defense system.
Amygdalin is contained in stone fruit kernels, such as almonds, apricot (14 g/kg), peach (6.8 g/kg), and plum (4–17.5 g/kg depending on variety), and also in the seeds of the apple (3 g/kg). Benzaldehyde released from amygdalin provides a bitter flavor. Because of a difference in a recessive gene called Sweet kernel [Sk], much less amygdalin is present in nonbitter (or sweet) almonds than in bitter almonds. In one study, bitter almond amygdalin concentrations ranged from 33 to 54 g/kg depending on variety; semibitter varieties averaged 1 g/kg and sweet varieties averaged 0.063 g/kg, with significant variability based on variety and growing region.
For one method of isolating amygdalin, the stones are removed from the fruit and cracked to obtain the kernels, which are dried in the sun or in ovens. The kernels are boiled in ethanol; on evaporation of the solution and the addition of diethyl ether, amygdalin is precipitated as minute white crystals. Natural amygdalin has the (R)-configuration at the chiral phenyl center. Under mild basic conditions, this stereogenic center isomerizes; the (S)-epimer is called neoamygdalin. Although the synthesized version of amygdalin is the (R)-epimer, the stereogenic center attached to the nitrile and phenyl groups easily epimerizes if the manufacturer does not store the compound correctly.
Amygdalin is hydrolyzed by intestinal β-glucosidase (emulsin) and amygdalin beta-glucosidase (amygdalase) to give gentiobiose and L-mandelonitrile. Gentiobiose is further hydrolyzed to give glucose, whereas mandelonitrile (the cyanohydrin of benzaldehyde) decomposes to give benzaldehyde and hydrogen cyanide. Hydrogen cyanide in sufficient quantities (allowable daily intake: ~0.6 mg) causes cyanide poisoning which has a fatal oral dose range of 0.6–1.5 mg/kg of body weight.
Laetrile
Laetrile (patented 1961) is a simpler semisynthetic derivative of amygdalin. Laetrile is synthesized from amygdalin by hydrolysis. The usual preferred commercial source is from apricot kernels (Prunus armeniaca). The name is derived from the separate words "laevorotatory" and "mandelonitrile". Laevorotatory describes the stereochemistry of the molecule, while mandelonitrile refers to the portion of the molecule from which cyanide is released by decomposition.
A 500 mg laetrile tablet may contain between 2.5 and 25 mg of hydrogen cyanide.
Like amygdalin, laetrile is hydrolyzed in the duodenum (alkaline) and in the intestine (enzymatically) to D-glucuronic acid and L-mandelonitrile; the latter hydrolyzes to benzaldehyde and hydrogen cyanide, that in sufficient quantities causes cyanide poisoning.
Claims for laetrile were based on three different hypotheses: The first hypothesis proposed that cancerous cells contained copious beta-glucosidases, which release HCN from laetrile via hydrolysis. Normal cells were reportedly unaffected, because they contained low concentrations of beta-glucosidases and high concentrations of rhodanese, which converts HCN to the less toxic thiocyanate. Later, however, it was shown that both cancerous and normal cells contain only trace amounts of beta-glucosidases and similar amounts of rhodanese.
The second proposed that, after ingestion, amygdalin was hydrolyzed to mandelonitrile, transported intact to the liver and converted to a beta-glucuronide complex, which was then carried to the cancerous cells, hydrolyzed by beta-glucuronidases to release mandelonitrile and then HCN. Mandelonitrile, however, dissociates to benzaldehyde and hydrogen cyanide, and cannot be stabilized by glycosylation.
Finally, the third asserted that laetrile is the discovered vitamin B-17, and further suggested that cancer is a result of "B-17 deficiency". It postulated that regular dietary administration of this form of laetrile would, therefore, actually prevent all incidences of cancer. There is no evidence supporting this conjecture in the form of a physiologic process, nutritional requirement, or identification of any deficiency syndrome. The term "vitamin B-17" is not recognized by the Committee on Nomenclature of the American Institute of Nutrition. Ernst T. Krebs (not to be confused with Hans Adolf Krebs, the discoverer of the citric acid cycle) branded laetrile as a vitamin in order to have it classified as a nutritional supplement rather than as a pharmaceutical.
History of laetrile
Early usage
Amygdalin was first isolated in 1830 from bitter almond seeds (Prunus dulcis) by Pierre-Jean Robiquet and Antoine Boutron-Charlard. Liebig and Wöhler found three hydrolysis products of amygdalin: sugar, benzaldehyde, and prussic acid (hydrogen cyanide, HCN). Later research showed that sulfuric acid hydrolyzes it into D-glucose, benzaldehyde, and prussic acid; while hydrochloric acid gives mandelic acid, D-glucose, and ammonia.
In 1845 amygdalin was used as a cancer treatment in Russia, and in the 1920s in the United States, but it was considered too poisonous. In the 1950s, a purportedly non-toxic, synthetic form was patented for use as a meat preservative, and later marketed as laetrile for cancer treatment.
The U.S. Food and Drug Administration prohibited the interstate shipment of amygdalin and laetrile in 1977. Thereafter, 27 U.S. states legalized the use of amygdalin within those states.
Subsequent results
In a 1977 controlled, blinded trial, laetrile showed no more activity than placebo.
Subsequently, laetrile was tested on 14 tumor systems without evidence of effectiveness. The Memorial Sloan–Kettering Cancer Center (MSKCC) concluded that "laetrile showed no beneficial effects." Mistakes in an earlier MSKCC press release were highlighted by a group of laetrile proponents led by Ralph Moss, former public affairs official of MSKCC who had been fired following his appearance at a press conference accusing the hospital of covering up the benefits of laetrile. These mistakes were considered scientifically inconsequential, but Nicholas Wade in Science stated that "even the appearance of a departure from strict objectivity is unfortunate." The results from these studies were published all together.
A 2015 systematic review from the Cochrane Collaboration found that claims of benefit from laetrile or amygdalin for cancer patients are not supported by sound clinical data.
The authors also recommended, on ethical and scientific grounds, that no further clinical research into laetrile or amygdalin be conducted.
Given the lack of evidence, laetrile has not been approved by the U.S. Food and Drug Administration or the European Commission.
The U.S. National Institutes of Health evaluated the evidence separately and concluded that clinical trials of amygdalin showed little or no effect against cancer. For example, a 1982 trial by the Mayo Clinic of 175 patients found that tumor size had increased in all but one patient. The authors reported that "the hazards of amygdalin therapy were evidenced in several patients by symptoms of cyanide toxicity or by blood cyanide levels approaching the lethal range."
The study concluded "Patients exposed to this agent should be instructed about the danger of cyanide poisoning, and their blood cyanide levels should be carefully monitored. Amygdalin (Laetrile) is a toxic drug that is not effective as a cancer treatment".
Additionally, "No controlled clinical trials (trials that compare groups of patients who receive the new treatment to groups who do not) of laetrile have been reported."
The side effects of laetrile treatment are the symptoms of cyanide poisoning. These symptoms include: nausea and vomiting, headache, dizziness, cherry red skin color, liver damage, abnormally low blood pressure, droopy upper eyelid, trouble walking due to damaged nerves, fever, mental confusion, coma, and death.
The European Food Safety Authority's Panel on Contaminants in the Food Chain has studied the potential toxicity of the amygdalin in apricot kernels. The Panel reported, "If consumers follow the recommendations of websites that promote consumption of apricot kernels, their exposure to cyanide will greatly exceed" the dose expected to be toxic. The Panel also reported that acute cyanide toxicity had occurred in adults who had consumed 20 or more kernels and that in children "five or more kernels appear to be toxic".
Advocacy and legality of laetrile
Advocates for laetrile assert that there is a conspiracy between the US Food and Drug Administration, the pharmaceutical industry and the medical community, including the American Medical Association and the American Cancer Society, to exploit the American people, and especially cancer patients.
Advocates of the use of laetrile have also changed the rationale for its use, first as a treatment of cancer, then as a vitamin, then as part of a "holistic" nutritional regimen, or as treatment for cancer pain, among others, none of which have any significant evidence supporting its use.
Despite the lack of evidence for its use, laetrile developed a significant following due to its wide promotion as a "pain-free" treatment of cancer as an alternative to surgery and chemotherapy that have significant side effects. The use of laetrile led to a number of deaths.
The FDA and AMA crackdown, begun in the 1970s, effectively escalated prices on the black market, played into the conspiracy narrative and enabled unscrupulous profiteers to foster multimillion-dollar smuggling empires.
Some American cancer patients have traveled to Mexico for treatment with the substance, for example at the Oasis of Hope Hospital in Tijuana. The actor Steve McQueen died in Mexico following surgery to remove a stomach tumor, having previously undergone extended treatment for pleural mesothelioma (a cancer associated with asbestos exposure) under the care of William D. Kelley, a de-licensed dentist and orthodontist who claimed to have devised a cancer treatment involving pancreatic enzymes, 50 daily vitamins and minerals, frequent body shampoos, enemas, and a specific diet as well as laetrile.
Laetrile advocates in the United States include Dean Burk, a former chief chemist of the National Cancer Institute cytochemistry laboratory, and national arm wrestling champion Jason Vale, who falsely claimed that his kidney and pancreatic cancers were cured by eating apricot seeds. Vale was convicted in 2004 for, among other things, fraudulently marketing laetrile as a cancer cure. The court also found that Vale had made at least $500,000 from his fraudulent sales of laetrile.
In the 1970s, court cases in several states challenged the FDA's authority to restrict access to what they claimed are potentially lifesaving drugs. More than twenty states passed laws making the use of laetrile legal. After the unanimous Supreme Court ruling in United States v. Rutherford which established that interstate transport of the compound was illegal, usage fell off dramatically. The US Food and Drug Administration continues to seek jail sentences for vendors marketing laetrile for cancer treatment, calling it a "highly toxic product that has not shown any effect on treating cancer."
In popular culture
The Law & Order episode "Second Opinion" is about a nutritional counselor named "Doctor" Haas giving patients laetrile as a cancer treatment for breast cancer as an alternative to getting a mastectomy.
See also
List of ineffective cancer treatments
Alternative cancer treatments
References
External links
Laetrile/Amygdalin information from the National Cancer Institute (U.S.A.)
Food and Drug Administration Commissioner's Decision on Laetrile
The Rise and Fall of Laetrile
Alternative cancer treatments
Cyanogenic glycosides
Plant toxins
Health fraud
|
https://en.wikipedia.org/wiki/Agar
|
Agar ( or ), or agar-agar, is a jelly-like substance consisting of polysaccharides obtained from the cell walls of some species of red algae, primarily from "ogonori" (Gracilaria) and "tengusa" (Gelidiaceae). As found in nature, agar is a mixture of two components, the linear polysaccharide agarose and a heterogeneous mixture of smaller molecules called agaropectin. It forms the supporting structure in the cell walls of certain species of algae and is released on boiling. These algae are known as agarophytes, belonging to the Rhodophyta (red algae) phylum. The processing of food-grade agar removes the agaropectin, and the commercial product is essentially pure agarose.
Agar has been used as an ingredient in desserts throughout Asia and also as a solid substrate to contain culture media for microbiological work. Agar can be used as a laxative; an appetite suppressant; a vegan substitute for gelatin; a thickener for soups; in fruit preserves, ice cream, and other desserts; as a clarifying agent in brewing; and for sizing paper and fabrics.
Etymology
The word agar comes from agar-agar, the Malay name for red algae (Gigartina, Eucheuma, Gracilaria) from which the jelly is produced. It is also known as Kanten (from the phrase kan-zarashi tokoroten, or "cold-exposed agar"), Japanese isinglass, China grass, Ceylon moss or Jaffna moss. Gracilaria edulis or its synonym G. lichenoides is specifically referred to as agal-agal or Ceylon agar.
History
Macroalgae have been used widely as food by coastal cultures, especially in Southeast Asia. In the Philippines, Gracilaria, known as gulaman (or gulaman dagat) in Tagalog, have been harvested and used as food for centuries, eaten both fresh or sun-dried and turned into jellies. The earliest historical attestation is from the Vocabulario de la lengua tagala (1754) by the Jesuit priests Juan de Noceda and Pedro de Sanlucar, where golaman or gulaman was defined as "una yerva, de que se haze conserva a modo de Halea, naze en la mar" ("an herb, from which a jam-like preserve is made, grows in the sea"), with an additional entry for guinolaman to refer to food made with the jelly.
Carrageenan, derived from gusô (Eucheuma spp.), which also congeals into a gel-like texture, is used similarly among the Visayan peoples and has been recorded in the even earlier Diccionario De La Lengua Bisaya, Hiligueina y Haraia de la isla de Panay y Sugbu y para las demas islas (c. 1637) of the Augustinian missionary Alonso de Méntrida. In the book, Méntrida describes gusô as being cooked until it melts, and then allowed to congeal into a sour dish.
Jelly seaweeds were also favoured and foraged by Malay communities living on the coasts of the Riau Archipelago and Singapore in Southeast Asia for centuries.
The application of agar as a food additive in Japan is alleged to have been discovered in 1658 by Mino Tarōzaemon, an innkeeper in present-day Fushimi-ku, Kyoto, who, according to legend, discarded surplus seaweed soup (tokoroten) and noticed that it gelled after freezing overnight in winter.
Agar was first subjected to chemical analysis in 1859 by the French chemist Anselme Payen, who had obtained agar from the marine algae Gelidium corneum.
Beginning in the late 19th century, agar began to be used as a solid medium for growing various microbes. Agar was first described for use in microbiology in 1882 by the German microbiologist Walther Hesse, an assistant working in Robert Koch's laboratory, on the suggestion of his wife Fanny Hesse. Agar quickly supplanted gelatin as the base of microbiological media, due to its higher melting temperature, allowing microbes to be grown at higher temperatures without the media liquefying.
With its newfound use in microbiology, agar production quickly increased. This production centered on Japan, which produced most of the world's agar until World War II. However, with the outbreak of World War II, many nations were forced to establish domestic agar industries in order to continue microbiological research. Around the time of World War II, approximately 2,500 tons of agar were produced annually. By the mid-1970s, production worldwide had increased dramatically to approximately 10,000 tons each year. Since then, production of agar has fluctuated due to unstable and sometimes over-utilized seaweed populations.
Chemical composition
Agar consists of a mixture of two polysaccharides, agarose and agaropectin, with agarose making up about 70% of the mixture and agaropectin about 30%. Agarose is a linear polymer made up of repeating units of agarobiose, a disaccharide composed of D-galactose and 3,6-anhydro-L-galactopyranose. Agaropectin is a heterogeneous mixture of smaller molecules that occur in lesser amounts, and is made up of alternating units of D-galactose and L-galactose heavily modified with acidic side-groups, such as sulfate, glucuronate, and pyruvate.
Physical properties
Agar exhibits hysteresis: when mixed with water, it forms a gel at approximately 32–40 °C (the gel point) but does not melt again until heated to approximately 85 °C (the melting point). This gap between the gel point and the melting point is the hysteresis, and it provides a suitable balance between easy melting and good gel stability at relatively high temperatures. Since many scientific applications require incubation at temperatures close to human body temperature (37 °C), agar is more appropriate than other solidifying agents that melt at this temperature, such as gelatin.
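As a minimal sketch of this hysteresis window (added here for illustration, not from the source; the gelatin figure of roughly 35 °C is a typical value and an assumption), a gel that has already set stays solid as long as the incubation temperature stays below its melting point:

```python
# Minimal sketch of agar's gel/melt hysteresis window (typical values; see assumptions above).
AGAR_GEL_POINT_C = 40.0       # upper end of the ~32-40 C gelling range quoted above
AGAR_MELT_POINT_C = 85.0      # typical melting temperature quoted above
GELATIN_MELT_POINT_C = 35.0   # approximate value for comparison (assumption)

def stays_solid(melting_point_c: float, incubation_c: float) -> bool:
    """A gel that has already set remains solid while incubated below its melting point."""
    return incubation_c < melting_point_c

INCUBATION_C = 37.0  # human body temperature, common in microbiology
print("agar stays solid at 37 C:", stays_solid(AGAR_MELT_POINT_C, INCUBATION_C))        # True
print("gelatin stays solid at 37 C:", stays_solid(GELATIN_MELT_POINT_C, INCUBATION_C))  # False
```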
Uses
Culinary
Agar-agar is a natural vegetable counterpart to gelatin. It is white and semi-translucent when sold in packages as washed and dried strips or in powdered form. It can be used to make jellies, puddings, and custards. When making jelly, it is boiled in water until the solids dissolve. Sweetener, flavoring, coloring, fruits and/or vegetables are then added, and the liquid is poured into molds to be served as desserts and vegetable aspics, or incorporated into other desserts, such as a layer of jelly in a cake.
Agar-agar is approximately 80% dietary fiber, so it can serve as an intestinal regulator. Its bulking quality has been behind fad diets in Asia, for example the kanten (the Japanese word for agar-agar) diet. Once ingested, kanten absorbs water and triples in size, making consumers feel fuller.
Asian culinary
One use of agar in Japanese cuisine (Wagashi) is anmitsu, a dessert made of small cubes of agar jelly and served in a bowl with various fruits or other ingredients. It is also the main ingredient in mizu yōkan, another popular Japanese food. In Philippine cuisine, it is used to make the jelly bars in the various gulaman refreshments like Sago't Gulaman, Samalamig, or desserts such as buko pandan, agar flan, halo-halo, fruit cocktail jelly, and the black and red gulaman used in various fruit salads. In Vietnamese cuisine, jellies made of flavored layers of agar agar, called thạch, are a popular dessert, and are often made in ornate molds for special occasions. In Indian cuisine, agar is used for making desserts. In Burmese cuisine, a sweet jelly known as kyauk kyaw is made from agar. Agar jelly is widely used in Taiwanese bubble tea.
Other culinary
It can be used as an addition to or as a replacement for pectin in jams and marmalades, as a substitute for gelatin owing to its superior gelling properties, and as a strengthening ingredient in soufflés and custards. Another use of agar-agar is in the Russian dish ptich'ye moloko (bird's milk), a rich jellified custard (or soft meringue) used as a cake filling or chocolate-glazed as individual sweets.
Agar-agar may also be used as the gelling agent in gel clarification, a culinary technique used to clarify stocks, sauces, and other liquids. Mexico has traditional candies made of agar gelatin, most of them in colorful, half-circle shapes resembling a melon or watermelon slice, and commonly covered in sugar. They are known in Spanish as dulces de agar (agar sweets).
Agar-agar is an allowed nonorganic/nonsynthetic additive used as a thickener, gelling agent, texturizer, moisturizer, emulsifier, flavor enhancer, and absorbent in certified organic foods.
Microbiology
Agar plate
An agar plate or Petri dish is used to provide a growth medium using a mix of agar and other nutrients in which microorganisms, including bacteria and fungi, can be cultured and observed under the microscope. Agar is indigestible by many organisms, so microbial growth does not affect the gel and it remains stable. Agar is typically sold commercially as a powder that can be mixed with water and prepared similarly to gelatin before use as a growth medium. Nutrients are typically added to meet the nutritional needs of the microorganism; formulations may be "undefined", where the precise composition is unknown, or "defined", where the exact chemical composition is known. Agar is often dispensed using a sterile media dispenser.
Different algae produce different types of agar, each with properties suited to different purposes. It is the agarose component that causes agar to solidify. Agarose gels melt when heated and solidify again on cooling; because this gelation is reversible, such gels are referred to as "physical gels". In contrast, polyacrylamide polymerization is an irreversible process, and the resulting products are known as chemical gels.
There are a variety of types of agar that support the growth of different microorganisms. A nutrient agar may be permissive, allowing the cultivation of any non-fastidious microorganism; a commonly used nutrient agar for bacteria is Luria Bertani (LB) agar, which contains lysogeny broth, a nutrient-rich medium for bacterial growth. Fastidious organisms may require the addition of biological fluids such as horse or sheep blood, serum, or egg yolk. Agar plates can also be selective, promoting the growth of bacteria of interest while inhibiting others. A variety of chemicals may be added to create an environment favourable for specific types of bacteria, or bacteria with certain properties, but not conducive to the growth of others. For example, antibiotics may be added in cloning experiments so that bacteria carrying an antibiotic-resistance plasmid are selected.
Motility assays
As a gel, an agar or agarose medium is porous and therefore can be used to measure microorganism motility and mobility. The gel's porosity is directly related to the concentration of agarose in the medium, so various levels of effective viscosity (from the cell's "point of view") can be selected, depending on the experimental objectives.
A common identification assay involves culturing a sample of the organism deep within a block of nutrient agar. Cells will attempt to grow within the gel structure. Motile species will be able to migrate, albeit slowly, throughout the gel, and infiltration rates can then be visualized, whereas non-motile species will show growth only along the now-empty path introduced by the invasive initial sample deposition.
Another setup commonly used for measuring chemotaxis and chemokinesis utilizes the under-agarose cell migration assay, whereby a layer of agarose gel is placed between a cell population and a chemoattractant. As a concentration gradient develops from the diffusion of the chemoattractant into the gel, various cell populations requiring different stimulation levels to migrate can then be visualized over time using microphotography as they tunnel upward through the gel against gravity along the gradient.
Plant biology
Research-grade agar is used extensively in plant biology because it can be supplemented with a nutrient and/or vitamin mixture that allows seedlings to germinate in Petri dishes under sterile conditions (provided that the seeds are sterilized as well). Nutrient and/or vitamin supplementation for Arabidopsis thaliana is standard across most experimental conditions; Murashige & Skoog (MS) nutrient mix and Gamborg's B5 vitamin mix are generally used. A solution of 1.0% agar and 0.44% MS+vitamins in distilled water (dH2O) is suitable as a growth medium at normal growth temperatures.
When using agar in any growth medium, it is important to know that solidification of the agar is pH-dependent. The optimal range for solidification is between pH 5.4 and 5.7. Usually, potassium hydroxide is added to raise the pH into this range; a general guideline is about 600 µl of 0.1 M KOH per 250 ml of GM. The entire mixture can be sterilized using the liquid cycle of an autoclave.
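A minimal sketch (not part of the source) of how the recipe above might be scaled to an arbitrary batch volume; the figures are the 1.0% (w/v) agar, 0.44% (w/v) MS+vitamins, and roughly 600 µl of 0.1 M KOH per 250 ml quoted in this section, while the function name and structure are illustrative only:

```python
# Scale the growth-medium (GM) recipe quoted above to a chosen batch volume.
# Figures come from the text; treat the output as a starting point, not a validated protocol.

def plant_gm_recipe(volume_ml: float) -> dict:
    """Approximate reagent amounts for `volume_ml` of growth medium."""
    return {
        "agar_g": round(volume_ml * 1.0 / 100, 2),          # 1.0% w/v -> 1.0 g per 100 ml
        "ms_vitamins_g": round(volume_ml * 0.44 / 100, 2),  # 0.44% w/v
        "koh_0.1M_ul": round(volume_ml * 600 / 250),        # ~600 ul of 0.1 M KOH per 250 ml
    }

print(plant_gm_recipe(250))   # {'agar_g': 2.5, 'ms_vitamins_g': 1.1, 'koh_0.1M_ul': 600}
print(plant_gm_recipe(1000))  # {'agar_g': 10.0, 'ms_vitamins_g': 4.4, 'koh_0.1M_ul': 2400}
```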
This medium lends itself well to the application of specific concentrations of phytohormones to induce particular growth patterns: one can prepare a solution containing the desired amount of hormone, add it to a known volume of GM, and autoclave it both to sterilize it and to evaporate off any solvent that may have been used to dissolve the often-polar hormones. The hormone/GM solution can then be spread across the surface of Petri dishes sown with germinated and/or etiolated seedlings.
Experiments with the moss Physcomitrella patens, however, have shown that choice of the gelling agent – agar or Gelrite – does influence phytohormone sensitivity of the plant cell culture.
Other uses
Agar is used:
As an impression material in dentistry.
As a medium to precisely orient the tissue specimen and secure it by agar pre-embedding (especially useful for small endoscopy biopsy specimens) for histopathology processing.
To make salt bridges and gel plugs for use in electrochemistry.
In formicariums as a transparent substitute for sand and a source of nutrition.
As a natural ingredient in forming modeling clay for young children to play with.
As an allowed biofertilizer component in organic farming.
As a substrate for precipitin reactions in immunology.
At different times, as a substitute for gelatin in photographic emulsions, for arrowroot in preparing silver paper, and for fish glue in resist etching.
As an MRI elastic gel phantom to mimic tissue mechanical properties in magnetic resonance elastography.
Gelidium agar is used primarily for bacteriological plates. Gracilaria agar is used mainly in food applications.
In 2016, AMAM, a Japanese company, developed a prototype for an agar-based commercial packaging system called Agar Plasticity, intended as a replacement for oil-based plastic packaging.
See also
References
External links
Edible thickening agents
Microbiological gelling agent
Dental materials
Algal food ingredients
Red algae
Gels
Polysaccharides
Japanese inventions
Food stabilizers
Jams and jellies
E-number additives
Impression material
|
https://en.wikipedia.org/wiki/Antioxidant
|
Antioxidants are compounds that inhibit oxidation (usually occurring as autoxidation), a chemical reaction that can produce free radicals. Autoxidation leads to degradation of organic compounds, including living matter. Antioxidants are frequently added to industrial products, such as polymers, fuels, and lubricants, to extend their usable lifetimes. Foods are also treated with antioxidants to forestall spoilage, in particular the rancidification of oils and fats. In cells, antioxidants such as glutathione, mycothiol or bacillithiol, and enzyme systems like superoxide dismutase can prevent damage from oxidative stress.
Known dietary antioxidants are vitamins A, C, and E, but the term antioxidant has also been applied to numerous other dietary compounds that only have antioxidant properties in vitro, with little evidence for antioxidant properties in vivo. Dietary supplements marketed as antioxidants have not been shown to maintain health or prevent disease in humans.
History
As part of their adaptation from marine life, terrestrial plants began producing non-marine antioxidants such as ascorbic acid (vitamin C), polyphenols and tocopherols. The evolution of angiosperm plants between 50 and 200 million years ago resulted in the development of many antioxidant pigments – particularly during the Jurassic period – as chemical defences against reactive oxygen species that are byproducts of photosynthesis. Originally, the term antioxidant specifically referred to a chemical that prevented the consumption of oxygen. In the late 19th and early 20th centuries, extensive study concentrated on the use of antioxidants in important industrial processes, such as the prevention of metal corrosion, the vulcanization of rubber, and the polymerization of fuels in the fouling of internal combustion engines.
Early research on the role of antioxidants in biology focused on their use in preventing the oxidation of unsaturated fats, which is the cause of rancidity. Antioxidant activity could be measured simply by placing the fat in a closed container with oxygen and measuring the rate of oxygen consumption. However, it was the identification of vitamins C and E as antioxidants that revolutionized the field and led to the realization of the importance of antioxidants in the biochemistry of living organisms. The possible mechanisms of action of antioxidants were first explored when it was recognized that a substance with anti-oxidative activity is likely to be one that is itself readily oxidized. Research into how vitamin E prevents the process of lipid peroxidation led to the identification of antioxidants as reducing agents that prevent oxidative reactions, often by scavenging reactive oxygen species before they can damage cells.
Uses in technology
Food preservatives
Antioxidants are used as food additives to help guard against food deterioration. Exposure to oxygen and sunlight are the two main factors in the oxidation of food, so food is preserved by keeping in the dark and sealing it in containers or even coating it in wax, as with cucumbers. However, as oxygen is also important for plant respiration, storing plant materials in anaerobic conditions produces unpleasant flavors and unappealing colors. Consequently, packaging of fresh fruits and vegetables contains an ≈8% oxygen atmosphere. Antioxidants are an especially important class of preservatives as, unlike bacterial or fungal spoilage, oxidation reactions still occur relatively rapidly in frozen or refrigerated food. These preservatives include natural antioxidants such as ascorbic acid (AA, E300) and tocopherols (E306), as well as synthetic antioxidants such as propyl gallate (PG, E310), tertiary butylhydroquinone (TBHQ), butylated hydroxyanisole (BHA, E320) and butylated hydroxytoluene (BHT, E321).
Unsaturated fats can be highly susceptible to oxidation, causing rancidification. Oxidized lipids are often discolored and can impart unpleasant tastes and flavors. Thus, these foods are rarely preserved by drying; instead, they are preserved by smoking, salting, or fermenting. Even less fatty foods such as fruits are sprayed with sulfurous antioxidants prior to air drying. Metals catalyse oxidation. Some fatty foods such as olive oil are partially protected from oxidation by their natural content of antioxidants. Fatty foods are sensitive to photooxidation, which forms hydroperoxides by oxidizing unsaturated fatty acids and esters. Exposure to ultraviolet (UV) radiation can cause direct photooxidation and decompose peroxides and carbonyl molecules. These molecules undergo free radical chain reactions, but antioxidants inhibit them by preventing the oxidation processes.
Cosmetics preservatives
Antioxidant stabilizers are also added to fat-based cosmetics such as lipstick and moisturizers to prevent rancidity. Antioxidants in cosmetic products prevent oxidation of active ingredients and lipid content. For example, phenolic antioxidants such as stilbenes, flavonoids, and hydroxycinnamic acid strongly absorb UV radiation due to the presence of chromophores. They reduce oxidative stress from sun exposure by absorbing UV light.
Industrial uses
Antioxidants may be added to industrial products, such as stabilizers in fuels and additives in lubricants, to prevent oxidation and polymerization that leads to the formation of engine-fouling residues.
Antioxidant polymer stabilizers are widely used to prevent the degradation of polymers such as rubbers, plastics and adhesives that causes a loss of strength and flexibility in these materials. Polymers containing double bonds in their main chains, such as natural rubber and polybutadiene, are especially susceptible to oxidation and ozonolysis. They can be protected by antiozonants. Oxidation can be accelerated by UV radiation in natural sunlight to cause photo-oxidation. Various specialised light stabilisers, such as hindered-amine light stabilisers (HALS), may be added to plastics to prevent this. Synthetic phenolic and aminic antioxidants are increasingly being identified as potential human and environmental health hazards.
Environmental and health hazards
Synthetic phenolic antioxidants (SPAs) and aminic antioxidants have potential human and environmental health hazards. SPAs are common in indoor dust, small air particles, sediment, sewage, river water and wastewater. They are synthesized from phenolic compounds and include 2,6-di-tert-butyl-4-methylphenol (BHT), 2,6-di-tert-butyl-p-benzoquinone (BHT-Q), 2,4-di-tert-butyl-phenol (DBP) and 3-tert-butyl-4-hydroxyanisole (BHA). BHT can cause hepatotoxicity and damage to the endocrine system, and may increase rates of tumor development induced by dimethylhydrazine. BHT-Q can cause DNA damage and mismatches through its cleavage process, which generates superoxide radicals. DBP is toxic to marine life under long-term exposure. Phenolic antioxidants have low biodegradability, but they do not show severe toxicity toward aquatic organisms at low concentrations. Another type of antioxidant, diphenylamine (DPA), is commonly used in the production of commercial and industrial lubricants and rubber products, and also acts as an additive for automotive engine oils.
Oxidative challenge in biology
The vast majority of complex life on Earth requires oxygen for its metabolism, but this same oxygen is a highly reactive element that can damage living organisms. Organisms contain chemicals and enzymes that minimize this oxidative damage without interfering with the beneficial effect of oxygen. In general, antioxidant systems either prevent these reactive species from being formed, or remove them, thus minimizing their damage. Reactive oxygen species can have useful cellular functions, such as redox signaling. Thus, ideally, antioxidant systems do not remove oxidants entirely, but maintain them at some optimum concentration.
Reactive oxygen species produced in cells include hydrogen peroxide (H2O2), hypochlorous acid (HClO), and free radicals such as the hydroxyl radical (·OH) and the superoxide anion (O2−). The hydroxyl radical is particularly unstable and will react rapidly and non-specifically with most biological molecules. This species is produced from hydrogen peroxide in metal-catalyzed redox reactions such as the Fenton reaction. These oxidants can damage cells by starting chemical chain reactions such as lipid peroxidation, or by oxidizing DNA or proteins. Damage to DNA can cause mutations and possibly cancer, if not reversed by DNA repair mechanisms, while damage to proteins causes enzyme inhibition, denaturation and protein degradation.
The use of oxygen as part of the process for generating metabolic energy produces reactive oxygen species. In this process, the superoxide anion is produced as a by-product of several steps in the electron transport chain. Particularly important is the reduction of coenzyme Q in complex III, since a highly reactive free radical is formed as an intermediate (Q·−). This unstable intermediate can lead to electron "leakage", when electrons jump directly to oxygen and form the superoxide anion, instead of moving through the normal series of well-controlled reactions of the electron transport chain. Peroxide is also produced from the oxidation of reduced flavoproteins, such as complex I. However, although these enzymes can produce oxidants, the relative importance of the electron transfer chain to other processes that generate peroxide is unclear. In plants, algae, and cyanobacteria, reactive oxygen species are also produced during photosynthesis, particularly under conditions of high light intensity. This effect is partly offset by the involvement of carotenoids in photoinhibition, and in algae and cyanobacteria by large amounts of iodide and selenium, which involves these antioxidants reacting with over-reduced forms of the photosynthetic reaction centres to prevent the production of reactive oxygen species.
Examples of bioactive antioxidant compounds
Physiological antioxidants are classified into two broad divisions, depending on whether they are soluble in water (hydrophilic) or in lipids (lipophilic). In general, water-soluble antioxidants react with oxidants in the cell cytosol and the blood plasma, while lipid-soluble antioxidants protect cell membranes from lipid peroxidation. These compounds may be synthesized in the body or obtained from the diet. The different antioxidants are present at a wide range of concentrations in body fluids and tissues, with some such as glutathione or ubiquinone mostly present within cells, while others such as uric acid are more systemically distributed (see table below). Some antioxidants are only found in a few organisms, and can be pathogens or virulence factors.
The interactions between these different antioxidants may be synergistic and interdependent. The action of one antioxidant may therefore depend on the proper function of other members of the antioxidant system. The amount of protection provided by any one antioxidant will also depend on its concentration, its reactivity towards the particular reactive oxygen species being considered, and the status of the antioxidants with which it interacts.
Some compounds contribute to antioxidant defense by chelating transition metals and preventing them from catalyzing the production of free radicals in the cell. The ability of iron-binding proteins, such as transferrin and ferritin, to sequester iron is one such function. Selenium and zinc are commonly referred to as antioxidant minerals, but these chemical elements have no antioxidant action themselves; rather, they are required for the activity of antioxidant enzymes such as glutathione reductase and superoxide dismutase. (See also selenium in biology and zinc in biology.)
Uric acid
Uric acid (UA) is an antioxidant oxypurine produced from xanthine by the enzyme xanthine oxidase, and is an intermediate product of purine metabolism. In almost all land animals, urate oxidase further catalyzes the oxidation of uric acid to allantoin, but in humans and most higher primates, the urate oxidase gene is nonfunctional, so that UA is not further broken down. The evolutionary reasons for this loss of urate conversion to allantoin remain the topic of active speculation. The antioxidant effects of uric acid have led researchers to suggest this mutation was beneficial to early primates and humans. Studies of high altitude acclimatization support the hypothesis that urate acts as an antioxidant by mitigating the oxidative stress caused by high-altitude hypoxia.
Uric acid has the highest concentration of any blood antioxidant and provides over half of the total antioxidant capacity of human serum. Uric acid's antioxidant activities are also complex, given that it does not react with some oxidants, such as superoxide, but does act against peroxynitrite, peroxides, and hypochlorous acid. Concerns about elevated UA contributing to gout must be weighed against its being only one of many risk factors. By itself, the UA-related risk of gout at high levels (415–530 μmol/L) is only 0.5% per year, rising to 4.5% per year at UA supersaturation levels (535+ μmol/L). Many of the aforementioned studies determined UA's antioxidant actions within normal physiological levels, and some found antioxidant activity at levels as high as 285 μmol/L.
Vitamin C
Ascorbic acid or vitamin C is a monosaccharide oxidation-reduction (redox) catalyst found in both animals and plants. As one of the enzymes needed to make ascorbic acid has been lost by mutation during primate evolution, humans must obtain it from their diet; it is therefore a dietary vitamin. Most other animals are able to produce this compound in their bodies and do not require it in their diets. Ascorbic acid is required for the conversion of the procollagen to collagen by oxidizing proline residues to hydroxyproline. In other cells, it is maintained in its reduced form by reaction with glutathione, which can be catalysed by protein disulfide isomerase and glutaredoxins. Ascorbic acid is a redox catalyst which can reduce, and thereby neutralize, reactive oxygen species such as hydrogen peroxide. In addition to its direct antioxidant effects, ascorbic acid is also a substrate for the redox enzyme ascorbate peroxidase, a function that is used in stress resistance in plants. Ascorbic acid is present at high levels in all parts of plants and can reach concentrations of 20 millimolar in chloroplasts.
Glutathione
Glutathione is a cysteine-containing peptide found in most forms of aerobic life. It is not required in the diet and is instead synthesized in cells from its constituent amino acids. Glutathione has antioxidant properties since the thiol group in its cysteine moiety is a reducing agent and can be reversibly oxidized and reduced. In cells, glutathione is maintained in the reduced form by the enzyme glutathione reductase and in turn reduces other metabolites and enzyme systems, such as ascorbate in the glutathione-ascorbate cycle, glutathione peroxidases and glutaredoxins, as well as reacting directly with oxidants. Due to its high concentration and its central role in maintaining the cell's redox state, glutathione is one of the most important cellular antioxidants. In some organisms glutathione is replaced by other thiols, such as by mycothiol in the Actinomycetes, bacillithiol in some gram-positive bacteria, or by trypanothione in the Kinetoplastids.
Vitamin E
Vitamin E is the collective name for a set of eight related tocopherols and tocotrienols, which are fat-soluble vitamins with antioxidant properties. Of these, α-tocopherol has been most studied as it has the highest bioavailability, with the body preferentially absorbing and metabolising this form.
It has been claimed that the α-tocopherol form is the most important lipid-soluble antioxidant, and that it protects membranes from oxidation by reacting with lipid radicals produced in the lipid peroxidation chain reaction. This removes the free radical intermediates and prevents the propagation reaction from continuing. This reaction produces oxidised α-tocopheroxyl radicals that can be recycled back to the active reduced form through reduction by other antioxidants, such as ascorbate, retinol or ubiquinol. This is in line with findings showing that α-tocopherol, but not water-soluble antioxidants, efficiently protects glutathione peroxidase 4 (GPX4)-deficient cells from cell death. GPx4 is the only known enzyme that efficiently reduces lipid-hydroperoxides within biological membranes.
However, the roles and importance of the various forms of vitamin E are presently unclear, and it has even been suggested that the most important function of α-tocopherol is as a signaling molecule, with this molecule having no significant role in antioxidant metabolism. The functions of the other forms of vitamin E are even less well understood, although γ-tocopherol is a nucleophile that may react with electrophilic mutagens, and tocotrienols may be important in protecting neurons from damage.
Pro-oxidant activities
Antioxidants that are reducing agents can also act as pro-oxidants. For example, vitamin C has antioxidant activity when it reduces oxidizing substances such as hydrogen peroxide; however, it will also reduce metal ions such as iron and copper that generate free radicals through the Fenton reaction. While ascorbic acid is an effective antioxidant, it can also oxidatively change the flavor and color of food. In the presence of transition metals, even low concentrations of ascorbic acid can drive the Fenton reaction by regenerating the reduced metal catalyst:
2 Fe3+ + Ascorbate → 2 Fe2+ + Dehydroascorbate
2 Fe2+ + 2 H2O2 → 2 Fe3+ + 2 OH· + 2 OH−
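Adding these two equations (a consistency check included here for illustration, not part of the source) shows that the iron terms cancel, so the metal is continually regenerated and only catalytic traces of it are needed to sustain radical production:

Ascorbate + 2 H2O2 → Dehydroascorbate + 2 OH· + 2 OH−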
The relative importance of the antioxidant and pro-oxidant activities of antioxidants is an area of current research, but vitamin C, which exerts its effects as a vitamin by oxidizing polypeptides, appears to have a mostly antioxidant action in the human body.
Enzyme systems
As with the chemical antioxidants, cells are protected against oxidative stress by an interacting network of antioxidant enzymes. Here, the superoxide released by processes such as oxidative phosphorylation is first converted to hydrogen peroxide and then further reduced to give water. This detoxification pathway is the result of multiple enzymes, with superoxide dismutases catalysing the first step and then catalases and various peroxidases removing hydrogen peroxide. As with antioxidant metabolites, the contributions of these enzymes to antioxidant defenses can be hard to separate from one another, but the generation of transgenic mice lacking just one antioxidant enzyme can be informative.
Superoxide dismutase, catalase, and peroxiredoxins
Superoxide dismutases (SODs) are a class of closely related enzymes that catalyze the breakdown of the superoxide anion into oxygen and hydrogen peroxide. SOD enzymes are present in almost all aerobic cells and in extracellular fluids. Superoxide dismutase enzymes contain metal ion cofactors that, depending on the isozyme, can be copper, zinc, manganese or iron. In humans, the copper/zinc SOD is present in the cytosol, while manganese SOD is present in the mitochondrion. There also exists a third form of SOD in extracellular fluids, which contains copper and zinc in its active sites. The mitochondrial isozyme seems to be the most biologically important of these three, since mice lacking this enzyme die soon after birth. In contrast, the mice lacking copper/zinc SOD (Sod1) are viable but have numerous pathologies and a reduced lifespan (see article on superoxide), while mice without the extracellular SOD have minimal defects (sensitive to hyperoxia). In plants, SOD isozymes are present in the cytosol and mitochondria, with an iron SOD found in chloroplasts that is absent from vertebrates and yeast.
Catalases are enzymes that catalyse the conversion of hydrogen peroxide to water and oxygen, using either an iron or manganese cofactor. This protein is localized to peroxisomes in most eukaryotic cells. Catalase is an unusual enzyme since, although hydrogen peroxide is its only substrate, it follows a ping-pong mechanism. Here, its cofactor is oxidised by one molecule of hydrogen peroxide and then regenerated by transferring the bound oxygen to a second molecule of substrate. Despite its apparent importance in hydrogen peroxide removal, humans with genetic deficiency of catalase — "acatalasemia" — or mice genetically engineered to lack catalase completely, experience few ill effects.
Peroxiredoxins are peroxidases that catalyze the reduction of hydrogen peroxide, organic hydroperoxides, as well as peroxynitrite. They are divided into three classes: typical 2-cysteine peroxiredoxins; atypical 2-cysteine peroxiredoxins; and 1-cysteine peroxiredoxins. These enzymes share the same basic catalytic mechanism, in which a redox-active cysteine (the peroxidatic cysteine) in the active site is oxidized to a sulfenic acid by the peroxide substrate. Over-oxidation of this cysteine residue in peroxiredoxins inactivates these enzymes, but this can be reversed by the action of sulfiredoxin. Peroxiredoxins seem to be important in antioxidant metabolism, as mice lacking peroxiredoxin 1 or 2 have shortened lifespans and develop hemolytic anaemia, while plants use peroxiredoxins to remove hydrogen peroxide generated in chloroplasts.
Thioredoxin and glutathione systems
The thioredoxin system contains the 12-kDa protein thioredoxin and its companion thioredoxin reductase. Proteins related to thioredoxin are present in all sequenced organisms. Plants, such as Arabidopsis thaliana, have a particularly great diversity of isoforms. The active site of thioredoxin consists of two neighboring cysteines, as part of a highly conserved CXXC motif, that can cycle between an active dithiol form (reduced) and an oxidized disulfide form. In its active state, thioredoxin acts as an efficient reducing agent, scavenging reactive oxygen species and maintaining other proteins in their reduced state. After being oxidized, the active thioredoxin is regenerated by the action of thioredoxin reductase, using NADPH as an electron donor.
The glutathione system includes glutathione, glutathione reductase, glutathione peroxidases, and glutathione S-transferases. This system is found in animals, plants and microorganisms. Glutathione peroxidase is an enzyme containing four selenium-cofactors that catalyzes the breakdown of hydrogen peroxide and organic hydroperoxides. There are at least four different glutathione peroxidase isozymes in animals. Glutathione peroxidase 1 is the most abundant and is a very efficient scavenger of hydrogen peroxide, while glutathione peroxidase 4 is most active with lipid hydroperoxides. Surprisingly, glutathione peroxidase 1 is dispensable, as mice lacking this enzyme have normal lifespans, but they are hypersensitive to induced oxidative stress. In addition, the glutathione S-transferases show high activity with lipid peroxides. These enzymes are at particularly high levels in the liver and also serve in detoxification metabolism.
Health research
Relation to diet
The dietary antioxidant vitamins A, C, and E are essential and required in specific daily amounts to prevent diseases. Polyphenols, which have antioxidant properties in vitro due to their free hydroxy groups, are extensively metabolized by catechol-O-methyltransferase which methylates free hydroxyl groups, and thereby prevents them from acting as antioxidants in vivo.
Interactions
Common pharmaceuticals (and supplements) with antioxidant properties may interfere with the efficacy of certain anticancer medications and radiation therapy. Pharmaceuticals and supplements that have antioxidant properties suppress the formation of free radicals by inhibiting oxidation processes. Radiation therapy induces oxidative stress that damages essential components of cancer cells, such as proteins, nucleic acids, and the lipids that comprise cell membranes.
Adverse effects
Relatively strong reducing acids can have antinutrient effects by binding to dietary minerals such as iron and zinc in the gastrointestinal tract and preventing them from being absorbed. Examples are oxalic acid, tannins and phytic acid, which are high in plant-based diets. Calcium and iron deficiencies are not uncommon in diets in developing countries where less meat is eaten and there is high consumption of phytic acid from beans and unleavened whole grain bread. However, germination, soaking, or microbial fermentation are all household strategies that reduce the phytate and polyphenol content of unrefined cereal. Increases in Fe, Zn and Ca absorption have been reported in adults fed dephytinized cereals compared with cereals containing their native phytate.
High doses of some antioxidants may have harmful long-term effects. The Beta-Carotene and Retinol Efficacy Trial (CARET) study of lung cancer patients found that smokers given supplements containing beta-carotene and vitamin A had increased rates of lung cancer. Subsequent studies confirmed these adverse effects. These harmful effects may also be seen in non-smokers, as one meta-analysis including data from approximately 230,000 patients showed that β-carotene, vitamin A or vitamin E supplementation is associated with increased mortality, but saw no significant effect from vitamin C. No health risk was seen when all the randomized controlled studies were examined together, but an increase in mortality was detected when only high-quality and low-bias risk trials were examined separately. As the majority of these low-bias trials dealt with either elderly people or people with disease, these results may not apply to the general population. This meta-analysis was later repeated and extended by the same authors, confirming the previous results. These two publications are consistent with some previous meta-analyses that also suggested that vitamin E supplementation increased mortality and that antioxidant supplements increased the risk of colon cancer. Beta-carotene may also increase the risk of lung cancer. Overall, the large number of clinical trials carried out on antioxidant supplements suggests that either these products have no effect on health, or that they cause a small increase in mortality in elderly or vulnerable populations.
Exercise and muscle soreness
A 2017 review showed that taking antioxidant dietary supplements before or after exercise does not likely lead to a noticeable reduction in muscle soreness after a person exercises.
Levels in food
Antioxidant vitamins are found in vegetables, fruits, eggs, legumes and nuts. Vitamins A, C, and E can be destroyed by long-term storage or prolonged cooking. The effects of cooking and food processing are complex, as these processes can also increase the bioavailability of antioxidants, such as some carotenoids in vegetables. Processed food contains fewer antioxidant vitamins than fresh and uncooked foods, as preparation exposes food to heat and oxygen.
Other antioxidants are not obtained from the diet, but instead are made in the body. For example, ubiquinol (coenzyme Q) is poorly absorbed from the gut and is made through the mevalonate pathway. Another example is glutathione, which is made from amino acids. As any glutathione in the gut is broken down to free cysteine, glycine and glutamic acid before being absorbed, even large oral intake has little effect on the concentration of glutathione in the body. Although large amounts of sulfur-containing amino acids such as acetylcysteine can increase glutathione, no evidence exists that eating high levels of these glutathione precursors is beneficial for healthy adults.
Measurement and invalidation of ORAC
Measurement of polyphenol and carotenoid content in food is not a straightforward process, as antioxidants collectively are a diverse group of compounds with different reactivities to various reactive oxygen species. In food science analyses in vitro, the oxygen radical absorbance capacity (ORAC) was once an industry standard for estimating antioxidant strength of whole foods, juices and food additives, mainly from the presence of polyphenols. Earlier measurements and ratings by the United States Department of Agriculture were withdrawn in 2012 as biologically irrelevant to human health, referring to an absence of physiological evidence for polyphenols having antioxidant properties in vivo. Consequently, the ORAC method, derived only from in vitro experiments, is no longer considered relevant to human diets or biology, as of 2010.
Alternative in vitro measurements of antioxidant content in foods – also based on the presence of polyphenols – include the Folin-Ciocalteu reagent, and the Trolox equivalent antioxidant capacity assay.
References
Further reading
External links
Anti-aging substances
Physiology
Process chemicals
Redox
|
https://en.wikipedia.org/wiki/Brass
|
Brass is an alloy of copper (Cu) and zinc (Zn), in proportions which can be varied to achieve different colours and mechanical, electrical, acoustic, and chemical properties, but copper typically has the larger proportion. In use since prehistoric times, it is a substitutional alloy: atoms of the two constituents may replace each other within the same crystal structure.
Brass is similar to bronze, another copper alloy that uses tin instead of zinc. Both bronze and brass may include small proportions of a range of other elements including arsenic (As), lead (Pb), phosphorus (P), aluminium (Al), manganese (Mn), and silicon (Si). Historically, the distinction between the two alloys has been less consistent and clear, and increasingly museums use the more general term "copper alloy."
Brass has long been a popular material for its bright gold-like appearance and is still used for drawer pulls and doorknobs. It has also been widely used to make sculpture and utensils because of its low melting point, high workability (both with hand tools and with modern turning and milling machines), durability, and electrical and thermal conductivity. Brasses with higher copper content are softer and more golden in colour; conversely those with less copper and thus more zinc are harder and more silvery in colour.
Brass is still commonly used in applications where corrosion resistance and low friction are required, such as locks, hinges, gears, bearings, ammunition casings, zippers, plumbing, hose couplings, valves, and electrical plugs and sockets. It is used extensively for musical instruments such as horns and bells. The composition of brass, generally 66% copper and 34% zinc, makes it a favorable substitute for copper in costume jewelry and fashion jewelry, as it exhibits greater resistance to corrosion. Brass is not as hard as bronze, and so is not suitable for most weapons and tools. Nor is it suitable for marine uses, because the zinc reacts with minerals in salt water, leaving porous copper behind; marine brass, with added tin, avoids this, as does bronze.
Brass is often used in situations in which it is important that sparks not be struck, such as in fittings and tools used near flammable or explosive materials.
Properties
Brass is more malleable than bronze or zinc. The relatively low melting point of brass (roughly 900–940 °C, depending on composition) and its flow characteristics make it a relatively easy material to cast. By varying the proportions of copper and zinc, the properties of the brass can be changed, allowing hard and soft brasses. The density of brass is approximately 8.4–8.73 g/cm3.
Today, almost 90% of all brass alloys are recycled. Because brass is not ferromagnetic, ferrous scrap can be separated from it by passing the scrap near a powerful magnet. Brass scrap is melted and recast into billets that are extruded into the desired form and size. The general softness of brass means that it can often be machined without the use of cutting fluid, though there are exceptions to this.
Aluminium makes brass stronger and more corrosion-resistant. Aluminium also causes a highly beneficial hard layer of aluminium oxide (Al2O3) to be formed on the surface that is thin, transparent, and self-healing. Tin has a similar effect and finds its use especially in seawater applications (naval brasses). Combinations of iron, aluminium, silicon, and manganese make brass wear- and tear-resistant. The addition of as little as 1% iron to a brass alloy will result in an alloy with a noticeable magnetic attraction.
Brass will corrode in the presence of moisture, chlorides, acetates, ammonia, and certain acids. This often happens when the copper reacts with sulfur to form a brown and eventually black surface layer of copper sulfide which, if regularly exposed to slightly acidic water such as urban rainwater, can then oxidize in air to form a patina of green-blue copper carbonate. Depending on how the patina layer was formed, it may protect the underlying brass from further damage.
Although copper and zinc have a large difference in electrical potential, the resulting brass alloy does not experience internalized galvanic corrosion because of the absence of a corrosive environment within the mixture. However, if brass is placed in contact with a more noble metal such as silver or gold in such an environment, the brass will corrode galvanically; conversely, if brass is in contact with a less-noble metal such as zinc or iron, the less noble metal will corrode and the brass will be protected.
Lead content
To enhance the machinability of brass, lead is often added in concentrations of about 2%. Since lead has a lower melting point than the other constituents of the brass, it tends to migrate towards the grain boundaries in the form of globules as it cools from casting. The pattern the globules form on the surface of the brass increases the available lead surface area which, in turn, affects the degree of leaching. In addition, cutting operations can smear the lead globules over the surface. These effects can lead to significant lead leaching from brasses of comparatively low lead content.
In October 1999, the California State Attorney General sued 13 key manufacturers and distributors over lead content. In laboratory tests, state researchers found the average brass key, new or old, exceeded the California Proposition 65 limits by an average factor of 19, assuming handling twice a day. In April 2001 manufacturers agreed to reduce lead content to 1.5%, or face a requirement to warn consumers about lead content. Keys plated with other metals are not affected by the settlement, and may continue to use brass alloys with a higher percentage of lead content.
Also in California, lead-free materials must be used for "each component that comes into contact with the wetted surface of pipes and pipe fittings, plumbing fittings and fixtures". On 1 January 2010, the maximum amount of lead in "lead-free brass" in California was reduced from 4% to 0.25% lead.
Corrosion-resistant brass for harsh environments
Dezincification-resistant (DZR or DR) brasses, sometimes referred to as CR (corrosion resistant) brasses, are used where there is a large corrosion risk and where normal brasses do not meet the requirements. Typical applications involve high water temperatures, the presence of chlorides, or deviating water qualities such as soft water. DZR brass is used in water boiler systems. This brass alloy must be produced with great care, with special attention placed on a balanced composition and proper production temperatures and parameters, to avoid long-term failures.
An example of DZR brass is the C352 brass, with about 30% zinc, 61–63% copper, 1.7–2.8% lead, and 0.02–0.15% arsenic. The lead and arsenic significantly suppress the zinc loss.
"Red brasses", a family of alloys with high copper proportion and generally less than 15% zinc, are more resistant to zinc loss. One of the metals called "red brass" is 85% copper, 5% tin, 5% lead, and 5% zinc. Copper alloy C23000, which is also known as "red brass", contains 84–86% copper, 0.05% each iron and lead, with the balance being zinc.
Another such material is gunmetal, from the family of red brasses. Gunmetal alloys contain roughly 88% copper, 8-10% tin, and 2-4% zinc. Lead can be added for ease of machining or for bearing alloys.
"Naval brass", for use in seawater, contains 40% zinc but also 1% tin. The tin addition suppresses zinc leaching.
The NSF International requires brasses with more than 15% zinc, used in piping and plumbing fittings, to be dezincification-resistant.
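As a rough, hedged illustration (not from the source), the composition ranges quoted above for C352 DZR brass and C23000 red brass can be checked programmatically; the spec dictionary below encodes only the figures given in this section, and everything else (names, sample values) is illustrative:

```python
# Check a measured composition (in wt%) against the ranges quoted in the text above.
# Only elements with a quoted range are checked; the remainder of the alloy is assumed to be zinc.

SPECS = {
    "DZR C352": {"Cu": (61.0, 63.0), "Pb": (1.7, 2.8), "As": (0.02, 0.15)},
    "C23000 red brass": {"Cu": (84.0, 86.0), "Fe": (0.0, 0.05), "Pb": (0.0, 0.05)},
}

def within_spec(alloy: str, composition: dict) -> bool:
    """Return True if every element with a quoted range lies inside that range."""
    return all(lo <= composition.get(element, 0.0) <= hi
               for element, (lo, hi) in SPECS[alloy].items())

sample = {"Cu": 62.1, "Zn": 33.9, "Pb": 2.0, "As": 0.05}
print(within_spec("DZR C352", sample))          # True
print(within_spec("C23000 red brass", sample))  # False (copper content below the quoted range)
```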
Use in musical instruments
The high malleability and workability, relatively good resistance to corrosion, and traditionally attributed acoustic properties of brass, have made it the usual metal of choice for construction of musical instruments whose acoustic resonators consist of long, relatively narrow tubing, often folded or coiled for compactness; silver and its alloys, and even gold, have been used for the same reasons, but brass is the most economical choice. Collectively known as brass instruments, these include the trombone, tuba, trumpet, cornet, flugelhorn, baritone horn, euphonium, tenor horn, and French horn, and many other "horns", many in variously sized families, such as the saxhorns.
Other wind instruments may be constructed of brass or other metals, and indeed most modern student-model flutes and piccolos are made of some variety of brass, usually a cupronickel alloy similar to nickel silver (also known as German silver). Clarinets, especially low clarinets such as the contrabass and subcontrabass, are sometimes made of metal because of limited supplies of the dense, fine-grained tropical hardwoods traditionally preferred for smaller woodwinds. For the same reason, some low clarinets, bassoons and contrabassoons feature a hybrid construction, with long, straight sections of wood, and curved joints, neck, and/or bell of metal. The use of metal also avoids the risks of exposing wooden instruments to changes in temperature or humidity, which can cause sudden cracking. Even though the saxophones and sarrusophones are classified as woodwind instruments, they are normally made of brass for similar reasons, and because their wide, conical bores and thin-walled bodies are more easily and efficiently made by forming sheet metal than by machining wood.
The keywork of most modern woodwinds, including wooden-bodied instruments, is also usually made of an alloy such as nickel silver. Such alloys are stiffer and more durable than the brass used to construct the instrument bodies, but still workable with simple hand tools—a boon to quick repairs. The mouthpieces of both brass instruments and, less commonly, woodwind instruments are often made of brass among other metals as well.
Next to the brass instruments, the most notable use of brass in music is in various percussion instruments, most notably cymbals, gongs, and orchestral (tubular) bells (large "church" bells are normally made of bronze). Small handbells and "jingle bells" are also commonly made of brass.
The harmonica is a free reed aerophone, also often made from brass. In organ pipes of the reed family, brass strips (called tongues) are used as the reeds, which beat against the shallot (or beat "through" the shallot in the case of a "free" reed). Although not part of the brass section, snare drums are also sometimes made of brass. Some parts on electric guitars are also made from brass, especially inertia blocks on tremolo systems for its tonal properties, and for string nuts and saddles for both tonal properties and its low friction.
Germicidal and antimicrobial applications
The bactericidal properties of brass have been observed for centuries, particularly in marine environments where it prevents biofouling. Depending upon the type and concentration of pathogens and the medium they are in, brass kills these microorganisms within a few minutes to hours of contact.
A large number of independent studies confirm this antimicrobial effect, even against antibiotic-resistant bacteria such as MRSA and VRSA. The mechanisms of antimicrobial action by copper and its alloys, including brass, are a subject of intense and ongoing investigation.
Season cracking
Brass is susceptible to stress corrosion cracking, especially from ammonia or substances containing or releasing ammonia. The problem is sometimes known as season cracking after it was first discovered in brass cartridges used for rifle ammunition during the 1920s in the British Indian Army. The problem was caused by high residual stresses from cold forming of the cases during manufacture, together with chemical attack from traces of ammonia in the atmosphere. The cartridges were stored in stables and the ammonia concentration rose during the hot summer months, thus initiating brittle cracks. The problem was resolved by annealing the cases, and storing the cartridges elsewhere.
Types
Phases other than α, β and γ include ε, a hexagonal intermetallic CuZn3, and η, a solid solution of copper in zinc.
History
Although forms of brass have been in use since prehistory, its true nature as a copper-zinc alloy was not understood until the post-medieval period because the zinc vapor which reacted with copper to make brass was not recognized as a metal. The King James Bible makes many references to "brass" to translate "nechosheth" (bronze or copper) from Hebrew to English. The earliest brasses may have been natural alloys made by smelting zinc-rich copper ores. By the Roman period brass was being deliberately produced from metallic copper and zinc minerals using the cementation process, the product of which was calamine brass, and variations on this method continued until the mid-19th century. It was eventually replaced by speltering, the direct alloying of copper and zinc metal which was introduced to Europe in the 16th century.
Brass has sometimes historically been referred to as "yellow copper".
Early copper-zinc alloys
In West Asia and the Eastern Mediterranean early copper-zinc alloys are now known in small numbers from a number of 3rd millennium BC sites in the Aegean, Iraq, the United Arab Emirates, Kalmykia, Turkmenistan and Georgia and from 2nd millennium BC sites in western India, Uzbekistan, Iran, Syria, Iraq and Canaan. Isolated examples of copper-zinc alloys are known in China from the 1st century AD, long after bronze was widely used.
The compositions of these early "brass" objects are highly variable, and most have zinc contents of between 5% and 15% wt, which is lower than in brass produced by cementation. These may be "natural alloys" manufactured by smelting zinc-rich copper ores in redox conditions. Many have similar tin contents to contemporary bronze artefacts, and it is possible that some copper-zinc alloys were accidental and perhaps not even distinguished from copper. However, the large number of copper-zinc alloys now known suggests that at least some were deliberately manufactured, and many have zinc contents of more than 12% wt, which would have resulted in a distinctive golden colour.
By the 8th–7th century BC Assyrian cuneiform tablets mention the exploitation of the "copper of the mountains" and this may refer to "natural" brass. "Oreikhalkon" (mountain copper), the Ancient Greek translation of this term, was later adapted to the Latin aurichalcum meaning "golden copper" which became the standard term for brass. In the 4th century BC Plato knew orichalkos as rare and nearly as valuable as gold and Pliny describes how aurichalcum had come from Cypriot ore deposits which had been exhausted by the 1st century AD. X-ray fluorescence analysis of 39 orichalcum ingots recovered from a 2,600-year-old shipwreck off Sicily found them to be an alloy made with 75–80% copper, 15–20% zinc and small percentages of nickel, lead and iron.
Roman world
During the later part of first millennium BC the use of brass spread across a wide geographical area from Britain and Spain in the west to Iran, and India in the east. This seems to have been encouraged by exports and influence from the Middle East and eastern Mediterranean where deliberate production of brass from metallic copper and zinc ores had been introduced. The 4th century BC writer Theopompus, quoted by Strabo, describes how heating earth from Andeira in Turkey produced "droplets of false silver", probably metallic zinc, which could be used to turn copper into oreichalkos. In the 1st century BC the Greek Dioscorides seems to have recognized a link between zinc minerals and brass describing how Cadmia (zinc oxide) was found on the walls of furnaces used to heat either zinc ore or copper and explaining that it can then be used to make brass.
By the first century BC brass was available in sufficient supply to use as coinage in Phrygia and Bithynia, and after the Augustan currency reform of 23 BC it was also used to make Roman dupondii and sestertii. The uniform use of brass for coinage and military equipment across the Roman world may indicate a degree of state involvement in the industry, and brass even seems to have been deliberately boycotted by Jewish communities in Palestine because of its association with Roman authority.
Brass was produced by the cementation process, in which copper and zinc ore are heated together until zinc vapor is produced, which reacts with the copper. There is good archaeological evidence for this process: crucibles used to produce brass by cementation have been found on Roman period sites including Xanten and Nidda in Germany, Lyon in France and at a number of sites in Britain. They vary in size from tiny acorn-sized vessels to large amphora-like vessels, but all have elevated levels of zinc on the interior and are lidded. They show no signs of slag or metal prills, suggesting that zinc minerals were heated to produce zinc vapor which reacted with metallic copper in a solid state reaction. The fabric of these crucibles is porous, probably designed to prevent a buildup of pressure, and many have small holes in the lids which may be designed to release pressure or to add additional zinc minerals near the end of the process. Dioscorides mentioned that zinc minerals were used for both the working and finishing of brass, perhaps suggesting secondary additions.
Brass made during the early Roman period seems to have varied between 20% and 28% wt zinc. The zinc content of coinage and brass objects declined after the first century AD and it has been suggested that this reflects zinc loss during recycling and thus an interruption in the production of new brass. However, it is now thought this was probably a deliberate change in composition, and overall the use of brass increased over this period, making up around 40% of all copper alloys used in the Roman world by the 4th century AD.
Medieval period
Little is known about the production of brass during the centuries immediately after the collapse of the Roman Empire. Disruption in the trade of tin for bronze from Western Europe may have contributed to the increasing popularity of brass in the east and by the 6th–7th centuries AD over 90% of copper alloy artefacts from Egypt were made of brass. However other alloys such as low tin bronze were also used and they vary depending on local cultural attitudes, the purpose of the metal and access to zinc, especially between the Islamic and Byzantine world. Conversely the use of true brass seems to have declined in Western Europe during this period in favor of gunmetals and other mixed alloys but by about 1000 brass artefacts are found in Scandinavian graves in Scotland, brass was being used in the manufacture of coins in Northumbria and there is archaeological and historical evidence for the production of calamine brass in Germany and the Low Countries, areas rich in calamine ore.
These places would remain important centres of brass making throughout the Middle Ages, especially Dinant. Brass objects are still collectively known as dinanderie in French. The baptismal font at St Bartholomew's Church, Liège in modern Belgium (before 1117) is an outstanding masterpiece of Romanesque brass casting, though also often described as bronze. The metal of the early 12th-century Gloucester Candlestick is unusual even by medieval standards in being a mixture of copper, zinc, tin, lead, nickel, iron, antimony and arsenic with an unusually large amount of silver, ranging from 22.5% in the base to 5.76% in the pan below the candle. The proportions of this mixture may suggest that the candlestick was made from a hoard of old coins, probably Late Roman. Latten is a term for medieval alloys of uncertain and often variable composition, often covering decorative borders and similar objects cut from sheet metal, whether of brass or bronze. Especially in Tibetan art, analysis of some objects shows very different compositions from different ends of a large piece. Aquamaniles were typically made in brass in both the European and Islamic worlds.
The cementation process continued to be used but literary sources from both Europe and the Islamic world seem to describe variants of a higher temperature liquid process which took place in open-topped crucibles. Islamic cementation seems to have used zinc oxide known as tutiya or tutty rather than zinc ores for brass-making, resulting in a metal with lower iron impurities. A number of Islamic writers and the 13th century Italian Marco Polo describe how this was obtained by sublimation from zinc ores and condensed onto clay or iron bars, archaeological examples of which have been identified at Kush in Iran. It could then be used for brass making or medicinal purposes. In 10th century Yemen al-Hamdani described how spreading al-iglimiya, probably zinc oxide, onto the surface of molten copper produced tutiya vapor which then reacted with the metal. The 13th century Iranian writer al-Kashani describes a more complex process whereby tutiya was mixed with raisins and gently roasted before being added to the surface of the molten metal. A temporary lid was added at this point presumably to minimize the escape of zinc vapor.
In Europe a similar liquid process in open-topped crucibles took place which was probably less efficient than the Roman process and the use of the term tutty by Albertus Magnus in the 13th century suggests influence from Islamic technology. The 12th century German monk Theophilus described how preheated crucibles were one sixth filled with powdered calamine and charcoal then topped up with copper and charcoal before being melted, stirred then filled again. The final product was cast, then again melted with calamine. It has been suggested that this second melting may have taken place at a lower temperature to allow more zinc to be absorbed. Albertus Magnus noted that the "power" of both calamine and tutty could evaporate and described how the addition of powdered glass could create a film to bind it to the metal.
German brass-making crucibles from Dortmund, dating to the 10th century AD, and from Soest and Schwerte in Westphalia, dating to around the 13th century, confirm Theophilus' account: they are open-topped (although ceramic discs from Soest may have served as loose lids to reduce zinc evaporation) and have slag on the interior resulting from a liquid process.
Africa
Some of the most famous objects in African art are the lost wax castings of West Africa, mostly from what is now Nigeria, produced first by the Kingdom of Ife and then the Benin Empire. Though normally described as "bronzes", the Benin Bronzes, now mostly in the British Museum and other Western collections, and the large portrait heads such as the Bronze Head from Ife of "heavily leaded zinc-brass" and the Bronze Head of Queen Idia, both also British Museum, are better described as brass, though of variable compositions. Work in brass or bronze continued to be important in Benin art and other West African traditions such as Akan goldweights, where the metal was regarded as a more valuable material than in Europe.
Renaissance and post-medieval Europe
The Renaissance saw important changes to both the theory and practice of brassmaking in Europe. By the 15th century there is evidence for the renewed use of lidded cementation crucibles at Zwickau in Germany. These large crucibles were capable of producing c.20 kg of brass. There are traces of slag and pieces of metal on the interior. Their irregular composition suggests that this was a lower temperature, not entirely liquid, process. The crucible lids had small holes which were blocked with clay plugs near the end of the process presumably to maximize zinc absorption in the final stages. Triangular crucibles were then used to melt the brass for casting.
16th-century technical writers such as Biringuccio, Ercker and Agricola described a variety of cementation brass making techniques and came closer to understanding the true nature of the process noting that copper became heavier as it changed to brass and that it became more golden as additional calamine was added. Zinc metal was also becoming more commonplace. By 1513 metallic zinc ingots from India and China were arriving in London and pellets of zinc condensed in furnace flues at the Rammelsberg in Germany were exploited for cementation brass making from around 1550.
Eventually it was discovered that metallic zinc could be alloyed with copper to make brass, a process known as speltering, and by 1657 the German chemist Johann Glauber had recognized that calamine was "nothing else but unmeltable zinc" and that zinc was a "half ripe metal". However some earlier high zinc, low iron brasses such as the 1530 Wightman brass memorial plaque from England may have been made by alloying copper with zinc and include traces of cadmium similar to those found in some zinc ingots from China.
However, the cementation process was not abandoned, and as late as the early 19th century there are descriptions of solid-state cementation in a domed furnace at around 900–950 °C and lasting up to 10 hours. The European brass industry continued to flourish into the post-medieval period, buoyed by innovations such as the 16th century introduction of water-powered hammers for the production of wares such as pots. By 1559 the German city of Aachen alone was capable of producing 300,000 cwt of brass per year. After several false starts during the 16th and 17th centuries the brass industry was also established in England, taking advantage of abundant supplies of cheap copper smelted in the new coal-fired reverberatory furnace. In 1723 Bristol brass maker Nehemiah Champion patented the use of granulated copper, produced by pouring molten metal into cold water. This increased the surface area of the copper, helping it react, and zinc contents of up to 33% wt were reported using this new technique.
In 1738 Nehemiah's son William Champion patented a technique for the first industrial scale distillation of metallic zinc, known as distillation per descensum or "the English process". This local zinc was used in speltering and allowed greater control over the zinc content of brass and the production of high-zinc copper alloys which would have been difficult or impossible to produce using cementation, for use in expensive objects such as scientific instruments, clocks, brass buttons and costume jewelry. However, Champion continued to use the cheaper calamine cementation method to produce lower-zinc brass, and the archaeological remains of beehive-shaped cementation furnaces have been identified at his works at Warmley. By the mid-to-late 18th century developments in cheaper zinc distillation such as Jean-Jacques Dony's horizontal furnaces in Belgium and the reduction of tariffs on zinc, as well as demand for corrosion-resistant high-zinc alloys, increased the popularity of speltering, and as a result cementation was largely abandoned by the mid-19th century.
See also
Brass bed
Brass rubbing
List of copper alloys
|
https://en.wikipedia.org/wiki/Beer
|
Beer is one of the oldest types of alcoholic drinks in the world, and the most widely consumed. It is the third most popular drink overall after potable water and tea. It is produced by the brewing and fermentation of starches, mainly derived from cereal grains—most commonly malted barley, though wheat, maize (corn), rice, and oats are also used. During the brewing process, fermentation of the starch sugars in the wort produces ethanol and carbonation in the resulting beer. Most modern beer is brewed with hops, which add bitterness and other flavours and act as a natural preservative and stabilising agent. Other flavouring agents such as gruit, herbs, or fruits may be included or used instead of hops. In commercial brewing, the natural carbonation effect is often removed during processing and replaced with forced carbonation.
Some of humanity's earliest known writings refer to the production and distribution of beer: the Code of Hammurabi included laws regulating beer and beer parlours, and "The Hymn to Ninkasi", a prayer to the Mesopotamian goddess of beer, served as both a prayer and as a method of remembering the recipe for beer in a culture with few literate people.
Beer is distributed in bottles and cans and is also commonly available on draught, particularly in pubs and bars. The brewing industry is a global business, consisting of several dominant multinational companies and many thousands of smaller producers ranging from brewpubs to regional breweries. The strength of modern beer is usually around 4% to 6% alcohol by volume (ABV), although it may vary between 0.5% and 20%, with some breweries creating examples of 40% ABV and above.
Beer forms part of the culture of many nations and is associated with social traditions such as beer festivals, as well as a rich pub culture involving activities like pub crawling, pub quizzes and pub games.
When beer is distilled, the resulting liquor is a form of whisky.
Etymology
In early forms of English and in the Scandinavian languages, the usual word for beer was the word whose Modern English form is ale.
The word beer comes into present-day English from Old English bēor, itself from Common Germanic; although the word is not attested in the East Germanic branch of the language family, it is found throughout the West Germanic and North Germanic dialects (modern Dutch and German bier, Old Norse bjórr). The earlier etymology of the word is debated: the three main theories are that the word originates in a Proto-Germanic form meaning 'brewer's yeast, beer dregs'; that it is related to the word barley; or that it was somehow borrowed from Latin bibere 'to drink'.
In Old English and Old Norse, the beer-word did not denote a malted alcoholic drink like ale, but a sweet, potent drink made from honey and the juice of one or more fruits other than grapes, much less ubiquitous than ale, perhaps served in the kind of tiny drinking cups sometimes found in early mediaeval grave goods: a drink more like mead or cider. In German, however, the meaning of the beer-word expanded to cover the meaning of the ale-word already before our earliest surviving written evidence. As German hopped ale became fashionable in England in the late Middle Ages, the English word beer took on the German meaning, and thus in English too, beer came during the early modern period to denote hopped, malt-based alcoholic drinks.
History
Beer is one of the world's oldest prepared alcoholic drinks. The earliest archaeological evidence of fermentation consists of 13,000-year-old residues of a beer with the consistency of gruel, used by the semi-nomadic Natufians for ritual feasting, at the Raqefet Cave in the Carmel Mountains near Haifa in Israel. There is evidence that beer was produced at Göbekli Tepe during the Pre-Pottery Neolithic (around 8500 to 5500 BC). The earliest clear chemical evidence of beer produced from barley dates to about 3500–3100 BC, from the site of Godin Tepe in the Zagros Mountains of western Iran. It is possible, but not proven, that it dates back even further – to about 10,000 BC, when cereal was first farmed.
Beer is recorded in the written history of ancient Egypt, and archaeologists speculate that beer was instrumental in the formation of civilizations. Approximately 5000 years ago, workers in the city of Uruk (modern day Iraq) were paid by their employers with volumes of beer. During the building of the Great Pyramids in Giza, Egypt, each worker got a daily ration of four to five litres of beer, which served as both nutrition and refreshment and was crucial to the pyramids' construction.
Some of the earliest Sumerian writings contain references to beer; examples include a prayer to the goddess Ninkasi, known as "The Hymn to Ninkasi", which served as both a prayer and a method of remembering the recipe for beer in a culture with few literate people. The ancient advice to Gilgamesh ("Fill your belly. Day and night make merry"), recorded in the Epic of Gilgamesh and attributed to the alewife Siduri, may, at least in part, have referred to the consumption of beer. The Ebla tablets, discovered in 1974 in Ebla, Syria, show that beer was produced in the city in 2500 BC. A fermented drink using rice and fruit was made in China around 7000 BC. Unlike sake, mould was not used to saccharify the rice (amylolytic fermentation); the rice was probably prepared for fermentation by chewing or malting. During the Vedic period in Ancient India, there are records of the consumption of the beer-like sura. Xenophon noted that during his travels, beer was being produced in Armenia.
Almost any substance containing sugar can naturally undergo alcoholic fermentation and thus be utilised in the brewing of beer. It is likely that many cultures, on observing that a sweet liquid could be obtained from a source of starch, independently invented beer. Bread and beer increased prosperity to a level that allowed time for the development of other technologies and contributed to the building of civilizations.
Beer was spread through Europe by Germanic and Celtic tribes as far back as 3000 BC, and it was mainly brewed on a domestic scale. The product that the early Europeans drank might not be recognised as beer by most people today. Alongside the basic starch source, the early European beers may have contained fruits, honey, numerous types of plants, spices, and other substances such as narcotic herbs. What they did not contain was hops, as that was a later addition, first mentioned in Europe around 822 by a Carolingian Abbot and again in 1067 by abbess Hildegard of Bingen.
In 1516, William IV, Duke of Bavaria, adopted the Reinheitsgebot (purity law), perhaps the oldest food-quality regulation still in use in the 21st century, according to which the only allowed ingredients of beer are water, hops, and barley-malt. Beer produced before the Industrial Revolution continued to be made and sold on a domestic scale, although by the 7th century AD, beer was also being produced and sold by European monasteries. During the Industrial Revolution, the production of beer moved from artisanal manufacture to industrial manufacture, and domestic manufacture ceased to be significant by the end of the 19th century. The development of hydrometers and thermometers changed brewing by allowing the brewer more control of the process and greater knowledge of the results.
In 1912, brown bottles began to be used by the Joseph Schlitz Brewing Company of Milwaukee, Wisconsin, in the United States. This innovation has since been accepted worldwide and prevents harmful rays from destroying the quality and stability of beer.
The brewing industry is now a global business, consisting of several dominant multinational companies and many thousands of smaller producers, ranging from brewpubs to regional breweries. As of 2006, more than 133 billion litres of beer, the equivalent of a cube 510 metres on a side, were sold per year, producing total global revenues of US$294.5 billion. In 2010, China's beer consumption was nearly twice that of the United States, but only 5 per cent of the beer sold was premium draught beer, compared with 50 per cent in France and Germany.
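As a quick sanity check of the cube equivalence quoted above, the short Python sketch below (purely illustrative) converts a 510-metre cube into litres:

```python
# Sanity check: volume of a cube 510 m on a side, expressed in litres.
side_m = 510                       # cube edge length in metres
volume_m3 = side_m ** 3            # 132,651,000 cubic metres
volume_litres = volume_m3 * 1_000  # 1 cubic metre = 1,000 litres

print(f"{volume_litres / 1e9:.0f} billion litres")  # ~133 billion litres
```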
A widely publicised study in 2018 suggested that sudden decreases in barley production due to extreme drought and heat could in the future cause substantial volatility in the availability and price of beer.
Brewing
The process of making beer is known as brewing. A dedicated building for the making of beer is called a brewery, though beer can be made at home and has been for much of its history, in which case the brewing location is often called a brewhouse. A company that makes beer is called either a brewery or a brewing company. Beer made on a domestic scale for non-commercial reasons is today usually classified as homebrewing, regardless of where it is made, though most homebrewed beer is made at home. Historically, domestic beer was known as farmhouse ale.
Brewing beer has been subject to legislation and taxation for millennia, and from the late 19th century on, taxation largely restricted brewing to commercial operations only in the UK. However, the UK government relaxed legislation in 1963, followed by Australia in 1972 and the US in 1978, though individual states were allowed to pass their own laws limiting production, allowing homebrewing to become a popular hobby.
The purpose of brewing is to convert the starch source into a sugary liquid called wort and to convert the wort into the alcoholic drink known as beer in a fermentation process effected by yeast.
The first step, where the wort is prepared by mixing the starch source (normally malted barley) with hot water, is known as "mashing". Hot water (known as "liquor" in brewing terms) is mixed with crushed malt or malts (known as "grist") in a mash tun. The mashing process takes around 1 to 2 hours, during which the starches are converted to sugars, and then the sweet wort is drained off the grains. The grains are then washed in a process known as "sparging". This washing allows the brewer to gather as much of the fermentable liquid from the grains as possible. The process of filtering the spent grain from the wort and sparge water is called wort separation. The traditional process for wort separation is lautering, in which the grain bed itself serves as the filter medium. Some modern breweries prefer the use of filter frames, which allow for a more finely ground grist.
Most modern breweries use a continuous sparge, collecting the original wort and the sparge water together. However, it is possible to collect a second or even third wash with the not quite spent grains as separate batches. Each run would produce a weaker wort and thus, a weaker beer. This process is known as the second (and third) runnings. Brewing with several runnings is called parti gyle brewing.
The sweet wort collected from sparging is put into a kettle, or "copper" (so-called because these vessels were traditionally made from copper), and boiled, usually for about one hour. During boiling, the water in the wort evaporates, but the sugars and other components of the wort remain; this allows more efficient use of the starch sources in the beer. Boiling also destroys any remaining enzymes left over from the mashing stage. Hops are added during boiling as a source of bitterness, flavour, and aroma. Hops may be added at more than one point during the boil. The longer the hops are boiled, the more bitterness they contribute, but the less hop flavour and aroma remain in the beer.
After boiling, the hopped wort is cooled and ready for the yeast. In some breweries, the hopped wort may pass through a hopback, which is a small vat filled with hops, to add aromatic hop flavouring and to act as a filter, but usually the hopped wort is simply cooled for the fermenter, where the yeast is added. During fermentation, the wort becomes beer in a process that takes a week to several months, depending on the type of yeast and strength of the beer. In addition to producing ethanol, fine particulate matter suspended in the wort settles during fermentation. Once fermentation is complete, the yeast also settles, leaving the beer clear.
During fermentation, most of the carbon dioxide is allowed to escape through a trap, and the beer is left with carbonation of only about one atmosphere of pressure. The carbonation is often increased either by transferring the beer to a pressure vessel such as a keg and introducing pressurised carbon dioxide or by transferring it before the fermentation is finished so that carbon dioxide pressure builds up inside the container as the fermentation finishes. Sometimes the beer is put unfiltered (so it still contains yeast) into bottles with some added sugar, which then produces the desired amount of carbon dioxide inside the bottle.
Fermentation is sometimes carried out in two stages: primary and secondary. Once most of the alcohol has been produced during primary fermentation, the beer is transferred to a new vessel and allowed a period of secondary fermentation. Secondary fermentation is used when the beer requires long storage before packaging or greater clarity. When the beer has fermented, it is packaged either into casks for cask ale or kegs, aluminium cans, or bottles for other sorts of beer.
Ingredients
The basic ingredients of beer are water; a starch source, such as malted barley or malted maize (such as used in the preparation of Tiswin and Tesgüino), able to be saccharified (converted to sugars) and then fermented (converted into ethanol and carbon dioxide); a brewer's yeast to produce the fermentation; and a flavouring such as hops. A mixture of starch sources may be used, with a secondary carbohydrate source, such as maize (corn), rice, wheat, or sugar, often termed an adjunct, especially when used alongside malted barley. Less widely used starch sources include millet, sorghum, and cassava root in Africa; potato in Brazil; and agave in Mexico, among others. The amount of each starch source in a beer recipe is collectively called the grain bill.
Water is the main ingredient in beer, accounting for 93% of its weight. Though water itself is, ideally, flavourless, its level of dissolved minerals, specifically bicarbonate ions, does influence beer's finished taste. Due to the mineral properties of each region's water, specific areas were originally the sole producers of certain types of beer, each identifiable by regional characteristics. Because of its regional geology, Dublin's hard water is well-suited to making stout, such as Guinness, while the Plzeň Region's soft water is ideal for brewing Pilsner (pale lager), such as Pilsner Urquell. The waters of Burton in England contain gypsum, which benefits making pale ale to such a degree that brewers of pale ales will add gypsum to the local water in a process known as Burtonisation.
The starch source, termed the "mash ingredients", in a beer provides the fermentable material and is a key determinant of the strength and flavour of the beer. The most common starch source used in beer is malted grain. Grain is malted by soaking it in water, allowing it to begin germination, and then drying the partially germinated grain in a kiln. Malting grain produces enzymes that convert starches in the grain into fermentable sugars. Different roasting times and temperatures are used to produce different colours of malt from the same grain. Darker malts will produce darker beers. Nearly all beers include barley malt as the majority of the starch. This is because its fibrous hull remains attached to the grain during threshing. After malting, barley is milled, which finally removes the hull, breaking it into large pieces. These pieces remain with the grain during the mash and act as a filter bed during lautering, when sweet wort is separated from insoluble grain material. Other malted and unmalted grains (including wheat, rice, oats, and rye, and less frequently, corn and sorghum) may be used. Some brewers have produced gluten-free beer, made with sorghum with no barley malt, for those who cannot consume gluten-containing grains like wheat, barley, and rye.
Flavouring beer is the sole major commercial use of hops. The flower of the hop vine is used as a flavouring and preservative agent in nearly all beer made today. The flowers themselves are often called "hops". The first historical mention of the use of hops in beer dates from 822 AD in monastery rules written by Adalhard the Elder, also known as Adalard of Corbie, though the date normally given for widespread cultivation of hops for use in beer is the thirteenth century. Before the thirteenth century and until the sixteenth century, during which hops took over as the dominant flavouring, beer was flavoured with other plants, for instance, grains of paradise or alehoof. Combinations of various aromatic herbs, berries, and even ingredients like wormwood would be combined into a mixture known as gruit and used as hops are now used. Some beers today, such as Fraoch' by the Scottish Heather Ales company and Cervoise Lancelot by the French Brasserie-Lancelot company, use plants other than hops for flavouring.
Hops contain several characteristics that brewers desire in beer. Hops contribute a bitterness that balances the sweetness of the malt; the bitterness of beers is measured on the International Bitterness Units scale. Hops contribute floral, citrus, and herbal aromas and flavours to beer. Hops have an antibiotic effect that favours the activity of brewer's yeast over less desirable microorganisms and aids in "head retention", the length of time that a foamy head created by carbonation will last. The acidity of hops is a preservative.
Yeast is the microorganism that is responsible for fermentation in beer. Yeast metabolises the sugars extracted from grains, which produce alcohol and carbon dioxide, and thereby turns wort into beer. In addition to fermenting the beer, yeast influences the character and flavour. The dominant types of yeast used to make beer are top-fermenting Saccharomyces cerevisiae and bottom-fermenting Saccharomyces pastorianus. Brettanomyces ferments lambics, and Torulaspora delbrueckii ferments Bavarian weissbier. Before the role of yeast in fermentation was understood, fermentation involved wild or airborne yeasts. A few styles, such as lambics, rely on this method today, but most modern fermentation adds pure yeast cultures.
Some brewers add one or more clarifying agents or finings to beer, which typically precipitate (collect as a solid) out of the beer along with protein solids and are found only in trace amounts in the finished product. This process makes the beer appear bright and clean, rather than the cloudy appearance of ethnic and older styles of beer, such as wheat beers. Examples of clarifying agents include isinglass, obtained from the swimbladders of fish; Irish moss, a seaweed; kappa carrageenan, from the seaweed Kappaphycus cottonii; Polyclar (artificial); and gelatin. If a beer is marked "suitable for vegans", it is clarified either with seaweed or with artificial agents.
Brewing industry
The history of breweries in the 21st century has included larger breweries absorbing smaller breweries in order to ensure economy of scale. In 2002, South African Breweries bought the North American Miller Brewing Company to found SABMiller, becoming the second-largest brewery after North American Anheuser-Busch. In 2004, the Belgian Interbrew was the third-largest brewery by volume, and the Brazilian AmBev was the fifth-largest. They merged into InBev, becoming the largest brewery. In 2007, SABMiller surpassed InBev and Anheuser-Busch when it acquired Royal Grolsch, the brewer of Dutch premium beer brand Grolsch. In 2008, when InBev (the second-largest) bought Anheuser-Busch (the third-largest), the new Anheuser-Busch InBev company became again the largest brewer in the world.
According to the market research firm Technavio, AB InBev remains the largest brewing company in the world, with Heineken second, CR Snow third, Carlsberg fourth, and Molson Coors fifth.
A microbrewery, or craft brewery, produces a limited amount of beer. The maximum amount of beer a brewery can produce and still be classed as a 'microbrewery' varies by region and by authority; in the US, it is 15,000 US beer barrels a year. A brewpub is a type of microbrewery that incorporates a pub or other drinking establishment. The highest density of breweries in the world, most of them microbreweries, exists in Franconia, Germany, especially in the district of Upper Franconia, which has about 200 breweries. The Benedictine Weihenstephan brewery in Bavaria, Germany, can trace its roots to the year 768, as a document from that year refers to a hop garden in the area paying a tithe to the monastery. The brewery was licensed by the City of Freising in 1040 and is therefore the oldest working brewery in the world.
Varieties
While there are many types of beer brewed, the basics of brewing beer are shared across national and cultural boundaries. The traditional European brewing regions—Germany, Belgium, England and the Czech Republic—have local varieties of beer.
English writer Michael Jackson, in his 1977 book The World Guide To Beer, categorised beers from around the world in local style groups suggested by local customs and names. Fred Eckhardt furthered Jackson's work in The Essentials of Beer Style in 1989.
Top-fermented beers
Top-fermented beers are most commonly produced with Saccharomyces cerevisiae, a top-fermenting yeast which clumps and rises to the surface, typically fermenting at between 15 and 24 °C. At these temperatures, yeast produces significant amounts of esters and other secondary flavour and aroma products, and the result is often a beer with slightly "fruity" compounds resembling apple, pear, pineapple, banana, plum, or prune, among others.
After the introduction of hops into England from Flanders in the 15th century, "ale" referred to an unhopped fermented drink, "beer" being used to describe a brew with an infusion of hops.
Real ale is the term coined by the Campaign for Real Ale (CAMRA) in 1973 for "beer brewed from traditional ingredients, matured by secondary fermentation in the container from which it is dispensed, and served without the use of extraneous carbon dioxide". It is applied to bottle conditioned and cask conditioned beers.
Pale ale is a beer which uses a top-fermenting yeast and predominantly pale malt. It is one of the world's major beer styles. The India pale ale (IPA) variety is especially popular.
Mild ale has a predominantly malty palate. It is usually dark coloured with an abv of 3% to 3.6%, although there are lighter hued milds as well as stronger examples reaching 6% abv and higher.
Wheat beer is brewed with a large proportion of wheat although it often also contains a significant proportion of malted barley. Wheat beers are usually top-fermented. The flavour of wheat beers varies considerably, depending upon the specific style.
Stout is a dark beer made using roasted barley, and typically brewed with slow fermenting yeast. There are a number of variations including dry stout (such as Guinness), sweet stout, and Imperial (or Russian) stout.
Like stout, porter is a dark beer, but made with malted barley. The name "porter" was first used in 1721 to describe a dark brown beer popular with the street and river porters of London. This same beer later also became known as stout, though the word stout had been used as early as 1677. The history and development of stout and porter are intertwined, though now distinguished by whether the barley has been malted or not.
Bottom-fermented beers
Lager is cool fermented beer. Pale lagers are the most commonly consumed beers in the world. Many are of the "pilsner" type. The name "lager" comes from the German "lagern" for "to store", as brewers around Bavaria stored beer in cool cellars and caves during the warm summer months. These brewers noticed that the beers continued to ferment, and to also clear of sediment, when stored in cool conditions.
Lager yeast is a cool bottom-fermenting yeast (Saccharomyces pastorianus) and typically undergoes primary fermentation at around 7–12 °C (the fermentation phase), and is then given a long secondary fermentation at around 0–4 °C (the lagering phase). During the secondary stage, the lager clears and mellows. The cooler conditions also inhibit the natural production of esters and other byproducts, resulting in a "cleaner"-tasting beer.
With improved modern yeast strains, most lager breweries use only short periods of cold storage, typically 1–3 weeks.
Other types of beer
Lambic, a beer of Belgium, is naturally fermented using wild yeasts, rather than cultivated. Many of these are not strains of brewer's yeast (Saccharomyces cerevisiae) and may have significant differences in aroma and sourness. Yeast varieties such as Brettanomyces bruxellensis and Brettanomyces lambicus are common in lambics. In addition, other organisms such as Lactobacillus bacteria produce acids which contribute to the sourness.
Measurement
Beer is measured and assessed by colour, by strength and by bitterness. The perceived bitterness is measured by the International Bitterness Units scale (IBU), defined in co-operation between the American Society of Brewing Chemists and the European Brewery Convention. The international scale was a development of the European Bitterness Units scale, often abbreviated as EBU, and the bitterness values should be identical.
Colour
Beer colour is determined by the malt. The most common colour is a pale amber produced from using pale malts. Pale lager and pale ale are terms used for beers made from malt dried with the fuel coke. Coke was first used for roasting malt in 1642, but it was not until around 1703 that the term pale ale was used.
In terms of sales volume, most of today's beer is based on the pale lager brewed in 1842 in the town of Pilsen in the present-day Czech Republic. The modern pale lager is light in colour with a noticeable carbonation (fizzy bubbles) and a typical alcohol by volume content of around 5%. The Pilsner Urquell, Bitburger, and Heineken brands of beer are typical examples of pale lager, as are the American brands Budweiser, Coors, and Miller.
Dark beers are usually brewed from a pale malt or lager malt base with a small proportion of darker malt added to achieve the desired shade. Other colourants—such as caramel—are also widely used to darken beers. Very dark beers, such as stout, use dark or patent malts that have been roasted longer. Some have roasted unmalted barley.
Strength
Beer ranges from less than 3% alcohol by volume (abv) to around 14% abv, though this strength can be increased to around 20% by re-pitching with champagne yeast, and to 55% abv by the freeze-distilling process. The alcohol content of beer varies by local practice or beer style. The pale lagers that most consumers are familiar with fall in the range of 4–6%, with a typical abv of 5%. The customary strength of British ales is quite low, with many session beers being around 4% abv. In Belgium, some beers, such as table beer, are of such low alcohol content (1%–4%) that they are served instead of soft drinks in some schools. The weakest beers are dealcoholized beers, which typically have less than 0.05% alcohol (also called "near beer"), and light beers, which usually have 4% alcohol.
The alcohol in beer comes primarily from the metabolism of sugars that are produced during fermentation. The quantity of fermentable sugars in the wort and the variety of yeast used to ferment the wort are the primary factors that determine the amount of alcohol in the final beer. Additional fermentable sugars are sometimes added to increase alcohol content, and enzymes are often added to the wort for certain styles of beer (primarily "light" beers) to convert more complex carbohydrates (starches) to fermentable sugars. Alcohol is a by-product of yeast metabolism and is toxic to the yeast in higher concentrations; typical brewing yeast cannot survive at alcohol concentrations above 12% by volume. Low temperatures and too little fermentation time decrease the effectiveness of yeasts and consequently decrease the alcohol content.
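In homebrewing, the relationship between fermentable sugars and final alcohol content is commonly approximated from specific-gravity readings taken before and after fermentation. The sketch below uses the familiar rule of thumb ABV ≈ (OG − FG) × 131.25; both the approximation and the example gravity readings are illustrative assumptions rather than figures from this article.

```python
def estimate_abv(original_gravity: float, final_gravity: float) -> float:
    """Rough ABV estimate from specific-gravity readings.

    Uses the common homebrewing rule of thumb ABV ~= (OG - FG) * 131.25;
    actual values depend on the yeast, the wort composition and the
    fermentation conditions.
    """
    return (original_gravity - final_gravity) * 131.25

# Hypothetical readings for a pale-lager-strength wort:
print(f"{estimate_abv(1.048, 1.010):.1f}% ABV")  # about 5.0% ABV
```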
The strength of beers has climbed during the later years of the 20th century. Vetter 33, a 10.5% abv (33 degrees Plato, hence Vetter "33") doppelbock, was listed in the 1994 Guinness Book of World Records as the strongest beer at that time, though Samichlaus, by the Swiss brewer Hürlimann, had also been listed by the Guinness Book of World Records as the strongest at 14% abv. Since then, some brewers have used champagne yeasts to increase the alcohol content of their beers. Samuel Adams reached 20% abv with Millennium, and then surpassed that amount to 25.6% abv with Utopias. The strongest beer brewed in Britain was Baz's Super Brew by Parish Brewery, a 23% abv beer. In September 2011, the Scottish brewery BrewDog produced Ghost Deer, which, at 28%, they claim to be the world's strongest beer produced by fermentation alone.
The product claimed to be the strongest beer made is Schorschbräu's 2011 Schorschbock 57 with 57.5% abv. It was preceded by The End of History, a 55% Belgian ale, made by BrewDog in 2010. The same company had previously made Sink The Bismarck!, a 41% abv IPA, and Tactical Nuclear Penguin, a 32% abv Imperial stout. Each of these beers is made using the eisbock method of fractional freezing, in which a strong ale is partially frozen and the ice is repeatedly removed until the desired strength is reached, a process that may class the product as spirits rather than beer. The German brewery Schorschbräu's Schorschbock, a 31% abv eisbock, and Hair of the Dog's Dave, a 29% abv barley wine made in 1994, used the same fractional freezing method. A 60% abv blend of beer with whiskey was jokingly claimed as the strongest beer by a Dutch brewery in July 2010.
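The effect of fractional freezing on strength can be illustrated with an idealised mass balance: if the removed ice is treated as pure water, all of the original alcohol remains in a smaller volume. This is a sketch of the principle only, not a description of any brewery's actual process, and the starting figures are hypothetical; in practice some alcohol is lost with the ice.

```python
def freeze_concentrate(abv_percent: float, volume_l: float, ice_removed_l: float) -> float:
    """Idealised eisbock step: assume the removed ice is pure water,
    so the original alcohol is concentrated in the remaining volume."""
    alcohol_l = volume_l * abv_percent / 100.0
    remaining_l = volume_l - ice_removed_l
    if remaining_l <= 0:
        raise ValueError("cannot remove the entire volume as ice")
    return 100.0 * alcohol_l / remaining_l

# Hypothetical example: a 10% abv strong ale, with half the volume removed as ice.
print(f"{freeze_concentrate(10.0, 20.0, 10.0):.0f}% abv")  # about 20% abv
```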
Serving
Draught
Draught (also spelled "draft") beer from a pressurised keg using a lever-style dispenser and a spout is the most common method of dispensing in bars around the world. A metal keg is pressurised with carbon dioxide (CO2) gas which drives the beer to the dispensing tap or faucet. Some beers may be served with a nitrogen/carbon dioxide mixture. Nitrogen produces fine bubbles, resulting in a dense head and a creamy mouthfeel. Some types of beer can also be found in smaller, disposable kegs called beer balls. In traditional pubs, the pull levers for major beer brands may include the beer's logo and trademark.
In the 1980s, Guinness introduced the beer widget, a nitrogen-pressurised ball inside a can which creates a dense, tight head, similar to beer served from a nitrogen system. The words draft and draught can be used as marketing terms to describe canned or bottled beers containing a beer widget, or which are cold-filtered rather than pasteurised.
Cask-conditioned ales (or cask ales) are unfiltered and unpasteurised beers. These beers are termed "real ale" by the CAMRA organisation. Typically, when a cask arrives in a pub, it is placed horizontally on a frame called a "stillage" which is designed to hold it steady and at the right angle, and then allowed to cool to cellar temperature (typically between 11 and 13 °C), before being tapped and vented: a tap is driven through a (usually rubber) bung at the bottom of one end, and a hard spile or other implement is used to open a hole in the side of the cask, which is now uppermost. The act of stillaging and then venting a beer in this manner typically disturbs all the sediment, so it must be left for a suitable period to "drop" (clear) again, as well as to fully condition; this period can take anywhere from several hours to several days. At this point the beer is ready to sell, either being pulled through a beer line with a hand pump, or simply being "gravity-fed" directly into the glass.
Draught beer's environmental impact can be 68% lower than bottled beer due to packaging differences. A life cycle study of one beer brand, including grain production, brewing, bottling, distribution and waste management, shows that the CO2 emissions from a 6-pack of micro-brew beer is about 3 kilograms (6.6 pounds). The loss of natural habitat potential from the 6-pack of micro-brew beer is estimated to be 2.5 square metres (26 square feet). Downstream emissions from distribution, retail, storage and disposal of waste can be over 45% of a bottled micro-brew beer's CO2 emissions. Where legal, the use of a refillable jug, reusable bottle or other reusable containers to transport draught beer from a store or a bar, rather than buying pre-bottled beer, can reduce the environmental impact of beer consumption.
Packaging
Most beers are cleared of yeast by filtering when packaged in bottles and cans. However, bottle conditioned beers retain some yeast—either by being unfiltered, or by being filtered and then reseeded with fresh yeast. It is usually recommended that the beer be poured slowly, leaving any yeast sediment at the bottom of the bottle. However, some drinkers prefer to pour in the yeast; this practice is customary with wheat beers. Typically, when serving a hefeweizen wheat beer, 90% of the contents are poured, and the remainder is swirled to suspend the sediment before pouring it into the glass. Alternatively, the bottle may be inverted prior to opening. Glass bottles are always used for bottle conditioned beers.
Many beers are sold in cans, though there is considerable variation in the proportion between different countries. In Sweden in 2001, 63.9% of beer was sold in cans. People either drink from the can or pour the beer into a glass. A technology developed by Crown Holdings for the 2010 FIFA World Cup is the 'full aperture' can, so named because the entire lid is removed during the opening process, turning the can into a drinking cup. Cans protect the beer from light (thereby preventing "skunked" beer) and have a seal less prone to leaking over time than bottles. Cans were initially viewed as a technological breakthrough for maintaining the quality of a beer, then became commonly associated with less expensive, mass-produced beers, even though the quality of storage in cans is much like bottles. Plastic (PET) bottles are used by some breweries.
Temperature
The temperature of a beer has an influence on a drinker's experience; warmer temperatures reveal the range of flavours in a beer, but cooler temperatures are more refreshing. Most drinkers prefer pale lager to be served chilled, a low- or medium-strength pale ale to be served cool, and a strong barley wine or imperial stout to be served at room temperature.
Beer writer Michael Jackson proposed a five-level scale for serving temperatures: well chilled for "light" beers (pale lagers); chilled for Berliner Weisse and other wheat beers; lightly chilled for all dark lagers, altbier and German wheat beers; cellar temperature for regular British ale, stout and most Belgian specialities; and room temperature for strong dark ales (especially trappist beer) and barley wine.
Drinking chilled beer began with the development of artificial refrigeration and had, by the 1870s, spread to those countries that concentrated on brewing pale lager. Chilling beer makes it more refreshing, though below 15.5 °C (60 °F) the chilling starts to reduce taste awareness, and it does so significantly below 10 °C (50 °F). Beer served unchilled, either cool or at room temperature, reveals more of its flavours. Cask Marque, a non-profit UK beer organisation, has set a temperature standard range of 12°–14 °C (53°–57 °F) for cask ales to be served.
Vessels
Beer is consumed out of a variety of vessels, such as a glass, a beer stein, a mug, a pewter tankard, a beer bottle or a can; or at music festivals and some bars and nightclubs, from a plastic cup. The shape of the glass from which beer is consumed can influence the perception of the beer and can define and accent the character of the style. Breweries offer branded glassware intended only for their own beers as a marketing promotion, as this increases sales of their product.
The pouring process has an influence on a beer's presentation. The rate of flow from the tap or other serving vessel, tilt of the glass, and position of the pour (in the centre or down the side) into the glass all influence the result, such as the size and longevity of the head, lacing (the pattern left by the head as it moves down the glass as the beer is drunk), and the release of carbonation.
A beer tower is a beer dispensing device, usually found in bars and pubs, that consists of a cylinder attached to a beer cooling device at the bottom. Beer is dispensed from the beer tower into a drinking vessel.
Health effects
A 2016 systematic review and meta-analysis found that moderate ethanol consumption brought no mortality benefit compared with lifetime abstention from ethanol consumption. Some studies have concluded that drinking small quantities of alcohol (less than one drink in women and two in men, per day) is associated with a decreased risk of heart disease, stroke, diabetes mellitus, and early death. Some of these studies combined former ethanol drinkers and lifelong abstainers into a single group of nondrinkers, which hides the health benefits of lifelong abstention from ethanol.

The long-term health effects of continuous, moderate or heavy alcohol consumption include the risk of developing alcoholism and alcoholic liver disease. Alcoholism, also known as "alcohol use disorder", is a broad term for any drinking of alcohol that results in problems. It was previously divided into two types: alcohol abuse and alcohol dependence. In a medical context, alcoholism is said to exist when two or more of the following conditions are present: a person drinks large amounts over a long time period, has difficulty cutting down, acquiring and drinking alcohol takes up a great deal of time, alcohol is strongly desired, usage results in not fulfilling responsibilities, usage results in social problems, usage results in health problems, usage results in risky situations, withdrawal occurs when stopping, and alcohol tolerance has occurred with use.

Alcoholism reduces a person's life expectancy by around ten years, and alcohol use is the third leading cause of early death in the United States. No professional medical association recommends that people who are nondrinkers should start drinking alcoholic beverages. Worldwide, a total of 3.3 million deaths per year (5.9% of all deaths) are believed to be due to alcohol.
Overeating and lack of muscle tone, rather than beer consumption, are considered the main causes of a beer belly. A 2004 study, however, found a link between binge drinking and a beer belly. With most overconsumption, it is more a problem of improper exercise and overconsumption of carbohydrates than of the product itself. Several diet books quote beer as having an undesirably high glycemic index of 110, the same as maltose; however, the maltose in beer undergoes metabolism by yeast during fermentation so that beer consists mostly of water, hop oils and only trace amounts of sugars, including maltose.
Nutritional information
Beers vary in their nutritional content. The ingredients used to make beer, including the yeast, provide a rich source of nutrients; therefore beer may contain nutrients including magnesium, selenium, potassium, phosphorus, biotin, chromium and B vitamins. Beer is sometimes referred to as "liquid bread", though beer is not a meal in itself.
Society and culture
In many societies, beer is the most popular alcoholic drink. Various social traditions and activities are associated with beer drinking, such as playing cards, darts, or other pub games; attending beer festivals; engaging in zythology (the study of beer); visiting a series of pubs in one evening; visiting breweries; beer-oriented tourism; or rating beer. Drinking games, such as beer pong, are also popular. A relatively new profession is that of the beer sommelier, who informs restaurant patrons about beers and food pairings.
Beer is considered to be a social lubricant in many societies and is consumed in countries all over the world. There are breweries in Middle Eastern countries such as Syria, and in some African countries. Sales of beer are four times those of wine, which is the second most popular alcoholic drink.
A study published in the journal Neuropsychopharmacology in 2013 found that the flavour of beer alone could provoke dopamine activity in the brains of male participants, who wanted to drink more as a result. The 49 men in the study were subjected to positron emission tomography scans while a computer-controlled device sprayed minute amounts of beer, water and a sports drink onto their tongues. Compared with the taste of the sports drink, the taste of beer significantly increased the participants' desire to drink. Test results indicated that the flavour of the beer triggered a dopamine release, even though the alcohol content in the spray was insufficient to cause intoxication.
Some breweries have developed beers to pair with food. Wine writer Malcolm Gluck disputed the need to pair beer with food, while beer writers Roger Protz and Melissa Cole contested that claim.
Related drinks
Around the world, there are many traditional and ancient starch-based drinks classed as beer. In Africa, there are various ethnic beers made from sorghum or millet, such as Oshikundu in Namibia and Tella in Ethiopia. Kyrgyzstan also has a beer made from millet; it is a low alcohol, somewhat porridge-like drink called "Bozo". Bhutan, Nepal, Tibet and Sikkim also use millet in Chhaang, a popular semi-fermented rice/millet drink in the eastern Himalayas. Further east in China are found Huangjiu and Choujiu—traditional rice-based drinks related to beer.
The Andes in South America has Chicha, made from germinated maize (corn); while the indigenous peoples in Brazil have Cauim, a traditional drink made since pre-Columbian times by chewing manioc so that an enzyme (amylase) present in human saliva can break down the starch into fermentable sugars; this is similar to Masato in Peru.
Some beers are made from bread, which is linked to the earliest forms of beer: Sahti in Finland, Kvass in Russia and Ukraine, and Bouza in Sudan. Fermented bread was used in Mesopotamia 4,000 years ago. Food-waste activists, inspired by these ancient recipes, use leftover bread to replace a third of the malted barley that would otherwise be used for brewing their craft ale.
Chemistry
Beer contains the phenolic acids 4-hydroxyphenylacetic acid, vanillic acid, caffeic acid, syringic acid, p-coumaric acid, ferulic acid, and sinapic acid. Alkaline hydrolysis experiments show that most of the phenolic acids are present as bound forms and only a small portion can be detected as free compounds. Hops, and beer made with it, contain 8-prenylnaringenin which is a potent phytoestrogen. Hop also contains myrcene, humulene, xanthohumol, isoxanthohumol, myrcenol, linalool, tannins, and resin. The alcohol 2M2B is a component of hops brewing.
Barley, in the form of malt, brings the condensed tannins prodelphinidins B3, B9 and C2 into beer. Tryptophol, tyrosol, and phenylethanol are aromatic higher alcohols found in beer as secondary products of alcoholic fermentation (products also known as congeners) by Saccharomyces cerevisiae.
|
https://en.wikipedia.org/wiki/Byte
|
The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures. To disambiguate arbitrarily sized bytes from the common 8-bit definition, network protocol documents such as the Internet Protocol (RFC 791) refer to an 8-bit byte as an octet. The bits in an octet are usually counted with numbering from 0 to 7 or 7 to 0 depending on the bit endianness. The first bit is number 0, making the eighth bit number 7.
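As an illustration of one of the two bit-numbering conventions mentioned above, the Python sketch below (an illustrative example, not taken from any standard) extracts the bits of an octet numbered 0 to 7 from the least-significant end:

```python
def bits_of_octet(value: int) -> list[int]:
    """Return the 8 bits of an octet, indexed 0..7 from the least-significant bit."""
    if not 0 <= value <= 0xFF:
        raise ValueError("an octet holds values 0..255")
    return [(value >> i) & 1 for i in range(8)]

# 0b10110010 is 178; bit 0 is the rightmost binary digit.
print(bits_of_octet(0b10110010))  # [0, 1, 0, 0, 1, 1, 0, 1]
```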
The size of the byte has historically been hardware-dependent and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used. The six-bit character code was an often-used implementation in early encoding systems, and computers using six-bit and nine-bit bytes were common in the 1960s. These systems often had memory words of 12, 18, 24, 30, 36, 48, or 60 bits, corresponding to 2, 3, 4, 5, 6, 8, or 10 six-bit bytes. In this era, bit groupings in the instruction stream were often referred to as syllables or slab, before the term byte became common.
The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte, as 2 to the power of 8 is 256. The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers commonly optimize for this usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit byte. Modern architectures typically use 32- or 64-bit words, built of four or eight bytes, respectively.
The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and Institute of Electrical and Electronics Engineers (IEEE). Internationally, the unit octet, symbol o, explicitly defines a sequence of eight bits, eliminating the potential ambiguity of the term "byte".
Etymology and history
The term byte was coined by Werner Buchholz in June 1956, during the early design phase for the IBM Stretch computer, which had addressing to the bit and variable field length (VFL) instructions with a byte size encoded in the instruction.
It is a deliberate respelling of bite to avoid accidental mutation to bit.
Another origin of byte for bit groups smaller than a computer's word size, and in particular groups of four bits, is on record by Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on an air defense system called SAGE at MIT Lincoln Laboratory in 1956 or 1957, which was jointly developed by Rand, MIT, and IBM. Later on, Schwartz's language JOVIAL actually used the term, but the author recalled vaguely that it was derived from AN/FSQ-31.
Early computers used a variety of four-bit binary-coded decimal (BCD) representations and the six-bit codes for printable graphic patterns common in the U.S. Army (FIELDATA) and Navy. These representations included alphanumeric characters and special graphical symbols. These sets were expanded in 1963 to seven bits of coding, called the American Standard Code for Information Interchange (ASCII) as the Federal Information Processing Standard, which replaced the incompatible teleprinter codes in use by different branches of the U.S. government and universities during the 1960s. ASCII included the distinction of upper- and lowercase alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and the physical or logical control of data flow over the transmission media. During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its product line of System/360 the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of their six-bit binary-coded decimal (BCDIC) representations used in earlier card punches.
The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size, while in detail the EBCDIC and ASCII encoding schemes are different.
In the early 1960s, AT&T introduced digital telephony on long-distance trunk lines. These used the eight-bit μ-law encoding. This large investment promised to reduce transmission costs for eight-bit data.
In Volume 1 of The Art of Computer Programming (first published in 1968), Donald Knuth uses byte in his hypothetical MIX computer to denote a unit which "contains an unspecified amount of information ... capable of holding at least 64 distinct values ... at most 100 distinct values. On a binary computer a byte must therefore be composed of six bits". He notes that "Since 1975 or so, the word byte has come to mean a sequence of precisely eight binary digits...When we speak of bytes in connection with MIX we shall confine ourselves to the former sense of the word, harking back to the days when bytes were not yet standardized."
The development of eight-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8008, the direct predecessor of the 8080 and the 8086, used in early personal computers, could also perform a small number of operations on the four-bit pairs in a byte, such as the decimal-add-adjust (DAA) instruction. A four-bit quantity is often called a nibble, also nybble, which is conveniently represented by a single hexadecimal digit.
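To make the nibble-to-hexadecimal-digit correspondence concrete, here is a minimal Python sketch (the helper split_nibbles is our own name, not a standard function):

```python
def split_nibbles(byte_value: int) -> tuple[int, int]:
    """Split an 8-bit value into its (high, low) nibbles."""
    if not 0 <= byte_value <= 0xFF:
        raise ValueError("value must fit in one byte")
    high = (byte_value >> 4) & 0x0F   # upper four bits
    low = byte_value & 0x0F           # lower four bits
    return high, low

# 0xA7 splits into the nibbles 0xA and 0x7 -- one hexadecimal digit each.
print([hex(n) for n in split_nibbles(0xA7)])  # ['0xa', '0x7']
```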
The term octet is used to unambiguously specify a size of eight bits. It is used extensively in protocol definitions.
Historically, the term octad or octade was used to denote eight bits as well at least in Western Europe; however, this usage is no longer common. The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers.
Unit symbol
The unit symbol for the byte is specified in IEC 80000-13, IEEE 1541 and the Metric Interchange Format as the upper-case character B.
In the International System of Quantities (ISQ), B is also the symbol of the bel, a unit of logarithmic power ratio named after Alexander Graham Bell, creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit. It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one-tenth of a byte, the decibyte, and other fractions, are only used in derived units, such as transmission rates.
The lowercase letter o for octet is defined as the symbol for octet in IEC 80000-13 and is commonly used in languages such as French and Romanian, and is also combined with metric prefixes for multiples, for example ko and Mo.
Multiple-byte units
More than one system exists to define unit multiples based on the byte. Some systems are based on powers of 10, following the International System of Units (SI), which defines for example the prefix kilo as 1000 (10³); other systems are based on powers of 2. Nomenclature for these systems is a source of confusion. Systems based on powers of 10 use standard SI prefixes (kilo, mega, giga, ...) and their corresponding symbols (k, M, G, ...). Systems based on powers of 2, however, might use binary prefixes (kibi, mebi, gibi, ...) and their corresponding symbols (Ki, Mi, Gi, ...) or they might use the prefixes K, M, and G, creating ambiguity when the prefixes M or G are used.
While the difference between the decimal and binary interpretations is relatively small for the kilobyte (about 2% smaller than the kibibyte), the systems deviate increasingly as units grow larger (the relative deviation grows by 2.4% for each three orders of magnitude). For example, a power-of-10-based terabyte is about 9% smaller than a power-of-2-based tebibyte.
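The growing gap between the two conventions can be checked with a few lines of Python (illustrative only; the percentages are simply 1 minus 1000ⁿ/1024ⁿ):

```python
# Shortfall of power-of-10 units relative to their power-of-2 counterparts.
for n, name in enumerate(["kilo/kibi", "mega/mebi", "giga/gibi", "tera/tebi"], start=1):
    shortfall = (1 - 1000**n / 1024**n) * 100
    print(f"{name}: {shortfall:.1f}% smaller")
# kilo/kibi: 2.3% smaller ... tera/tebi: 9.1% smaller
```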
Units based on powers of 10
Definition of prefixes using powers of 10—in which 1 kilobyte (symbol kB) is defined to equal 1,000 bytes—is recommended by the International Electrotechnical Commission (IEC). The IEC standard defines eight such multiples, up to 1 yottabyte (YB), equal to 1000⁸ bytes. The additional prefixes ronna- for 1000⁹ and quetta- for 1000¹⁰ were adopted by the International Bureau of Weights and Measures (BIPM) in 2022.
This definition is most commonly used for data-rate units in computer networks, internal bus, hard drive and flash media transfer speeds, and for the capacities of most storage media, particularly hard drives, flash-based storage, and DVDs. Operating systems that use this definition include macOS, iOS, Ubuntu, and Debian. It is also consistent with the other uses of the SI prefixes in computing, such as CPU clock speeds or measures of performance.
Units based on powers of 2
A system of units based on powers of 2 in which 1 kibibyte (KiB) is equal to 1,024 (i.e., 2¹⁰) bytes is defined by international standard IEC 80000-13 and is supported by national and international standards bodies (BIPM, IEC, NIST). The IEC standard defines eight such multiples, up to 1 yobibyte (YiB), equal to 1024⁸ bytes. The natural binary counterparts to ronna- and quetta- were given in a consultation paper of the International Committee for Weights and Measures' Consultative Committee for Units (CCU) as robi- (Ri, 1024⁹) and quebi- (Qi, 1024¹⁰), but have not yet been adopted by the IEC and ISO.
An alternate system of nomenclature for the same units (referred to here as the customary convention), in which 1 kilobyte (KB) is equal to 1,024 bytes, 1 megabyte (MB) is equal to 1024² bytes and 1 gigabyte (GB) is equal to 1024³ bytes, is mentioned by a 1990s JEDEC standard. Only the first three multiples (up to GB) are mentioned by the JEDEC standard, which makes no mention of TB and larger. The customary convention is used by the Microsoft Windows operating system, for random-access memory capacities such as main memory and CPU cache size, and in marketing and billing by telecommunication companies, such as Vodafone, AT&T, Orange and Telstra.
For storage capacity, the customary convention was used by macOS and iOS through Mac OS X 10.6 Snow Leopard and iOS 10, after which they switched to units based on powers of 10.
Parochial units
Various computer vendors have coined terms for data of various sizes, sometimes with different sizes for the same term even within a single vendor. These terms include double word, half word, long word, quad word, slab, superword and syllable. There are also informal terms, e.g. half byte and nybble for 4 bits, and octal K.
History of the conflicting definitions
Contemporary computer memory has a binary architecture, making a definition of memory units based on powers of 2 most practical. The use of the metric prefix kilo for binary multiples arose as a convenience, because 1,024 is approximately 1,000. This definition was popular in early decades of personal computing, with products like the Tandon 5¼-inch DD floppy format (holding 368,640 bytes) being advertised as "360 KB", following the 1,024-byte convention. It was not universal, however. The Shugart SA-400 5¼-inch floppy disk held 109,375 bytes unformatted, and was advertised as "110 Kbyte", using the 1000 convention. Likewise, the 8-inch DEC RX01 floppy (1975) held 256,256 bytes formatted, and was advertised as "256k". Other disks were advertised using a mixture of the two definitions: notably, 3½-inch HD disks advertised as "1.44 MB" in fact have a capacity of 1,440 KiB, the equivalent of 1.47 MB or 1.41 MiB.
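A short calculation (ours, for illustration) reproduces the mixed-definition arithmetic behind the "1.44 MB" label:

```python
capacity = 1440 * 1024              # 1,440 KiB = 1,474,560 bytes
print(capacity / 1000**2)           # 1.47456  -> about 1.47 MB (decimal megabytes)
print(capacity / 1024**2)           # 1.40625  -> about 1.41 MiB (binary mebibytes)
print(capacity / (1000 * 1024))     # 1.44     -> the advertised figure mixes both bases
```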
In 1995, the International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols attempted to resolve this ambiguity by proposing a set of binary prefixes for the powers of 1024, including kibi (kilobinary), mebi (megabinary), and gibi (gigabinary).
In December 1998, the IEC addressed such multiple usages and definitions by adopting the IUPAC's proposed prefixes (kibi, mebi, gibi, etc.) to unambiguously denote powers of 1024. Thus one kibibyte (1 KiB) is 1024¹ bytes = 1,024 bytes, one mebibyte (1 MiB) is 1024² bytes = 1,048,576 bytes, and so on.
In 1999, Donald Knuth suggested calling the kibibyte a "large kilobyte" (KKB).
Modern standard definitions
The IEC adopted the IUPAC proposal and published the standard in January 1999. The IEC prefixes are part of the International System of Quantities. The IEC further specified that the kilobyte should only be used to refer to 1,000 bytes.
Lawsuits over definition
Lawsuits arising from alleged consumer confusion over the binary and decimal definitions of multiples of the byte have generally ended in favor of the manufacturers, with courts holding that the legal definition of gigabyte or GB is 1 GB = 1,000,000,000 (10⁹) bytes (the decimal definition), rather than the binary definition (2³⁰, i.e., 1,073,741,824 bytes). Specifically, the United States District Court for the Northern District of California held that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce' [...] The California Legislature has likewise adopted the decimal system for all 'transactions in this state.'"
Earlier lawsuits had ended in settlement with no court ruling on the question, such as a lawsuit against drive manufacturer Western Digital. Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity. Seagate was sued on similar grounds and also settled.
Practical examples
Common uses
Many programming languages define the data type byte.
The C and C++ programming languages define byte as an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard). The C standard requires that the integral data type unsigned char must hold at least 256 different values, and is represented by at least eight bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte. In addition, the C and C++ standards require that there are no gaps between two bytes. This means every bit in memory is part of a byte.
Java's primitive data type byte is defined as eight bits. It is a signed data type, holding values from −128 to 127.
.NET programming languages, such as C#, define byte as an unsigned type holding values from 0 to 255, and sbyte as a signed data type holding values from −128 to 127.
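The difference between the unsigned and signed views of the same eight bits can be illustrated with a small two's-complement sketch in Python (the helper name is ours):

```python
def as_signed_byte(value: int) -> int:
    """Interpret an unsigned 8-bit value (0-255) as a signed two's-complement byte."""
    if not 0 <= value <= 0xFF:
        raise ValueError("value must fit in one byte")
    return value - 256 if value >= 128 else value

print(as_signed_byte(0x7F))  # 127   (same in both views)
print(as_signed_byte(0x80))  # -128  (unsigned view: 128)
print(as_signed_byte(0xFF))  # -1    (unsigned view: 255)
```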
In data transmission systems, the byte is used as a contiguous sequence of bits in a serial data stream, representing the smallest distinguished unit of data. A transmission unit might additionally include start bits, stop bits, and parity bits, and thus its size may vary from seven to twelve bits to contain a single seven-bit ASCII code.
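As a hedged sketch of such a transmission unit (assuming one start bit, LSB-first data bits, even parity, and one stop bit; real line disciplines vary), the following Python function frames a single 7-bit ASCII code into ten transmitted bits:

```python
def frame_ascii(char: str) -> list[int]:
    """Frame one 7-bit ASCII code as: start bit, 7 data bits (LSB first),
    even parity bit, stop bit -- ten transmitted bits in total."""
    code = ord(char)
    if code > 0x7F:
        raise ValueError("not a 7-bit ASCII character")
    data = [(code >> i) & 1 for i in range(7)]  # LSB-first data bits
    parity = sum(data) % 2                      # even parity over the data bits
    return [0] + data + [parity] + [1]          # start = 0, stop = 1

print(frame_ascii("A"))  # 'A' = 0x41 -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```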
See also
Data
Data hierarchy
Nibble
Octet (computing)
Primitive data type
Tryte
Word (computer architecture)
Notes
References
Further reading
Ashley Taylor. "Bits and Bytes." Stanford. https://web.stanford.edu/class/cs101/bits-bytes.html
Data types
Units of information
Binary arithmetic
Computer memory
Data unit
Primitive types
1950s neologisms
8 (number)
|
https://en.wikipedia.org/wiki/Beryllium
|
Beryllium is a chemical element with the symbol Be and atomic number 4. It is a steel-gray, strong, lightweight and brittle alkaline earth metal. It is a divalent element that occurs naturally only in combination with other elements to form minerals. Gemstones high in beryllium include beryl (aquamarine, emerald, red beryl) and chrysoberyl. It is a relatively rare element in the universe, usually occurring as a product of the spallation of larger atomic nuclei that have collided with cosmic rays. Within the cores of stars, beryllium is depleted as it is fused into heavier elements. Beryllium constitutes about 0.0004 percent by mass of Earth's crust. The world's annual beryllium production of 220 tons is usually manufactured by extraction from the mineral beryl, a difficult process because beryllium bonds strongly to oxygen.
In structural applications, the combination of high flexural rigidity, thermal stability, thermal conductivity and low density (1.85 times that of water) make beryllium metal a desirable aerospace material for aircraft components, missiles, spacecraft, and satellites. Because of its low density and atomic mass, beryllium is relatively transparent to X-rays and other forms of ionizing radiation; therefore, it is the most common window material for X-ray equipment and components of particle detectors. When added as an alloying element to aluminium, copper (notably the alloy beryllium copper), iron, or nickel, beryllium improves many physical properties. For example, tools and components made of beryllium copper alloys are strong and hard and do not create sparks when they strike a steel surface. In air, the surface of beryllium oxidizes readily at room temperature to form a passivation layer 1–10 nm thick that protects it from further oxidation and corrosion. The metal oxidizes in bulk (beyond the passivation layer) when heated above about 500 °C, and burns brilliantly when heated to about 2500 °C.
The commercial use of beryllium requires the use of appropriate dust control equipment and industrial controls at all times because of the toxicity of inhaled beryllium-containing dusts that can cause a chronic life-threatening allergic disease in some people called berylliosis. Berylliosis causes pneumonia and other associated respiratory illness.
Characteristics
Physical properties
Beryllium is a steel gray and hard metal that is brittle at room temperature and has a close-packed hexagonal crystal structure. It has exceptional stiffness (Young's modulus 287 GPa) and a melting point of 1287 °C. The modulus of elasticity of beryllium is approximately 35% greater than that of steel. The combination of this modulus and a relatively low density results in an unusually fast sound conduction speed in beryllium – about 12.9 km/s at ambient conditions. Other significant properties are high specific heat and thermal conductivity, which make beryllium the metal with the best heat dissipation characteristics per unit weight. In combination with the relatively low coefficient of linear thermal expansion (11.4×10⁻⁶ K⁻¹), these characteristics result in a unique stability under conditions of thermal loading.
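As a rough back-of-the-envelope check (ours, not from the article), the thin-rod sound speed computed as the square root of E/ρ from the quoted Young's modulus and density already lands near the stated value; the exact figure depends on which elastic wave mode is meant:

```python
import math

E = 287e9    # Young's modulus of beryllium in Pa (value quoted above)
rho = 1850   # density in kg/m^3 (about 1.85 times that of water)

v_rod = math.sqrt(E / rho)         # thin-rod (extensional) wave speed
print(f"{v_rod / 1000:.1f} km/s")  # ~12.5 km/s, close to the ~12.9 km/s quoted
```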
Nuclear properties
Naturally occurring beryllium, save for slight contamination by the cosmogenic radioisotopes, is isotopically pure beryllium-9, which has a nuclear spin of 3/2. Beryllium has a large scattering cross section for high-energy neutrons, about 6 barns for energies above approximately 10 keV. Therefore, it works as a neutron reflector and neutron moderator, effectively slowing the neutrons to the thermal energy range of below 0.03 eV, where the total cross section is at least an order of magnitude lower; the exact value strongly depends on the purity and size of the crystallites in the material.
The single primordial beryllium isotope 9Be also undergoes a (n,2n) neutron reaction with neutron energies over about 1.9 MeV, to produce 8Be, which almost immediately breaks into two alpha particles. Thus, for high-energy neutrons, beryllium is a neutron multiplier, releasing more neutrons than it absorbs. This nuclear reaction is:
9Be + n → 2 4He + 2 n
Neutrons are liberated when beryllium nuclei are struck by energetic alpha particles producing the nuclear reaction
9Be + 4He → 12C + n
where 4He is an alpha particle and 12C is a carbon-12 nucleus.
Beryllium also releases neutrons under bombardment by gamma rays. Thus, natural beryllium bombarded either by alphas or gammas from a suitable radioisotope is a key component of most radioisotope-powered nuclear reaction neutron sources for the laboratory production of free neutrons.
Small amounts of tritium are liberated when 9Be nuclei absorb low energy neutrons in the three-step nuclear reaction
9Be + n → 4He + 6He, 6He → 6Li + β−, 6Li + n → 4He + 3H
6He has a half-life of only 0.8 seconds, β− is an electron, and 6Li has a high neutron absorption cross section. Tritium is a radioisotope of concern in nuclear reactor waste streams.
Optical properties
As a metal, beryllium is transparent or translucent to most wavelengths of X-rays and gamma rays, making it useful for the output windows of X-ray tubes and other such apparatus.
Isotopes and nucleosynthesis
Both stable and unstable isotopes of beryllium are created in stars, but the radioisotopes do not last long. It is believed that most of the stable beryllium in the universe was originally created in the interstellar medium when cosmic rays induced fission in heavier elements found in interstellar gas and dust. Primordial beryllium contains only one stable isotope, 9Be, and therefore beryllium is a monoisotopic and mononuclidic element.
Radioactive cosmogenic 10Be is produced in the atmosphere of the Earth by the cosmic ray spallation of oxygen. 10Be accumulates at the soil surface, where its relatively long half-life (1.36 million years) permits a long residence time before decaying to boron-10. Thus, 10Be and its daughter products are used to examine natural soil erosion, soil formation and the development of lateritic soils, and as a proxy for measurement of the variations in solar activity and the age of ice cores. The production of 10Be is inversely proportional to solar activity, because increased solar wind during periods of high solar activity decreases the flux of galactic cosmic rays that reach the Earth. Nuclear explosions also form 10Be by the reaction of fast neutrons with 13C in the carbon dioxide in air. This is one of the indicators of past activity at nuclear weapon test sites.
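The long residence time follows directly from the exponential decay law; a minimal Python sketch (ours, not from the article), using the 1.36-million-year half-life quoted above:

```python
import math

HALF_LIFE_YEARS = 1.36e6  # half-life of 10Be quoted above

def fraction_remaining(years: float) -> float:
    """Fraction of an initial 10Be inventory still present after `years`."""
    return math.exp(-math.log(2) * years / HALF_LIFE_YEARS)

for t in (1e5, 1e6, 1e7):
    print(f"after {t:.0e} years: {fraction_remaining(t):.3f} remaining")
# after 1e+05 years: 0.950; after 1e+06 years: 0.601; after 1e+07 years: 0.006
```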
The isotope 7Be (half-life 53 days) is also cosmogenic, and shows an atmospheric abundance linked to sunspots, much like 10Be.
8Be has a very short half-life of about 8×10⁻¹⁷ s, which contributes to its significant cosmological role, as elements heavier than beryllium could not have been produced by nuclear fusion in the Big Bang. This is due to the lack of sufficient time during the Big Bang's nucleosynthesis phase to produce carbon by the fusion of 4He nuclei and the very low concentrations of available beryllium-8. British astronomer Sir Fred Hoyle first showed that the energy levels of 8Be and 12C allow carbon production by the so-called triple-alpha process in helium-fueled stars where more nucleosynthesis time is available. This process allows carbon to be produced in stars, but not in the Big Bang. Star-created carbon (the basis of carbon-based life) is thus a component in the elements in the gas and dust ejected by AGB stars and supernovae (see also Big Bang nucleosynthesis), as well as the creation of all other elements with atomic numbers larger than that of carbon.
The 2s electrons of beryllium may contribute to chemical bonding. Therefore, when 7Be decays by L-electron capture, it does so by taking electrons from its atomic orbitals that may be participating in bonding. This makes its decay rate dependent to a measurable degree upon its chemical surroundings – a rare occurrence in nuclear decay.
The shortest-lived known isotope of beryllium is 16Be, which decays through neutron emission. The exotic isotopes 11Be and 14Be are known to exhibit a nuclear halo. This phenomenon can be understood as the nuclei of 11Be and 14Be have, respectively, 1 and 4 neutrons orbiting substantially outside the classical Fermi 'waterdrop' model of the nucleus.
Occurrence
The Sun has a concentration of 0.1 parts per billion (ppb) of beryllium. Beryllium has a concentration of 2 to 6 parts per million (ppm) in the Earth's crust. It is most concentrated in soils, at about 6 ppm. Trace amounts of 9Be are found in the Earth's atmosphere. The concentration of beryllium in sea water is 0.2–0.6 parts per trillion. In stream water, however, beryllium is more abundant, with a concentration of 0.1 ppb.
Beryllium is found in over 100 minerals, but most are uncommon to rare. The more common beryllium containing minerals include: bertrandite (Be4Si2O7(OH)2), beryl (Al2Be3Si6O18), chrysoberyl (Al2BeO4) and phenakite (Be2SiO4). Precious forms of beryl are aquamarine, red beryl and emerald.
The green color in gem-quality forms of beryl comes from varying amounts of chromium (about 2% for emerald).
The two main ores of beryllium, beryl and bertrandite, are found in Argentina, Brazil, India, Madagascar, Russia and the United States. Total world reserves of beryllium ore are greater than 400,000 tonnes.
Production
The extraction of beryllium from its compounds is a difficult process due to its high affinity for oxygen at elevated temperatures, and its ability to reduce water when its oxide film is removed. Currently the United States, China and Kazakhstan are the only three countries involved in the industrial-scale extraction of beryllium. Kazakhstan produces beryllium from a concentrate stockpiled before the breakup of the Soviet Union around 1991. This resource had become nearly depleted by the mid-2010s.
Production of beryllium in Russia was halted in 1997, and is planned to be resumed in the 2020s.
Beryllium is most commonly extracted from the mineral beryl, which is either sintered using an extraction agent or melted into a soluble mixture. The sintering process involves mixing beryl with sodium fluorosilicate and soda and heating the mixture to form sodium fluoroberyllate, aluminium oxide and silicon dioxide. Beryllium hydroxide is precipitated from a solution of sodium fluoroberyllate and sodium hydroxide in water. Extraction of beryllium using the melt method involves grinding beryl into a powder and heating it until it melts. The melt is quickly cooled with water and then reheated in concentrated sulfuric acid, mostly yielding beryllium sulfate and aluminium sulfate. Aqueous ammonia is then used to remove the aluminium and sulfur, leaving beryllium hydroxide.
Beryllium hydroxide created using either the sinter or melt method is then converted into beryllium fluoride or beryllium chloride. To form the fluoride, aqueous ammonium hydrogen fluoride is added to beryllium hydroxide to yield a precipitate of ammonium tetrafluoroberyllate, which is heated to form beryllium fluoride. Heating the fluoride with magnesium forms finely divided beryllium, and additional heating creates the compact metal. Heating beryllium hydroxide forms the oxide, which becomes beryllium chloride when combined with carbon and chlorine. Electrolysis of molten beryllium chloride is then used to obtain the metal.
Chemical properties
A beryllium atom has the electronic configuration [He] 2s². The predominant oxidation state of beryllium is +2; the beryllium atom has lost both of its valence electrons. Complexes of beryllium in lower oxidation states are exceedingly rare. For example, bis(carbene) compounds proposed to contain beryllium in the 0 and +1 oxidation states have been reported, although these claims have proved controversial.
A stable complex with a Be-Be bond, which formally features beryllium in the +1 oxidation state, has been described. Beryllium's chemical behavior is largely a result of its small atomic and ionic radii. It thus has very high ionization potentials and strong polarization while bonded to other atoms, which is why all of its compounds are covalent. Its chemistry has similarities to that of aluminium, an example of a diagonal relationship.
At room temperature, the surface of beryllium forms a 1−10 nm-thick oxide passivation layer that prevents further reactions with air, except for gradual thickening of the oxide up to about 25 nm. When heated above about 500 °C, oxidation into the bulk metal progresses along grain boundaries. Once the metal is ignited in air by heating above the oxide melting point around 2500 °C, beryllium burns brilliantly, forming a mixture of beryllium oxide and beryllium nitride. Beryllium dissolves readily in non-oxidizing acids, such as HCl and diluted H2SO4, but not in nitric acid or water as this forms the oxide. This behavior is similar to that of aluminium metal. Beryllium also dissolves in alkali solutions.
Binary compounds of beryllium(II) are polymeric in the solid state. BeF2 has a silica-like structure with corner-shared BeF4 tetrahedra. BeCl2 and BeBr2 have chain structures with edge-shared tetrahedra. Beryllium oxide, BeO, is a white refractory solid, which has the wurtzite crystal structure and a thermal conductivity as high as some metals. BeO is amphoteric. Beryllium sulfide, selenide and telluride are known, all having the zincblende structure. Beryllium nitride, Be3N2 is a high-melting-point compound which is readily hydrolyzed. Beryllium azide, BeN6 is known and beryllium phosphide, Be3P2 has a similar structure to Be3N2. A number of beryllium borides are known, such as Be5B, Be4B, Be2B, BeB2, BeB6 and BeB12. Beryllium carbide, Be2C, is a refractory brick-red compound that reacts with water to give methane. No beryllium silicide has been identified.
The halides BeX2 (X = F, Cl, Br, I) have a linear monomeric molecular structure in the gas phase. Complexes of the halides are formed with one or more ligands donating a total of two pairs of electrons. Such compounds obey the octet rule. Other 4-coordinate complexes such as the aqua-ion [Be(H2O)4]2+ also obey the octet rule.
Aqueous solutions
Solutions of beryllium salts, such as beryllium sulfate and beryllium nitrate, are acidic because of hydrolysis of the [Be(H2O)4]2+ ion. The concentration of the first hydrolysis product, [Be(H2O)3(OH)]+, is less than 1% of the beryllium concentration. The most stable hydrolysis product is the trimeric ion [Be3(OH)3(H2O)6]3+. Beryllium hydroxide, Be(OH)2, is insoluble in water at pH 5 or more. Consequently, beryllium compounds are generally insoluble at biological pH. Because of this, inhalation of beryllium metal dust by people leads to the development of the fatal condition of berylliosis. Be(OH)2 dissolves in strongly alkaline solutions.
Beryllium(II) forms few complexes with monodentate ligands because the water molecules in the aquo-ion [Be(H2O)4]2+ are bound very strongly to the beryllium ion. Notable exceptions are the series of water-soluble complexes with the fluoride ion:
[Be(H2O)4]2+ + n F− ⇌ [Be(H2O)4−nFn]2−n + n H2O
Beryllium(II) forms many complexes with bidentate ligands containing oxygen-donor atoms. The species [Be3O(H2PO4)6]2- is notable for having a 3-coordinate oxide ion at its center. Basic beryllium acetate, Be4O(OAc)6, has an oxide ion surrounded by a tetrahedron of beryllium atoms.
With organic ligands, such as the malonate ion, the acid deprotonates when forming the complex. The donor atoms are two oxygens.
H2A + [Be(H2O)4]2+ ⇌ [BeA(H2O)2] + 2 H+ + 2 H2O
H2A + [BeA(H2O)2] ⇌ [BeA2]2− + 2 H+ + 2 H2O
Formation of a complex is in competition with the metal ion-hydrolysis reaction, and mixed complexes with both the anion and the hydroxide ion are also formed. For example, derivatives of the cyclic trimer are known, with a bidentate ligand replacing one or more pairs of water molecules.
Aliphatic hydroxycarboxylic acids such as glycollic acid form rather weak, monodentate complexes in solution, in which the hydroxyl group remains intact. In the solid state, the hydroxyl group may deprotonate: a hexamer, Na4[Be6(OCH2(O)O)6], was isolated long ago. Aromatic hydroxy ligands (i.e. phenols) form relatively strong complexes. For example, log K1 and log K2 values of 12.2 and 9.3 have been reported for complexes with tiron.
Beryllium generally has a rather poor affinity for ammine ligands. Ligands such as EDTA behave as dicarboxylic acids. There are many early reports of complexes with amino acids, but they are not reliable, as the concomitant hydrolysis reactions were not understood at the time of publication. Values for log β of ca. 6 to 7 have been reported. The degree of formation is small because of competition with hydrolysis reactions.
Organic chemistry
Organoberyllium chemistry is limited to academic research due to the cost and toxicity of beryllium, beryllium derivatives and the reagents required for the introduction of beryllium, such as beryllium chloride. Organometallic beryllium compounds are known to be highly reactive. Examples of known organoberyllium compounds are dineopentylberyllium, beryllocene (Cp2Be), diallylberyllium (made by an exchange reaction of diethyl beryllium with triallyl boron), bis(1,3-trimethylsilylallyl)beryllium, Be(mes)2, and diberyllocene (a beryllium(I) complex). Ligands can also be aryls and alkynyls.
History
The mineral beryl, which contains beryllium, has been used at least since the Ptolemaic dynasty of Egypt. In the first century CE, Roman naturalist Pliny the Elder mentioned in his encyclopedia Natural History that beryl and emerald ("smaragdus") were similar. The Papyrus Graecus Holmiensis, written in the third or fourth century CE, contains notes on how to prepare artificial emerald and beryl.
Early analyses of emeralds and beryls by Martin Heinrich Klaproth, Torbern Olof Bergman, Franz Karl Achard, and Johann Jakob Bindheim always yielded similar elements, leading to the mistaken conclusion that both substances are aluminium silicates. Mineralogist René Just Haüy discovered that both crystals are geometrically identical, and he asked chemist Louis-Nicolas Vauquelin for a chemical analysis.
In a 1798 paper read before the Institut de France, Vauquelin reported that he found a new "earth" by dissolving aluminium hydroxide from emerald and beryl in an additional alkali. The editors of the journal Annales de Chimie et de Physique named the new earth "glucine" for the sweet taste of some of its compounds. Klaproth preferred the name "beryllina" because yttria also formed sweet salts. The name "beryllium" was first used by Wöhler in 1828.
Friedrich Wöhler and Antoine Bussy independently isolated beryllium in 1828 by the chemical reaction of metallic potassium with beryllium chloride, as follows:
BeCl2 + 2 K → 2 KCl + Be
Using an alcohol lamp, Wöhler heated alternating layers of beryllium chloride and potassium in a wired-shut platinum crucible. The above reaction immediately took place and caused the crucible to become white hot. Upon cooling and washing the resulting gray-black powder he saw that it was made of fine particles with a dark metallic luster. The highly reactive potassium had been produced by the electrolysis of its compounds, a process discovered 21 years before. The chemical method using potassium yielded only small grains of beryllium from which no ingot of metal could be cast or hammered.
The direct electrolysis of a molten mixture of beryllium fluoride and sodium fluoride by Paul Lebeau in 1898 resulted in the first pure (99.5 to 99.8%) samples of beryllium. However, industrial production started only after the First World War. The original industrial involvement included subsidiaries and scientists related to the Union Carbide and Carbon Corporation in Cleveland, Ohio, and Siemens & Halske AG in Berlin. In the US, the process was ruled by Hugh S. Cooper, director of The Kemet Laboratories Company. In Germany, the first commercially successful process for producing beryllium was developed in 1921 by Alfred Stock and Hans Goldschmidt.
A sample of beryllium was bombarded with alpha rays from the decay of radium in a 1932 experiment by James Chadwick that uncovered the existence of the neutron. This same method is used in one class of radioisotope-based laboratory neutron sources that produce 30 neutrons for every million α particles.
Beryllium production saw a rapid increase during World War II, due to the rising demand for hard beryllium-copper alloys and phosphors for fluorescent lights. Most early fluorescent lamps used zinc orthosilicate with varying content of beryllium to emit greenish light. Small additions of magnesium tungstate improved the blue part of the spectrum to yield an acceptable white light. Halophosphate-based phosphors replaced beryllium-based phosphors after beryllium was found to be toxic.
Electrolysis of a mixture of beryllium fluoride and sodium fluoride was used to isolate beryllium during the 19th century. The metal's high melting point makes this process more energy-consuming than corresponding processes used for the alkali metals. Early in the 20th century, the production of beryllium by the thermal decomposition of beryllium iodide was investigated following the success of a similar process for the production of zirconium, but this process proved to be uneconomical for volume production.
Pure beryllium metal did not become readily available until 1957, even though it had been used as an alloying metal to harden and toughen copper much earlier. Beryllium could be produced by reducing beryllium compounds such as beryllium chloride with metallic potassium or sodium. Currently, most beryllium is produced by reducing beryllium fluoride with magnesium. The price on the American market for vacuum-cast beryllium ingots was about $338 per pound ($745 per kilogram) in 2001.
Between 1998 and 2008, the world's production of beryllium had decreased from 343 to about 200 tonnes. It then increased to 230 tonnes by 2018, of which 170 tonnes came from the United States.
Etymology
Named after beryl, a semiprecious mineral, from which it was first isolated.
Applications
Radiation windows
Because of its low atomic number and very low absorption for X-rays, the oldest and still one of the most important applications of beryllium is in radiation windows for X-ray tubes. Extreme demands are placed on purity and cleanliness of beryllium to avoid artifacts in the X-ray images. Thin beryllium foils are used as radiation windows for X-ray detectors, and the extremely low absorption minimizes the heating effects caused by high intensity, low energy X-rays typical of synchrotron radiation. Vacuum-tight windows and beam-tubes for radiation experiments on synchrotrons are manufactured exclusively from beryllium. In scientific setups for various X-ray emission studies (e.g., energy-dispersive X-ray spectroscopy) the sample holder is usually made of beryllium because its emitted X-rays have much lower energies (≈100 eV) than X-rays from most studied materials.
Low atomic number also makes beryllium relatively transparent to energetic particles. Therefore, it is used to build the beam pipe around the collision region in particle physics setups, such as all four main detector experiments at the Large Hadron Collider (ALICE, ATLAS, CMS, LHCb), the Tevatron and at SLAC. The low density of beryllium allows collision products to reach the surrounding detectors without significant interaction, its stiffness allows a powerful vacuum to be produced within the pipe to minimize interaction with gases, its thermal stability allows it to function correctly at temperatures of only a few degrees above absolute zero, and its diamagnetic nature keeps it from interfering with the complex multipole magnet systems used to steer and focus the particle beams.
Mechanical applications
Because of its stiffness, light weight and dimensional stability over a wide temperature range, beryllium metal is used for lightweight structural components in the defense and aerospace industries in high-speed aircraft, guided missiles, spacecraft, and satellites, including the James Webb Space Telescope. Several liquid-fuel rockets have used rocket nozzles made of pure beryllium. Beryllium powder was itself studied as a rocket fuel, but this use has never materialized. A small number of extreme high-end bicycle frames have been built with beryllium. From 1998 to 2000, the McLaren Formula One team used Mercedes-Benz engines with beryllium-aluminium-alloy pistons. The use of beryllium engine components was banned following a protest by Scuderia Ferrari.
Mixing about 2.0% beryllium into copper forms an alloy called beryllium copper that is six times stronger than copper alone. Beryllium alloys are used in many applications because of their combination of elasticity, high electrical conductivity and thermal conductivity, high strength and hardness, nonmagnetic properties, as well as good corrosion and fatigue resistance. These applications include non-sparking tools that are used near flammable gases (beryllium nickel), in springs and membranes (beryllium nickel and beryllium iron) used in surgical instruments and high temperature devices. As little as 50 parts per million of beryllium alloyed with liquid magnesium leads to a significant increase in oxidation resistance and decrease in flammability.
The high elastic stiffness of beryllium has led to its extensive use in precision instrumentation, e.g. in inertial guidance systems and in the support mechanisms for optical systems. Beryllium-copper alloys were also applied as a hardening agent in "Jason pistols", which were used to strip the paint from the hulls of ships.
Beryllium was also used for cantilevers in high performance phonograph cartridge styli, where its extreme stiffness and low density allowed for tracking weights to be reduced to 1 gram, yet still track high frequency passages with minimal distortion.
An earlier major application of beryllium was in brakes for military airplanes because of its hardness, high melting point, and exceptional ability to dissipate heat. Environmental considerations have led to substitution by other materials.
To reduce costs, beryllium can be alloyed with significant amounts of aluminium, resulting in the AlBeMet alloy (a trade name). This blend is cheaper than pure beryllium, while still retaining many desirable properties.
Mirrors
Beryllium mirrors are of particular interest. Large-area mirrors, frequently with a honeycomb support structure, are used, for example, in meteorological satellites where low weight and long-term dimensional stability are critical. Smaller beryllium mirrors are used in optical guidance systems and in fire-control systems, e.g. in the German-made Leopard 1 and Leopard 2 main battle tanks. In these systems, very rapid movement of the mirror is required which again dictates low mass and high rigidity. Usually the beryllium mirror is coated with hard electroless nickel plating which can be more easily polished to a finer optical finish than beryllium. In some applications, though, the beryllium blank is polished without any coating. This is particularly applicable to cryogenic operation where thermal expansion mismatch can cause the coating to buckle.
The James Webb Space Telescope has 18 hexagonal beryllium sections for its mirrors, each plated with a thin layer of gold. Because JWST will face a temperature of 33 K, the mirror is made of gold-plated beryllium, capable of handling extreme cold better than glass. Beryllium contracts and deforms less than glass – and remains more uniform – in such temperatures. For the same reason, the optics of the Spitzer Space Telescope are entirely built of beryllium metal.
Magnetic applications
Beryllium is non-magnetic. Therefore, tools fabricated out of beryllium-based materials are used by naval or military explosive ordnance disposal teams for work on or near naval mines, since these mines commonly have magnetic fuzes. They are also found in maintenance and construction materials near magnetic resonance imaging (MRI) machines because of the high magnetic fields generated. In the fields of radio communications and powerful (usually military) radars, hand tools made of beryllium are used to tune the highly magnetic klystrons, magnetrons, traveling wave tubes, etc., that are used for generating high levels of microwave power in the transmitters.
Nuclear applications
Thin plates or foils of beryllium are sometimes used in nuclear weapon designs as the very outer layer of the plutonium pits in the primary stages of thermonuclear bombs, placed to surround the fissile material. These layers of beryllium are good "pushers" for the implosion of the plutonium-239, and they are good neutron reflectors, just as in beryllium-moderated nuclear reactors.
Beryllium is also commonly used in some neutron sources in laboratory devices in which relatively few neutrons are needed (rather than having to use a nuclear reactor, or a particle accelerator-powered neutron generator). For this purpose, a target of beryllium-9 is bombarded with energetic alpha particles from a radioisotope such as polonium-210, radium-226, plutonium-238, or americium-241. In the nuclear reaction that occurs, a beryllium nucleus is transmuted into carbon-12, and one free neutron is emitted, traveling in about the same direction as the alpha particle was heading. Such alpha decay driven beryllium neutron sources, named "urchin" neutron initiators, were used in some early atomic bombs. Neutron sources in which beryllium is bombarded with gamma rays from a gamma decay radioisotope, are also used to produce laboratory neutrons.
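A rough estimate of such a source's output can be made from the yield of about 30 neutrons per million alpha particles quoted earlier in the article; the 1-curie 241Am activity below is an assumed example value, not a figure from the source:

```python
ALPHAS_PER_SECOND = 3.7e10       # assumed: 1 curie of an alpha emitter such as 241Am
NEUTRON_YIELD = 30 / 1_000_000   # ~30 neutrons per million incident alphas (quoted above)

neutrons_per_second = ALPHAS_PER_SECOND * NEUTRON_YIELD
print(f"{neutrons_per_second:.1e} neutrons per second")  # ~1.1e+06 n/s
```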
Beryllium is also used in fuel fabrication for CANDU reactors. The fuel elements have small appendages that are resistance brazed to the fuel cladding using an induction brazing process with Be as the braze filler material. Bearing pads are brazed in place to prevent contact between the fuel bundle and the pressure tube containing it, and inter-element spacer pads are brazed on to prevent element to element contact.
Beryllium is also used at the Joint European Torus nuclear-fusion research laboratory, and it will be used in the more advanced ITER to condition the components which face the plasma. Beryllium has also been proposed as a cladding material for nuclear fuel rods, because of its good combination of mechanical, chemical, and nuclear properties. Beryllium fluoride is one of the constituent salts of the eutectic salt mixture FLiBe, which is used as a solvent, moderator and coolant in many hypothetical molten salt reactor designs, including the liquid fluoride thorium reactor (LFTR).
Acoustics
The low weight and high rigidity of beryllium make it useful as a material for high-frequency speaker drivers. Because beryllium is expensive (many times more than titanium), hard to shape due to its brittleness, and toxic if mishandled, beryllium tweeters are limited to high-end home, pro audio, and public address applications. Some high-fidelity products have been fraudulently claimed to be made of the material.
Some high-end phonograph cartridges used beryllium cantilevers to improve tracking by reducing mass.
Electronic
Beryllium is a p-type dopant in III-V compound semiconductors. It is widely used in materials such as GaAs, AlGaAs, InGaAs and InAlAs grown by molecular beam epitaxy (MBE). Cross-rolled beryllium sheet is an excellent structural support for printed circuit boards in surface-mount technology. In critical electronic applications, beryllium is both a structural support and heat sink. The application also requires a coefficient of thermal expansion that is well matched to the alumina and polyimide-glass substrates. The beryllium-beryllium oxide composite "E-Materials" have been specially designed for these electronic applications and have the additional advantage that the thermal expansion coefficient can be tailored to match diverse substrate materials.
Beryllium oxide is useful for many applications that require the combined properties of an electrical insulator and an excellent heat conductor, with high strength and hardness, and a very high melting point. Beryllium oxide is frequently used as an insulator base plate in high-power transistors in radio frequency transmitters for telecommunications. Beryllium oxide is also being studied for use in increasing the thermal conductivity of uranium dioxide nuclear fuel pellets. Beryllium compounds were used in fluorescent lighting tubes, but this use was discontinued because of the disease berylliosis which developed in the workers who were making the tubes.
Healthcare
Beryllium is a component of several dental alloys.
Toxicity and safety
Biological effects
Approximately 35 micrograms of beryllium is found in the average human body, an amount not considered harmful. Beryllium is chemically similar to magnesium and therefore can displace it from enzymes, which causes them to malfunction. Because Be2+ is a highly charged and small ion, it can easily get into many tissues and cells, where it specifically targets cell nuclei, inhibiting many enzymes, including those used for synthesizing DNA. Its toxicity is exacerbated by the fact that the body has no means to control beryllium levels, and once inside the body, beryllium cannot be removed.
Inhalation
Chronic beryllium disease (CBD), or berylliosis, is a pulmonary and systemic granulomatous disease caused by inhalation of dust or fumes contaminated with beryllium; either large amounts over a short time or small amounts over a long time can lead to this ailment. Symptoms of the disease can take up to five years to develop; about a third of patients with it die and the survivors are left disabled. The International Agency for Research on Cancer (IARC) lists beryllium and beryllium compounds as Category 1 carcinogens.
Occupational exposure
In the US, the Occupational Safety and Health Administration (OSHA) has designated a permissible exposure limit (PEL) for beryllium and beryllium compounds of 0.2 µg/m3 as an 8-hour time-weighted average (TWA) and 2.0 µg/m3 as a short-term exposure limit over a sampling period of 15 minutes. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) upper-bound threshold of 0.5 µg/m3. The IDLH (immediately dangerous to life and health) value is 4 mg/m3. The toxicity of beryllium is on par with other toxic metalloids/metals, such as arsenic and mercury.
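An 8-hour time-weighted average is computed by weighting each measured concentration by the time spent at it and dividing by the 8-hour shift; a minimal Python sketch with invented sample values (not regulatory guidance):

```python
# (concentration in ug/m^3, hours at that concentration); values are invented examples
samples = [(0.05, 3.0), (0.30, 1.0), (0.10, 4.0)]

twa = sum(c * t for c, t in samples) / 8.0   # average over the full 8-hour shift
print(f"TWA = {twa:.3f} ug/m^3, exceeds 0.2 ug/m^3 PEL: {twa > 0.2}")
```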
Exposure to beryllium in the workplace can lead to a sensitization immune response and can over time develop chronic beryllium disease. The National Institute for Occupational Safety and Health (NIOSH) in the United States researches these effects in collaboration with a major manufacturer of beryllium products. NIOSH also conducts genetic research on sensitization and CBD, independently of this collaboration.
Acute beryllium disease in the form of chemical pneumonitis was first reported in Europe in 1933 and in the United States in 1943. A survey found that about 5% of workers in plants manufacturing fluorescent lamps in 1949 in the United States had beryllium-related lung diseases. Chronic berylliosis resembles sarcoidosis in many respects, and the differential diagnosis is often difficult. It killed some early workers in nuclear weapons design, such as Herbert L. Anderson.
Beryllium may be found in coal slag. When the slag is formulated into an abrasive agent for blasting paint and rust from hard surfaces, the beryllium can become airborne and become a source of exposure.
Although the use of beryllium compounds in fluorescent lighting tubes was discontinued in 1949, potential for exposure to beryllium exists in the nuclear and aerospace industries and in the refining of beryllium metal and melting of beryllium-containing alloys, the manufacturing of electronic devices, and the handling of other beryllium-containing material.
Detection
Early researchers undertook the highly hazardous practice of identifying beryllium and its various compounds from its sweet taste. Identification is now performed using safe modern diagnostics techniques. A successful test for beryllium in air and on surfaces has been developed and published as an international voluntary consensus standard ASTM D7202. The procedure uses dilute ammonium bifluoride for dissolution and fluorescence detection with beryllium bound to sulfonated hydroxybenzoquinoline, allowing up to 100 times more sensitive detection than the recommended limit for beryllium concentration in the workplace. Fluorescence increases with increasing beryllium concentration. The new procedure has been successfully tested on a variety of surfaces and is effective for the dissolution and detection of refractory beryllium oxide and siliceous beryllium in minute concentrations (ASTM D7458). The NIOSH Manual of Analytical Methods contains methods for measuring occupational exposures to beryllium.
References
Cited sources
Further reading
Mroz MM, Balkissoon R, Newman LS. "Beryllium". In: Bingham E, Cohrssen B, Powell C (eds.) Patty's Toxicology, Fifth Edition. New York: John Wiley & Sons 2001, 177–220.
Walsh, KA, Beryllium Chemistry and Processing. Vidal, EE. et al. Eds. 2009, Materials Park, OH:ASM International.
Beryllium Lymphocyte Proliferation Testing (BeLPT). DOE Specification 1142–2001. Washington, DC: U.S. Department of Energy, 2001.
Scerri, Eric (2007). The Periodic Table: Its Story and Its Significance. New York: Oxford University Press.
External links
ATSDR Case Studies in Environmental Medicine: Beryllium Toxicity U.S. Department of Health and Human Services
It's Elemental – Beryllium
MSDS: ESPI Metals
Beryllium at The Periodic Table of Videos (University of Nottingham)
National Institute for Occupational Safety and Health – Beryllium Page
National Supplemental Screening Program (Oak Ridge Associated Universities)
Historic Price of Beryllium in USA
Chemical elements
Alkaline earth metals
Neutron moderators
Nuclear materials
IARC Group 1 carcinogens
Chemical hazards
Reducing agents
Chemical elements with hexagonal close-packed structure
|
https://en.wikipedia.org/wiki/Bridge
|
A bridge is a structure built to span a physical obstacle (such as a body of water, valley, road, or railway) without blocking the way underneath. It is constructed for the purpose of providing passage over the obstacle, which is usually something that is otherwise difficult or impossible to cross. There are many different designs of bridges, each serving a particular purpose and applicable to different situations. Designs of bridges vary depending on factors such as the function of the bridge, the nature of the terrain where the bridge is constructed and anchored, the material used to make it, and the funds available to build it.
The earliest bridges were likely made with fallen trees and stepping stones. The Neolithic people built boardwalk bridges across marshland. The Arkadiko Bridge (dating from the 13th century BC, in the Peloponnese) is one of the oldest arch bridges still in existence and use.
Etymology
The Oxford English Dictionary traces the origin of the word bridge to an Old English word brycg, of the same meaning. The word can be traced directly back to Proto-Indo-European *bʰrēw-. The origin of the word for the card game of the same name is unknown.
History
The simplest and earliest types of bridges were stepping stones. Neolithic people also built a form of boardwalk across marshes; examples of such bridges include the Sweet Track and the Post Track in England, approximately 6000 years old. Undoubtedly, ancient people would also have used log bridges, that is, timber bridges formed from trees that fell naturally or were intentionally felled or placed across streams. Some of the first human-made bridges with significant span were probably intentionally felled trees.
Among the oldest timber bridges is the Holzbrücke Rapperswil-Hurden bridge that crossed upper Lake Zürich in Switzerland; prehistoric timber pilings discovered to the west of the Seedamm causeway date back to 1523 BC. The first wooden footbridge there led across Lake Zürich; it was reconstructed several times through the late 2nd century AD, when the Roman Empire built a wooden bridge to carry transport across the lake. Between 1358 and 1360, Rudolf IV, Duke of Austria, built a 'new' wooden bridge across the lake that was used until 1878. On April 6, 2001, a reconstruction of the original wooden footbridge was opened; it is also the longest wooden bridge in Switzerland.
The Arkadiko Bridge is one of four Mycenaean corbel arch bridges part of a former network of roads, designed to accommodate chariots, between the fort of Tiryns and town of Epidauros in the Peloponnese, in southern Greece. Dating to the Greek Bronze Age (13th century BC), it is one of the oldest arch bridges still in existence and use. Several intact arched stone bridges from the Hellenistic era can be found in the Peloponnese.
The greatest bridge builders of antiquity were the ancient Romans. The Romans built arch bridges and aqueducts that could stand in conditions that would damage or destroy earlier designs. Some stand today. An example is the Alcántara Bridge, built over the river Tagus, in Spain. The Romans also used cement, which reduced the variation of strength found in natural stone. One type of cement, called pozzolana, consisted of water, lime, sand, and volcanic rock. Brick and mortar bridges were built after the Roman era, as the technology for cement was lost (then later rediscovered).
In India, the Arthashastra treatise by Kautilya mentions the construction of dams and bridges. A Mauryan bridge near Girnar was surveyed by James Prinsep. The bridge was swept away during a flood, and later repaired by Puspagupta, the chief architect of emperor Chandragupta I. The use of stronger bridges using plaited bamboo and iron chain was visible in India by about the 4th century. A number of bridges, both for military and commercial purposes, were constructed by the Mughal administration in India.
Although large Chinese bridges of wooden construction existed at the time of the Warring States period, the oldest surviving stone bridge in China is the Zhaozhou Bridge, built from 595 to 605 AD during the Sui dynasty. This bridge is also historically significant as it is the world's oldest open-spandrel stone segmental arch bridge. European segmental arch bridges date back to at least the Alconétar Bridge (approximately 2nd century AD), while the enormous Roman era Trajan's Bridge (105 AD) featured open-spandrel segmental arches in wooden construction.
Rope bridges, a simple type of suspension bridge, were used by the Inca civilization in the Andes mountains of South America, just prior to European colonization in the 16th century.
The Ashanti built bridges over streams and rivers. They were constructed by pounding four large forked tree trunks into the stream bed, placing beams along these forked pillars, then positioning cross-beams that were finally covered with four to six inches of dirt.
During the 18th century, there were many innovations in the design of timber bridges by Hans Ulrich Grubenmann, Johannes Grubenmann, and others. The first book on bridge engineering was written by Hubert Gautier in 1716.
A major breakthrough in bridge technology came with the erection of the Iron Bridge in Shropshire, England in 1779. It used cast iron for the first time as arches to cross the river Severn. With the Industrial Revolution in the 19th century, truss systems of wrought iron were developed for larger bridges, but iron does not have the tensile strength to support large loads. With the advent of steel, which has a high tensile strength, much larger bridges were built, many using the ideas of Gustave Eiffel.
In Canada and the United States, numerous timber covered bridges were built in the late 1700s to the late 1800s, reminiscent of earlier designs in Germany and Switzerland. Some covered bridges were also built in Asia. In later years, some were partly made of stone or metal but the trusses were usually still made of wood; in the United States, there were three styles of trusses, the Queen Post, the Burr Arch and the Town Lattice. Hundreds of these structures still stand in North America. They were brought to the attention of the general public in the 1990s by the novel, movie, and play The Bridges of Madison County.
In 1927 welding pioneer Stefan Bryła designed the first welded road bridge in the world, the Maurzyce Bridge which was later built across the river Słudwia at Maurzyce near Łowicz, Poland in 1929. In 1995, the American Welding Society presented the Historic Welded Structure Award for the bridge to Poland.
Types of bridges
Bridges can be categorized in several different ways. Common categories include the type of structural elements used, what they carry, whether they are fixed or movable, and the materials from which they are made.
Structure types
Bridges may be classified by how the actions of tension, compression, bending, torsion and shear are distributed through their structure. Most bridges will employ all of these to some degree, but only a few will predominate. The separation of forces and moments may be quite clear. In a suspension or cable-stayed bridge, the elements in tension are distinct in shape and placement. In other cases the forces may be distributed among a large number of members, as in a truss.
Some engineers sub-divide 'beam' bridges into slab, beam-and-slab and box girder on the basis of their cross-section. A slab can be solid or voided (though this is no longer favored for inspectability reasons) while beam-and-slab consists of concrete or steel girders connected by a concrete slab. A box-girder cross-section consists of a single-cell or multi-cellular box. In recent years, integral bridge construction has also become popular.
Fixed or movable bridges
Most bridges are fixed bridges, meaning they have no moving parts and stay in one place until they fail or are demolished. Temporary bridges, such as Bailey bridges, are designed to be assembled, taken apart, transported to a different site, and re-used. They are important in military engineering and are also used to carry traffic while an old bridge is being rebuilt. Movable bridges are designed to move out of the way of boats or other kinds of traffic, which would otherwise be too tall to fit. These are generally electrically powered.
The Tank bridge transporter (TBT) has the same cross-country performance as a tank even when fully loaded. It can deploy, drop off and load bridges independently, but it cannot recover them.
Double-decked bridges
Double-decked (or double-decker) bridges have two levels, such as the George Washington Bridge, which connects New York City to Bergen County, New Jersey, US, and is the world's busiest bridge, carrying 102 million vehicles annually; truss work between the roadway levels provided stiffness to the roadways and reduced movement of the upper level when the lower level was installed three decades after the upper level. The Tsing Ma Bridge and Kap Shui Mun Bridge in Hong Kong have six lanes on their upper decks, and on their lower decks there are two lanes and a pair of tracks for MTR metro trains. Some double-decked bridges only use one level for street traffic; the Washington Avenue Bridge in Minneapolis reserves its lower level for automobile and light rail traffic and its upper level for pedestrian and bicycle traffic (predominantly students at the University of Minnesota). Likewise, in Toronto, the Prince Edward Viaduct has five lanes of motor traffic, bicycle lanes, and sidewalks on its upper deck; and a pair of tracks for the Bloor–Danforth subway line on its lower deck. The western span of the San Francisco–Oakland Bay Bridge also has two levels.
Robert Stephenson's High Level Bridge across the River Tyne in Newcastle upon Tyne, completed in 1849, is an early example of a double-decked bridge. The upper level carries a railway, and the lower level is used for road traffic. Other examples include Britannia Bridge over the Menai Strait and Craigavon Bridge in Derry, Northern Ireland. The Oresund Bridge between Copenhagen and Malmö consists of a four-lane highway on the upper level and a pair of railway tracks at the lower level. Tower Bridge in London is a different example of a double-decked bridge, with the central section consisting of a low-level bascule span and a high-level footbridge.
Viaducts
A viaduct is made up of multiple bridges connected into one longer structure. The longest and some of the highest bridges are viaducts, such as the Lake Pontchartrain Causeway and Millau Viaduct.
Multi-way bridge
A multi-way bridge has three or more separate spans which meet near the center of the bridge. Multi-way bridges with only three spans appear as a "T" or "Y" when viewed from above. Multi-way bridges are extremely rare. The Tridge, Margaret Bridge, and Zanesville Y-Bridge are examples.
Bridge types by use
A bridge can be categorized by what it is designed to carry, such as trains, pedestrian or road traffic (road bridge), a pipeline (Pipe bridge) or waterway for water transport or barge traffic. An aqueduct is a bridge that carries water, resembling a viaduct, which is a bridge that connects points of equal height. A road-rail bridge carries both road and rail traffic. Overway is a term for a bridge that separates incompatible intersecting traffic, especially road and rail. A bridge can carry overhead power lines as does the Storstrøm Bridge.
Some bridges accommodate other purposes, such as the tower of Nový Most Bridge in Bratislava, which features a restaurant, or a bridge-restaurant which is a bridge built to serve as a restaurant. Other suspension bridge towers carry transmission antennas.
Conservationists use wildlife overpasses to reduce habitat fragmentation and animal-vehicle collisions. The first animal bridges sprang up in France in the 1950s, and these types of bridges are now used worldwide to protect both large and small wildlife.
Bridges are subject to unplanned uses as well. The areas underneath some bridges have become makeshift shelters and homes to homeless people, and the undersides of bridges around the world are common spots for graffiti. Some bridges attract people attempting suicide, and become known as suicide bridges.
Bridge types by material
The materials used to build the structure are also used to categorize bridges. Until the end of the 18th century, bridges were made out of timber, stone and masonry. Modern bridges are currently built in concrete, steel, fiber reinforced polymers (FRP), stainless steel or combinations of those materials. Living bridges have been constructed of live plants such as Ficus elastica tree roots in India and wisteria vines in Japan.
Analysis and design
Unlike buildings, whose design is led by architects, bridges are usually designed by engineers. This follows from the importance of the engineering requirements, namely spanning the obstacle and having the durability to survive, with minimal maintenance, in an aggressive outdoor environment. Bridges are first analysed: the bending moment and shear force distributions due to the applied loads are calculated. For this, the finite element method is the most popular. The analysis can be one-, two-, or three-dimensional. For the majority of bridges, a two-dimensional plate model (often with stiffening beams) or an upstand finite element model is sufficient. On completion of the analysis, the bridge is designed to resist the applied bending moments and shear forces; section sizes are selected with sufficient capacity to resist the stresses. Many bridges are made of prestressed concrete, which has good durability properties, either by pre-tensioning of beams prior to installation or post-tensioning on site.
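As an illustration of the quantities such an analysis produces, the classic hand calculation for a single simply supported span of length L carrying a uniformly distributed load w per unit length gives a maximum bending moment at midspan and a maximum shear force at the supports of

\[
M_{\max} = \frac{w L^{2}}{8}, \qquad V_{\max} = \frac{w L}{2}.
\]

Real decks are usually continuous, skewed or curved, which is why numerical models such as two-dimensional plate finite element models are preferred to closed-form results of this kind.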
In most countries, bridges, like other structures, are designed according to Load and Resistance Factor Design (LRFD) principles. In simple terms, this means that the load is factored up by a factor greater than unity, while the resistance or capacity of the structure is factored down by a factor less than unity. The effect of the factored load (stress, bending moment) should be less than the factored resistance to that effect. Both of these factors allow for uncertainty and are greater when the uncertainty is greater.
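Written with generic symbols (the particular factor values vary between design codes and between load types), the LRFD check for each load effect takes the form

\[
\sum_{i} \gamma_{i} Q_{i} \;\le\; \phi R_{n},
\]

where the Q_i are the nominal load effects (dead load, traffic load, wind and so on), the γ_i > 1 are the corresponding load factors, R_n is the nominal resistance of the member and φ < 1 is the resistance factor.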
Aesthetics
Most bridges are utilitarian in appearance, but in some cases, the appearance of the bridge can have great importance. Often, this is the case with a large bridge that serves as an entrance to a city, or crosses over a main harbor entrance. These are sometimes known as signature bridges. Designers of bridges in parks and along parkways often place more importance on aesthetics, as well. Examples include the stone-faced bridges along the Taconic State Parkway in New York.
Generally bridges are more aesthetically pleasing if they are simple in shape, the deck is thinner (in proportion to its span), the lines of the structure are continuous, and the shapes of the structural elements reflect the forces acting on them. To create a beautiful image, some bridges are built much taller than necessary. This type, often found in east-Asian style gardens, is called a Moon bridge, evoking a rising full moon. Other garden bridges may cross only a dry bed of stream-washed pebbles, intended only to convey an impression of a stream. Often in palaces, a bridge will be built over an artificial waterway as symbolic of a passage to an important place or state of mind. A set of five bridges cross a sinuous waterway in an important courtyard of the Forbidden City in Beijing, China. The central bridge was reserved exclusively for the use of the Emperor and Empress, with their attendants.
Bridge maintenance
The estimated life of bridges varies between 25 and 80 years depending on location and material. However, bridges may last a hundred years or more with proper maintenance and rehabilitation. Bridge maintenance consists of a combination of structural health monitoring and testing. This is regulated in country-specific engineering standards and includes ongoing monitoring every three to six months, a simple test or inspection every two to three years and a major inspection every six to ten years. In Europe, the cost of maintenance is considerable, and in some countries it is higher than spending on new bridges. The lifetime of welded steel bridges can be significantly extended by aftertreatment of the weld transitions. This offers a potentially high benefit, allowing existing bridges to be used far beyond their planned lifetime.
Bridge traffic loading
While the response of a bridge to the applied loading is well understood, the applied traffic loading itself is still the subject of research. This is a statistical problem as loading is highly variable, particularly for road bridges. Load effects in bridges (stresses, bending moments) are designed for using the principles of Load and Resistance Factor Design. Before factoring to allow for uncertainty, the load effect is generally considered to be the maximum characteristic value in a specified return period. Notably, in Europe, it is the maximum value expected in 1000 years.
Bridge standards generally include a load model, deemed to represent the characteristic maximum load to be expected in the return period. In the past, these load models were agreed by standard drafting committees of experts but today, this situation is changing. It is now possible to measure the components of bridge traffic load, to weigh trucks, using weigh-in-motion (WIM) technologies. With extensive WIM databases, it is possible to calculate the maximum expected load effect in the specified return period. This is an active area of research, addressing issues of opposing direction lanes, side-by-side (same direction) lanes, traffic growth, permit/non-permit vehicles and long-span bridges (see below). Rather than repeat this complex process every time a bridge is to be designed, standards authorities specify simplified notional load models, notably HL-93, intended to give the same load effects as the characteristic maximum values. The Eurocode is an example of a standard for bridge traffic loading that was developed in this way.
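The extrapolation from measured traffic to a characteristic maximum can be illustrated with a deliberately simplified sketch: take block maxima from WIM-style records, fit an extreme value distribution, and read off the quantile for the required return period. Every number below is invented for illustration; this is not the calibration procedure behind the Eurocode or HL-93.

```python
# Simplified illustration of return-period extrapolation from
# weigh-in-motion-style data; all numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

# Pretend WIM record: gross vehicle weights (kN), 200 trucks/day for 500 days.
days, trucks_per_day = 500, 200
weights = rng.lognormal(mean=5.0, sigma=0.35, size=(days, trucks_per_day))

# Daily maxima, here using the heaviest single truck as a stand-in load effect.
daily_max = weights.max(axis=1)

# Fit a Gumbel (Type I extreme value) distribution by the method of moments.
euler_gamma = 0.5772156649
beta = np.std(daily_max) * np.sqrt(6) / np.pi    # scale parameter
mu = np.mean(daily_max) - euler_gamma * beta     # location parameter

# Characteristic value for a 1000-year return period (assume 250 traffic days/year).
p = 1.0 - 1.0 / (1000 * 250)
characteristic = mu - beta * np.log(-np.log(p))  # Gumbel quantile function
print(f"Characteristic 1000-year maximum: {characteristic:.0f} kN")
```

Real calibrations work with load effects on specific influence lines rather than single-truck weights, and must also deal with multiple lanes, dynamic allowance and traffic growth, as noted above.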
Traffic loading on long span bridges
Most bridge standards are only applicable for short and medium spans - for example, the Eurocode is only applicable for loaded lengths up to 200 m. Longer spans are dealt with on a case-by-case basis. It is generally accepted that the intensity of load reduces as span increases because the probability of many trucks being closely spaced and extremely heavy reduces as the number of trucks involved increases. It is also generally assumed that short spans are governed by a small number of trucks traveling at high speed, with an allowance for dynamics. Longer spans on the other hand, are governed by congested traffic and no allowance for dynamics is needed. Calculating the loading due to congested traffic remains a challenge as there is a paucity of data on inter-vehicle gaps, both within-lane and inter-lane, in congested conditions. Weigh-in-Motion (WIM) systems provide data on inter-vehicle gaps but only operate well in free flowing traffic conditions. Some authors have used cameras to measure gaps and vehicle lengths in jammed situations and have inferred weights from lengths using WIM data. Others have used microsimulation to generate typical clusters of vehicles on the bridge.
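A toy version of the microsimulation idea is to pack randomly sampled vehicles bumper-to-bumper along the loaded length and total their weights. Every parameter below (truck share, vehicle lengths and weights, inter-vehicle gap) is an assumption chosen purely for illustration, not a value from any measurement campaign.

```python
# Toy congested-traffic model for a long loaded length; parameters are assumptions.
import numpy as np

rng = np.random.default_rng(1)
span = 1000.0   # loaded length in metres
gap = 2.0       # assumed inter-vehicle gap in a jam, metres

def jam_load(span, gap, rng):
    """Total vehicle weight (kN) in one lane for one random traffic jam."""
    position, total = 0.0, 0.0
    while True:
        is_truck = rng.random() < 0.2                                   # assumed truck share
        length = rng.uniform(8, 18) if is_truck else rng.uniform(4, 5)  # metres
        weight = rng.lognormal(5.0, 0.4) if is_truck else rng.uniform(10, 25)  # kN
        if position + length > span:
            return total
        total += weight
        position += length + gap

loads = [jam_load(span, gap, rng) for _ in range(2000)]
print(f"Mean jam load: {np.mean(loads):.0f} kN; "
      f"99th percentile: {np.percentile(loads, 99):.0f} kN")
```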
Bridge vibration
Bridges vibrate under load, and this contributes, to a greater or lesser extent, to the stresses. Vibration and dynamics are generally more significant for slender structures such as pedestrian bridges and long-span road or rail bridges. One of the most famous examples is the Tacoma Narrows Bridge, which collapsed shortly after being constructed due to excessive vibration. More recently, the Millennium Bridge in London vibrated excessively under pedestrian loading and was closed and retrofitted with a system of dampers. For smaller bridges, dynamic effects are not catastrophic but can add an amplification to the stresses caused by static effects. For example, the Eurocode for bridge loading specifies amplifications of between 10% and 70%, depending on the span, the number of traffic lanes and the type of stress (bending moment or shear force).
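In design calculations this is usually expressed as a dynamic amplification (or impact) factor applied to the static load effect, so that, schematically,

\[
E_{\text{total}} = (1 + \varphi)\,E_{\text{static}},
\]

with φ lying in roughly the 0.10 to 0.70 range quoted above for the Eurocode, depending on span, number of lanes and the type of load effect.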
Vehicle-bridge dynamic interaction
There have been many studies of the dynamic interaction between vehicles and bridges during vehicle crossing events. Fryba did pioneering work on the interaction of a moving load and an Euler-Bernoulli beam. With increased computing power, vehicle-bridge interaction (VBI) models have become ever more sophisticated. The concern is that one of the many natural frequencies associated with the vehicle will resonate with the bridge's first natural frequency. The vehicle-related frequencies include body bounce and axle hop, but there are also pseudo-frequencies associated with the vehicle's speed of crossing and there are many frequencies associated with the surface profile. Given the wide variety of heavy vehicles on road bridges, a statistical approach has been suggested, with VBI analyses carried out for many statically extreme loading events.
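In its simplest form, neglecting damping and the vehicle's own dynamics, the moving-load problem studied by Fryba is a single constant force P crossing a simply supported Euler-Bernoulli beam at speed v:

\[
EI\,\frac{\partial^{4} w(x,t)}{\partial x^{4}} + \mu\,\frac{\partial^{2} w(x,t)}{\partial t^{2}} = P\,\delta(x - vt),
\]

where EI is the flexural rigidity, μ the mass per unit length, w(x,t) the deflection and δ the Dirac delta function. Full VBI models replace the single force with a sprung, damped multi-axle vehicle whose equations of motion are coupled to those of the bridge.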
Bridge failures
The failure of bridges is of special concern for structural engineers in trying to learn lessons vital to bridge design, construction and maintenance.
The failure of bridges first assumed national interest in Britain during the Victorian era when many new designs were being built, often using new materials, with some of them failing catastrophically.
In the United States, the National Bridge Inventory tracks the structural evaluations of all bridges, including designations such as "structurally deficient" and "functionally obsolete".
Bridge health monitoring
There are several methods used to monitor the condition of large structures like bridges. Many long-span bridges are now routinely monitored with a range of sensors, including strain transducers, accelerometers, tiltmeters, and GPS. Accelerometers have the advantage that they are inertial, i.e., they do not require a reference point to measure from. The need for a fixed reference point is often a problem for distance or deflection measurement, especially if the bridge is over water. Crowdsourcing bridge conditions by accessing data passively captured by cell phones, which routinely include accelerometers and GPS sensors, has been suggested as an alternative to installing sensors during bridge construction and as a supplement to professional examinations.
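A common first use of accelerometer records is to estimate the bridge's natural frequencies, since changes in these over time can indicate a change in stiffness or condition. The sketch below is purely illustrative: it uses a synthetic signal in place of a real ambient-vibration record and a basic FFT peak search in place of the more careful system-identification methods used in practice.

```python
# Illustrative frequency estimate from a synthetic accelerometer record.
import numpy as np

fs = 100.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 600, 1 / fs)   # ten minutes of data

# Synthetic record: a 2.1 Hz mode buried in broadband noise, standing in
# for what a deck-mounted accelerometer might record under ambient traffic.
rng = np.random.default_rng(0)
accel = 0.05 * np.sin(2 * np.pi * 2.1 * t) + 0.02 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(accel.size, d=1 / fs)

band = freqs > 0.5              # ignore quasi-static drift below 0.5 Hz
dominant = freqs[band][np.argmax(spectrum[band])]
print(f"Estimated first natural frequency: {dominant:.2f} Hz")
```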
An option for structural-integrity monitoring is "non-contact monitoring", which uses the Doppler effect (Doppler shift). A laser beam from a Laser Doppler Vibrometer is directed at the point of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the laser beam frequency due to the motion of the surface. The advantage of this method is that the setup time for the equipment is short and, unlike an accelerometer, it allows measurements to be taken on multiple structures in a short time. Additionally, this method can measure specific points on a bridge that might be difficult to access. However, vibrometers are relatively expensive and have the disadvantage that a reference point is needed to measure from.
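In outline, light reflected from a surface moving with velocity v along the beam direction returns with its frequency shifted by

\[
\Delta f = \frac{2v}{\lambda},
\]

where λ is the laser wavelength; the factor of two arises because both the outgoing and the returning path lengths change. Demodulating this shift over time recovers the surface velocity, from which the vibration amplitude and frequency follow.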
Snapshots in time of the external condition of a bridge can be recorded using Lidar to aid bridge inspection. This can provide measurement of the bridge geometry (to facilitate the building of a computer model) but the accuracy is generally insufficient to measure bridge deflections under load.
While larger modern bridges are routinely monitored electronically, smaller bridges are generally inspected visually by trained inspectors. There is considerable research interest in the challenge of smaller bridges as they are often remote and do not have electrical power on site. Possible solutions are the installation of sensors on a specialist inspection vehicle and the use of its measurements as it drives over the bridge to infer information about the bridge condition. These vehicles can be equipped with accelerometers, gyrometers, Laser Doppler Vibrometers and some even have the capability to apply a resonant force to the road surface in order to dynamically excite the bridge at its resonant frequency.
See also
Air draft
Architectural engineering
Bridge chapel
Bridge tower
Bridge to nowhere
Bridges Act
BS 5400
Causeway
Coal trestle
Covered bridges
Cross-sea traffic ways
Culvert
Deck
Devil's Bridge
Footbridge
Jet bridge
Landscape architecture
Megaproject
Military bridges
Orphan bridge
Outline of bridges
Overpass
Pontoon bridge
Rigid-frame bridge
Structure gauge
Transporter bridge
Tensegrity
Trestle bridge
Tunnel
References
Further reading
Bagher Shemirani, Alireza. Experimental and numerical studies of concrete bridge decks using ultra high-performance concrete and reinforced concrete. Computers and Concrete, 29(6), p. 407-418, 2022.
Brown, David J. Bridges: Three Thousand Years of Defying Nature. Richmond Hill, Ont.: Firefly Books, 2005.
Sandak, Cass R. Bridges. An Easy-read modern wonders book. New York: F. Watts, 1983.
Whitney, Charles S. Bridges of the World: Their Design and Construction. Mineola, NY: Dover Publications, 2003. (Unabridged republication of Bridges: A Study in Their Art, Science, and Evolution, 1929.)
External links
Digital Bridge: Bridges of the Nineteenth Century, a collection of digitized books at Lehigh University
Structurae – International Database and Gallery of Engineering Structures, with over 10,000 bridges
U.S. Federal Highway Administration Bridge Technology
The Museum of Japanese Timber Bridges Fukuoka University
"bridge-info.org": site for bridges
|
https://en.wikipedia.org/wiki/Bead
|
A bead is a small, decorative object that is formed in a variety of shapes and sizes of a material such as stone, bone, shell, glass, plastic, wood, or pearl and with a small hole for threading or stringing. Beads range in size from under to over in diameter.
Beads represent some of the earliest forms of jewellery, with a pair of beads made from Nassarius sea snail shells dating to approximately 100,000 years ago thought to be the earliest known example. Beadwork is the art or craft of making things with beads. Beads can be woven together with specialized thread, strung onto thread or soft, flexible wire, or adhered to a surface (e.g. fabric, clay).
Types of beads
Beads can be divided into several types of overlapping categories based on different criteria such as the materials from which they are made, the process used in their manufacturing, the place or period of origin, the patterns on their surface, or their general shape. In some cases, such as millefiori and cloisonné beads, multiple categories may overlap in an interdependent fashion.
Components
Beads can be made of many different materials. The earliest beads were made of a variety of natural materials which, after they were gathered, could be readily drilled and shaped. As humans became capable of obtaining and working with more difficult materials, those materials were added to the range of available substances.
In modern manufacturing, the most common bead materials are wood, plastic, glass, metal, and stone.
Natural materials
Beads are still made from many naturally occurring materials, both organic (i.e., of animal- or plant-based origin) and inorganic (purely mineral origin). However, some of these materials now routinely undergo some extra processing beyond mere shaping and drilling such as color enhancement via dyes or irradiation.
The natural organics include bone, coral, horn, ivory, seeds (such as tagua nuts), animal shell, and wood. For most of human history pearls were the ultimate precious beads of natural origin because of their rarity; the modern pearl-culturing process has made them far more common. Amber and jet are also of natural organic origin although both are the result of partial fossilization.
The natural inorganics include various types of stones, ranging from gemstones to common minerals, and metals. Of the latter, only a few precious metals occur in pure forms, but other purified base metals may also be placed in this category, along with certain naturally occurring alloys such as electrum.
Synthetic materials
The oldest-surviving synthetic materials used for bead making have generally been ceramics: pottery and glass. Beads were also made from ancient alloys such as bronze and brass, but as those were more vulnerable to oxidation they have generally been less well-preserved at archaeological sites.
Many different subtypes of glass are now used for beadmaking, some of which have their own component-specific names. Lead crystal beads have a high percentage of lead oxide in the glass formula, increasing the refractive index. Most of the other named glass types have their formulations and patterns inseparable from the manufacturing process.
Small, colorful, fusible plastic beads can be placed on a solid plastic-backed peg array to form designs and then melted together with a clothes iron; alternatively, they can be strung into necklaces and bracelets or woven into keychains. Fusible beads come in many colors and degrees of transparency/opacity, including varieties that glow in the dark or have internal glitter; peg boards come in various shapes and several geometric patterns. Plastic toy beads, made by chopping plastic tubes into short pieces, were introduced in 1958 by Munkplast AB in Munka-Ljungby, Sweden. Known as Indian beads, they were originally sewn together to form ribbons. The pegboard for bead designs was invented in the early 1960s (patented 1962, patent granted 1967) by Gunnar Knutsson in Vällingby, Sweden, as a therapy in homes for the elderly; the pegboard later gained popularity as a toy for children. The bead designs were glued to cardboard or Masonite boards and used as trivets. Later, when the beads were made of polyethylene, it became possible to fuse them with a flat iron. Hama beads come in three sizes: mini, midi and maxi. Perler beads come in two sizes called classic (5mm) and biggie (10mm). Pyssla beads (by IKEA) come in only one size (5mm).
Manufacturing
Modern mass-produced beads are generally shaped by carving or casting, depending on the material and desired effect. In some cases, more specialized metalworking or glassworking techniques may be employed, or a combination of multiple techniques and materials may be used such as in cloisonné.
Glassworking
Most glass beads are pressed glass, mass-produced by preparing a molten batch of glass of the desired color and pouring it into molds to form the desired shape. This is also true of most plastic beads.
A smaller and more expensive subset of glass and lead crystal beads are cut into precise faceted shapes on an individual basis. This was once done by hand but has largely been taken over by precision machinery.
"Fire-polished" faceted beads are a less expensive alternative to hand-cut faceted glass or crystal. They derive their name from the second half of a two-part process: first, the glass batch is poured into round bead molds, then they are faceted with a grinding wheel. The faceted beads are then poured onto a tray and briefly reheated just long enough to melt the surface, "polishing" out any minor surface irregularities from the grinding wheel.
Specialized glass techniques and types
There are several specialized glassworking techniques that create a distinctive appearance throughout the body of the resulting beads, which are then primarily referred to by the glass type.
If the glass batch is used to create a large massive block instead of pre-shaping it as it cools, the result may then be carved into smaller items in the same manner as stone. Conversely, glass artisans may make beads by lampworking the glass on an individual basis; once formed, the beads undergo little or no further shaping after the layers have been properly annealed.
Most of these glass subtypes are some form of fused glass, although goldstone is created by controlling the reductive atmosphere and cooling conditions of the glass batch rather than by fusing separate components together.
Dichroic glass beads incorporate a semitransparent microlayer of metal between two or more layers. Fibre optic glass beads have an eyecatching chatoyant effect across the grain.
There are also several ways to fuse many small glass canes together into a multicolored pattern, resulting in millefiori beads or chevron beads (sometimes called "trade beads"). "Furnace glass" beads encase a multicolored core in a transparent exterior layer which is then annealed in a furnace.
More economically, millefiori beads can also be made by limiting the patterning process to long, narrow canes or rods known as murrine. Thin cross-sections, or "decals", can then be cut from the murrine and fused into the surface of a plain glass bead.
Shapes
Beads can be made in a variety of shapes, including the following, as well as tubular and oval-shaped beads.
Round
This is the most common shape for beads that are strung on wire to create necklaces and bracelets. Round beads lie well together and are pleasing to the eye. Round beads can be made of glass, stone, ceramic, metal, or wood.
Square or cubed
Square beads can be used as spacers to enhance a necklace design; however, a necklace can also be strung with just square beads. Necklaces with square beads are used as rosary or prayer necklaces, and wooden or shell versions are made for beachwear.
Hair pipe beads
Elk rib bones were the original material for the long, tubular hair pipe beads. Today these beads are commonly made of bison and water buffalo bones and are popular for breastplates and chokers among Plains Indians. Black variations of these beads are made from the animals' horns.
Seed beads
Seed beads are uniformly shaped spheroidal or tube shaped beads ranging in size from under a millimetre to several millimetres. "Seed bead" is a generic term for any small bead. Usually rounded in shape, seed beads are most commonly used for loom and off-loom bead weaving.
Place or period of origin
African trade beads or slave beads may be antique beads that were manufactured in Europe and used for trade during the colonial period, such as chevron beads; or they may have been made in West Africa by and for Africans, such as Mauritanian Kiffa beads, Ghanaian and Nigerian powder glass beads, or African-made brass beads. Archaeologists have documented that as recently as the late-nineteenth century beads manufactured in Europe continued to accompany exploration of Africa using Indigenous routes into the interior.
Austrian crystal is a generic term for cut lead-crystal beads, based on the location and prestige of the Swarovski firm.
Czech glass beads are made in the Czech Republic, in particular an area called Jablonec nad Nisou. Production of glass beads in the area dates back to the 14th century, though production was depressed under communist rule. Because of this long tradition, their workmanship and quality has an excellent reputation.
Islamic glass beads have been made in a wide geographical and historical range of Islamic cultures. Used and manufactured from medieval Spain and North Africa in the West and to China in the East, they can be identified by recognizable features, including styles and techniques.
Vintage beads, in the collectibles and antique market, refers to items that are at least 25 or more years old. Vintage beads are available in materials that include lucite, plastic, crystal, metal and glass.
Miscellaneous ethnic beads
Tibetan Dzi beads and Rudraksha beads are used to make Buddhist and Hindu rosaries (malas). Magatama are traditional Japanese beads, and cinnabar was often used for making beads in China. Wampum are cylindrical white or purple beads made from quahog or North Atlantic channeled whelk shells by northeastern Native American tribes, such as the Wampanoag and Shinnecock. Job's tears are seed beads popular among southeastern Native American tribes. Heishe are beads made of shells or stones by the Kewa Pueblo people of New Mexico.
Symbolic meaning of beads
In many parts of the world, beads are used for symbolic purposes, for example:
use for prayer or devotion - e.g. rosary beads for Roman Catholics and many other Christians, misbaha for Shia and many other Muslims, japamala/nenju for Hindus, Buddhists, Jains, some Sikhs, Confucianism, Taoists/Daoists, Shinto, etc.
use for anti-tension devices, e.g. Greek komboloi, or worry beads.
use as currency e.g. Aggrey beads from Ghana
use for gaming e.g. owari beads for mankala
History
Beads are thought to be one of the earliest forms of trade between members of the human race. It is believed that bead trading was one of the reasons why humans developed language. Beads are said to have been used and traded for most of human history. The oldest beads found to date were at Blombos Cave, about 72,000 years old, and at Ksar Akil in Lebanon, about 40,000 years old.
Surface patterns
After shaping, glass and crystal beads can have their surface appearance enhanced by etching a translucent frosted layer, applying an additional color layer, or both. Aurora Borealis, or AB, is a surface coating that diffuses light into a rainbow. Other surface coatings are vitrail, moonlight, dorado, satin, star shine, and heliotrope.
Faux beads are beads that are made to look like a more expensive original material, especially in the case of fake pearls and simulated rocks, minerals and gemstones. Precious metals and ivory are also imitated.
Tagua nuts from South America are used as an ivory substitute since the natural ivory trade has been restricted worldwide.
See also
Fly tying (spherical brass, tungsten, and glass beads are often used in fly tying)
Glass beadmaking
Jewelry design
Mardi Gras beads
Murano beads
Pearl
Ultraviolet-sensitive bead
References
Further reading
Beck, Horace (1928) "Classification and Nomenclature of Beads and Pendants." Archaeologia 77. (Reprinted by Shumway Publishers York, PA 1981)
Dubin, Lois Sherr. North American Indian Jewelry and Adornment: From Prehistory to the Present. New York: Harry N. Abrams, 1999: 170–171.
Dubin, Lois Sherr. The History of Beads: From 100,000 B.C. to the Present, Revised and Expanded Edition. New York: Harry N. Abrams, 2009.
|
https://en.wikipedia.org/wiki/Bird
|
Birds are a group of warm-blooded vertebrates constituting the class Aves, characterised by feathers, toothless beaked jaws, the laying of hard-shelled eggs, a high metabolic rate, a four-chambered heart, and a strong yet lightweight skeleton. Birds live worldwide and range in size from the bee hummingbird to the common ostrich. There are about ten thousand living species, more than half of which are passerine, or "perching" birds. Birds have wings whose development varies according to species; the only known groups without wings are the extinct moa and elephant birds. Wings, which are modified forelimbs, gave birds the ability to fly, although further evolution has led to the loss of flight in some birds, including ratites, penguins, and diverse endemic island species. The digestive and respiratory systems of birds are also uniquely adapted for flight. Some bird species of aquatic environments, particularly seabirds and some waterbirds, have further evolved for swimming. The study of birds is called ornithology.
Birds are feathered theropod dinosaurs and constitute the only known living dinosaurs. Likewise, birds are considered reptiles in the modern cladistic sense of the term, and their closest living relatives are the crocodilians. Birds are descendants of the primitive avialans (whose members include Archaeopteryx) which first appeared during the Late Jurassic. According to DNA evidence, modern birds (Neornithes) evolved in the Early to Late Cretaceous, and diversified dramatically around the time of the Cretaceous–Paleogene extinction event 66 mya, which killed off the pterosaurs and all non-avian dinosaurs.
Many social species pass on knowledge across generations, which is considered a form of culture. Birds are social, communicating with visual signals, calls, and songs, and participating in such behaviours as cooperative breeding and hunting, flocking, and mobbing of predators. The vast majority of bird species are socially (but not necessarily sexually) monogamous, usually for one breeding season at a time, sometimes for years, and rarely for life. Other species have breeding systems that are polygynous (one male with many females) or, rarely, polyandrous (one female with many males). Birds produce offspring by laying eggs which are fertilised through sexual reproduction. They are usually laid in a nest and incubated by the parents. Most birds have an extended period of parental care after hatching.
Many species of birds are economically important as food for human consumption and raw material in manufacturing, with domesticated and undomesticated birds being important sources of eggs, meat, and feathers. Songbirds, parrots, and other species are popular as pets. Guano (bird excrement) is harvested for use as a fertiliser. Birds figure throughout human culture. About 120 to 130 species have become extinct due to human activity since the 17th century, and hundreds more before then. Human activity threatens about 1,200 bird species with extinction, though efforts are underway to protect them. Recreational birdwatching is an important part of the ecotourism industry.
Evolution and classification
The first classification of birds was developed by Francis Willughby and John Ray in their 1676 volume Ornithologiae.
Carl Linnaeus modified that work in 1758 to devise the taxonomic classification system currently in use. Birds are categorised as the biological class Aves in Linnaean taxonomy. Phylogenetic taxonomy places Aves in the clade Theropoda.
Definition
Aves and a sister group, the order Crocodilia, contain the only living representatives of the reptile clade Archosauria. During the late 1990s, Aves was most commonly defined phylogenetically as all descendants of the most recent common ancestor of modern birds and Archaeopteryx lithographica. However, an earlier definition proposed by Jacques Gauthier gained wide currency in the 21st century, and is used by many scientists including adherents to the PhyloCode. Gauthier defined Aves to include only the crown group of the set of modern birds. This was done by excluding most groups known only from fossils, and assigning them, instead, to the broader group Avialae, in part to avoid the uncertainties about the placement of Archaeopteryx in relation to animals traditionally thought of as theropod dinosaurs.
Gauthier and de Queiroz identified four different definitions for the same biological name "Aves", which is a problem. The authors proposed to reserve the term Aves only for the crown group consisting of the last common ancestor of all living birds and all of its descendants, which corresponds to meaning number 4 below. They assigned other names to the other groups.
Aves can mean all archosaurs closer to birds than to crocodiles (alternately Avemetatarsalia)
Aves can mean those advanced archosaurs with feathers (alternately Avifilopluma)
Aves can mean those feathered dinosaurs that fly (alternately Avialae)
Aves can mean the last common ancestor of all the currently living birds and all of its descendants (a "crown group", in this sense synonymous with Neornithes)
Under the fourth definition Archaeopteryx, traditionally considered one of the earliest members of Aves, is removed from this group, becoming a non-avian dinosaur instead. These proposals have been adopted by many researchers in the field of palaeontology and bird evolution, though the exact definitions applied have been inconsistent. Avialae, initially proposed to replace the traditional fossil content of Aves, is often used synonymously with the vernacular term "bird" by these researchers.
Most researchers define Avialae as branch-based clade, though definitions vary. Many authors have used a definition similar to "all theropods closer to birds than to Deinonychus", with Troodon being sometimes added as a second external specifier in case it is closer to birds than to Deinonychus. Avialae is also occasionally defined as an apomorphy-based clade (that is, one based on physical characteristics). Jacques Gauthier, who named Avialae in 1986, re-defined it in 2001 as all dinosaurs that possessed feathered wings used in flapping flight, and the birds that descended from them.
Despite being currently one of the most widely used, the crown-group definition of Aves has been criticised by some researchers. Lee and Spencer (1997) argued that, contrary to what Gauthier defended, this definition would not increase the stability of the clade and the exact content of Aves will always be uncertain because any defined clade (either crown or not) will have few synapomorphies distinguishing it from its closest relatives. Their alternative definition is synonymous to Avifilopluma.
Dinosaurs and the origin of birds
Based on fossil and biological evidence, most scientists accept that birds are a specialised subgroup of theropod dinosaurs and, more specifically, members of Maniraptora, a group of theropods which includes dromaeosaurids and oviraptorosaurs, among others. As scientists have discovered more theropods closely related to birds, the previously clear distinction between non-birds and birds has become blurred. By the 2000s, discoveries in the Liaoning Province of northeast China, which demonstrated many small theropod feathered dinosaurs, contributed to this ambiguity.
The consensus view in contemporary palaeontology is that the flying theropods, or avialans, are the closest relatives of the deinonychosaurs, which include dromaeosaurids and troodontids. Together, these form a group called Paraves. Some basal members of Deinonychosauria, such as Microraptor, have features which may have enabled them to glide or fly. The most basal deinonychosaurs were very small. This evidence raises the possibility that the ancestor of all paravians may have been arboreal, have been able to glide, or both. Unlike Archaeopteryx and the non-avialan feathered dinosaurs, who primarily ate meat, studies suggest that the first avialans were omnivores.
The Late Jurassic Archaeopteryx is well known as one of the first transitional fossils to be found, and it provided support for the theory of evolution in the late 19th century. Archaeopteryx was the first fossil to display both clearly traditional reptilian characteristics—teeth, clawed fingers, and a long, lizard-like tail—as well as wings with flight feathers similar to those of modern birds. It is not considered a direct ancestor of birds, though it is possibly closely related to the true ancestor.
Early evolution
Over 40% of key traits found in modern birds evolved during the 60 million year transition from the earliest bird-line archosaurs to the first maniraptoromorphs, i.e. the first dinosaurs closer to living birds than to Tyrannosaurus rex. The loss of osteoderms otherwise common in archosaurs and acquisition of primitive feathers might have occurred early during this phase. After the appearance of Maniraptoromorpha, the next 40 million years marked a continuous reduction of body size and the accumulation of neotenic (juvenile-like) characteristics. Hypercarnivory became increasingly less common while braincases enlarged and forelimbs became longer. The integument evolved into complex, pennaceous feathers.
The oldest known paravian (and probably the earliest avialan) fossils come from the Tiaojishan Formation of China, which has been dated to the late Jurassic period (Oxfordian stage), about 160 million years ago. The avialan species from this time period include Anchiornis huxleyi, Xiaotingia zhengi, and Aurornis xui.
The well-known probable early avialan, Archaeopteryx, dates from slightly later Jurassic rocks (about 155 million years old) from Germany. Many of these early avialans shared unusual anatomical features that may be ancestral to modern birds but were later lost during bird evolution. These features include enlarged claws on the second toe which may have been held clear of the ground in life, and long feathers or "hind wings" covering the hind limbs and feet, which may have been used in aerial maneuvering.
Avialans diversified into a wide variety of forms during the Cretaceous period. Many groups retained primitive characteristics, such as clawed wings and teeth, though the latter were lost independently in a number of avialan groups, including modern birds (Aves). Increasingly stiff tails (especially the outermost half) can be seen in the evolution of maniraptoromorphs, and this process culminated in the appearance of the pygostyle, an ossification of fused tail vertebrae. In the late Cretaceous, about 100 million years ago, the ancestors of all modern birds evolved a more open pelvis, allowing them to lay larger eggs compared to body size. Around 95 million years ago, they evolved a better sense of smell.
A third stage of bird evolution starting with Ornithothoraces (the "bird-chested" avialans) can be associated with the refining of aerodynamics and flight capabilities, and the loss or co-ossification of several skeletal features. Particularly significant are the development of an enlarged, keeled sternum and the alula, and the loss of grasping hands.
Early diversity of bird ancestors
The first large, diverse lineage of short-tailed avialans to evolve were the Enantiornithes, or "opposite birds", so named because the construction of their shoulder bones was in reverse to that of modern birds. Enantiornithes occupied a wide array of ecological niches, from sand-probing shorebirds and fish-eaters to tree-dwelling forms and seed-eaters. While they were the dominant group of avialans during the Cretaceous period, enantiornithes became extinct along with many other dinosaur groups at the end of the Mesozoic era.
Many species of the second major avialan lineage to diversify, the Euornithes (meaning "true birds", because they include the ancestors of modern birds), were semi-aquatic and specialised in eating fish and other small aquatic organisms. Unlike the Enantiornithes, which dominated land-based and arboreal habitats, most early euornithes lacked perching adaptations and likely included shorebird-like species, waders, and swimming and diving species.
The latter included the superficially gull-like Ichthyornis and the Hesperornithiformes, which became so well adapted to hunting fish in marine environments that they lost the ability to fly and became primarily aquatic. The early euornithes also saw the development of many traits associated with modern birds, like strongly keeled breastbones, toothless, beaked portions of their jaws (though most non-avian euornithes retained teeth in other parts of the jaws). Euornithes also included the first avialans to develop true pygostyle and a fully mobile fan of tail feathers, which may have replaced the "hind wing" as the primary mode of aerial maneuverability and braking in flight.
A study on mosaic evolution in the avian skull found that the last common ancestor of all Neornithes might have had a beak similar to that of the modern hook-billed vanga and a skull similar to that of the Eurasian golden oriole. As both species are small aerial and canopy foraging omnivores, a similar ecological niche was inferred for this hypothetical ancestor.
Diversification of modern birds
Most studies agree on a Cretaceous age for the most recent common ancestor of modern birds, but estimates range from the Early Cretaceous to the latest Cretaceous. Similarly, there is no agreement on whether most of the early diversification of modern birds occurred in the Cretaceous and was associated with the breakup of the supercontinent Gondwana, or occurred later and potentially as a consequence of the Cretaceous–Palaeogene extinction event. This disagreement is in part caused by a divergence in the evidence; most molecular dating studies suggest a Cretaceous evolutionary radiation, while fossil evidence points to a Cenozoic radiation (the so-called 'rocks' versus 'clocks' controversy).
The discovery of Vegavis from the Maastrichtian, the last stage of the Late Cretaceous, proved that the diversification of modern birds started before the Cenozoic era. The affinities of an earlier fossil, the possible galliform Austinornis lentus, dated to about 85 million years ago, are still too controversial to provide fossil evidence of modern bird diversification. In 2020, Asteriornis from the Maastrichtian was described; it appears to be a close relative of Galloanserae, the earliest diverging lineage within Neognathae.
Attempts to reconcile molecular and fossil evidence using genomic-scale DNA data and comprehensive fossil information have not resolved the controversy. However, a 2015 estimate that used a new method for calibrating molecular clocks confirmed that while modern birds originated early in the Late Cretaceous, likely in Western Gondwana, a pulse of diversification in all major groups occurred around the Cretaceous–Palaeogene extinction event. Modern birds would have expanded from West Gondwana through two routes. One route was an Antarctic interchange in the Paleogene. The other route was probably via Paleocene land bridges between South America and North America, which allowed for the rapid expansion and diversification of Neornithes into the Holarctic and Paleotropics. On the other hand, the occurrence of Asteriornis in the Northern Hemisphere suggests that Neornithes dispersed out of East Gondwana before the Paleocene.
Classification of bird orders
All modern birds lie within the crown group Aves (alternately Neornithes), which has two subdivisions: the Palaeognathae, which includes the flightless ratites (such as the ostriches) and the weak-flying tinamous, and the extremely diverse Neognathae, containing all other birds. These two subdivisions have variously been given the rank of superorder, cohort, or infraclass. Depending on the taxonomic viewpoint, the number of known living bird species is around 10,906 although other sources may differ in their precise number.
Cladogram of modern bird relationships based on Braun & Kimball (2021)
The classification of birds is a contentious issue. Sibley and Ahlquist's Phylogeny and Classification of Birds (1990) is a landmark work on the subject. Most evidence seems to suggest the assignment of orders is accurate, but scientists disagree about the relationships among the orders themselves; evidence from modern bird anatomy, fossils and DNA have all been brought to bear on the problem, but no strong consensus has emerged. Fossil and molecular evidence from the 2010s is providing an increasingly clear picture of the evolution of modern bird orders.
Genomics
The genome was initially sequenced for only two birds, the chicken and the zebra finch; since then, the genomes of 542 species of birds have been completed. At least one genome has been sequenced from every order.
These include at least one species in about 90% of extant avian families (218 out of 236 families recognised by the Howard and Moore Checklist).
Being able to sequence and compare whole genomes gives researchers many types of information, about genes, the DNA that regulates the genes, and their evolutionary history. This has led to reconsideration of some of the classifications that were based solely on the identification of protein-coding genes. Waterbirds such as pelicans and flamingos, for example, may have in common specific adaptations suited to their environment that were developed independently.
Distribution
Birds live and breed in most terrestrial habitats and on all seven continents, reaching their southern extreme in the snow petrel's breeding colonies far inland in Antarctica. The highest bird diversity occurs in tropical regions. It was earlier thought that this high diversity was the result of higher speciation rates in the tropics; however, studies from the 2000s found higher speciation rates in the high latitudes that were offset by greater extinction rates than in the tropics. Many species migrate annually over great distances and across oceans; several families of birds have adapted to life both on the world's oceans and in them, and some seabird species come ashore only to breed, while some penguins have been recorded diving to great depths.
Many bird species have established breeding populations in areas to which they have been introduced by humans. Some of these introductions have been deliberate; the ring-necked pheasant, for example, has been introduced around the world as a game bird. Others have been accidental, such as the establishment of wild monk parakeets in several North American cities after their escape from captivity. Some species, including cattle egret, yellow-headed caracara and galah, have spread naturally far beyond their original ranges as agricultural expansion created alternative habitats although modern practices of intensive agriculture have negatively impacted farmland bird populations.
Anatomy and physiology
Compared with other vertebrates, birds have a body plan that shows many unusual adaptations, mostly to facilitate flight.
Skeletal system
The skeleton consists of very lightweight bones. They have large air-filled cavities (called pneumatic cavities) which connect with the respiratory system. The skull bones in adults are fused and do not show cranial sutures. The orbital cavities that house the eyeballs are large and separated from each other by a bony septum (partition). The spine has cervical, thoracic, lumbar and caudal regions with the number of cervical (neck) vertebrae highly variable and especially flexible, but movement is reduced in the anterior thoracic vertebrae and absent in the later vertebrae. The last few are fused with the pelvis to form the synsacrum. The ribs are flattened and the sternum is keeled for the attachment of flight muscles except in the flightless bird orders. The forelimbs are modified into wings. The wings are more or less developed depending on the species; the only known groups that lost their wings are the extinct moa and elephant birds.
Excretory system
Like the reptiles, birds are primarily uricotelic, that is, their kidneys extract nitrogenous waste from their bloodstream and excrete it as uric acid, instead of urea or ammonia, through the ureters into the intestine. Birds do not have a urinary bladder or external urethral opening and (with exception of the ostrich) uric acid is excreted along with faeces as a semisolid waste. However, birds such as hummingbirds can be facultatively ammonotelic, excreting most of the nitrogenous wastes as ammonia. They also excrete creatine, rather than creatinine like mammals. This material, as well as the output of the intestines, emerges from the bird's cloaca. The cloaca is a multi-purpose opening: waste is expelled through it, most birds mate by joining cloaca, and females lay eggs from it. In addition, many species of birds regurgitate pellets.
It is a common but not universal feature of altricial passerine nestlings (born helpless, under constant parental care) that instead of excreting directly into the nest, they produce a fecal sac. This is a mucus-covered pouch that allows parents to either dispose of the waste outside the nest or to recycle the waste through their own digestive system.
Reproductive system
Males within Palaeognathae (with the exception of the kiwis), the Anseriformes (with the exception of screamers), and in rudimentary forms in Galliformes (but fully developed in Cracidae) possess a penis, which is never present in Neoaves. The length is thought to be related to sperm competition. For male birds to get an erection, they depend on lymphatic fluid instead of blood. When not copulating, it is hidden within the proctodeum compartment within the cloaca, just inside the vent. Female birds have sperm storage tubules that allow sperm to remain viable long after copulation, a hundred days in some species. Sperm from multiple males may compete through this mechanism. Most female birds have a single ovary and a single oviduct, both on the left side, but there are exceptions: species in at least 16 different orders of birds have two ovaries. Even these species, however, tend to have a single oviduct. It has been speculated that this might be an adaptation to flight, but males have two testes, and it is also observed that the gonads in both sexes decrease dramatically in size outside the breeding season. Also, terrestrial birds generally have a single ovary, as does the platypus, an egg-laying mammal. A more likely explanation is that the egg develops a shell while passing through the oviduct over a period of about a day, so that if two eggs were to develop at the same time, there would be a risk to survival. Parthenogenesis, while rare and mostly abortive, is not unknown in birds; the eggs can be diploid and automictic, and they result in male offspring.
Birds are solely gonochoric, meaning they have two sexes: either female or male. The sex of birds is determined by the Z and W sex chromosomes, rather than by the X and Y chromosomes present in mammals. Male birds have two Z chromosomes (ZZ), and female birds have a W chromosome and a Z chromosome (WZ). A complex system of disassortative mating with two morphs is involved in the white-throated sparrow Zonotrichia albicollis, where white- and tan-browed morphs of opposite sex pair, making it appear as if four sexes were involved since any individual is compatible with only a fourth of the population.
In nearly all species of birds, an individual's sex is determined at fertilisation. However, one 2007 study claimed to demonstrate temperature-dependent sex determination in the Australian brushturkey, in which higher temperatures during incubation resulted in a higher female-to-male sex ratio. This was later shown not to be the case: these birds do not exhibit temperature-dependent sex determination, but temperature-dependent, sex-specific mortality.
Respiratory and circulatory systems
Birds have one of the most complex respiratory systems of all animal groups. Upon inhalation, 75% of the fresh air bypasses the lungs and flows directly into a posterior air sac which extends from the lungs and connects with air spaces in the bones and fills them with air. The other 25% of the air goes directly into the lungs. When the bird exhales, the used air flows out of the lungs and the stored fresh air from the posterior air sac is simultaneously forced into the lungs. Thus, a bird's lungs receive a constant supply of fresh air during both inhalation and exhalation. Sound production is achieved using the syrinx, a muscular chamber incorporating multiple tympanic membranes which diverges from the lower end of the trachea; the trachea is elongated in some species, increasing the volume of vocalisations and the perceived size of the bird.
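The two-breath route of a single parcel of inhaled air can be summarised step by step; the short Python sketch below is a schematic illustration only, with station names and a strict one-station-per-phase flow assumed for simplicity rather than taken from the source.

# Schematic model of the avian two-breath respiratory cycle: follow one
# parcel of fresh air from inhalation to its eventual expulsion.
path = [
    ("breath 1, inhalation", "posterior air sacs"),  # most fresh air bypasses the lungs
    ("breath 1, exhalation", "lungs"),               # stored fresh air is forced into the lungs
    ("breath 2, inhalation", "anterior air sacs"),   # used air leaves the lungs
    ("breath 2, exhalation", "trachea (expelled)"),  # used air exits the body
]

for phase, location in path:
    print(f"{phase:>22}: parcel reaches the {location}")

# Because a new parcel enters the posterior sacs on every inhalation while the
# previous parcel is pushed through the lungs, the lungs receive fresh air on
# both inhalation and exhalation.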
In birds, the main arteries taking blood away from the heart originate from the right aortic arch (or pharyngeal arch), unlike in the mammals where the left aortic arch forms this part of the aorta. The postcava receives blood from the limbs via the renal portal system. Unlike in mammals, the circulating red blood cells in birds retain their nucleus.
Heart type and features
The avian circulatory system is driven by a four-chambered, myogenic heart contained in a fibrous pericardial sac. This pericardial sac is filled with a serous fluid for lubrication. The heart itself is divided into a right and left half, each with an atrium and ventricle. The atrium and ventricles of each side are separated by atrioventricular valves which prevent back flow from one chamber to the next during contraction. Being myogenic, the heart's pace is maintained by pacemaker cells found in the sinoatrial node, located on the right atrium.
The sinoatrial node uses calcium to cause a depolarising signal transduction pathway from the atrium through the right and left atrioventricular bundles, which communicate the contraction to the ventricles. The avian heart also consists of muscular arches that are made up of thick bundles of muscular layers. Much like a mammalian heart, the avian heart is composed of endocardial, myocardial and epicardial layers. The atrium walls tend to be thinner than the ventricle walls, due to the intense ventricular contraction used to pump oxygenated blood throughout the body. Avian hearts are generally larger than mammalian hearts when compared to body mass. This adaptation allows more blood to be pumped to meet the high metabolic need associated with flight.
Organisation
Birds have a very efficient system for diffusing oxygen into the blood; the surface area available for gas exchange, relative to lung volume, is about ten times greater in birds than in mammals. As a result, birds have more blood in their capillaries per unit volume of lung than mammals do. The arteries are composed of thick elastic muscles to withstand the pressure of the ventricular contractions, and become more rigid as they move away from the heart. Blood moves through the arteries, which undergo vasoconstriction, and into arterioles which act as a transportation system to distribute primarily oxygen as well as nutrients to all tissues of the body. As the arterioles move away from the heart and into individual organs and tissues they are further divided to increase surface area and slow blood flow. Blood travels through the arterioles and moves into the capillaries where gas exchange can occur.
Capillaries are organised into capillary beds in tissues; it is here that blood exchanges oxygen for carbon dioxide waste. In the capillary beds, blood flow is slowed to allow maximum diffusion of oxygen into the tissues. Once the blood has become deoxygenated, it travels through venules then veins and back to the heart. Veins, unlike arteries, have thin walls, as they do not need to withstand extreme pressure. As blood travels from the venules into the veins, the vessels widen (vasodilation), funnelling the blood back to the heart. Once the blood reaches the heart, it moves first into the right atrium, then the right ventricle to be pumped through the lungs for further gas exchange of carbon dioxide waste for oxygen. Oxygenated blood then flows from the lungs through the left atrium to the left ventricle where it is pumped out to the body.
Nervous system
The nervous system is large relative to the bird's size. The most developed part of a bird's brain is the one that controls flight-related functions, while the cerebellum coordinates movement and the cerebrum controls behaviour patterns, navigation, mating and nest building. Most birds have a poor sense of smell, with notable exceptions including kiwis, New World vultures and tubenoses. The avian visual system is usually highly developed. Water birds have special flexible lenses, allowing accommodation for vision in air and water. Some species also have dual foveae. Birds are tetrachromatic, possessing ultraviolet (UV) sensitive cone cells in the eye as well as green, red and blue ones. They also have double cones, likely to mediate achromatic vision.
Many birds show plumage patterns in ultraviolet that are invisible to the human eye; some birds whose sexes appear similar to the naked eye are distinguished by the presence of ultraviolet reflective patches on their feathers. Male blue tits have an ultraviolet reflective crown patch which is displayed in courtship by posturing and raising of their nape feathers. Ultraviolet light is also used in foraging—kestrels have been shown to search for prey by detecting the UV reflective urine trail marks left on the ground by rodents. With the exception of pigeons and a few other species, the eyelids of birds are not used in blinking. Instead the eye is lubricated by the nictitating membrane, a third eyelid that moves horizontally. The nictitating membrane also covers the eye and acts as a contact lens in many aquatic birds. The bird retina has a fan shaped blood supply system called the pecten.
The eyes of most birds are large, not very round and capable of only limited movement in the orbits, typically 10–20°. Birds with eyes on the sides of their heads have a wide visual field, while birds with eyes on the front of their heads, such as owls, have binocular vision and can estimate the depth of field. The avian ear lacks external pinnae but is covered by feathers, although in some birds, such as the Asio, Bubo and Otus owls, these feathers form tufts which resemble ears. The inner ear has a cochlea, but it is not spiral as in mammals.
Defence and intraspecific combat
A few species are able to use chemical defences against predators; some Procellariiformes can eject an unpleasant stomach oil against an aggressor, and some species of pitohuis from New Guinea have a powerful neurotoxin in their skin and feathers.
A lack of field observations limits our knowledge, but intraspecific conflicts are known to sometimes result in injury or death. The screamers (Anhimidae), some jacanas (Jacana, Hydrophasianus), the spur-winged goose (Plectropterus), the torrent duck (Merganetta) and nine species of lapwing (Vanellus) use a sharp spur on the wing as a weapon. The steamer ducks (Tachyeres), geese and swans (Anserinae), the solitaire (Pezophaps), sheathbills (Chionis), some guans (Crax) and stone curlews (Burhinus) use a bony knob on the alular metacarpal to punch and hammer opponents. The jacanas Actophilornis and Irediparra have an expanded, blade-like radius. The extinct Xenicibis was unique in having an elongate forelimb and massive hand which likely functioned in combat or defence as a jointed club or flail. Swans, for instance, may strike with the bony spurs and bite when defending eggs or young.
Feathers, plumage, and scales
Feathers are a feature characteristic of birds (though also present in some dinosaurs not currently considered to be true birds). They facilitate flight, provide insulation that aids in thermoregulation, and are used in display, camouflage, and signalling. There are several types of feathers, each serving its own set of purposes. Feathers are epidermal growths attached to the skin and arise only in specific tracts of skin called pterylae. The distribution pattern of these feather tracts (pterylosis) is used in taxonomy and systematics. The arrangement and appearance of feathers on the body, called plumage, may vary within species by age, social status, and sex.
Plumage is regularly moulted; the standard plumage of a bird that has moulted after breeding is known as the "non-breeding" plumage, or—in the Humphrey–Parkes terminology—"basic" plumage; breeding plumages or variations of the basic plumage are known under the Humphrey–Parkes system as "alternate" plumages. Moulting is annual in most species, although some may have two moults a year, and large birds of prey may moult only once every few years. Moulting patterns vary across species. In passerines, flight feathers are replaced one at a time with the innermost primary being the first. When the fifth or sixth primary is replaced, the outermost tertiaries begin to drop. After the innermost tertiaries are moulted, the secondaries starting from the innermost begin to drop and this proceeds to the outer feathers (centrifugal moult). The greater primary coverts are moulted in synchrony with the primary that they overlap.
A small number of species, such as ducks and geese, lose all of their flight feathers at once, temporarily becoming flightless. As a general rule, the tail feathers are moulted and replaced starting with the innermost pair. Centripetal moults of tail feathers are however seen in the Phasianidae. The centrifugal moult is modified in the tail feathers of woodpeckers and treecreepers, in that it begins with the second innermost pair of feathers and finishes with the central pair of feathers so that the bird maintains a functional climbing tail. The general pattern seen in passerines is that the primaries are replaced outward, secondaries inward, and the tail from centre outward. Before nesting, the females of most bird species gain a bare brood patch by losing feathers close to the belly. The skin there is well supplied with blood vessels and helps the bird in incubation.
Feathers require maintenance and birds preen or groom them daily, spending an average of around 9% of their daily time on this. The bill is used to brush away foreign particles and to apply waxy secretions from the uropygial gland; these secretions protect the feathers' flexibility and act as an antimicrobial agent, inhibiting the growth of feather-degrading bacteria. This may be supplemented with the secretions of formic acid from ants, which birds receive through a behaviour known as anting, to remove feather parasites.
The scales of birds are composed of the same keratin as beaks, claws, and spurs. They are found mainly on the toes and metatarsus, but may be found further up on the ankle in some birds. Most bird scales do not overlap significantly, except in the cases of kingfishers and woodpeckers.
The scales of birds are thought to be homologous to those of reptiles and mammals.
Flight
Most birds can fly, which distinguishes them from almost all other vertebrate classes. Flight is the primary means of locomotion for most bird species and is used for searching for food and for escaping from predators. Birds have various adaptations for flight, including a lightweight skeleton, two large flight muscles, the pectoralis (which accounts for 15% of the total mass of the bird) and the supracoracoideus, as well as a modified forelimb (wing) that serves as an aerofoil.
Wing shape and size generally determine a bird's flight style and performance; many birds combine powered, flapping flight with less energy-intensive soaring flight. About 60 extant bird species are flightless, as were many extinct birds. Flightlessness often arises in birds on isolated islands, most likely due to limited resources and the absence of mammalian land predators. Flightlessness is almost exclusively associated with gigantism, a consequence of the inherent isolation of island environments. Although flightless, penguins use similar musculature and movements to "fly" through the water, as do some flight-capable birds such as auks, shearwaters and dippers.
Behaviour
Most birds are diurnal, but some birds, such as many species of owls and nightjars, are nocturnal or crepuscular (active during twilight hours), and many coastal waders feed when the tides are appropriate, by day or night.
Diet and feeding
Birds' diets are varied and often include nectar, fruit, plants, seeds, carrion, and various small animals, including other birds. The digestive system of birds is unique, with a crop for storage and a gizzard that contains swallowed stones for grinding food to compensate for the lack of teeth. Some species such as pigeons and some psittacine species do not have a gallbladder. Most birds are highly adapted for rapid digestion to aid with flight. Some migratory birds have adapted to use protein stored in many parts of their bodies, including protein from the intestines, as additional energy during migration.
Birds that employ many strategies to obtain food or feed on a variety of food items are called generalists, while others that concentrate time and effort on specific food items or have a single strategy to obtain food are considered specialists. Avian foraging strategies can vary widely by species. Many birds glean for insects, invertebrates, fruit, or seeds. Some hunt insects by suddenly attacking from a branch. Those species that seek pest insects are considered beneficial 'biological control agents' and their presence encouraged in biological pest control programmes. Combined, insectivorous birds eat 400–500 million metric tons of arthropods annually.
Nectar feeders such as hummingbirds, sunbirds, lories, and lorikeets amongst others have specially adapted brushy tongues and in many cases bills designed to fit co-adapted flowers. Kiwis and shorebirds with long bills probe for invertebrates; shorebirds' varied bill lengths and feeding methods result in the separation of ecological niches. Loons, diving ducks, penguins and auks pursue their prey underwater, using their wings or feet for propulsion, while aerial predators such as sulids, kingfishers and terns plunge dive after their prey. Flamingos, three species of prion, and some ducks are filter feeders. Geese and dabbling ducks are primarily grazers.
Some species, including frigatebirds, gulls, and skuas, engage in kleptoparasitism, stealing food items from other birds. Kleptoparasitism is thought to be a supplement to food obtained by hunting, rather than a significant part of any species' diet; a study of great frigatebirds stealing from masked boobies estimated that the frigatebirds stole at most 40% of their food and on average stole only 5%. Other birds are scavengers; some of these, like vultures, are specialised carrion eaters, while others, like gulls, corvids, or other birds of prey, are opportunists.
Water and drinking
Water is needed by many birds although their mode of excretion and lack of sweat glands reduces the physiological demands. Some desert birds can obtain their water needs entirely from moisture in their food. They may also have other adaptations such as allowing their body temperature to rise, saving on moisture loss from evaporative cooling or panting. Seabirds can drink seawater and have salt glands inside the head that excrete the excess salt through the nostrils.
Most birds scoop water in their beaks and raise their head to let water run down the throat. Some species, especially of arid zones, belonging to the pigeon, finch, mousebird, button-quail and bustard families are capable of sucking up water without the need to tilt back their heads. Some desert birds depend on water sources and sandgrouse are particularly well known for their daily congregations at waterholes. Nesting sandgrouse and many plovers carry water to their young by wetting their belly feathers. Some birds carry water for chicks at the nest in their crop or regurgitate it along with food. The pigeon family, flamingos and penguins have adaptations to produce a nutritive fluid called crop milk that they provide to their chicks.
Feather care
Feathers, being critical to the survival of a bird, require maintenance. Apart from physical wear and tear, feathers face the onslaught of fungi, ectoparasitic feather mites and bird lice. The physical condition of feathers is maintained by preening, often with the application of secretions from the uropygial (preen) gland. Birds also bathe in water or dust themselves. While some birds dip into shallow water, more aerial species may make aerial dips into water, and arboreal species often make use of dew or rain that collects on leaves. Birds of arid regions make use of loose soil to dust-bathe. A behaviour termed anting, in which the bird encourages ants to run through its plumage, is also thought to help reduce the ectoparasite load in feathers. Many species will spread out their wings and expose them to direct sunlight, and this too is thought to help in reducing fungal and ectoparasitic activity that may lead to feather damage.
Migration
Many bird species migrate to take advantage of global differences of seasonal temperatures, therefore optimising availability of food sources and breeding habitat. These migrations vary among the different groups. Many landbirds, shorebirds, and waterbirds undertake annual long-distance migrations, usually triggered by the length of daylight as well as weather conditions. These birds are characterised by a breeding season spent in the temperate or polar regions and a non-breeding season in the tropical regions or opposite hemisphere. Before migration, birds substantially increase body fats and reserves and reduce the size of some of their organs.
Migration is highly demanding energetically, particularly as birds need to cross deserts and oceans without refuelling. Landbirds have a flight range of around and shorebirds can fly up to , although the bar-tailed godwit is capable of non-stop flights of up to . Seabirds also undertake long migrations, the longest annual migration being those of sooty shearwaters, which nest in New Zealand and Chile and spend the northern summer feeding in the North Pacific off Japan, Alaska and California, an annual round trip of . Other seabirds disperse after breeding, travelling widely but having no set migration route. Albatrosses nesting in the Southern Ocean often undertake circumpolar trips between breeding seasons.
Some bird species undertake shorter migrations, travelling only as far as is required to avoid bad weather or obtain food. Irruptive species such as the boreal finches are one such group and can commonly be found at a location in one year and absent the next. This type of migration is normally associated with food availability. Species may also travel shorter distances over part of their range, with individuals from higher latitudes travelling into the existing range of conspecifics; others undertake partial migrations, where only a fraction of the population, usually females and subdominant males, migrates. Partial migration can form a large percentage of the migration behaviour of birds in some regions; in Australia, surveys found that 44% of non-passerine birds and 32% of passerines were partially migratory.
Altitudinal migration is a form of short-distance migration in which birds spend the breeding season at higher altitudes and move to lower ones during suboptimal conditions. It is most often triggered by temperature changes and usually occurs when the normal territories also become inhospitable due to lack of food. Some species may also be nomadic, holding no fixed territory and moving according to weather and food availability. Parrots as a family are overwhelmingly neither migratory nor sedentary but considered to either be dispersive, irruptive, nomadic or undertake small and irregular migrations.
The ability of birds to return to precise locations across vast distances has been known for some time; in an experiment conducted in the 1950s, a Manx shearwater released in Boston in the United States returned to its colony in Skomer, in Wales within 13 days, a distance of . Birds navigate during migration using a variety of methods. For diurnal migrants, the sun is used to navigate by day, and a stellar compass is used at night. Birds that use the sun compensate for the changing position of the sun during the day by the use of an internal clock. Orientation with the stellar compass depends on the position of the constellations surrounding Polaris. These are backed up in some species by their ability to sense the Earth's geomagnetism through specialised photoreceptors.
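To make the time-compensation idea concrete, the Python sketch below estimates a constant compass heading from the sun's observed azimuth, assuming the common simplification that the sun's azimuth drifts roughly 15° per hour (360°/24 h); the function, the example numbers and the constant drift rate are illustrative assumptions, since the real rate varies with latitude, season and time of day.

# Minimal sketch of a time-compensated sun compass.
DEGREES_PER_HOUR = 360.0 / 24.0  # assumed average drift of the sun's azimuth (~15 deg/h)

def heading_from_sun(observed_sun_azimuth_deg, hours_since_noon, target_offset_deg):
    """Return a compass heading that stays constant as the sun moves.

    observed_sun_azimuth_deg: where the sun currently appears.
    hours_since_noon: internal-clock time (negative before local noon).
    target_offset_deg: desired heading relative to the noon sun direction.
    """
    # Undo the sun's drift since noon to recover a stable reference direction,
    # then apply the desired offset.
    noon_sun_azimuth = observed_sun_azimuth_deg - DEGREES_PER_HOUR * hours_since_noon
    return (noon_sun_azimuth + target_offset_deg) % 360.0

# Three hours after noon the sun has drifted ~45 degrees westward, yet the
# compensated heading is unchanged from its value at noon.
print(heading_from_sun(180.0, 0.0, 90.0))  # 270.0
print(heading_from_sun(225.0, 3.0, 90.0))  # 270.0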
Communication
Birds communicate primarily using visual and auditory signals. Signals can be interspecific (between species) and intraspecific (within species).
Birds sometimes use plumage to assess and assert social dominance, to display breeding condition in sexually selected species, or to make threatening displays, as in the sunbittern's mimicry of a large predator to ward off hawks and protect young chicks.
Visual communication among birds may also involve ritualised displays, which have developed from non-signalling actions such as preening, the adjustments of feather position, pecking, or other behaviour. These displays may signal aggression or submission or may contribute to the formation of pair-bonds. The most elaborate displays occur during courtship, where "dances" are often formed from complex combinations of many possible component movements; males' breeding success may depend on the quality of such displays.
Bird calls and songs, which are produced in the syrinx, are the major means by which birds communicate with sound. This communication can be very complex; some species can operate the two sides of the syrinx independently, allowing the simultaneous production of two different songs.
Calls are used for a variety of purposes, including mate attraction, evaluation of potential mates, bond formation, the claiming and maintenance of territories, the identification of other individuals (such as when parents look for chicks in colonies or when mates reunite at the start of breeding season), and the warning of other birds of potential predators, sometimes with specific information about the nature of the threat. Some birds also use mechanical sounds for auditory communication. The Coenocorypha snipes of New Zealand drive air through their feathers, woodpeckers drum for long-distance communication, and palm cockatoos use tools to drum.
Flocking and other associations
While some birds are essentially territorial or live in small family groups, other birds may form large flocks. The principal benefits of flocking are safety in numbers and increased foraging efficiency. Defence against predators is particularly important in closed habitats like forests, where ambush predation is common and multiple eyes can provide a valuable early warning system. This has led to the development of many mixed-species feeding flocks, which are usually composed of small numbers of many species; these flocks provide safety in numbers but increase potential competition for resources. Costs of flocking include bullying of socially subordinate birds by more dominant birds and the reduction of feeding efficiency in certain cases.
Birds sometimes also form associations with non-avian species. Plunge-diving seabirds associate with dolphins and tuna, which push shoaling fish towards the surface. Some species of hornbills have a mutualistic relationship with dwarf mongooses, in which they forage together and warn each other of nearby birds of prey and other predators.
Resting and roosting
The high metabolic rates of birds during the active part of the day are supplemented by rest at other times. Sleeping birds often use a type of sleep known as vigilant sleep, where periods of rest are interspersed with quick eye-opening "peeks", allowing them to remain sensitive to disturbances and enabling rapid escape from threats. Swifts are believed to be able to sleep in flight and radar observations suggest that they orient themselves to face the wind in their roosting flight. It has been suggested that there may be certain kinds of sleep which are possible even when in flight.
Some birds have also demonstrated the capacity to fall into slow-wave sleep one hemisphere of the brain at a time. Birds tend to exercise this ability depending upon their position relative to the outside of the flock. This may allow the eye opposite the sleeping hemisphere to remain vigilant for predators by viewing the outer margins of the flock. This adaptation is also known from marine mammals. Communal roosting is common because it lowers the loss of body heat and decreases the risks associated with predators. Roosting sites are often chosen with regard to thermoregulation and safety. Unusual mobile roost sites include large herbivores on the African savanna that are used by oxpeckers.
Many sleeping birds bend their heads over their backs and tuck their bills in their back feathers, although others place their beaks among their breast feathers. Many birds rest on one leg, while some may pull up their legs into their feathers, especially in cold weather. Perching birds have a tendon-locking mechanism that helps them hold on to the perch when they are asleep. Many ground birds, such as quails and pheasants, roost in trees. A few parrots of the genus Loriculus roost hanging upside down. Some hummingbirds go into a nightly state of torpor accompanied by a reduction of their metabolic rates. This physiological adaptation is seen in nearly a hundred other species, including owlet-nightjars, nightjars, and woodswallows. One species, the common poorwill, even enters a state of hibernation. Birds do not have sweat glands, but can lose water directly through the skin, and they may cool themselves by moving to shade, standing in water, panting, increasing their surface area, fluttering their throat or using special behaviours like urohidrosis.
Breeding
Social systems
Ninety-five per cent of bird species are socially monogamous. These species pair for at least the length of the breeding season or—in some cases—for several years or until the death of one mate. Monogamy allows for both paternal care and biparental care, which is especially important for species in which care from both the female and the male parent is required in order to successfully rear a brood. Among many socially monogamous species, extra-pair copulation (infidelity) is common. Such behaviour typically occurs between dominant males and females paired with subordinate males, but may also be the result of forced copulation in ducks and other anatids.
For females, possible benefits of extra-pair copulation include getting better genes for her offspring and insuring against the possibility of infertility in her mate. Males of species that engage in extra-pair copulations will closely guard their mates to ensure the parentage of the offspring that they raise.
Other mating systems, including polygyny, polyandry, polygamy, polygynandry, and promiscuity, also occur. Polygamous breeding systems arise when females are able to raise broods without the help of males. Mating systems vary across bird families but variations within species are thought to be driven by environmental conditions.
Breeding usually involves some form of courtship display, typically performed by the male. Most displays are rather simple and involve some type of song. Some displays, however, are quite elaborate. Depending on the species, these may include wing or tail drumming, dancing, aerial flights, or communal lekking. Females are generally the ones that drive partner selection, although in the polyandrous phalaropes, this is reversed: plainer males choose brightly coloured females. Courtship feeding, billing and allopreening are commonly performed between partners, generally after the birds have paired and mated.
Homosexual behaviour has been observed in males or females in numerous species of birds, including copulation, pair-bonding, and joint parenting of chicks. Over 130 avian species around the world engage in sexual interactions between the same sex or homosexual behaviours. "Same-sex courtship activities may involve elaborate displays, synchronized dances, gift-giving ceremonies, or behaviors at specific display areas including bowers, arenas, or leks."
Territories, nesting and incubation
Many birds actively defend a territory from others of the same species during the breeding season; maintenance of territories protects the food source for their chicks. Species that are unable to defend feeding territories, such as seabirds and swifts, often breed in colonies instead; this is thought to offer protection from predators. Colonial breeders defend small nesting sites, and competition between and within species for nesting sites can be intense.
All birds lay amniotic eggs with hard shells made mostly of calcium carbonate. Hole and burrow nesting species tend to lay white or pale eggs, while open nesters lay camouflaged eggs. There are many exceptions to this pattern, however; the ground-nesting nightjars have pale eggs, and camouflage is instead provided by their plumage. Species that are victims of brood parasites have varying egg colours to improve the chances of spotting a parasite's egg, which forces female parasites to match their eggs to those of their hosts.
Bird eggs are usually laid in a nest. Most species create somewhat elaborate nests, which can be cups, domes, plates, mounds, or burrows. Some bird nests can be a simple scrape, with minimal or no lining; most seabird and wader nests are no more than a scrape on the ground. Most birds build nests in sheltered, hidden areas to avoid predation, but large or colonial birds—which are more capable of defence—may build more open nests. During nest construction, some species seek out plant matter from plants with parasite-reducing toxins to improve chick survival, and feathers are often used for nest insulation. Some bird species have no nests; the cliff-nesting common guillemot lays its eggs on bare rock, and male emperor penguins keep eggs between their body and feet. The absence of nests is especially prevalent in open habitat ground-nesting species where any addition of nest material would make the nest more conspicuous. Many ground nesting birds lay a clutch of eggs that hatch synchronously, with precocial chicks led away from the nests (nidifugous) by their parents soon after hatching.
Incubation, which regulates temperature for chick development, usually begins after the last egg has been laid. In monogamous species incubation duties are often shared, whereas in polygamous species one parent is wholly responsible for incubation. Warmth from parents passes to the eggs through brood patches, areas of bare skin on the abdomen or breast of the incubating birds. Incubation can be an energetically demanding process; adult albatrosses, for instance, lose as much as of body weight per day of incubation. The warmth for the incubation of the eggs of megapodes comes from the sun, decaying vegetation or volcanic sources. Incubation periods range from 10 days (in woodpeckers, cuckoos and passerine birds) to over 80 days (in albatrosses and kiwis).
The diversity of characteristics of birds is great, sometimes even in closely related species.
Parental care and fledging
At the time of their hatching, chicks range in development from helpless to independent, depending on their species. Helpless chicks are termed altricial, and tend to be born small, blind, immobile and naked; chicks that are mobile and feathered upon hatching are termed precocial. Altricial chicks need help thermoregulating and must be brooded for longer than precocial chicks. The young of many bird species do not precisely fit into either the precocial or altricial category, having some aspects of each and thus fall somewhere on an "altricial-precocial spectrum". Chicks at neither extreme but favouring one or the other may be termed semi-precocial or semi-altricial.
The length and nature of parental care varies widely amongst different orders and species. At one extreme, parental care in megapodes ends at hatching; the newly hatched chick digs itself out of the nest mound without parental assistance and can fend for itself immediately. At the other extreme, many seabirds have extended periods of parental care, the longest being that of the great frigatebird, whose chicks take up to six months to fledge and are fed by the parents for up to an additional 14 months. The chick guard stage describes the period of breeding during which one of the adult birds is permanently present at the nest after chicks have hatched. The main purpose of the guard stage is to aid offspring to thermoregulate and protect them from predation.
In some species, both parents care for nestlings and fledglings; in others, such care is the responsibility of only one sex. In some species, other members of the same species—usually close relatives of the breeding pair, such as offspring from previous broods—will help with the raising of the young. Such alloparenting is particularly common among the Corvida, which includes such birds as the true crows, Australian magpie and fairy-wrens, but has been observed in species as different as the rifleman and red kite. Among most groups of animals, male parental care is rare. In birds, however, it is quite common—more so than in any other vertebrate class. Although territory and nest site defence, incubation, and chick feeding are often shared tasks, there is sometimes a division of labour in which one mate undertakes all or most of a particular duty.
The point at which chicks fledge varies dramatically. The chicks of the Synthliboramphus murrelets, like the ancient murrelet, leave the nest the night after they hatch, following their parents out to sea, where they are raised away from terrestrial predators. Some other species, such as ducks, move their chicks away from the nest at an early age. In most species, chicks leave the nest just before, or soon after, they are able to fly. The amount of parental care after fledging varies; albatross chicks leave the nest on their own and receive no further help, while other species continue some supplementary feeding after fledging. Chicks may also follow their parents during their first migration.
Brood parasites
Brood parasitism, in which an egg-layer leaves her eggs with another individual's brood, is more common among birds than any other type of organism. After a parasitic bird lays her eggs in another bird's nest, they are often accepted and raised by the host at the expense of the host's own brood. Brood parasites may be either obligate brood parasites, which must lay their eggs in the nests of other species because they are incapable of raising their own young, or non-obligate brood parasites, which sometimes lay eggs in the nests of conspecifics to increase their reproductive output even though they could have raised their own young. One hundred bird species, including honeyguides, icterids, and ducks, are obligate parasites, though the most famous are the cuckoos. Some brood parasites are adapted to hatch before their host's young, which allows them to destroy the host's eggs by pushing them out of the nest or to kill the host's chicks; this ensures that all food brought to the nest will be fed to the parasitic chicks.
Sexual selection
Birds have evolved a variety of mating behaviours, with the peacock tail being perhaps the most famous example of sexual selection and the Fisherian runaway. Commonly occurring sexual dimorphisms such as size and colour differences are energetically costly attributes that signal competitive breeding situations. Many types of avian sexual selection have been identified, including intersexual selection (also known as female choice) and intrasexual competition, in which individuals of the more abundant sex compete with each other for the privilege to mate. Sexually selected traits often evolve to become more pronounced in competitive breeding situations until the trait begins to limit the individual's fitness. Conflicts between an individual's fitness and signalling adaptations ensure that sexually selected ornaments such as plumage colouration and courtship behaviour are "honest" traits. Signals must be costly to ensure that only good-quality individuals can present these exaggerated sexual ornaments and behaviours.
Inbreeding depression
Inbreeding causes early death (inbreeding depression) in the zebra finch Taeniopygia guttata. Embryo survival (that is, hatching success of fertile eggs) was significantly lower for sib-sib mating pairs than for unrelated pairs.
Darwin's finch Geospiza scandens experiences inbreeding depression (reduced survival of offspring) and the magnitude of this effect is influenced by environmental conditions such as low food availability.
Inbreeding avoidance
Incestuous matings by the purple-crowned fairy wren Malurus coronatus result in severe fitness costs due to inbreeding depression (greater than 30% reduction in hatchability of eggs). Females paired with related males may undertake extra-pair matings that can reduce the negative effects of inbreeding. However, there are ecological and demographic constraints on extra-pair matings. Nevertheless, 43% of broods produced by incestuously paired females contained extra-pair young.
Inbreeding depression occurs in the great tit (Parus major) when the offspring produced as a result of a mating between close relatives show reduced fitness. In natural populations of Parus major, inbreeding is avoided by dispersal of individuals from their birthplace, which reduces the chance of mating with a close relative.
Southern pied babblers Turdoides bicolor appear to avoid inbreeding in two ways. The first is through dispersal, and the second is by avoiding familiar group members as mates.
Cooperative breeding in birds typically occurs when offspring, usually males, delay dispersal from their natal group in order to remain with the family to help rear younger kin. Female offspring rarely stay at home, dispersing over distances that allow them to breed independently, or to join unrelated groups. In general, inbreeding is avoided because it leads to a reduction in progeny fitness (inbreeding depression) due largely to the homozygous expression of deleterious recessive alleles. Cross-fertilisation between unrelated individuals ordinarily leads to the masking of deleterious recessive alleles in progeny.
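The cost of inbreeding can be illustrated with the standard population-genetics result that the probability of an offspring being homozygous for a recessive allele of frequency q is q² + Fq(1−q), where F is the inbreeding coefficient (0.25 for full-sibling matings); the Python sketch below applies this formula to an arbitrary example allele frequency, which is an assumption chosen purely for illustration.

# Illustrative calculation: how much a full-sibling mating (F = 0.25) raises
# the chance that an offspring is homozygous for a deleterious recessive
# allele, compared with mating between unrelated parents (F = 0).

def prob_homozygous_recessive(q, F):
    """Probability of a homozygous recessive offspring: q**2 + F*q*(1 - q)."""
    return q**2 + F * q * (1 - q)

q = 0.01  # assumed frequency of a deleterious recessive allele (example value)
outcrossed = prob_homozygous_recessive(q, F=0.0)
full_sib = prob_homozygous_recessive(q, F=0.25)

print(f"outcrossed: {outcrossed:.6f}")   # 0.000100
print(f"full-sib:   {full_sib:.6f}")     # 0.002575
print(f"increase:   {full_sib / outcrossed:.1f}x")  # roughly a 26-fold increase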
Ecology
Birds occupy a wide range of ecological positions. While some birds are generalists, others are highly specialised in their habitat or food requirements. Even within a single habitat, such as a forest, the niches occupied by different species of birds vary, with some species feeding in the forest canopy, others beneath the canopy, and still others on the forest floor. Forest birds may be insectivores, frugivores, or nectarivores. Aquatic birds generally feed by fishing, plant eating, and piracy or kleptoparasitism. Many grassland birds are granivores. Birds of prey specialise in hunting mammals or other birds, while vultures are specialised scavengers. Birds are also preyed upon by a range of mammals including a few avivorous bats. A wide range of endo- and ectoparasites depend on birds and some parasites that are transmitted from parent to young have co-evolved and show host-specificity.
Some nectar-feeding birds are important pollinators, and many frugivores play a key role in seed dispersal. Plants and pollinating birds often coevolve, and in some cases a flower's primary pollinator is the only species capable of reaching its nectar.
Birds are often important to island ecology. Birds have frequently reached islands that mammals have not; on those islands, birds may fulfil ecological roles typically played by larger animals. For example, in New Zealand nine species of moa were important browsers, as are the kererū and kokako today. Today the plants of New Zealand retain the defensive adaptations evolved to protect them from the extinct moa.
Many birds act as ecosystem engineers through the construction of nests, which provide important microhabitats and food for hundreds of species of invertebrates. Nesting seabirds may affect the ecology of islands and surrounding seas, principally through the concentration of large quantities of guano, which may enrich the local soil and the surrounding seas.
A wide variety of avian ecology field methods, including counts, nest monitoring, and capturing and marking, are used for researching avian ecology.
Relationship with humans
Because birds are highly visible and common animals, humans have had a relationship with them since the dawn of humanity. Sometimes, these relationships are mutualistic, like the cooperative honey-gathering among honeyguides and African peoples such as the Borana. Other times, they may be commensal, as when species such as the house sparrow have benefited from human activities. Several bird species have become commercially significant agricultural pests, and some pose an aviation hazard. Human activities can also be detrimental, and have threatened numerous bird species with extinction (hunting, avian lead poisoning, pesticides, roadkill, wind turbine kills and predation by pet cats and dogs are common causes of death for birds).
Birds can act as vectors for spreading diseases such as psittacosis, salmonellosis, campylobacteriosis, mycobacteriosis (avian tuberculosis), avian influenza (bird flu), giardiasis, and cryptosporidiosis over long distances. Some of these are zoonotic diseases that can also be transmitted to humans.
Economic importance
Domesticated birds raised for meat and eggs, called poultry, are the largest source of animal protein eaten by humans; in 2003, tons of poultry and tons of eggs were produced worldwide. Chickens account for much of human poultry consumption, though domesticated turkeys, ducks, and geese are also relatively common. Many species of birds are also hunted for meat. Bird hunting is primarily a recreational activity except in extremely undeveloped areas. The most important birds hunted in North and South America are waterfowl; other widely hunted birds include pheasants, wild turkeys, quail, doves, partridge, grouse, snipe, and woodcock. Muttonbirding is also popular in Australia and New Zealand. Although some hunting, such as that of muttonbirds, may be sustainable, hunting has led to the extinction or endangerment of dozens of species.
Other commercially valuable products from birds include feathers (especially the down of geese and ducks), which are used as insulation in clothing and bedding, and seabird faeces (guano), which is a valuable source of phosphorus and nitrogen. The War of the Pacific, sometimes called the Guano War, was fought in part over the control of guano deposits.
Birds have been domesticated by humans both as pets and for practical purposes. Colourful birds, such as parrots and mynas, are bred in captivity or kept as pets, a practice that has led to the illegal trafficking of some endangered species. Falcons and cormorants have long been used for hunting and fishing, respectively. Messenger pigeons, used since at least 1 AD, remained important as recently as World War II. Today, such activities are more common either as hobbies, for entertainment and tourism, or for sports such as pigeon racing.
Amateur bird enthusiasts (called birdwatchers, twitchers or, more commonly, birders) number in the millions. Many homeowners erect bird feeders near their homes to attract various species. Bird feeding has grown into a multimillion-dollar industry; for example, an estimated 75% of households in Britain provide food for birds at some point during the winter.
In religion and mythology
Birds play prominent and diverse roles in religion and mythology.
In religion, birds may serve as either messengers or priests and leaders for a deity, such as in the Cult of Makemake, in which the Tangata manu of Easter Island served as chiefs or as attendants, as in the case of Hugin and Munin, the two common ravens who whispered news into the ears of the Norse god Odin. In several civilisations of ancient Italy, particularly Etruscan and Roman religion, priests were involved in augury, or interpreting the words of birds while the "auspex" (from which the word "auspicious" is derived) watched their activities to foretell events.
They may also serve as religious symbols, as when Jonah (whose Hebrew name means "dove") embodied the fright, passivity, mourning, and beauty traditionally associated with doves. Birds have themselves been deified, as in the case of the common peacock, which is perceived as Mother Earth by the people of southern India. In the ancient world, doves were used as symbols of the Mesopotamian goddess Inanna (later known as Ishtar), the Canaanite mother goddess Asherah, and the Greek goddess Aphrodite. In ancient Greece, Athena, the goddess of wisdom and patron deity of the city of Athens, had a little owl as her symbol. In religious images preserved from the Inca and Tiwanaku empires, birds are depicted in the process of transgressing boundaries between earthly and underground spiritual realms. Indigenous peoples of the central Andes maintain legends of birds passing to and from metaphysical worlds.
In culture and folklore
Birds have featured in culture and art since prehistoric times, when they were represented in early cave paintings and carvings. Some birds have been perceived as monsters, including the mythological Roc and a legendary giant bird of Māori tradition said to be capable of snatching humans. Birds were later used as symbols of power, as in the magnificent Peacock Throne of the Mughal and Persian emperors. With the advent of scientific interest in birds, many paintings of birds were commissioned for books.
Among the most famous of these bird artists was John James Audubon, whose paintings of North American birds were a great commercial success in Europe and who later lent his name to the National Audubon Society. Birds are also important figures in poetry; for example, Homer incorporated nightingales into his Odyssey, and Catullus used a sparrow as an erotic symbol in his Catullus 2. The relationship between an albatross and a sailor is the central theme of Samuel Taylor Coleridge's The Rime of the Ancient Mariner, which led to the use of the term albatross as a metaphor for a 'burden'. Other English metaphors derive from birds; vulture funds and vulture investors, for instance, take their name from the scavenging vulture. Aircraft, particularly military aircraft, are frequently named after birds. The predatory nature of raptors makes them popular choices for fighter aircraft such as the F-16 Fighting Falcon and the Harrier Jump Jet, while the names of seabirds may be chosen for aircraft primarily used by naval forces, such as the HU-16 Albatross and the V-22 Osprey.
Perceptions of bird species vary across cultures. Owls are associated with bad luck, witchcraft, and death in parts of Africa, but are regarded as wise across much of Europe. Hoopoes were considered sacred in Ancient Egypt and symbols of virtue in Persia, but were thought of as thieves across much of Europe and harbingers of war in Scandinavia. In heraldry, birds, especially eagles, often appear in coats of arms. In vexillology, birds are a popular choice on flags. Birds feature in the flag designs of 17 countries and numerous subnational entities and territories. Birds are used by nations to symbolise a country's identity and heritage, with 91 countries officially recognising a national bird. Birds of prey are highly represented, though some nations have chosen other species of birds, with parrots being popular among smaller, tropical nations.
In music
In music, birdsong has influenced composers and musicians in several ways: they can be inspired by birdsong; they can intentionally imitate bird song in a composition, as Vivaldi, Messiaen, and Beethoven did, along with many later composers; they can incorporate recordings of birds into their works, as Ottorino Respighi first did; or like Beatrice Harrison and David Rothenberg, they can duet with birds.
A 2023 archaeological excavation of a 10,000-year-old site in Israel yielded hollow wing bones of coots and ducks with perforations made on the side; these are thought to have allowed the bones to be used as flutes or whistles, possibly by Natufian people to lure birds of prey.
Conservation
Although human activities have allowed the expansion of a few species, such as the barn swallow and European starling, they have caused population decreases or extinction in many other species. Over a hundred bird species have gone extinct in historical times, although the most dramatic human-caused avian extinctions, eradicating an estimated 750–1800 species, occurred during the human colonisation of Melanesian, Polynesian, and Micronesian islands. Many bird populations are declining worldwide, with 1,227 species listed as threatened by BirdLife International and the IUCN in 2009.
The most commonly cited human threat to birds is habitat loss. Other threats include overhunting, accidental mortality due to collisions with buildings or vehicles, long-line fishing bycatch, pollution (including oil spills and pesticide use), competition and predation from nonnative invasive species, and climate change.
Governments and conservation groups work to protect birds, either by passing laws that preserve and restore bird habitat or by establishing captive populations for reintroductions. Such projects have produced some successes; one study estimated that conservation efforts saved 16 species of bird that would otherwise have gone extinct between 1994 and 2004, including the California condor and Norfolk parakeet.
See also
Animal track
Avian sleep
Bat
Climate change and birds
Glossary of bird terms
List of individual birds
Ornithology
Paleocene dinosaurs
Further reading
All the Birds of the World, Lynx Edicions, 2020.
Del Hoyo, Josep; Elliott, Andrew; Sargatal, Jordi (eds.). Handbook of the Birds of the World (17-volume encyclopaedia), Lynx Edicions, Barcelona, 1992–2010 (Vol. 1: Ostrich to Ducks, etc.).
Lederer, Roger; Carol Burr (2014). Latein für Vogelbeobachter: über 3000 ornithologische Begriffe erklärt und erforscht [Latin for birdwatchers: over 3,000 ornithological terms explained and explored], translated from the English by Susanne Kuhlmann-Krieg, Verlag DuMont, Cologne.
National Geographic Field Guide to Birds of North America, National Geographic, 7th edition, 2017.
National Audubon Society Field Guide to North American Birds: Eastern Region, National Audubon Society, Knopf.
National Audubon Society Field Guide to North American Birds: Western Region, National Audubon Society, Knopf.
Svensson, Lars (2010). Birds of Europe, Princeton University Press, second edition.
Svensson, Lars (2010). Collins Bird Guide: The Most Complete Guide to the Birds of Britain and Europe, Collins, 2nd edition.
External links
Birdlife International – Dedicated to bird conservation worldwide; has a database with about 250,000 records on endangered bird species.
Bird biogeography
Birds and Science from the National Audubon Society
Cornell Lab of Ornithology
Essays on bird biology
North American Birds for Kids
Ornithology
Sora – Searchable online research archive; archives of the following ornithological journals: The Auk, Condor, Journal of Field Ornithology, North American Bird Bander, Studies in Avian Biology, Pacific Coast Avifauna, and the Wilson Bulletin.
The Internet Bird Collection – A free library of videos of the world's birds
The Institute for Bird Populations, California
List of field guides to birds, from the International Field Guides database
RSPB bird identifier – Interactive identification of all UK birds
Are Birds Really Dinosaurs? – University of California Museum of Paleontology.
|
https://en.wikipedia.org/wiki/Brain
|
The brain (or encephalon) is an organ that serves as the center of the nervous system in all vertebrate and most invertebrate animals. The brain is the largest cluster of neurons in the body and is typically located in the head, usually near organs for special senses such as vision, hearing and olfaction. It is the most specialized and energy-consuming organ in the body, responsible for complex sensory perception, motor control, endocrine regulation and the development of intelligence.
While invertebrate brains arise from paired segmental ganglia (each of which is only responsible for the respective body segment) of the ventral nerve cord, vertebrate brains develop axially from the midline dorsal nerve cord as a vesicular enlargement at the rostral end of the neural tube, with centralized control over all body segments. All vertebrate brains can be embryonically divided into three parts: the forebrain (prosencephalon, subdivided into telencephalon and diencephalon), midbrain (mesencephalon) and hindbrain (rhombencephalon, subdivided into metencephalon and myelencephalon). The spinal cord, which directly interacts with somatic functions below the head, can be considered a caudal extension of the myelencephalon enclosed inside the vertebral column. Together, the brain and spinal cord constitute the central nervous system in all vertebrates.
In humans, the cerebral cortex contains approximately 14–16 billion neurons, and the estimated number of neurons in the cerebellum is 55–70 billion. Each neuron is connected by synapses to several thousand other neurons, typically communicating with one another via root-like protrusions called dendrites and long fiber-like extensions called axons, which are usually myelinated and carry trains of rapid micro-electric signal pulses called action potentials to target specific recipient cells in other areas of the brain or distant parts of the body. The prefrontal cortex, which controls executive functions, is particularly well developed in humans.
Physiologically, brains exert centralized control over a body's other organs. They act on the rest of the body both by generating patterns of muscle activity and by driving the secretion of chemicals called hormones. This centralized control allows rapid and coordinated responses to changes in the environment. Some basic types of responsiveness such as reflexes can be mediated by the spinal cord or peripheral ganglia, but sophisticated purposeful control of behavior based on complex sensory input requires the information integrating capabilities of a centralized brain.
The operations of individual brain cells are now understood in considerable detail, but the way they cooperate in ensembles of millions has yet to be worked out. Recent models in modern neuroscience treat the brain as a biological computer, very different in mechanism from a digital computer, but similar in the sense that it acquires information from the surrounding world, stores it, and processes it in a variety of ways.
This article compares the properties of brains across the entire range of animal species, with the greatest attention to vertebrates. It deals with the human brain insofar as it shares the properties of other brains. The ways in which the human brain differs from other brains are covered in the human brain article. Several topics that might be covered here are instead covered there because much more can be said about them in a human context. The most important that are covered in the human brain article are brain disease and the effects of brain damage.
Anatomy
The shape and size of the brain varies greatly between species, and identifying common features is often difficult. Nevertheless, there are a number of principles of brain architecture that apply across a wide range of species. Some aspects of brain structure are common to almost the entire range of animal species; others distinguish "advanced" brains from more primitive ones, or distinguish vertebrates from invertebrates.
The simplest way to gain information about brain anatomy is by visual inspection, but many more sophisticated techniques have been developed. Brain tissue in its natural state is too soft to work with, but it can be hardened by immersion in alcohol or other fixatives, and then sliced apart for examination of the interior. Visually, the interior of the brain consists of areas of so-called grey matter, with a dark color, separated by areas of white matter, with a lighter color. Further information can be gained by staining slices of brain tissue with a variety of chemicals that bring out areas where specific types of molecules are present in high concentrations. It is also possible to examine the microstructure of brain tissue using a microscope, and to trace the pattern of connections from one brain area to another.
Cellular structure
The brains of all species are composed primarily of two broad classes of cells: neurons and glial cells. Glial cells (also known as glia or neuroglia) come in several types, and perform a number of critical functions, including structural support, metabolic support, insulation, and guidance of development. Neurons, however, are usually considered the most important cells in the brain.
The property that makes neurons unique is their ability to send signals to specific target cells over long distances. They send these signals by means of an axon, which is a thin protoplasmic fiber that extends from the cell body and projects, usually with numerous branches, to other areas, sometimes nearby, sometimes in distant parts of the brain or body. The length of an axon can be extraordinary: for example, if a pyramidal cell (an excitatory neuron) of the cerebral cortex were magnified so that its cell body became the size of a human body, its axon, equally magnified, would become a cable a few centimeters in diameter, extending more than a kilometer. These axons transmit signals in the form of electrochemical pulses called action potentials, which last less than a thousandth of a second and travel along the axon at speeds of 1–100 meters per second. Some neurons emit action potentials constantly, at rates of 10–100 per second, usually in irregular patterns; other neurons are quiet most of the time, but occasionally emit a burst of action potentials.
Axons transmit signals to other neurons by means of specialized junctions called synapses. A single axon may make as many as several thousand synaptic connections with other cells. When an action potential, traveling along an axon, arrives at a synapse, it causes a chemical called a neurotransmitter to be released. The neurotransmitter binds to receptor molecules in the membrane of the target cell.
Synapses are the key functional elements of the brain. The essential function of the brain is cell-to-cell communication, and synapses are the points at which communication occurs. The human brain has been estimated to contain approximately 100 trillion synapses; even the brain of a fruit fly contains several million. The functions of these synapses are very diverse: some are excitatory (exciting the target cell); others are inhibitory; others work by activating second messenger systems that change the internal chemistry of their target cells in complex ways. A large number of synapses are dynamically modifiable; that is, they are capable of changing strength in a way that is controlled by the patterns of signals that pass through them. It is widely believed that activity-dependent modification of synapses is the brain's primary mechanism for learning and memory.
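To make the idea of excitatory and inhibitory synapses of varying strength concrete, the following minimal sketch treats a target cell as a weighted sum of its inputs with a firing threshold. The function name, weights, and threshold are illustrative inventions for this sketch, not physiological values or any standard model.

```python
# Minimal sketch: a target cell summing synaptic inputs. Positive weights play
# the role of excitatory synapses, negative weights the role of inhibitory
# ones; the cell "fires" only if the summed input reaches a threshold.
# All numbers are illustrative, not measured values.

def neuron_fires(inputs, weights, threshold=1.0):
    """Return True if the weighted synaptic input reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

# Three active presynaptic cells: two excitatory, one weakly inhibitory.
print(neuron_fires([1, 1, 1], [0.8, 0.7, -0.4]))   # True  (net drive 1.1)
# The same cell with a stronger inhibitory synapse stays silent.
print(neuron_fires([1, 1, 1], [0.8, 0.7, -0.9]))   # False (net drive 0.6)
```

In this picture, the activity-dependent modification of synapses described above would correspond to gradual changes in the weights.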
Most of the space in the brain is taken up by axons, which are often bundled together in what are called nerve fiber tracts. A myelinated axon is wrapped in a fatty insulating sheath of myelin, which serves to greatly increase the speed of signal propagation. (There are also unmyelinated axons). Myelin is white, making parts of the brain filled exclusively with nerve fibers appear as light-colored white matter, in contrast to the darker-colored grey matter that marks areas with high densities of neuron cell bodies.
Evolution
Generic bilaterian nervous system
Except for a few primitive organisms such as sponges (which have no nervous system) and cnidarians (which have a diffuse nervous system consisting of a nerve net), all living multicellular animals are bilaterians, meaning animals with a bilaterally symmetric body plan (that is, left and right sides that are approximate mirror images of each other). All bilaterians are thought to have descended from a common ancestor that appeared late in the Cryogenian period, 700–650 million years ago, and it has been hypothesized that this common ancestor had the shape of a simple tubeworm with a segmented body. At a schematic level, that basic worm-shape continues to be reflected in the body and nervous system architecture of all modern bilaterians, including vertebrates. The fundamental bilateral body form is a tube with a hollow gut cavity running from the mouth to the anus, and a nerve cord with an enlargement (a ganglion) for each body segment, with an especially large ganglion at the front, called the brain. The brain is small and simple in some species, such as nematode worms; in other species, including vertebrates, it is the most complex organ in the body. Some types of worms, such as leeches, also have an enlarged ganglion at the back end of the nerve cord, known as a "tail brain".
There are a few types of existing bilaterians that lack a recognizable brain, including echinoderms and tunicates. It has not been definitively established whether the existence of these brainless species indicates that the earliest bilaterians lacked a brain, or whether their ancestors evolved in a way that led to the disappearance of a previously existing brain structure.
Invertebrates
This category includes tardigrades, arthropods, molluscs, and numerous types of worms. The diversity of invertebrate body plans is matched by an equal diversity in brain structures.
Two groups of invertebrates have notably complex brains: arthropods (insects, crustaceans, arachnids, and others), and cephalopods (octopuses, squids, and similar molluscs). The brains of arthropods and cephalopods arise from twin parallel nerve cords that extend through the body of the animal. Arthropods have a central brain, the supraesophageal ganglion, with three divisions and large optical lobes behind each eye for visual processing. Cephalopods such as the octopus and squid have the largest brains of any invertebrates.
There are several invertebrate species whose brains have been studied intensively because they have properties that make them convenient for experimental work:
Fruit flies (Drosophila), because of the large array of techniques available for studying their genetics, have been a natural subject for studying the role of genes in brain development. In spite of the large evolutionary distance between insects and mammals, many aspects of Drosophila neurogenetics have been shown to be relevant to humans. The first biological clock genes, for example, were identified by examining Drosophila mutants that showed disrupted daily activity cycles. A search in the genomes of vertebrates revealed a set of analogous genes, which were found to play similar roles in the mouse biological clock, and therefore almost certainly in the human biological clock as well. Studies done on Drosophila also show that most neuropil regions of the brain are continuously reorganized throughout life in response to specific living conditions.
The nematode worm Caenorhabditis elegans, like Drosophila, has been studied largely because of its importance in genetics. In the early 1970s, Sydney Brenner chose it as a model organism for studying the way that genes control development. One of the advantages of working with this worm is that the body plan is very stereotyped: the nervous system of the hermaphrodite contains exactly 302 neurons, always in the same places, making identical synaptic connections in every worm. Brenner's team sliced worms into thousands of ultrathin sections and photographed each one under an electron microscope, then visually matched fibers from section to section, to map out every neuron and synapse in the entire body. The result was the complete neuronal wiring diagram of C. elegans, known as its connectome. Nothing approaching this level of detail is available for any other organism, and the information gained has enabled a multitude of studies that would otherwise have not been possible.
The sea slug Aplysia californica was chosen by Nobel Prize-winning neurophysiologist Eric Kandel as a model for studying the cellular basis of learning and memory, because of the simplicity and accessibility of its nervous system, and it has been examined in hundreds of experiments.
Vertebrates
The first vertebrates appeared over 500 million years ago (Mya), during the Cambrian period, and may have resembled the modern hagfish in form. Jawed fish appeared by 445 Mya, amphibians by 350 Mya, reptiles by 310 Mya and mammals by 200 Mya (approximately). Each species has an equally long evolutionary history, but the brains of modern hagfishes, lampreys, sharks, amphibians, reptiles, and mammals show a gradient of size and complexity that roughly follows the evolutionary sequence. All of these brains contain the same set of basic anatomical components, but many are rudimentary in the hagfish, whereas in mammals the foremost part (the telencephalon) is greatly elaborated and expanded.
Brains are most commonly compared in terms of their size. The relationship between brain size, body size and other variables has been studied across a wide range of vertebrate species. As a rule, brain size increases with body size, but not in a simple linear proportion. In general, smaller animals tend to have larger brains, measured as a fraction of body size. For mammals, the relationship between brain volume and body mass essentially follows a power law with an exponent of about 0.75. This formula describes the central tendency, but every family of mammals departs from it to some degree, in a way that reflects in part the complexity of their behavior. For example, primates have brains 5 to 10 times larger than the formula predicts. Predators tend to have larger brains than their prey, relative to body size.
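The scaling rule can be written out explicitly. The constant k below is left symbolic because it differs between groups of animals, and the factor 2^0.75 is a simple worked consequence of the quoted exponent rather than a figure from the text.

```latex
\[
  M_{\text{brain}} \approx k \, M_{\text{body}}^{\,0.75}
\]
% Because the exponent is less than 1, doubling body mass multiplies the
% expected brain mass by only $2^{0.75} \approx 1.68$, which is why smaller
% animals have proportionally larger brains for their size.
```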
All vertebrate brains share a common underlying form, which appears most clearly during early stages of embryonic development. In its earliest form, the brain appears as three swellings at the front end of the neural tube; these swellings eventually become the forebrain, midbrain, and hindbrain (the prosencephalon, mesencephalon, and rhombencephalon, respectively). At the earliest stages of brain development, the three areas are roughly equal in size. In many classes of vertebrates, such as fish and amphibians, the three parts remain similar in size in the adult, but in mammals the forebrain becomes much larger than the other parts, and the midbrain becomes very small.
The brains of vertebrates are made of very soft tissue. Living brain tissue is pinkish on the outside and mostly white on the inside, with subtle variations in color. Vertebrate brains are surrounded by a system of connective tissue membranes called meninges that separate the skull from the brain. Blood vessels enter the central nervous system through holes in the meningeal layers. The cells in the blood vessel walls are joined tightly to one another, forming the blood–brain barrier, which blocks the passage of many toxins and pathogens (though at the same time blocking antibodies and some drugs, thereby presenting special challenges in treatment of diseases of the brain).
Neuroanatomists usually divide the vertebrate brain into six main regions: the telencephalon (cerebral hemispheres), diencephalon (thalamus and hypothalamus), mesencephalon (midbrain), cerebellum, pons, and medulla oblongata. Each of these areas has a complex internal structure. Some parts, such as the cerebral cortex and the cerebellar cortex, consist of layers that are folded or convoluted to fit within the available space. Other parts, such as the thalamus and hypothalamus, consist of clusters of many small nuclei. Thousands of distinguishable areas can be identified within the vertebrate brain based on fine distinctions of neural structure, chemistry, and connectivity.
Although the same basic components are present in all vertebrate brains, some branches of vertebrate evolution have led to substantial distortions of brain geometry, especially in the forebrain area. The brain of a shark shows the basic components in a straightforward way, but in teleost fishes (the great majority of existing fish species), the forebrain has become "everted", like a sock turned inside out. In birds, there are also major changes in forebrain structure. These distortions can make it difficult to match brain components from one species with those of another species.
Here is a list of some of the most important vertebrate brain components, along with a brief description of their functions as currently understood:
The medulla, along with the spinal cord, contains many small nuclei involved in a wide variety of sensory and involuntary motor functions such as vomiting, heart rate and digestive processes.
The pons lies in the brainstem directly above the medulla. Among other things, it contains nuclei that control often voluntary but simple acts such as sleep, respiration, swallowing, bladder function, equilibrium, eye movement, facial expressions, and posture.
The hypothalamus is a small region at the base of the forebrain, whose complexity and importance belie its size. It is composed of numerous small nuclei, each with distinct connections and neurochemistry. The hypothalamus is engaged in additional involuntary or partially voluntary acts such as sleep and wake cycles, eating and drinking, and the release of some hormones.
The thalamus is a collection of nuclei with diverse functions: some are involved in relaying information to and from the cerebral hemispheres, while others are involved in motivation. The subthalamic area (zona incerta) seems to contain action-generating systems for several types of "consummatory" behaviors such as eating, drinking, defecation, and copulation.
The cerebellum modulates the outputs of other brain systems, whether motor-related or thought-related, to make them certain and precise. Removal of the cerebellum does not prevent an animal from doing anything in particular, but it makes actions hesitant and clumsy. This precision is not built-in but learned by trial and error. The muscle coordination learned while riding a bicycle is an example of a type of neural plasticity that may take place largely within the cerebellum. The cerebellum accounts for about 10% of the brain's total volume yet contains 50% of all its neurons.
The optic tectum allows actions to be directed toward points in space, most commonly in response to visual input. In mammals, it is usually referred to as the superior colliculus, and its best-studied function is to direct eye movements. It also directs reaching movements and other object-directed actions. It receives strong visual inputs, but also inputs from other senses that are useful in directing actions, such as auditory input in owls and input from the thermosensitive pit organs in snakes. In some primitive fishes, such as lampreys, this region is the largest part of the brain. The superior colliculus is part of the midbrain.
The pallium is a layer of grey matter that lies on the surface of the forebrain and is the most complex and most recent evolutionary development of the brain as an organ. In reptiles and mammals, it is called the cerebral cortex. The pallium is involved in multiple functions, including smell and spatial memory. In mammals, where it becomes so large as to dominate the brain, it takes over functions from many other brain areas. In many mammals, the cerebral cortex consists of folded bulges called gyri that create deep furrows or fissures called sulci. The folds increase the surface area of the cortex and therefore increase the amount of grey matter and the amount of information that can be stored and processed.
The hippocampus, strictly speaking, is found only in mammals. However, the area it derives from, the medial pallium, has counterparts in all vertebrates. There is evidence that this part of the brain is involved in complex events such as spatial memory and navigation in fishes, birds, reptiles, and mammals.
The basal ganglia are a group of interconnected structures in the forebrain. The primary function of the basal ganglia appears to be action selection: they send inhibitory signals to all parts of the brain that can generate motor behaviors, and in the right circumstances can release the inhibition, so that the action-generating systems are able to execute their actions. Reward and punishment exert their most important neural effects by altering connections within the basal ganglia.
The olfactory bulb is a special structure that processes olfactory sensory signals and sends its output to the olfactory part of the pallium. It is a major brain component in many vertebrates, but is greatly reduced in humans and other primates (whose senses are dominated by information acquired by sight rather than smell).
Reptiles
The brains of reptiles follow the basic vertebrate plan, but the forebrain is small and simply organized compared with that of birds and mammals.
Birds
Mammals
The most obvious difference between the brains of mammals and other vertebrates is in terms of size. On average, a mammal has a brain roughly twice as large as that of a bird of the same body size, and ten times as large as that of a reptile of the same body size.
Size, however, is not the only difference: there are also substantial differences in shape. The hindbrain and midbrain of mammals are generally similar to those of other vertebrates, but dramatic differences appear in the forebrain, which is greatly enlarged and also altered in structure. The cerebral cortex is the part of the brain that most strongly distinguishes mammals. In non-mammalian vertebrates, the surface of the cerebrum is lined with a comparatively simple three-layered structure called the pallium. In mammals, the pallium evolves into a complex six-layered structure called neocortex or isocortex. Several areas at the edge of the neocortex, including the hippocampus and amygdala, are also much more extensively developed in mammals than in other vertebrates.
The elaboration of the cerebral cortex carries with it changes to other brain areas. The superior colliculus, which plays a major role in visual control of behavior in most vertebrates, shrinks to a small size in mammals, and many of its functions are taken over by visual areas of the cerebral cortex. The cerebellum of mammals contains a large portion (the neocerebellum) dedicated to supporting the cerebral cortex, which has no counterpart in other vertebrates.
Primates
The brains of humans and other primates contain the same structures as the brains of other mammals, but are generally larger in proportion to body size. The encephalization quotient (EQ) is used to compare brain sizes across species. It takes into account the nonlinearity of the brain-to-body relationship. Humans have an average EQ in the 7-to-8 range, while most other primates have an EQ in the 2-to-3 range. Dolphins have values higher than those of primates other than humans, but nearly all other mammals have EQ values that are substantially lower.
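As a rough illustration of how such a quotient can be computed, the sketch below divides an observed brain mass by the mass predicted from an allometric trend line. The constant k, the 0.75 exponent, and the example masses are illustrative assumptions; published EQ formulas use fitted constants, and some use a 2/3 exponent instead.

```python
# Illustrative encephalization-quotient-style calculation. The constant k and
# the 0.75 exponent are assumptions for this sketch, not the fitted values
# used in published EQ formulas.

def expected_brain_mass(body_mass_g, k=0.045, exponent=0.75):
    """Brain mass (in grams) predicted by the allometric trend line."""
    return k * body_mass_g ** exponent

def encephalization_quotient(brain_mass_g, body_mass_g):
    """Ratio of observed brain mass to the trend-line prediction."""
    return brain_mass_g / expected_brain_mass(body_mass_g)

# A hypothetical 65 kg animal with a 1.3 kg brain:
print(round(encephalization_quotient(1300, 65000), 1))   # about 7 with these constants
```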
Most of the enlargement of the primate brain comes from a massive expansion of the cerebral cortex, especially the prefrontal cortex and the parts of the cortex involved in vision. The visual processing network of primates includes at least 30 distinguishable brain areas, with a complex web of interconnections. It has been estimated that visual processing areas occupy more than half of the total surface of the primate neocortex. The prefrontal cortex carries out functions that include planning, working memory, motivation, attention, and executive control. It takes up a much larger proportion of the brain for primates than for other species, and an especially large fraction of the human brain.
Development
The brain develops in an intricately orchestrated sequence of stages. It changes in shape from a simple swelling at the front of the nerve cord in the earliest embryonic stages, to a complex array of areas and connections. Neurons are created in special zones that contain stem cells, and then migrate through the tissue to reach their ultimate locations. Once neurons have positioned themselves, their axons sprout and navigate through the brain, branching and extending as they go, until the tips reach their targets and form synaptic connections. In a number of parts of the nervous system, neurons and synapses are produced in excessive numbers during the early stages, and then the unneeded ones are pruned away.
For vertebrates, the early stages of neural development are similar across all species. As the embryo transforms from a round blob of cells into a wormlike structure, a narrow strip of ectoderm running along the midline of the back is induced to become the neural plate, the precursor of the nervous system. The neural plate folds inward to form the neural groove, and then the lips that line the groove merge to enclose the neural tube, a hollow cord of cells with a fluid-filled ventricle at the center. At the front end, the ventricles and cord swell to form three vesicles that are the precursors of the prosencephalon (forebrain), mesencephalon (midbrain), and rhombencephalon (hindbrain). At the next stage, the forebrain splits into two vesicles called the telencephalon (which will contain the cerebral cortex, basal ganglia, and related structures) and the diencephalon (which will contain the thalamus and hypothalamus). At about the same time, the hindbrain splits into the metencephalon (which will contain the cerebellum and pons) and the myelencephalon (which will contain the medulla oblongata). Each of these areas contains proliferative zones where neurons and glial cells are generated; the resulting cells then migrate, sometimes for long distances, to their final positions.
Once a neuron is in place, it extends dendrites and an axon into the area around it. Axons, because they commonly extend a great distance from the cell body and need to reach specific targets, grow in a particularly complex way. The tip of a growing axon consists of a blob of protoplasm called a growth cone, studded with chemical receptors. These receptors sense the local environment, causing the growth cone to be attracted or repelled by various cellular elements, and thus to be pulled in a particular direction at each point along its path. The result of this pathfinding process is that the growth cone navigates through the brain until it reaches its destination area, where other chemical cues cause it to begin generating synapses. Considering the entire brain, thousands of genes create products that influence axonal pathfinding.
The synaptic network that finally emerges is only partly determined by genes, though. In many parts of the brain, axons initially "overgrow", and then are "pruned" by mechanisms that depend on neural activity. In the projection from the eye to the midbrain, for example, the structure in the adult contains a very precise mapping, connecting each point on the surface of the retina to a corresponding point in a midbrain layer. In the first stages of development, each axon from the retina is guided to the right general vicinity in the midbrain by chemical cues, but then branches very profusely and makes initial contact with a wide swath of midbrain neurons. The retina, before birth, contains special mechanisms that cause it to generate waves of activity that originate spontaneously at a random point and then propagate slowly across the retinal layer. These waves are useful because they cause neighboring neurons to be active at the same time; that is, they produce a neural activity pattern that contains information about the spatial arrangement of the neurons. This information is exploited in the midbrain by a mechanism that causes synapses to weaken, and eventually vanish, if activity in an axon is not followed by activity of the target cell. The result of this sophisticated process is a gradual tuning and tightening of the map, leaving it finally in its precise adult form.
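The pruning rule described here, in which a synapse weakens whenever its axon fires without the target cell following, can be caricatured in a few lines of code. Everything below is a toy illustration: the synapse names, probabilities, and weight changes are invented for the sketch and do not come from the text.

```python
import random

# Toy version of the pruning rule described above: whenever an axon is active
# but its target cell is not, the synapse weakens; synapses whose weight falls
# below a cutoff are treated as removed. All parameters are illustrative.

def update_weight(weight, pre_active, post_active,
                  strengthen=0.05, weaken=0.2):
    if pre_active and post_active:
        return weight + strengthen            # correlated activity: strengthen
    if pre_active and not post_active:
        return max(weight - weaken, 0.0)      # uncorrelated activity: weaken
    return weight                             # silent synapse: unchanged

weights = {f"synapse_{i}": 1.0 for i in range(5)}
for step in range(50):
    for name in weights:
        pre = random.random() < 0.5
        # synapse_0 and synapse_1 are "well matched": their target usually
        # fires with them; the others fire independently of their target.
        correlated = name in ("synapse_0", "synapse_1")
        post = pre if correlated else (random.random() < 0.5)
        weights[name] = update_weight(weights[name], pre, post)

surviving = {n: round(w, 2) for n, w in weights.items() if w > 0.1}
print(surviving)   # typically only the correlated synapses remain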
Similar things happen in other brain areas: an initial synaptic matrix is generated as a result of genetically determined chemical guidance, but then gradually refined by activity-dependent mechanisms, partly driven by internal dynamics, partly by external sensory inputs. In some cases, as with the retina-midbrain system, activity patterns depend on mechanisms that operate only in the developing brain, and apparently exist solely to guide development.
In humans and many other mammals, new neurons are created mainly before birth, and the infant brain contains substantially more neurons than the adult brain. There are, however, a few areas where new neurons continue to be generated throughout life. The two areas for which adult neurogenesis is well established are the olfactory bulb, which is involved in the sense of smell, and the dentate gyrus of the hippocampus, where there is evidence that the new neurons play a role in storing newly acquired memories. With these exceptions, however, the set of neurons that is present in early childhood is the set that is present for life. Glial cells are different: as with most types of cells in the body, they are generated throughout the lifespan.
There has long been debate about whether the qualities of mind, personality, and intelligence can be attributed to heredity or to upbringing—this is the nature and nurture controversy. Although many details remain to be settled, neuroscience research has clearly shown that both factors are important. Genes determine the general form of the brain, and genes determine how the brain reacts to experience. Experience, however, is required to refine the matrix of synaptic connections, which in its developed form contains far more information than the genome does. In some respects, all that matters is the presence or absence of experience during critical periods of development. In other respects, the quantity and quality of experience are important; for example, there is substantial evidence that animals raised in enriched environments have thicker cerebral cortices, indicating a higher density of synaptic connections, than animals whose levels of stimulation are restricted.
Physiology
The functions of the brain depend on the ability of neurons to transmit electrochemical signals to other cells, and their ability to respond appropriately to electrochemical signals received from other cells. The electrical properties of neurons are controlled by a wide variety of biochemical and metabolic processes, most notably the interactions between neurotransmitters and receptors that take place at synapses.
Neurotransmitters and receptors
Neurotransmitters are chemicals that are released at synapses when the local membrane is depolarised and Ca2+ enters the cell, typically when an action potential arrives at the synapse. The neurotransmitter attaches to receptor molecules on the membrane of the synapse's target cell (or cells), and thereby alters the electrical or chemical properties of the receptor molecules. With few exceptions, each neuron in the brain releases the same chemical neurotransmitter, or combination of neurotransmitters, at all the synaptic connections it makes with other neurons; this rule is known as Dale's principle. Thus, a neuron can be characterized by the neurotransmitters that it releases. The great majority of psychoactive drugs exert their effects by altering specific neurotransmitter systems. This applies to drugs such as cannabinoids, nicotine, heroin, cocaine, alcohol, fluoxetine, chlorpromazine, and many others.
The two neurotransmitters that are most widely found in the vertebrate brain are glutamate, which almost always exerts excitatory effects on target neurons, and gamma-aminobutyric acid (GABA), which is almost always inhibitory. Neurons using these transmitters can be found in nearly every part of the brain. Because of their ubiquity, drugs that act on glutamate or GABA tend to have broad and powerful effects. Some general anesthetics act by reducing the effects of glutamate; most tranquilizers exert their sedative effects by enhancing the effects of GABA.
There are dozens of other chemical neurotransmitters that are used in more limited areas of the brain, often areas dedicated to a particular function. Serotonin, for example—the primary target of many antidepressant drugs and many dietary aids—comes exclusively from a small brainstem area called the raphe nuclei. Norepinephrine, which is involved in arousal, comes exclusively from a nearby small area called the locus coeruleus. Other neurotransmitters such as acetylcholine and dopamine have multiple sources in the brain but are not as ubiquitously distributed as glutamate and GABA.
Electrical activity
As a side effect of the electrochemical processes used by neurons for signaling, brain tissue generates electric fields when it is active. When large numbers of neurons show synchronized activity, the electric fields that they generate can be large enough to detect outside the skull, using electroencephalography (EEG) or magnetoencephalography (MEG). EEG recordings, along with recordings made from electrodes implanted inside the brains of animals such as rats, show that the brain of a living animal is constantly active, even during sleep. Each part of the brain shows a mixture of rhythmic and nonrhythmic activity, which may vary according to behavioral state. In mammals, the cerebral cortex tends to show large slow delta waves during sleep, faster alpha waves when the animal is awake but inattentive, and chaotic-looking irregular activity, called beta and gamma waves, when the animal is actively engaged in a task. During an epileptic seizure, the brain's inhibitory control mechanisms fail to function and electrical activity rises to pathological levels, producing EEG traces that show large wave and spike patterns not seen in a healthy brain. Relating these population-level patterns to the computational functions of individual neurons is a major focus of current research in neurophysiology.
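The rhythm names above refer to conventional frequency bands. The sketch below estimates how much of a signal's power falls in each band using a plain Fourier transform; the band edges are the commonly quoted approximate ranges, and the synthetic trace is invented for the example, so this is an illustration rather than a real EEG analysis.

```python
import numpy as np

# Sketch: estimate how much of a signal's power falls in the conventional EEG
# bands. The band edges below are commonly quoted approximate ranges; real EEG
# analysis uses more careful spectral methods.

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}

def band_power(signal, sample_rate_hz):
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Synthetic "awake but inattentive" trace: a strong 10 Hz (alpha) rhythm plus noise.
t = np.arange(0, 2.0, 1 / 250)                      # 2 s sampled at 250 Hz
trace = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
powers = band_power(trace, 250)
print(max(powers, key=powers.get))                  # usually 'alpha'
```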
Metabolism
All vertebrates have a blood–brain barrier that allows metabolism inside the brain to operate differently from metabolism in other parts of the body. The neurovascular unit regulates cerebral blood flow so that activated neurons can be supplied with energy. Glial cells play a major role in brain metabolism by controlling the chemical composition of the fluid that surrounds neurons, including levels of ions and nutrients.
Brain tissue consumes a large amount of energy in proportion to its volume, so large brains place severe metabolic demands on animals. The need to limit body weight in order, for example, to fly, has apparently led to selection for a reduction of brain size in some species, such as bats. Most of the brain's energy consumption goes into sustaining the electric charge (membrane potential) of neurons. Most vertebrate species devote between 2% and 8% of basal metabolism to the brain. In primates, however, the percentage is much higher—in humans it rises to 20–25%. The energy consumption of the brain does not vary greatly over time, but active regions of the cerebral cortex consume somewhat more energy than inactive regions; this forms the basis for the functional brain imaging methods of PET, fMRI, and NIRS. The brain typically gets most of its energy from oxygen-dependent metabolism of glucose (i.e., blood sugar), but ketones provide a major alternative source, together with contributions from medium chain fatty acids (caprylic and heptanoic acids), lactate, acetate, and possibly amino acids.
Function
Information from the sense organs is collected in the brain. There it is used to determine what actions the organism is to take. The brain processes the raw data to extract information about the structure of the environment. Next it combines the processed information with information about the current needs of the animal and with memory of past circumstances. Finally, on the basis of the results, it generates motor response patterns. These signal-processing tasks require intricate interplay between a variety of functional subsystems.
The function of the brain is to provide coherent control over the actions of an animal. A centralized brain allows groups of muscles to be co-activated in complex patterns; it also allows stimuli impinging on one part of the body to evoke responses in other parts, and it can prevent different parts of the body from acting at cross-purposes to each other.
Perception
The human brain is provided with information about light, sound, the chemical composition of the atmosphere, temperature, the position of the body in space (proprioception), the chemical composition of the bloodstream, and more. In other animals additional senses are present, such as the infrared heat-sense of snakes, the magnetic field sense of some birds, or the electric field sense mainly seen in aquatic animals.
Each sensory system begins with specialized receptor cells, such as photoreceptor cells in the retina of the eye, or vibration-sensitive hair cells in the cochlea of the ear. The axons of sensory receptor cells travel into the spinal cord or brain, where they transmit their signals to a first-order sensory nucleus dedicated to one specific sensory modality. This primary sensory nucleus sends information to higher-order sensory areas that are dedicated to the same modality. Eventually, via a way-station in the thalamus, the signals are sent to the cerebral cortex, where they are processed to extract the relevant features, and integrated with signals coming from other sensory systems.
Motor control
Motor systems are areas of the brain that are involved in initiating body movements, that is, in activating muscles. Except for the muscles that control the eye, which are driven by nuclei in the midbrain, all the voluntary muscles in the body are directly innervated by motor neurons in the spinal cord and hindbrain. Spinal motor neurons are controlled both by neural circuits intrinsic to the spinal cord, and by inputs that descend from the brain. The intrinsic spinal circuits implement many reflex responses, and contain pattern generators for rhythmic movements such as walking or swimming. The descending connections from the brain allow for more sophisticated control.
The brain contains several motor areas that project directly to the spinal cord. At the lowest level are motor areas in the medulla and pons, which control stereotyped movements such as walking, breathing, or swallowing. At a higher level are areas in the midbrain, such as the red nucleus, which is responsible for coordinating movements of the arms and legs. At a higher level yet is the primary motor cortex, a strip of tissue located at the posterior edge of the frontal lobe. The primary motor cortex sends projections to the subcortical motor areas, but also sends a massive projection directly to the spinal cord, through the pyramidal tract. This direct corticospinal projection allows for precise voluntary control of the fine details of movements. Other motor-related brain areas exert secondary effects by projecting to the primary motor areas. Among the most important secondary areas are the premotor cortex, supplementary motor area, basal ganglia, and cerebellum. In addition to all of the above, the brain and spinal cord contain extensive circuitry to control the autonomic nervous system which controls the movement of the smooth muscle of the body.
Sleep
Many animals alternate between sleeping and waking in a daily cycle. Arousal and alertness are also modulated on a finer time scale by a network of brain areas. A key component of the sleep system is the suprachiasmatic nucleus (SCN), a tiny part of the hypothalamus located directly above the point at which the optic nerves from the two eyes cross. The SCN contains the body's central biological clock. Neurons there show activity levels that rise and fall with a period of about 24 hours (circadian rhythms); these activity fluctuations are driven by rhythmic changes in expression of a set of "clock genes". The SCN continues to keep time even if it is excised from the brain and placed in a dish of warm nutrient solution, but it ordinarily receives input from the optic nerves, through the retinohypothalamic tract (RHT), that allows daily light-dark cycles to calibrate the clock.
The SCN projects to a set of areas in the hypothalamus, brainstem, and midbrain that are involved in implementing sleep-wake cycles. An important component of the system is the reticular formation, a group of neuron-clusters scattered diffusely through the core of the lower brain. Reticular neurons send signals to the thalamus, which in turn sends activity-level-controlling signals to every part of the cortex. Damage to the reticular formation can produce a permanent state of coma.
Sleep involves great changes in brain activity. Until the 1950s it was generally believed that the brain essentially shuts off during sleep, but this is now known to be far from true; activity continues, but patterns become very different. There are two types of sleep: REM sleep (with dreaming) and NREM (non-REM, usually without dreaming) sleep, which repeat in slightly varying patterns throughout a sleep episode. Three broad types of distinct brain activity patterns can be measured: REM, light NREM and deep NREM. During deep NREM sleep, also called slow wave sleep, activity in the cortex takes the form of large synchronized waves, whereas in the waking state it is noisy and desynchronized. Levels of the neurotransmitters norepinephrine and serotonin drop during slow wave sleep, and fall almost to zero during REM sleep; levels of acetylcholine show the reverse pattern.
Homeostasis
For any animal, survival requires maintaining a variety of parameters of bodily state within a limited range of variation: these include temperature, water content, salt concentration in the bloodstream, blood glucose levels, blood oxygen level, and others. The ability of an animal to regulate the internal environment of its body—the milieu intérieur, as the pioneering physiologist Claude Bernard called it—is known as homeostasis (Greek for "standing still"). Maintaining homeostasis is a crucial function of the brain. The basic principle that underlies homeostasis is negative feedback: any time a parameter diverges from its set-point, sensors generate an error signal that evokes a response that causes the parameter to shift back toward its optimum value. (This principle is widely used in engineering, for example in the control of temperature using a thermostat.)
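The negative-feedback principle can be shown in a few lines, in the thermostat form the text mentions. The set-point, gain, and constant heat-loss term are arbitrary illustrative numbers, not physiological parameters.

```python
# Minimal sketch of negative feedback: the error between a set-point and the
# measured value drives a corrective response that pushes the value back
# toward the set-point. All numbers are arbitrary illustrative choices.

SET_POINT = 37.0     # target "body temperature" in deg C
GAIN = 0.5           # how strongly the error drives the corrective response

temperature = 33.0   # start below the set-point
for step in range(10):
    error = SET_POINT - temperature       # sensor: deviation from set-point
    response = GAIN * error               # effector: heat production
    temperature += response - 0.2         # environment constantly drains heat
    print(f"step {step}: {temperature:.2f}")
# the temperature settles just below 37, where heating balances heat loss
```

The same structure, with a sensor, an error signal, and a corrective response, underlies the hypothalamic loops described next.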
In vertebrates, the part of the brain that plays the greatest role is the hypothalamus, a small region at the base of the forebrain whose size does not reflect its complexity or the importance of its function. The hypothalamus is a collection of small nuclei, most of which are involved in basic biological functions. Some of these functions relate to arousal or to social interactions such as sexuality, aggression, or maternal behaviors; but many of them relate to homeostasis. Several hypothalamic nuclei receive input from sensors located in the lining of blood vessels, conveying information about temperature, sodium level, glucose level, blood oxygen level, and other parameters. These hypothalamic nuclei send output signals to motor areas that can generate actions to rectify deficiencies. Some of the outputs also go to the pituitary gland, a tiny gland attached to the brain directly underneath the hypothalamus. The pituitary gland secretes hormones into the bloodstream, where they circulate throughout the body and induce changes in cellular activity.
Motivation
Individual animals need to express survival-promoting behaviors, such as seeking food, water, shelter, and a mate. The motivational system in the brain monitors the current state of satisfaction of these goals, and activates behaviors to meet any needs that arise. The motivational system works largely by a reward–punishment mechanism. When a particular behavior is followed by favorable consequences, the reward mechanism in the brain is activated, which induces structural changes inside the brain that cause the same behavior to be repeated later, whenever a similar situation arises. Conversely, when a behavior is followed by unfavorable consequences, the brain's punishment mechanism is activated, inducing structural changes that cause the behavior to be suppressed when similar situations arise in the future.
Most organisms studied to date use a reward–punishment mechanism: for instance, worms and insects can alter their behavior to seek food sources or to avoid dangers. In vertebrates, the reward-punishment system is implemented by a specific set of brain structures, at the heart of which lie the basal ganglia, a set of interconnected areas at the base of the forebrain. The basal ganglia are the central site at which decisions are made: the basal ganglia exert a sustained inhibitory control over most of the motor systems in the brain; when this inhibition is released, a motor system is permitted to execute the action it is programmed to carry out. Rewards and punishments function by altering the relationship between the inputs that the basal ganglia receive and the decision-signals that are emitted. The reward mechanism is better understood than the punishment mechanism, because its role in drug abuse has caused it to be studied very intensively. Research has shown that the neurotransmitter dopamine plays a central role: addictive drugs such as cocaine, amphetamine, and nicotine either cause dopamine levels to rise or cause the effects of dopamine inside the brain to be enhanced.
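The reward–punishment logic can be sketched as a bare-bones learning rule in which each candidate action has a value that is nudged toward the outcome it produces. The action names, learning rate, and reward values are invented for the illustration; this is not a model of the basal ganglia or of dopamine signaling.

```python
import random

# Schematic sketch of reward-driven behavior selection: each candidate action
# has a learned value; reward strengthens the tendency to repeat an action,
# punishment weakens it. All names and numbers are illustrative.

values = {"press_lever": 0.0, "ignore_lever": 0.0}
LEARNING_RATE = 0.2

def choose(values, explore=0.1):
    if random.random() < explore:                       # occasional exploration
        return random.choice(list(values))
    return max(values, key=values.get)                  # otherwise best-valued action

for trial in range(200):
    action = choose(values)
    reward = 1.0 if action == "press_lever" else -0.2   # the environment's outcome
    values[action] += LEARNING_RATE * (reward - values[action])

print(max(values, key=values.get))                      # 'press_lever'
```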
Learning and memory
Almost all animals are capable of modifying their behavior as a result of experience—even the most primitive types of worms. Because behavior is driven by brain activity, changes in behavior must somehow correspond to changes inside the brain. Already in the late 19th century theorists like Santiago Ramón y Cajal argued that the most plausible explanation is that learning and memory are expressed as changes in the synaptic connections between neurons. Until 1970, however, experimental evidence to support the synaptic plasticity hypothesis was lacking. In 1971 Tim Bliss and Terje Lømo published a paper on a phenomenon now called long-term potentiation: the paper showed clear evidence of activity-induced synaptic changes that lasted for at least several days. Since then technical advances have made these sorts of experiments much easier to carry out, and thousands of studies have been made that have clarified the mechanism of synaptic change, and uncovered other types of activity-driven synaptic change in a variety of brain areas, including the cerebral cortex, hippocampus, basal ganglia, and cerebellum. Brain-derived neurotrophic factor (BDNF) and physical activity appear to play a beneficial role in the process.
Neuroscientists currently distinguish several types of learning and memory that are implemented by the brain in distinct ways:
Working memory is the ability of the brain to maintain a temporary representation of information about the task that an animal is currently engaged in. This sort of dynamic memory is thought to be mediated by the formation of cell assemblies—groups of activated neurons that maintain their activity by constantly stimulating one another.
Episodic memory is the ability to remember the details of specific events. This sort of memory can last for a lifetime. Much evidence implicates the hippocampus in playing a crucial role: people with severe damage to the hippocampus sometimes show amnesia, that is, inability to form new long-lasting episodic memories.
Semantic memory is the ability to learn facts and relationships. This sort of memory is probably stored largely in the cerebral cortex, mediated by changes in connections between cells that represent specific types of information.
Instrumental learning is the ability for rewards and punishments to modify behavior. It is implemented by a network of brain areas centered on the basal ganglia.
Motor learning is the ability to refine patterns of body movement by practicing, or more generally by repetition. A number of brain areas are involved, including the premotor cortex, basal ganglia, and especially the cerebellum, which functions as a large memory bank for microadjustments of the parameters of movement.
Research
The field of neuroscience encompasses all approaches that seek to understand the brain and the rest of the nervous system. Psychology seeks to understand mind and behavior, and neurology is the medical discipline that diagnoses and treats diseases of the nervous system. The brain is also the most important organ studied in psychiatry, the branch of medicine that works to study, prevent, and treat mental disorders. Cognitive science seeks to unify neuroscience and psychology with other fields that concern themselves with the brain, such as computer science (artificial intelligence and similar fields) and philosophy.
The oldest method of studying the brain is anatomical, and until the middle of the 20th century, much of the progress in neuroscience came from the development of better cell stains and better microscopes. Neuroanatomists study the large-scale structure of the brain as well as the microscopic structure of neurons and their components, especially synapses. Among other tools, they employ a plethora of stains that reveal neural structure, chemistry, and connectivity. In recent years, the development of immunostaining techniques has allowed investigation of neurons that express specific sets of genes. Also, functional neuroanatomy uses medical imaging techniques to correlate variations in human brain structure with differences in cognition or behavior.
Neurophysiologists study the chemical, pharmacological, and electrical properties of the brain: their primary tools are drugs and recording devices. Thousands of experimentally developed drugs affect the nervous system, some in highly specific ways. Recordings of brain activity can be made using electrodes, either glued to the scalp as in EEG studies, or implanted inside the brains of animals for extracellular recordings, which can detect action potentials generated by individual neurons. Because the brain does not contain pain receptors, it is possible using these techniques to record brain activity from animals that are awake and behaving without causing distress. The same techniques have occasionally been used to study brain activity in human patients with intractable epilepsy, in cases where there was a medical necessity to implant electrodes to localize the brain area responsible for epileptic seizures. Functional imaging techniques such as fMRI are also used to study brain activity; these techniques have mainly been used with human subjects, because they require a conscious subject to remain motionless for long periods of time, but they have the great advantage of being noninvasive.
Another approach to brain function is to examine the consequences of damage to specific brain areas. Even though it is protected by the skull and meninges, surrounded by cerebrospinal fluid, and isolated from the bloodstream by the blood–brain barrier, the delicate nature of the brain makes it vulnerable to numerous diseases and several types of damage. In humans, the effects of strokes and other types of brain damage have been a key source of information about brain function. Because there is no ability to experimentally control the nature of the damage, however, this information is often difficult to interpret. In animal studies, most commonly involving rats, it is possible to use electrodes or locally injected chemicals to produce precise patterns of damage and then examine the consequences for behavior.
Computational neuroscience encompasses two approaches: first, the use of computers to study the brain; second, the study of how brains perform computation. On one hand, it is possible to write a computer program to simulate the operation of a group of neurons by making use of systems of equations that describe their electrochemical activity; such simulations are known as biologically realistic neural networks. On the other hand, it is possible to study algorithms for neural computation by simulating, or mathematically analyzing, the operations of simplified "units" that have some of the properties of neurons but abstract out much of their biological complexity. The computational functions of the brain are studied both by computer scientists and neuroscientists.
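As a concrete, much-simplified example of simulating a neuron with a system of equations, the sketch below integrates a leaky integrate-and-fire model, one of the simplest equation-based neuron models used in computational neuroscience. The parameter values are generic textbook-style numbers chosen for the illustration, not data from any particular study.

```python
# Leaky integrate-and-fire neuron: the membrane potential decays toward rest,
# is pushed up by an input current, and emits a spike (then resets) whenever
# it crosses a threshold. Parameters are generic illustrative values.

dt = 0.1          # time step, ms
tau = 10.0        # membrane time constant, ms
v_rest = -70.0    # resting potential, mV
v_thresh = -54.0  # spike threshold, mV
v_reset = -80.0   # reset potential after a spike, mV

v = v_rest
spike_times = []
for step in range(2000):                              # 200 ms of simulated time
    t = step * dt
    input_current = 20.0 if 50 <= t < 150 else 0.0    # drive applied mid-run
    dv = (-(v - v_rest) + input_current) / tau
    v += dv * dt
    if v >= v_thresh:                                 # threshold crossed: spike
        spike_times.append(t)
        v = v_reset

print(f"{len(spike_times)} spikes between 50 and 150 ms")
```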
Computational neurogenetic modeling is concerned with the study and development of dynamic neuronal models of brain functions with respect to genes and the dynamic interactions between genes.
Recent years have seen increasing applications of genetic and genomic techniques to the study of the brain and a focus on the roles of neurotrophic factors and physical activity in neuroplasticity. The most common subjects are mice, because of the availability of technical tools. It is now possible with relative ease to "knock out" or mutate a wide variety of genes, and then examine the effects on brain function. More sophisticated approaches are also being used: for example, using Cre-Lox recombination it is possible to activate or deactivate genes in specific parts of the brain, at specific times.
History
The oldest brain to have been discovered was found in Armenia in the Areni-1 cave complex. The brain, estimated to be over 5,000 years old, was found in the skull of a 12 to 14-year-old girl. Although the brain was shriveled, it was well preserved due to the climate inside the cave.
Early philosophers were divided as to whether the seat of the soul lies in the brain or heart. Aristotle favored the heart, and thought that the function of the brain was merely to cool the blood. Democritus, the inventor of the atomic theory of matter, argued for a three-part soul, with intellect in the head, emotion in the heart, and lust near the liver. The unknown author of On the Sacred Disease, a medical treatise in the Hippocratic Corpus, came down unequivocally in favor of the brain.
The Roman physician Galen also argued for the importance of the brain, and theorized in some depth about how it might work. Galen traced out the anatomical relationships among brain, nerves, and muscles, demonstrating that all muscles in the body are connected to the brain through a branching network of nerves. He postulated that nerves activate muscles mechanically by carrying a mysterious substance he called pneumata psychikon, usually translated as "animal spirits". Galen's ideas were widely known during the Middle Ages, but not much further progress came until the Renaissance, when detailed anatomical study resumed, combined with the theoretical speculations of René Descartes and those who followed him. Descartes, like Galen, thought of the nervous system in hydraulic terms. He believed that the highest cognitive functions are carried out by a non-physical res cogitans, but that the majority of behaviors of humans, and all behaviors of animals, could be explained mechanistically.
The first real progress toward a modern understanding of nervous function, though, came from the investigations of Luigi Galvani (1737–1798), who discovered that a shock of static electricity applied to an exposed nerve of a dead frog could cause its leg to contract. Since that time, each major advance in understanding has followed more or less directly from the development of a new technique of investigation. Until the early years of the 20th century, the most important advances were derived from new methods for staining cells. Particularly critical was the invention of the Golgi stain, which (when correctly used) stains only a small fraction of neurons, but stains them in their entirety, including cell body, dendrites, and axon. Without such a stain, brain tissue under a microscope appears as an impenetrable tangle of protoplasmic fibers, in which it is impossible to determine any structure. In the hands of Camillo Golgi, and especially of the Spanish neuroanatomist Santiago Ramón y Cajal, the new stain revealed hundreds of distinct types of neurons, each with its own unique dendritic structure and pattern of connectivity.
In the first half of the 20th century, advances in electronics enabled investigation of the electrical properties of nerve cells, culminating in work by Alan Hodgkin, Andrew Huxley, and others on the biophysics of the action potential, and the work of Bernard Katz and others on the electrochemistry of the synapse. These studies complemented the anatomical picture with a conception of the brain as a dynamic entity. Reflecting the new understanding, in 1942 Charles Sherrington described the workings of the brain waking from sleep.
The invention of electronic computers in the 1940s, along with the development of mathematical information theory, led to a realization that brains can potentially be understood as information processing systems. This concept formed the basis of the field of cybernetics, and eventually gave rise to the field now known as computational neuroscience. The earliest attempts at cybernetics were somewhat crude in that they treated the brain as essentially a digital computer in disguise, as for example in John von Neumann's 1958 book, The Computer and the Brain. Over the years, though, accumulating information about the electrical responses of brain cells recorded from behaving animals has steadily moved theoretical concepts in the direction of increasing realism.
One of the most influential early contributions was a 1959 paper titled What the frog's eye tells the frog's brain: the paper examined the visual responses of neurons in the retina and optic tectum of frogs, and came to the conclusion that some neurons in the tectum of the frog are wired to combine elementary responses in a way that makes them function as "bug perceivers". A few years later David Hubel and Torsten Wiesel discovered cells in the primary visual cortex of monkeys that become active when sharp edges move across specific points in the field of view—a discovery for which they won a Nobel Prize. Follow-up studies in higher-order visual areas found cells that detect binocular disparity, color, movement, and aspects of shape, with areas located at increasing distances from the primary visual cortex showing increasingly complex responses. Other investigations of brain areas unrelated to vision have revealed cells with a wide variety of response correlates, some related to memory, some to abstract types of cognition such as space.
Theorists have worked to understand these response patterns by constructing mathematical models of neurons and neural networks, which can be simulated using computers. Some useful models are abstract, focusing on the conceptual structure of neural algorithms rather than the details of how they are implemented in the brain; other models attempt to incorporate data about the biophysical properties of real neurons. No model on any level is yet considered to be a fully valid description of brain function, though. The essential difficulty is that sophisticated computation by neural networks requires distributed processing in which hundreds or thousands of neurons work cooperatively—current methods of brain activity recording are only capable of isolating action potentials from a few dozen neurons at a time.
Furthermore, even single neurons appear to be complex and capable of performing computations. So, brain models that do not reflect this are too abstract to be representative of brain operation; models that do try to capture this are very computationally expensive and arguably intractable with present computational resources. However, the Human Brain Project is trying to build a realistic, detailed computational model of the entire human brain. The wisdom of this approach has been publicly contested, with high-profile scientists on both sides of the argument.
In the second half of the 20th century, developments in chemistry, electron microscopy, genetics, computer science, functional brain imaging, and other fields progressively opened new windows into brain structure and function. In the United States, the 1990s were officially designated as the "Decade of the Brain" to commemorate advances made in brain research, and to promote funding for such research.
In the 21st century, these trends have continued, and several new approaches have come into prominence, including multielectrode recording, which allows the activity of many brain cells to be recorded all at the same time; genetic engineering, which allows molecular components of the brain to be altered experimentally; genomics, which allows variations in brain structure to be correlated with variations in DNA properties; and neuroimaging.
Society and culture
As food
Animal brains are used as food in numerous cuisines.
In rituals
Some archaeological evidence suggests that the mourning rituals of European Neanderthals also involved the consumption of the brain.
The Fore people of Papua New Guinea are known to eat human brains. In funerary rituals, those close to the dead would eat the brain of the deceased to create a sense of immortality. A prion disease called kuru has been traced to this.
See also
Brain–computer interface
Central nervous system disease
List of neuroscience databases
Neurological disorder
Optogenetics
Outline of neuroscience
Aging brain
References
External links
The Brain from Top to Bottom, at McGill University
"The Brain", BBC Radio 4 discussion with Vivian Nutton, Jonathan Sawday & Marina Wallace (In Our Time, May 8, 2008)
Our Quest to Understand the Brain – with Matthew Cobb Royal Institution lecture. Archived at Ghostarchive.
|
https://en.wikipedia.org/wiki/Bluetooth
|
Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range of up to 10 metres (33 ft). It employs UHF radio waves in the ISM bands, from 2.402 GHz to 2.48 GHz. It is mainly used as an alternative to wire connections, to exchange files between nearby portable devices and connect cell phones and music players with wireless headphones.
Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1, but no longer maintains the standard. The Bluetooth SIG oversees development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market a product as a Bluetooth device. A network of patents applies to the technology, which are licensed to individual qualifying devices. 4.7 billion Bluetooth integrated circuit chips are shipped annually.
Etymology
The name "Bluetooth" was proposed in 1997 by Jim Kardach of Intel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales from Frans G. Bengtsson's The Long Ships, a historical novel about Vikings and the 10th-century Danish king Harald Bluetooth. Upon discovering a picture of the runestone of Harald Bluetooth in the book A History of the Vikings by Gwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth.
According to Bluetooth's official website,
Bluetooth is the Anglicised version of the Scandinavian Blåtand/Blåtann (or in Old Norse blátǫnn). It was the epithet of King Harald Bluetooth, who united the disparate Danish tribes into a single kingdom; Kardach chose the name to imply that Bluetooth similarly unites communication protocols.
The Bluetooth logo is a bind rune merging the Younger Futhark runes (ᚼ, Hagall) and (ᛒ, Bjarkan), Harald's initials.
History
The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO at Ericsson Mobile in Lund, Sweden. The purpose was to develop wireless headsets, according to two inventions by Johan Ullman. Nils Rydbeck tasked Tord Wingren with specifying the system and Dutchman Jaap Haartsen and Sven Mattisson with developing it. Both were working for Ericsson in Lund. Principal design and development began in 1994 and by 1997 the team had a workable solution. From 1997 Örjan Johansson became the project leader and propelled the technology and standardization.
In 1997, Adalio Sanchez, then head of IBM ThinkPad product R&D, approached Nils Rydbeck about collaborating on integrating a mobile phone into a ThinkPad notebook. The two assigned engineers from Ericsson and IBM studied the idea. The conclusion was that power consumption on cellphone technology at that time was too high to allow viable integration into a notebook and still achieve adequate battery life. Instead, the two companies agreed to integrate Ericsson's short-link technology on both a ThinkPad notebook and an Ericsson phone to accomplish the goal.
Since neither IBM ThinkPad notebooks nor Ericsson phones were the market share leaders in their respective markets at that time, Adalio Sanchez and Nils Rydbeck agreed to make the short-link technology an open industry standard to permit each player maximum market access. Ericsson contributed the short-link radio technology, and IBM contributed patents around the logical layer. Adalio Sanchez of IBM then recruited Stephen Nachtsheim of Intel to join and then Intel also recruited Toshiba and Nokia. In May 1998, the Bluetooth SIG was launched with IBM and Ericsson as the founding signatories and a total of five members: Ericsson, Intel, Nokia, Toshiba, and IBM.
The first Bluetooth device was revealed in 1999. It was a hands-free mobile headset that earned the "Best of Show Technology Award" at COMDEX. The first Bluetooth mobile phone was the Ericsson T36, but it was the revised Ericsson model T39 that actually made it to store shelves in 2001. In parallel, IBM introduced the IBM ThinkPad A30 in October 2001, which was the first notebook with integrated Bluetooth.
Bluetooth's early incorporation into consumer electronics products continued at Vosi Technologies in Costa Mesa, California, initially overseen by founding members Bejan Amini and Tom Davidson. Vosi Technologies had been created by real estate developer Ivano Stegmenga, with United States Patent 608507, for communication between a cellular phone and a vehicle's audio system. At the time, Sony/Ericsson had only a minor market share in the cellular phone market, which was dominated in the US by Nokia and Motorola. Due to ongoing negotiations for an intended licensing agreement with Motorola beginning in the late 1990s, Vosi could not publicly disclose the intention, integration, and initial development of other enabled devices which were to be the first "Smart Home" internet connected devices.
Vosi needed a means for the system to communicate without a wired connection from the vehicle to the other devices in the network. Bluetooth was chosen, since Wi-Fi was not yet readily available or supported in the public market. Vosi had begun to develop the Vosi Cello integrated vehicular system and some other internet connected devices, one of which was intended to be a table-top device named the Vosi Symphony, networked with Bluetooth. Through the negotiations with Motorola, Vosi introduced and disclosed its intent to integrate Bluetooth in its devices. In the early 2000s a legal battle ensued between Vosi and Motorola, which indefinitely suspended release of the devices. Later, Motorola implemented it in their devices which initiated the significant propagation of Bluetooth in the public market due to its large market share at the time.
In 2012, Jaap Haartsen was nominated by the European Patent Office for the European Inventor Award.
Implementation
Bluetooth operates at frequencies between 2.402 and 2.480 GHz, or 2.400 and 2.4835 GHz, including guard bands 2 MHz wide at the bottom end and 3.5 MHz wide at the top. This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4 GHz short-range radio frequency band. Bluetooth uses a radio technology called frequency-hopping spread spectrum. Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1 MHz. It usually performs 1600 hops per second, with adaptive frequency-hopping (AFH) enabled. Bluetooth Low Energy uses 2 MHz spacing, which accommodates 40 channels.
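As an informal illustration of the channel plan just described (a sketch for orientation, not normative text from the specification), the channel centre frequencies can be enumerated directly in Python:
# Sketch of the Bluetooth channel plans described above (illustrative only).
# Classic BR/EDR: 79 channels, 1 MHz apart, starting at 2402 MHz.
classic_channels_mhz = [2402 + k for k in range(79)]
# Bluetooth Low Energy: 40 channels, 2 MHz apart, starting at 2402 MHz.
ble_channels_mhz = [2402 + 2 * k for k in range(40)]
print(classic_channels_mhz[0], classic_channels_mhz[-1])   # 2402 2480
print(ble_channels_mhz[0], ble_channels_mhz[-1])           # 2402 2480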
Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK (differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous bit rate of 1 Mbit/s is possible. The term Enhanced Data Rate (EDR) is used to describe the π/4-DQPSK (EDR2) and 8-DPSK (EDR3) schemes, giving 2 and 3 Mbit/s respectively. The combination of these (BR and EDR) modes in Bluetooth radio technology is classified as a BR/EDR radio.
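The stated rates follow from the 1 Msymbol/s symbol rate shared by these modulations and the number of bits each symbol carries; a small arithmetic sketch (Python, illustrative only):
# Gross air data rates implied by the modulations above, assuming the
# common 1 Msymbol/s symbol rate used by BR/EDR.
symbol_rate = 1_000_000        # symbols per second
bits_per_symbol = {"GFSK (BR)": 1, "π/4-DQPSK (EDR2)": 2, "8-DPSK (EDR3)": 3}
for name, bits in bits_per_symbol.items():
    print(f"{name}: {symbol_rate * bits / 1e6:.0f} Mbit/s")
# GFSK (BR): 1 Mbit/s, π/4-DQPSK (EDR2): 2 Mbit/s, 8-DPSK (EDR3): 3 Mbit/s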
In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSK modulation on 4 MHz channels with forward error correction (FEC).
Bluetooth is a packet-based protocol with a master/slave architecture. One master may communicate with up to seven slaves in a piconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5 µs, two clock ticks then make up a slot of 625 µs, and two slots make up a slot pair of 1250 µs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases, the master's transmission begins in even slots and the slave's in odd slots.
The above excludes Bluetooth Low Energy, introduced in the 4.0 specification, which uses the same spectrum but somewhat differently.
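A small sketch of the BR/EDR slot arithmetic described above (illustrative Python, not specification text):
# Sketch of the BR/EDR slot timing described above (single-slot packets).
CLOCK_TICK_US = 312.5          # master clock period
SLOT_US = 2 * CLOCK_TICK_US    # 625 µs slot
SLOT_PAIR_US = 2 * SLOT_US     # 1250 µs slot pair
def transmitter(slot_number):
    # Master transmits in even slots, slave in odd slots (single-slot case).
    return "master" if slot_number % 2 == 0 else "slave"
print(SLOT_US, SLOT_PAIR_US)                 # 625.0 1250.0
print([transmitter(n) for n in range(4)])    # ['master', 'slave', 'master', 'slave']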
Communication and connection
A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave).
The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices simultaneously play the master/leader role in one piconet and the slave role in another.
At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in a round-robin fashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets.
Uses
Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-cost transceiver microchips in each device. Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, a quasi optical wireless path must be viable.
Bluetooth Classes and power use
Historically, the Bluetooth range was defined by the radio class, with a lower class (and higher output power) having larger range. The actual range achieved by a given link will depend on the qualities of the devices at both ends of the link, as well as the air and obstacles in between. The primary hardware attributes affecting range are the data rate, protocol (Bluetooth Classic or Bluetooth Low Energy), the transmitter power, the receiver sensitivity, and the gain of both antennas.
The effective range varies depending on propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than specified line-of-sight ranges of the Bluetooth products.
Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device. Mostly, however, Class 1 devices have a similar sensitivity to Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.
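As a rough way to relate the attributes listed above (transmit power, antenna gains and receiver sensitivity) to open-field range, a free-space estimate can be sketched as follows; the numeric inputs are assumed example values, and real indoor ranges are far lower, as noted.
import math
# Illustrative free-space range estimate from the link attributes listed
# above. The numbers below are assumed example values, not figures from
# the Bluetooth specification.
def free_space_range_m(tx_dbm, rx_sens_dbm, tx_gain_dbi=0.0, rx_gain_dbi=0.0,
                       freq_hz=2.44e9):
    link_budget_db = tx_dbm + tx_gain_dbi + rx_gain_dbi - rx_sens_dbm
    # Free-space path loss: FSPL(dB) = 20*log10(4*pi*d*f/c)
    c = 3e8
    return (c / (4 * math.pi * freq_hz)) * 10 ** (link_budget_db / 20)
# Example: a 4 dBm (Class 2) transmitter and an assumed -90 dBm receiver
# with 0 dBi antennas.
print(round(free_space_range_m(4, -90)))   # roughly 490 m in free space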
Bluetooth profile
To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles.
For example,
The Headset Profile (HSP) connects headphones and earbuds to a cell phone or laptop.
The Health Device Profile (HDP) can connect a cell phone to a digital thermometer, or a heart rate detector.
The Video Distribution Profile (VDP) sends a video stream from a video camera to a TV screen or a recording device.
Profiles are definitions of possible applications and specify general behaviors that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and to control the communication from the start. Adherence to profiles saves the time for transmitting the parameters anew before the bi-directional link becomes effective. There are a wide range of Bluetooth profiles that describe many different types of applications or use cases for devices.
List of applications
Wireless control and communication between a mobile phone and a handsfree headset. This was one of the earliest applications to become popular.
Wireless control of audio and communication functions between a mobile phone and a Bluetooth compatible car stereo system (and sometimes between the SIM card and the car phone).
Wireless communication between a smartphone and a smart lock for unlocking doors.
Wireless control of and communication with iOS and Android device phones, tablets and portable wireless speakers.
Wireless Bluetooth headset and intercom. Idiomatically, a headset is sometimes called "a Bluetooth".
Wireless streaming of audio to headphones with or without communication capabilities.
Wireless streaming of data collected by Bluetooth-enabled fitness devices to phone or PC.
Wireless networking between PCs in a confined space and where little bandwidth is required.
Wireless communication with PC input and output devices, the most common being the mouse, keyboard and printer.
Transfer of files, contact details, calendar appointments, and reminders between devices with OBEX and sharing directories via FTP.
Triggering the camera shutter of a smartphone using a Bluetooth controlled selfie stick.
Replacement of previous wired RS-232 serial communications in test equipment, GPS receivers, medical equipment, bar code scanners, and traffic control devices.
For controls where infrared was often used.
For low bandwidth applications where higher USB bandwidth is not required and cable-free connection desired.
Sending small advertisements from Bluetooth-enabled advertising hoardings to other, discoverable, Bluetooth devices.
Wireless bridge between two Industrial Ethernet (e.g., PROFINET) networks.
Game consoles have been using Bluetooth as a wireless communications protocol for peripherals since the seventh generation, including Nintendo's Wii and Sony's PlayStation 3 which use Bluetooth for their respective controllers.
Dial-up internet access on personal computers or PDAs using a data-capable mobile phone as a wireless modem.
Short-range transmission of health sensor data from medical devices to mobile phone, set-top box or dedicated telehealth devices.
Allowing a DECT phone to ring and answer calls on behalf of a nearby mobile phone.
Real-time location systems (RTLS) are used to track and identify the location of objects in real time using "Nodes" or "tags" attached to, or embedded in, the objects tracked, and "Readers" that receive and process the wireless signals from these tags to determine their locations.
Personal security application on mobile phones for prevention of theft or loss of items. The protected item has a Bluetooth marker (e.g., a tag) that is in constant communication with the phone. If the connection is broken (the marker is out of range of the phone) then an alarm is raised. This can also be used as a man overboard alarm.
Calgary, Alberta, Canada's Roads Traffic division uses data collected from travelers' Bluetooth devices to predict travel times and road congestion for motorists.
Wireless transmission of audio (a more reliable alternative to FM transmitters)
Live video streaming to the visual cortical implant device by Nabeel Fattah in Newcastle university 2017.
Connection of motion controllers to a PC when using VR headsets
Bluetooth vs Wi-Fi (IEEE 802.11)
Bluetooth and Wi-Fi (Wi-Fi is the brand name for products using IEEE 802.11 standards) have some similar applications: setting up networks, printing, or transferring files. Wi-Fi is intended as a replacement for high-speed cabling for general local area network access in work areas or home. This category of applications is sometimes called wireless local area networks (WLAN). Bluetooth was intended for portable equipment and its applications. The category of applications is outlined as the wireless personal area network (WPAN). Bluetooth is a replacement for cabling in various personally carried applications in any setting and also works for fixed location applications such as smart energy functionality in the home (thermostats, etc.).
Wi-Fi and Bluetooth are to some extent complementary in their applications and usage. Wi-Fi is usually access point-centered, with an asymmetrical client-server connection with all traffic routed through the access point, while Bluetooth is usually symmetrical, between two Bluetooth devices. Bluetooth serves well in simple applications where two devices need to connect with a minimal configuration like a button press, as in headsets and speakers.
Devices
Bluetooth exists in numerous products such as telephones, speakers, tablets, media players, robotics systems, laptops, and game console equipment as well as some high definition headsets, modems, hearing aids and even watches. Given the variety of devices which use Bluetooth, coupled with the contemporary deprecation of headphone jacks by Apple, Google, and other companies, and the lack of regulation by the FCC, the technology is prone to interference. Nonetheless, Bluetooth is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files).
Bluetooth protocols simplify the discovery and setup of services between devices. Bluetooth devices can advertise all of the services they provide. This makes using services easier, because more of the security, network address and permission configuration can be automated than with many other network types.
Computer requirements
A personal computer that does not have embedded Bluetooth can use a Bluetooth adapter that enables the PC to communicate with Bluetooth devices. While some desktop computers and most recent laptops come with a built-in Bluetooth radio, others require an external adapter, typically in the form of a small USB "dongle."
Unlike its predecessor, IrDA, which requires a separate adapter for each device, Bluetooth lets multiple devices communicate with a computer over a single adapter.
Operating system implementation
For Microsoft platforms, Windows XP Service Pack 2 and SP3 releases work natively with Bluetooth v1.1, v2.0 and v2.0+EDR. Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft. Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 work with Bluetooth v2.1+EDR. Windows 7 works with Bluetooth v2.1+EDR and Extended Inquiry Response (EIR).
The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP, DUN, HID, HCRP. The Windows XP stack can be replaced by a third party stack that supports more profiles or newer Bluetooth versions. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring that the Microsoft stack be replaced. Windows 8 and later support Bluetooth Low Energy (BLE). It is generally recommended to install the latest vendor driver and its associated stack to be able to use the Bluetooth device at its fullest extent.
Apple products have worked with Bluetooth since Mac OS X v10.2, which was released in 2002.
Linux has two popular Bluetooth stacks, BlueZ and Fluoride. The BlueZ stack is included with most Linux kernels and was originally developed by Qualcomm. Fluoride, earlier known as Bluedroid is included in Android OS and was originally developed by Broadcom.
There is also Affix stack, developed by Nokia. It was once popular, but has not been updated since 2005.
FreeBSD has included Bluetooth since its v5.0 release, implemented through netgraph.
NetBSD has included Bluetooth since its v4.0 release. Its Bluetooth stack was ported to OpenBSD as well, however OpenBSD later removed it as unmaintained.
DragonFly BSD has had NetBSD's Bluetooth implementation since 1.11 (2008). A netgraph-based implementation from FreeBSD has also been available in the tree, possibly disabled until 2014-11-15, and may require more work.
Specifications and features
The specifications were formalized by the Bluetooth Special Interest Group (SIG) and formally announced on 20 May 1998. In 2014 it had a membership of over 30,000 companies worldwide. It was established by Ericsson, IBM, Intel, Nokia and Toshiba, and later joined by many other companies.
All versions of the Bluetooth standards support backwards compatibility. That lets the latest standard cover all older versions.
The Bluetooth Core Specification Working Group (CSWG) produces mainly 4 kinds of specifications:
The Bluetooth Core Specification, release cycle is typically a few years in between
Core Specification Addendum (CSA), release cycle can be as tight as a few times per year
Core Specification Supplements (CSS), can be released very quickly
Errata (Available with a user account: Errata login)
Bluetooth 1.0 and 1.0B
Products were not interoperable
Anonymity was not possible, preventing certain services from using Bluetooth environments
Bluetooth 1.1
Ratified as IEEE Standard 802.15.1–2002
Many errors found in the v1.0B specifications were fixed.
Added possibility of non-encrypted channels.
Received signal strength indicator (RSSI).
Bluetooth 1.2
Major enhancements include:
Faster connection and discovery
Adaptive frequency-hopping spread spectrum (AFH), which improves resistance to radio frequency interference by avoiding the use of crowded frequencies in the hopping sequence.
Higher transmission speeds in practice than in v1.1, up to 721 kbit/s.
Extended Synchronous Connections (eSCO), which improve voice quality of audio links by allowing retransmissions of corrupted packets, and may optionally increase audio latency to provide better concurrent data transfer.
Host Controller Interface (HCI) operation with three-wire UART.
Ratified as IEEE Standard 802.15.1–2005
Introduced flow control and retransmission modes for L2CAP.
Bluetooth 2.0 + EDR
This version of the Bluetooth Core Specification was released before 2005. The main difference is the introduction of an Enhanced Data Rate (EDR) for faster data transfer. The bit rate of EDR is 3 Mbit/s, although the maximum data transfer rate (allowing for inter-packet time and acknowledgements) is 2.1 Mbit/s. EDR uses a combination of GFSK and phase-shift keying modulation (PSK) with two variants, π/4-DQPSK and 8-DPSK. EDR can provide a lower power consumption through a reduced duty cycle.
The specification is published as Bluetooth v2.0 + EDR, which implies that EDR is an optional feature. Aside from EDR, the v2.0 specification contains other minor improvements, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet.
Bluetooth 2.1 + EDR
Bluetooth Core Specification version 2.1 + EDR was adopted by the Bluetooth SIG on 26 July 2007.
The headline feature of v2.1 is secure simple pairing (SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security.
Version 2.1 allows various other improvements, including extended inquiry response (EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection; and sniff subrating, which reduces the power consumption in low-power mode.
Bluetooth 3.0 + HS
Version 3.0 + HS of the Bluetooth Core Specification was adopted by the Bluetooth SIG on 21 April 2009. Bluetooth v3.0 + HS provides theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high data rate traffic is carried over a colocated 802.11 link.
The main new feature is AMP (Alternative MAC/PHY), the addition of 802.11 as a high-speed transport. The high-speed part of the specification is not mandatory, and hence only devices that display the "+HS" logo actually support Bluetooth over 802.11 high-speed data transfer. A Bluetooth v3.0 device without the "+HS" suffix is only required to support features introduced in Core Specification version 3.0 or earlier Core Specification Addendum 1.
L2CAP Enhanced modes: Enhanced Retransmission Mode (ERTM) implements a reliable L2CAP channel, while Streaming Mode (SM) implements an unreliable channel with no retransmission or flow control. Introduced in Core Specification Addendum 1.
Alternative MAC/PHY: Enables the use of alternative MAC and PHYs for transporting Bluetooth profile data. The Bluetooth radio is still used for device discovery, initial connection and profile configuration. However, when large quantities of data must be sent, the high-speed alternative MAC PHY 802.11 (typically associated with Wi-Fi) transports the data. This means that Bluetooth uses proven low power connection models when the system is idle, and the faster radio when it must send large quantities of data. AMP links require enhanced L2CAP modes.
Unicast Connectionless Data: Permits sending service data without establishing an explicit L2CAP channel. It is intended for use by applications that require low latency between user action and reconnection/transmission of data. This is only appropriate for small amounts of data.
Enhanced Power Control: Updates the power control feature to remove the open loop power control, and also to clarify ambiguities in power control introduced by the new modulation schemes added for EDR. Enhanced power control removes the ambiguities by specifying the behavior that is expected. The feature also adds closed loop power control, meaning RSSI filtering can start as the response is received. Additionally, a "go straight to maximum power" request has been introduced. This is expected to deal with the headset link loss issue typically observed when a user puts their phone into a pocket on the opposite side to the headset.
Ultra-wideband
The high-speed (AMP) feature of Bluetooth v3.0 was originally intended for UWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding, and ultimately UWB was omitted from the Core v3.0 specification.
On 16 March 2009, the WiMedia Alliance announced it was entering into technology transfer agreements for the WiMedia Ultra-wideband (UWB) specifications. WiMedia has transferred all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG), Wireless USB Promoter Group and the USB Implementers Forum. After successful completion of the technology transfer, marketing, and related administrative items, the WiMedia Alliance ceased operations.
In October 2009, the Bluetooth Special Interest Group suspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution. A small, but significant, number of former WiMedia members had not and would not sign up to the necessary agreements for the IP transfer. As of 2009, the Bluetooth SIG was in the process of evaluating other options for its longer term roadmap.
Bluetooth 4.0
The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart), which was adopted in 2010. It includes Classic Bluetooth, Bluetooth high speed and Bluetooth Low Energy (BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols.
Bluetooth Low Energy, previously known as Wibree, is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by a coin cell. Chip designs allow for two types of implementation, dual-mode and single-mode, as well as enhanced versions of earlier implementations. The provisional names Wibree and Bluetooth ULP (Ultra Low Power) were abandoned and the BLE name was used for a while. In late 2011, new logos "Bluetooth Smart Ready" for hosts and "Bluetooth Smart" for sensors were introduced as the general-public face of BLE.
Compared to Classic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. In terms of lengthening the battery life of Bluetooth devices, BLE represents a significant progression.
In a single-mode implementation, only the low energy protocol stack is implemented. Dialog Semiconductor, STMicroelectronics, AMICCOM, CSR, Nordic Semiconductor and Texas Instruments have released single mode Bluetooth Low Energy solutions.
In a dual-mode implementation, Bluetooth Smart functionality is integrated into an existing Classic Bluetooth controller. The following semiconductor companies have announced the availability of chips meeting the standard: Qualcomm-Atheros, CSR, Broadcom and Texas Instruments. The compliant architecture shares all of Classic Bluetooth's existing radio and functionality, resulting in a negligible cost increase compared to Classic Bluetooth.
Cost-reduced single-mode chips, which enable highly integrated and compact devices, feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost.
General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well as the Generic Attribute Profile (GATT) and Security Manager (SM) services with AES encryption.
Core Specification Addendum 2 was unveiled in December 2011; it contains improvements to the audio Host Controller Interface and to the High Speed (802.11) Protocol Adaptation Layer.
Core Specification Addendum 3 revision 2 has an adoption date of 24 July 2012.
Core Specification Addendum 4 has an adoption date of 12 February 2013.
Bluetooth 4.1
The Bluetooth SIG announced formal adoption of the Bluetooth v4.1 specification on 4 December 2013. This specification is an incremental software update to Bluetooth Specification v4.0, and not a hardware update. The update incorporates Bluetooth Core Specification Addenda (CSA 1, 2, 3 & 4) and adds new features that improve consumer usability. These include increased co-existence support for LTE and bulk data exchange rates, and they aid developer innovation by allowing devices to support multiple roles simultaneously.
New features of this specification include:
Mobile wireless service coexistence signaling
Train nudging and generalized interlaced scanning
Low Duty Cycle Directed Advertising
L2CAP connection-oriented and dedicated channels with credit-based flow control
Dual Mode and Topology
LE Link Layer Topology
802.11n PAL
Audio architecture updates for Wide Band Speech
Fast data advertising interval
Limited discovery time
Notice that some features were already available in a Core Specification Addendum (CSA) before the release of v4.1.
Bluetooth 4.2
Released on 2 December 2014, it introduces features for the Internet of things.
The major areas of improvement are:
Low Energy Secure Connection with Data Packet Length Extension
Link Layer Privacy with Extended Scanner Filter Policies
Internet Protocol Support Profile (IPSP) version 6 ready for Bluetooth Smart things to support connected home
Older Bluetooth hardware may receive 4.2 features such as Data Packet Length Extension and improved privacy via firmware updates.
Bluetooth 5
The Bluetooth SIG released Bluetooth 5 on 6 December 2016. Its new features are mainly focused on new Internet of Things technology. Sony was the first to announce Bluetooth 5.0 support with its Xperia XZ Premium in February 2017 during the Mobile World Congress 2017. The Samsung Galaxy S8 launched with Bluetooth 5 support in April 2017. In September 2017, the iPhone 8, 8 Plus and iPhone X launched with Bluetooth 5 support as well. Apple also integrated Bluetooth 5 in its new HomePod offering released on 9 February 2018. Marketing drops the point number, so that it is just "Bluetooth 5" (unlike Bluetooth 4.0); the change is for the sake of "Simplifying our marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market."
Bluetooth 5 provides, for BLE, options that can double the speed (2 Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate. The increase in transmissions could be important for Internet of Things devices, where many nodes connect throughout a whole house. Bluetooth 5 increases the capacity of connectionless services such as location-relevant navigation of low-energy Bluetooth connections.
The major areas of improvement are:
Slot Availability Mask (SAM)
2 Mbit/s PHY for LE
LE Long Range
High Duty Cycle Non-Connectable Advertising
LE Advertising Extensions
LE Channel Selection Algorithm #2
Features Added in CSA5 – Integrated in v5.0:
Higher Output Power
The following features were removed in this version of the specification:
Park State
Bluetooth 5.1
The Bluetooth SIG presented Bluetooth 5.1 on 21 January 2019.
The major areas of improvement are:
Angle of Arrival (AoA) and Angle of Departure (AoD) which are used for locating and tracking of devices
Advertising Channel Index
GATT caching
Minor Enhancements batch 1:
HCI support for debug keys in LE Secure Connections
Sleep clock accuracy update mechanism
ADI field in scan response data
Interaction between QoS and Flow Specification
Block Host channel classification for secondary advertising
Allow the SID to appear in scan response reports
Specify the behavior when rules are violated
Periodic Advertising Sync Transfer
Features Added in Core Specification Addendum (CSA) 6 – Integrated in v5.1:
Models
Mesh-based model hierarchy
The following features were removed in this version of the specification:
Unit keys
Bluetooth 5.2
On 31 December 2019, the Bluetooth SIG published the Bluetooth Core Specification Version 5.2. The new specification adds new features:
Enhanced Attribute Protocol (EATT), an improved version of the Attribute Protocol (ATT)
LE Power Control
LE Isochronous Channels
LE Audio that is built on top of the new 5.2 features. BT LE Audio was announced in January 2020 at CES by the Bluetooth SIG. Compared to regular Bluetooth Audio, Bluetooth Low Energy Audio makes lower battery consumption possible and creates a standardized way of transmitting audio over BT LE. Bluetooth LE Audio also allows one-to-many and many-to-one transmission, allowing multiple receivers from one source or one receiver for multiple sources, known as Auracast. It uses a new LC3 codec. BLE Audio also adds support for hearing aids. On 12 July 2022, the Bluetooth SIG announced the completion of Bluetooth LE Audio. The standard has a lower minimum latency claim of 20–30 ms versus Bluetooth Classic audio at 100–200 ms. At IFA in August 2023, Samsung announced support for Auracast through a software update for their Galaxy Buds2 Pro and two of their TVs. In October, users started receiving updates for the earbuds.
Bluetooth 5.3
The Bluetooth SIG published the Bluetooth Core Specification Version 5.3 on 13 July 2021. The feature enhancements of Bluetooth 5.3 are:
Connection Subrating
Periodic Advertisement Interval
Channel Classification Enhancement
Encryption key size control enhancements
The following features were removed in this version of the specification:
Alternate MAC and PHY (AMP) Extension
Bluetooth 5.4
The Bluetooth SIG released the Bluetooth Core Specification Version 5.4 on 7 February 2023. This new version adds the following features:
Periodic Advertising with Responses (PAwR)
Encrypted Advertising Data
LE Security Levels Characteristic
Advertising Coding Selection
Technical information
Architecture
Software
To extend the compatibility of Bluetooth devices, devices that adhere to the standard use an interface called HCI (Host Controller Interface) between the host and the controller.
High-level protocols such as SDP (the protocol used to find other Bluetooth devices within communication range, and also responsible for detecting the function of devices in range), RFCOMM (the protocol used to emulate serial port connections) and TCS (the telephony control protocol) interact with the baseband controller through the L2CAP (Logical Link Control and Adaptation Protocol). The L2CAP protocol is responsible for the segmentation and reassembly of packets.
Hardware
The hardware that makes up a Bluetooth device logically consists of two parts, which may or may not be physically separate: a radio device, responsible for modulating and transmitting the signal, and a digital controller. The digital controller is typically a CPU, one of whose functions is to run a Link Controller, and which interfaces with the host device; some functions may, however, be delegated to hardware. The Link Controller is responsible for the processing of the baseband and the management of ARQ and physical layer FEC protocols. In addition, it handles the transfer functions (both asynchronous and synchronous), audio coding (e.g. SBC (codec)) and data encryption. The CPU of the device is responsible for attending to the instructions related to Bluetooth from the host device, in order to simplify its operation. To do this, the CPU runs software called the Link Manager, which has the function of communicating with other devices through the LMP protocol.
A Bluetooth device is a short-range wireless device. Bluetooth devices are fabricated on RF CMOS integrated circuit (RF circuit) chips.
Bluetooth protocol stack
Bluetooth is defined as a layered protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols. Mandatory protocols for all Bluetooth stacks are LMP, L2CAP and SDP. In addition, devices that communicate with Bluetooth almost universally use the HCI and RFCOMM protocols.
Link Manager
The Link Manager (LM) is the system that manages establishing the connection between devices. It is responsible for the establishment, authentication and configuration of the link. The Link Manager locates other managers and communicates with them via the management protocol of the LMP link. To perform its function as a service provider, the LM uses the services included in the Link Controller (LC).
The Link Manager Protocol basically consists of several PDUs (Protocol Data Units) that are sent from one device to another. The following is a list of supported services:
Transmission and reception of data
Name request
Request of the link addresses
Establishment of the connection
Authentication
Negotiation of link mode and connection establishment
Host Controller Interface
The Host Controller Interface provides a command interface between the controller and the host.
Logical Link Control and Adaptation Protocol
The Logical Link Control and Adaptation Protocol (L2CAP) is used to multiplex multiple logical connections between two devices using different higher level protocols.
Provides segmentation and reassembly of on-air packets.
In Basic mode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the default MTU, and 48 bytes as the minimum mandatory supported MTU.
In Retransmission and Flow Control modes, L2CAP can be configured either for isochronous data or reliable data per channel by performing retransmissions and CRC checks.
Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification. These modes effectively deprecate original Retransmission and Flow Control modes:
Enhanced Retransmission Mode (ERTM) This mode is an improved version of the original retransmission mode. This mode provides a reliable L2CAP channel.
Streaming Mode (SM) This is a very simple mode, with no retransmission or flow control. This mode provides an unreliable L2CAP channel.
Reliability in any of these modes is optionally and/or additionally guaranteed by the lower layer Bluetooth BR/EDR air interface by configuring the number of retransmissions and flush timeout (time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer.
Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links.
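A minimal sketch of the segmentation behaviour described above, splitting an upper-layer payload into fragments no larger than the negotiated MTU (the MTU is the default mentioned above; the payload size is made up):
# Sketch of the L2CAP segmentation idea described above: an upper-layer
# payload is split into fragments no larger than the negotiated MTU.
DEFAULT_MTU = 672   # bytes, the default MTU noted above
def segment(payload: bytes, mtu: int = DEFAULT_MTU):
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]
sdu = bytes(2000)                    # a hypothetical 2000-byte upper-layer SDU
fragments = segment(sdu)
print([len(f) for f in fragments])   # [672, 672, 656]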
Service Discovery Protocol
The Service Discovery Protocol (SDP) allows a device to discover services offered by other devices, and their associated parameters. For example, when you use a mobile phone with a Bluetooth headset, the phone uses SDP to determine which Bluetooth profiles the headset can use (Headset Profile, Hands Free Profile (HFP), Advanced Audio Distribution Profile (A2DP) etc.) and the protocol multiplexer settings needed for the phone to connect to the headset using each of them. Each service is identified by a Universally Unique Identifier (UUID), with official services (Bluetooth profiles) assigned a short form UUID (16 bits rather than the full 128).
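The short-form convention can be illustrated with a small sketch that expands an assigned 16-bit value into the full 128-bit UUID using the Bluetooth Base UUID; the example value is the commonly assigned Serial Port service identifier, used here purely for illustration.
import uuid
# Sketch of the short-form UUID convention described above: an assigned
# 16-bit value is combined with the Bluetooth Base UUID
# (00000000-0000-1000-8000-00805F9B34FB) to form the full 128-bit UUID.
BASE_UUID = uuid.UUID("00000000-0000-1000-8000-00805F9B34FB")
def expand_uuid16(short: int) -> uuid.UUID:
    return uuid.UUID(int=(short << 96) | BASE_UUID.int)
# Example: 0x1101 is the assigned 16-bit UUID for the Serial Port service.
print(expand_uuid16(0x1101))   # 00001101-0000-1000-8000-00805f9b34fb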
Radio Frequency Communications
Radio Frequency Communications (RFCOMM) is a cable replacement protocol used for generating a virtual serial data stream. RFCOMM provides for binary data transport and emulates EIA-232 (formerly RS-232) control signals over the Bluetooth baseband layer, i.e., it is a serial port emulation.
RFCOMM provides a simple, reliable, data stream to the user, similar to TCP. It is used directly by many telephony related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth.
Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM.
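As an illustration of how applications reach RFCOMM through an operating-system API, the sketch below uses the Bluetooth socket support available in CPython on Linux; the remote address and channel are placeholders, and a real application would normally discover the channel via SDP.
import socket
# Minimal RFCOMM client sketch (Linux CPython exposes AF_BLUETOOTH and
# BTPROTO_RFCOMM; other platforms may not). The address and channel below
# are placeholders, not real values.
PEER_ADDR = "00:11:22:33:44:55"    # hypothetical remote device address
CHANNEL = 1                        # hypothetical RFCOMM server channel
with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM,
                   socket.BTPROTO_RFCOMM) as s:
    s.connect((PEER_ADDR, CHANNEL))
    s.sendall(b"AT\r")             # e.g., an AT command over the serial link
    print(s.recv(1024))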
Bluetooth Network Encapsulation Protocol
The Bluetooth Network Encapsulation Protocol (BNEP) is used for transferring another protocol stack's data via an L2CAP channel.
Its main purpose is the transmission of IP packets in the Personal Area Networking Profile.
BNEP performs a similar function to SNAP in Wireless LAN.
Audio/Video Control Transport Protocol
The Audio/Video Control Transport Protocol (AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player.
Audio/Video Distribution Transport Protocol
The Audio/Video Distribution Transport Protocol (AVDTP) is used by the Advanced Audio Distribution Profile (A2DP) to stream music to stereo headsets over an L2CAP channel; it is also intended for use by the Video Distribution Profile in Bluetooth transmissions.
Telephony Control Protocol
The Telephony Control Protocol – Binary (TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices."
TCS-BIN is only used by the cordless telephony profile, which failed to attract implementers. As such it is only of historical interest.
Adopted protocols
Adopted protocols are defined by other standards-making organizations and incorporated into Bluetooth's protocol stack, allowing Bluetooth to code protocols only when necessary. The adopted protocols include:
Point-to-Point Protocol (PPP) Internet standard protocol for transporting IP datagrams over a point-to-point link.
TCP/IP/UDP Foundation Protocols for TCP/IP protocol suite
Object Exchange Protocol (OBEX) Session-layer protocol for the exchange of objects, providing a model for object and operation representation
Wireless Application Environment/Wireless Application Protocol (WAE/WAP) WAE specifies an application framework for wireless devices and WAP is an open standard to provide mobile users access to telephony and information services.
Baseband error correction
Depending on packet type, individual packets may be protected by error correction, either 1/3 rate forward error correction (FEC) or 2/3 rate. In addition, packets with CRC will be retransmitted until acknowledged by automatic repeat request (ARQ).
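The 1/3-rate code is a simple repetition code, which makes it easy to sketch: each bit is sent three times and decoded by majority vote (the 2/3-rate code, a shortened Hamming code, is not shown here).
# Sketch of 1/3-rate FEC: each bit is transmitted three times and decoded
# by majority vote.
def fec13_encode(bits):
    return [b for b in bits for _ in range(3)]
def fec13_decode(coded):
    triples = [coded[i:i + 3] for i in range(0, len(coded), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triples]
header = [1, 0, 1, 1]
coded = fec13_encode(header)
coded[4] ^= 1                        # flip one received bit to simulate an error
assert fec13_decode(coded) == header # the single bit error is corrected
print(coded)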
Setting up connections
Any Bluetooth device in discoverable mode transmits the following information on demand:
Device name
Device class
List of services
Technical information (for example: device features, manufacturer, Bluetooth specification used, clock offset)
Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries. However, if a device trying to connect already knows the address of the target device, the target always responds to direct connection requests and transmits the information shown in the list above if requested. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device.
Every device has a unique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices.
Most cellular phones have the Bluetooth name set to the manufacturer and model of the phone by default. Most cellular phones and laptops show only the Bluetooth names and special programs are required to get additional information about remote devices. This can be confusing as, for example, there could be several cellular phones in range named T610 (see Bluejacking).
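The 48-bit address mentioned above is conventionally written as six colon-separated hexadecimal octets; a small formatting sketch follows (the example value is made up):
# Sketch: a 48-bit Bluetooth device address rendered in the familiar
# colon-separated hexadecimal form. The example value is hypothetical.
def format_bdaddr(addr48: int) -> str:
    return ":".join(f"{(addr48 >> shift) & 0xFF:02X}"
                    for shift in range(40, -8, -8))
print(format_bdaddr(0x001122334455))   # 00:11:22:33:44:55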
Pairing and bonding
Motivation
Many services offered over Bluetooth can expose private data or let a connecting party control the Bluetooth device. Security reasons make it necessary to recognize specific devices, and thus enable control over which devices can connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection without user intervention (for example, as soon as in range).
To resolve this conflict, Bluetooth uses a process called bonding, and a bond is generated through a process called pairing. The pairing process is triggered either by a specific request from a user to generate a bond (for example, the user explicitly requests to "Add a Bluetooth device"), or it is triggered automatically when connecting to a service where (for the first time) the identity of a device is required for security purposes. These two cases are referred to as dedicated bonding and general bonding respectively.
Pairing often involves some level of user interaction. This user interaction confirms the identity of the devices. When pairing completes, a bond forms between the two devices, enabling those two devices to connect in the future without repeating the pairing process to confirm device identities. When desired, the user can remove the bonding relationship.
Implementation
During pairing, the two devices establish a relationship by creating a shared secret known as a link key. If both devices store the same link key, they are said to be paired or bonded. A device that wants to communicate only with a bonded device can cryptographically authenticate the identity of the other device, ensuring it is the same device it previously paired with. Once a link key is generated, an authenticated ACL link between the devices may be encrypted to protect exchanged data against eavesdropping. Users can delete link keys from either device, which removes the bond between the devices—so it is possible for one device to have a stored link key for a device it is no longer paired with.
Bluetooth services generally require either encryption or authentication and as such require pairing before they let a remote device connect. Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases.
Pairing mechanisms
Pairing mechanisms changed significantly with the introduction of Secure Simple Pairing in Bluetooth v2.1. The following summarizes the pairing mechanisms:
Legacy pairing: This is the only method available in Bluetooth v2.0 and before. Each device must enter a PIN code; pairing is only successful if both devices enter the same PIN code. Any UTF-8 string of up to 16 bytes may be used as a PIN code; however, not all devices may be capable of entering all possible PIN codes.
Limited input devices: The obvious example of this class of device is a Bluetooth hands-free headset, which generally has few inputs. These devices usually have a fixed PIN, for example "0000" or "1234", that is hard-coded into the device.
Numeric input devices: Mobile phones are classic examples of these devices. They allow a user to enter a numeric value up to 16 digits in length.
Alpha-numeric input devices: PCs and smartphones are examples of these devices. They allow a user to enter full UTF-8 text as a PIN code. If pairing with a less capable device the user must be aware of the input limitations on the other device; there is no mechanism available for a capable device to determine how it should limit the available input a user may use.
Secure Simple Pairing (SSP): This is required by Bluetooth v2.1, although a Bluetooth v2.1 device may only use legacy pairing to interoperate with a v2.0 or earlier device. Secure Simple Pairing uses a form of public-key cryptography, and some types can help protect against man in the middle, or MITM attacks. SSP has the following authentication mechanisms:
Just works: As the name implies, this method just works, with no user interaction. However, a device may prompt the user to confirm the pairing process. This method is typically used by headsets with minimal IO capabilities, and is more secure than the fixed PIN mechanism this limited set of devices uses for legacy pairing. This method provides no man-in-the-middle (MITM) protection.
Numeric comparison: If both devices have a display, and at least one can accept a binary yes/no user input, they may use Numeric Comparison. This method displays a 6-digit numeric code on each device. The user should compare the numbers to ensure they are identical. If the comparison succeeds, the user(s) should confirm pairing on the device(s) that can accept an input. This method provides MITM protection, assuming the user confirms on both devices and actually performs the comparison properly; a schematic sketch of the idea follows this list.
Passkey Entry: This method may be used between a device with a display and a device with numeric keypad entry (such as a keyboard), or two devices with numeric keypad entry. In the first case, the display presents a 6-digit numeric code to the user, who then enters the code on the keypad. In the second case, the user of each device enters the same 6-digit number. Both of these cases provide MITM protection.
Out of band (OOB): This method uses an external means of communication, such as near-field communication (NFC) to exchange some information used in the pairing process. Pairing is completed using the Bluetooth radio, but requires information from the OOB mechanism. This provides only the level of MITM protection that is present in the OOB mechanism.
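The following schematic sketch illustrates the numeric-comparison idea: both sides derive the same 6-digit value from public contributions exchanged during pairing, and the user checks that the two displays match. It mirrors the general shape of the computation but is not the normative SSP g() function, and the inputs are placeholders rather than real key material.
import hashlib
# Schematic illustration of numeric comparison: derive a 6-digit value from
# both devices' public-key contributions and nonces. NOT the normative SSP
# computation; inputs below are placeholders.
def confirmation_value(pk_a: bytes, pk_b: bytes, nonce_a: bytes, nonce_b: bytes) -> int:
    digest = hashlib.sha256(pk_a + pk_b + nonce_a + nonce_b).digest()
    value32 = int.from_bytes(digest[-4:], "big")   # keep 32 bits
    return value32 % 1_000_000                     # display 6 decimal digits
a = confirmation_value(b"PKa", b"PKb", b"Na", b"Nb")
b = confirmation_value(b"PKa", b"PKb", b"Na", b"Nb")
print(f"{a:06d}", a == b)   # both devices display the same 6-digit code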
SSP is considered simple for the following reasons:
In most cases, it does not require a user to generate a passkey.
For use cases not requiring MITM protection, user interaction can be eliminated.
For numeric comparison, MITM protection can be achieved with a simple equality comparison by the user.
Using OOB with NFC enables pairing when devices simply get close, rather than requiring a lengthy discovery process.
Security concerns
Prior to Bluetooth v2.1, encryption is not required and can be turned off at any time. Moreover, the encryption key is only good for approximately 23.5 hours; using a single encryption key longer than this time allows simple XOR attacks to retrieve the encryption key.
Turning off encryption is required for several normal operations, so it is problematic to detect if encryption is disabled for a valid reason or a security attack.
Bluetooth v2.1 addresses this in the following ways:
Encryption is required for all non-SDP (Service Discovery Protocol) connections
A new Encryption Pause and Resume feature is used for all normal operations that require that encryption be disabled. This enables easy identification of normal operation from security attacks.
The encryption key must be refreshed before it expires.
Link keys may be stored on the device file system, not on the Bluetooth chip itself. Many Bluetooth chip manufacturers let link keys be stored on the device—however, if the device is removable, this means that the link key moves with the device.
Security
Overview
Bluetooth implements confidentiality, authentication and key derivation with custom algorithms based on the SAFER+ block cipher. Bluetooth key generation is generally based on a Bluetooth PIN, which must be entered into both devices. This procedure might be modified if one of the devices has a fixed PIN (e.g., for headsets or similar devices with a restricted user interface). During pairing, an initialization key or master key is generated, using the E22 algorithm.
The E0 stream cipher is used for encrypting packets, granting confidentiality, and is based on a shared cryptographic secret, namely a previously generated link key or master key. Those keys, used for subsequent encryption of data sent via the air interface, rely on the Bluetooth PIN, which has been entered into one or both devices.
An overview of Bluetooth vulnerabilities exploits was published in 2007 by Andreas Becker.
In September 2008, the National Institute of Standards and Technology (NIST) published a Guide to Bluetooth Security as a reference for organizations. It describes Bluetooth security capabilities and how to secure Bluetooth technologies effectively. While Bluetooth has its benefits, it is susceptible to denial-of-service attacks, eavesdropping, man-in-the-middle attacks, message modification, and resource misappropriation. Users and organizations must evaluate their acceptable level of risk and incorporate security into the lifecycle of Bluetooth devices. To help mitigate risks, included in the NIST document are security checklists with guidelines and recommendations for creating and maintaining secure Bluetooth piconets, headsets, and smart card readers.
Bluetooth v2.1 – finalized in 2007 with consumer devices first appearing in 2009 – makes significant changes to Bluetooth's security, including pairing. See the pairing mechanisms section for more about these changes.
The most common cyber attacks of Bluetooth devices
When a Bluetooth device transmits unwanted spam and phishing messages to another Bluetooth device, it is known as bluejacking. Bluesnarfing is a malicious hack that uses a Bluetooth connection to steal information from a victim's device. Bluesmacking is a denial-of-service (DoS) attack that attempts to overload a device and shut it down. Bluebugging is a type of attack in which a cybercriminal uses a hidden Bluetooth connection to acquire backdoor access to a device. Car whispering is a Bluetooth security flaw that affects Bluetooth-enabled car radios.
Cybersecurity compliance of Bluetooth devices
RED
The Radio Equipment Directive 2014/53/EU (RED) regulates radio equipment's electromagnetic compatibility, safety, health, and efficient use of the radio spectrum. Article 3(3) of the Directive adds further essential requirements for certain categories of radio equipment, including cybersecurity and support for common interfaces. Radio equipment marketed in the EU, including Bluetooth devices, must therefore meet the cybersecurity criteria set out in Article 3(3) of the RED.
Consumer IoT devices
ETSI EN 303 645 is designed to prepare consumer IoT devices to guard against the most prevalent cybersecurity risks and to prevent large-scale attacks on connected devices. It lays the groundwork for IoT certification. It specifies provisions in 13 cybersecurity categories, along with data protection provisions. In addition to device security requirements, it offers advice for managing security risks, including identification, assessment, deployment of controls, and continuing monitoring.
Bluejacking
Bluejacking is the sending of either a picture or a message from one user to an unsuspecting user through Bluetooth wireless technology. Common applications include short messages, e.g., "You've just been bluejacked!" Bluejacking does not involve the removal or alteration of any data from the device.
Some form of DoS is also possible, even in modern devices, by sending unsolicited pairing requests in rapid succession; this becomes disruptive because most systems display a full screen notification for every connection request, interrupting every other activity, especially on less powerful devices.
History of security concerns
2001–2004
In 2001, Jakobsson and Wetzel from Bell Laboratories discovered flaws in the Bluetooth pairing protocol and also pointed to vulnerabilities in the encryption scheme. In 2003, Ben and Adam Laurie from A.L. Digital Ltd. discovered that serious flaws in some poor implementations of Bluetooth security may lead to disclosure of personal data. In a subsequent experiment, Martin Herfurt from the trifinite.group conducted a field trial at the CeBIT fairgrounds, showing the importance of the problem to the world. A new attack called BlueBug was used for this experiment. In 2004, the first purported virus using Bluetooth to spread itself among mobile phones appeared on the Symbian OS.
The virus was first described by Kaspersky Lab and requires users to confirm the installation of unknown software before it can propagate. The virus was written as a proof-of-concept by a group of virus writers known as "29A" and sent to anti-virus groups. Thus, it should be regarded as a potential (but not real) security threat to Bluetooth technology or Symbian OS, since the virus has never spread outside of this system. In August 2004, a world-record-setting experiment (see also Bluetooth sniping) showed that the range of Class 2 Bluetooth radios could be extended to 1.78 km (1.08 mi) with directional antennas and signal amplifiers.
This poses a potential security threat because it enables attackers to access vulnerable Bluetooth devices from a distance beyond expectation. The attacker must also be able to receive information from the victim to set up a connection. No attack can be made against a Bluetooth device unless the attacker knows its Bluetooth address and which channels to transmit on, although these can be deduced within a few minutes if the device is in use.
2005
In January 2005, a mobile malware worm known as Lasco surfaced. The worm began targeting mobile phones using Symbian OS (Series 60 platform) using Bluetooth enabled devices to replicate itself and spread to other devices. The worm is self-installing and begins once the mobile user approves the transfer of the file (Velasco.sis) from another device. Once installed, the worm begins looking for other Bluetooth enabled devices to infect. Additionally, the worm infects other .SIS files on the device, allowing replication to another device through the use of removable media (Secure Digital, CompactFlash, etc.). The worm can render the mobile device unstable.
In April 2005, Cambridge University security researchers published results of their actual implementation of passive attacks against the PIN-based pairing between commercial Bluetooth devices. They confirmed that attacks are practicably fast, and the Bluetooth symmetric key establishment method is vulnerable. To rectify this vulnerability, they designed an implementation that showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as mobile phones.
In June 2005, Yaniv Shaked and Avishai Wool published a paper describing both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack allows a suitably equipped attacker to eavesdrop on communications and spoof them, provided the attacker was present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method can be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter the PIN during the attack when the device prompts them to. Also, this active attack probably requires custom hardware, since most commercially available Bluetooth devices are not capable of the timing necessary.
In August 2005, police in Cambridgeshire, England, issued warnings about thieves using Bluetooth enabled phones to track other devices left in cars. Police are advising users to ensure that any mobile networking connections are de-activated if laptops and other devices are left in this way.
2006
In April 2006, researchers from Secure Network and F-Secure published a report that warns of the large number of devices left in a visible state, and issued statistics on the spread of various Bluetooth services and the ease of spread of an eventual Bluetooth worm.
In October 2006, at the Luxembourgish Hack.lu Security Conference, Kevin Finistere and Thierry Zoller demonstrated and released a remote root shell via Bluetooth on Mac OS X v10.3.9 and v10.4. They also demonstrated the first Bluetooth PIN and link-key cracker, which is based on the research of Wool and Shaked.
2017
In April 2017, security researchers at Armis discovered multiple exploits in the Bluetooth software in various platforms, including Microsoft Windows, Linux, Apple iOS, and Google Android. These vulnerabilities are collectively called "BlueBorne". The exploits allow an attacker to connect to devices or systems without authentication and can give them "virtually full control over the device". Armis contacted Google, Microsoft, Apple, Samsung and Linux developers allowing them to patch their software before the coordinated announcement of the vulnerabilities on 12 September 2017.
2018
In July 2018, Lior Neumann and Eli Biham, researchers at the Technion – Israel Institute of Technology, identified a security vulnerability in the latest Bluetooth pairing procedures: Secure Simple Pairing and LE Secure Connections.
Also, in October 2018, Karim Lounis, a network security researcher at Queen's University, identified a security vulnerability, called CDV (Connection Dumping Vulnerability), on various Bluetooth devices that allows an attacker to tear down an existing Bluetooth connection and cause the deauthentication and disconnection of the involved devices. The researcher demonstrated the attack on various devices of different categories and from different manufacturers.
2019
In August 2019, security researchers at the Singapore University of Technology and Design, Helmholtz Center for Information Security, and University of Oxford discovered a vulnerability in the key negotiation, called KNOB (Key Negotiation Of Bluetooth), that allows an attacker to "brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)".
Google released an Android security patch on 5 August 2019, which removed this vulnerability.
Health concerns
Bluetooth uses the radio frequency spectrum in the 2.402 GHz to 2.480 GHz range, which is non-ionizing radiation, of similar bandwidth to that used by wireless LANs and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included by IARC in the possible carcinogen list. Maximum power output from a Bluetooth radio is 100 mW for class 1, 2.5 mW for class 2, and 1 mW for class 3 devices. Even the maximum power output of class 1 is lower than that of the lowest-powered mobile phones. UMTS and W-CDMA output 250 mW, GSM1800/1900 outputs 1000 mW, and GSM850/900 outputs 2000 mW.
Award programs
The Bluetooth Innovation World Cup, a marketing initiative of the Bluetooth Special Interest Group (SIG), was an international competition that encouraged the development of innovations for applications leveraging Bluetooth technology in sports, fitness and health care products. The competition aimed to stimulate new markets.
The Bluetooth Innovation World Cup morphed into the Bluetooth Breakthrough Awards in 2013. Bluetooth SIG subsequently launched the Imagine Blue Award in 2016 at Bluetooth World. The Bluetooth Breakthrough Awards program highlights the most innovative products and applications available today, prototypes coming soon, and student-led projects in the making.
See also
ANT+
Bluetooth stack – building blocks that make up the various implementations of the Bluetooth protocol
List of Bluetooth profiles – features used within the Bluetooth stack
Bluesniping
BlueSoleil – proprietary Bluetooth driver
Bluetooth Low Energy beacons (AltBeacon, iBeacon, Eddystone)
Bluetooth mesh networking
Continua Health Alliance
DASH7
Headset (audio)
Wi-Fi hotspot
Java APIs for Bluetooth
Key finder
Li-Fi
List of Bluetooth protocols
MyriaNed
Near-field communication
NearLink
RuBee – secure wireless protocol alternative
Tethering
Thread (network protocol)
Wi-Fi HaLow
Zigbee – low-power lightweight wireless protocol in the ISM band based on IEEE 802.15.4
Notes
References
External links
Specifications at Bluetooth SIG
Bluetooth
Mobile computers
Networking standards
Wireless communication systems
Telecommunications-related introductions in 1989
Swedish inventions
https://en.wikipedia.org/wiki/Boron
Boron is a chemical element with the symbol B and atomic number 5. In its crystalline form it is a brittle, dark, lustrous metalloid; in its amorphous form it is a brown powder. As the lightest element of the boron group it has three valence electrons for forming covalent bonds, resulting in many compounds such as boric acid, the mineral sodium borate, and the ultra-hard crystals of boron carbide and boron nitride.
Boron is synthesized entirely by cosmic ray spallation and supernovae and not by stellar nucleosynthesis, so it is a low-abundance element in the Solar System and in the Earth's crust. It constitutes about 0.001 percent by weight of Earth's crust. It is concentrated on Earth by the water-solubility of its more common naturally occurring compounds, the borate minerals. These are mined industrially as evaporites, such as borax and kernite. The largest known deposits are in Turkey, the largest producer of boron minerals.
Elemental boron is a metalloid that is found in small amounts in meteoroids but chemically uncombined boron is not otherwise found naturally on Earth. Industrially, the very pure element is produced with difficulty because of contamination by carbon or other elements that resist removal. Several allotropes exist: amorphous boron is a brown powder; crystalline boron is silvery to black, extremely hard (about 9.5 on the Mohs scale), and a poor electrical conductor at room temperature. The primary use of the element itself is as boron filaments with applications similar to carbon fibers in some high-strength materials.
Boron is primarily used in chemical compounds. About half of all production consumed globally is an additive in fiberglass for insulation and structural materials. The next leading use is in polymers and ceramics in high-strength, lightweight structural and heat-resistant materials. Borosilicate glass is prized for being stronger and more resistant to thermal shock than ordinary soda-lime glass. As sodium perborate, boron is used as a bleach. A small amount is used as a dopant in semiconductors, and as reagent intermediates in the synthesis of organic fine chemicals. A few boron-containing organic pharmaceuticals are used or are in study. Natural boron is composed of two stable isotopes, one of which (boron-10) has a number of uses as a neutron-capturing agent.
The intersection of boron with biology is very small. There is no consensus on whether boron is essential for mammalian life. Borates have low toxicity in mammals (similar to table salt) but are more toxic to arthropods and are occasionally used as insecticides. Boron-containing organic antibiotics are known. Although only traces are required, boron is an essential plant nutrient.
History
The word boron was coined from borax, the mineral from which it was isolated, by analogy with carbon, which boron resembles chemically.
Borax in its mineral form (then known as tincal) first saw use as a glaze, beginning in China circa 300 AD. Some crude borax traveled westward, and was apparently mentioned by the alchemist Jabir ibn Hayyan around 700 AD. Marco Polo brought some glazes back to Italy in the 13th century. Georgius Agricola, in around 1600, reported the use of borax as a flux in metallurgy. In 1777, boric acid was recognized in the hot springs (soffioni) near Florence, Italy, at which point it became known as sal sedativum, with ostensible medical benefits. The mineral was named sassolite, after Sasso Pisano in Italy. Sasso was the main source of European borax from 1827 to 1872, when American sources replaced it. Boron compounds were relatively rarely used until the late 1800s when Francis Marion Smith's Pacific Coast Borax Company first popularized and produced them in volume at low cost.
Boron was not recognized as an element until it was isolated by Sir Humphry Davy and by Joseph Louis Gay-Lussac and Louis Jacques Thénard. In 1808 Davy observed that electric current sent through a solution of borates produced a brown precipitate on one of the electrodes. In his subsequent experiments, he used potassium to reduce boric acid instead of electrolysis. He produced enough boron to confirm a new element and named it boracium. Gay-Lussac and Thénard used iron to reduce boric acid at high temperatures. By oxidizing boron with air, they showed that boric acid is its oxidation product. Jöns Jacob Berzelius identified it as an element in 1824. Pure boron was arguably first produced by the American chemist Ezekiel Weintraub in 1909.
Preparation of elemental boron in the laboratory
The earliest routes to elemental boron involved the reduction of boric oxide with metals such as magnesium or aluminium. However, the product is almost always contaminated with borides of those metals. Pure boron can be prepared by reducing volatile boron halides with hydrogen at high temperatures. Ultrapure boron for use in the semiconductor industry is produced by the decomposition of diborane at high temperatures and then further purified by the zone melting or Czochralski processes.
The production of boron compounds does not involve the formation of elemental boron, but exploits the convenient availability of borates.
Characteristics
Allotropes
Boron is similar to carbon in its capability to form stable covalently bonded molecular networks. Even nominally disordered (amorphous) boron contains regular boron icosahedra which are bonded randomly to each other without long-range order. Crystalline boron is a very hard, black material with a melting point above 2000 °C. It forms four major allotropes: α-rhombohedral and β-rhombohedral (α-R and β-R), γ-orthorhombic (γ) and β-tetragonal (β-T). All four phases are stable at ambient conditions, and β-rhombohedral is the most common and stable. An α-tetragonal phase also exists (α-T), but is very difficult to produce without significant contamination. Most of the phases are based on B12 icosahedra, but the γ phase can be described as a rocksalt-type arrangement of the icosahedra and B2 atomic pairs. It can be produced by compressing other boron phases to 12–20 GPa and heating to 1500–1800 °C; it remains stable once the temperature and pressure are released. The β-T phase is produced at similar pressures, but higher temperatures of 1800–2200 °C. The α-T and β-T phases might coexist at ambient conditions, with the β-T phase being the more stable. Compressing boron above 160 GPa produces a boron phase with an as yet unknown structure, and this phase is a superconductor at temperatures below 6–12 K. Borospherene (fullerene-like B40 molecules) and borophene (a proposed graphene-like structure) were described in 2014.
Chemistry of the element
Elemental boron is rare and poorly studied because the pure material is extremely difficult to prepare. Most studies of "boron" involve samples that contain small amounts of carbon. The chemical behavior of boron resembles that of silicon more than aluminium. Crystalline boron is chemically inert and resistant to attack by boiling hydrofluoric or hydrochloric acid. When finely divided, it is attacked slowly by hot concentrated hydrogen peroxide, hot concentrated nitric acid, hot sulfuric acid or a hot mixture of sulfuric and chromic acids.
The rate of oxidation of boron depends on the crystallinity, particle size, purity and temperature. Boron does not react with air at room temperature, but at higher temperatures it burns to form boron trioxide:
4 B + 3 O2 → 2 B2O3
Boron undergoes halogenation to give trihalides; for example,
2 B + 3 Br2 → 2 BBr3
The trichloride in practice is usually made from the oxide.
Atomic structure
Boron is the lightest element having an electron in a p-orbital in its ground state. Unlike most other p-elements, it rarely obeys the octet rule and usually places only six electrons (in three molecular orbitals) onto its valence shell. Boron is the prototype for the boron group (the IUPAC group 13), although the other members of this group are metals and more typical p-elements (only aluminium to some extent shares boron's aversion to the octet rule).
Boron also has much lower electronegativity than the later period 2 elements. For those elements, lithium salts of their anions are common (e.g., lithium fluoride, lithium hydroxide, lithium amide, and methyllithium), but lithium boryllides are extraordinarily rare. Strong bases do not deprotonate a borohydride R2BH to the boryl anion R2B−, instead forming the octet-complete adduct R2HB-base.
Chemical compounds
In the most familiar compounds, boron has the formal oxidation state III. These include oxides, sulfides, nitrides, and halides.
The trihalides adopt a planar trigonal structure. These compounds are Lewis acids in that they readily form adducts with electron-pair donors, which are called Lewis bases. For example, fluoride (F−) and boron trifluoride (BF3) combine to give the tetrafluoroborate anion, BF4−. Boron trifluoride is used in the petrochemical industry as a catalyst. The halides react with water to form boric acid.
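Written out in the same style as the other reactions in this article, the adduct formation just described is:
BF3 + F− → BF4−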
Boron is found in nature on Earth almost entirely as various oxides of B(III), often associated with other elements. More than one hundred borate minerals contain boron in oxidation state +3. These minerals resemble silicates in some respects, although boron is often found not only in tetrahedral coordination with oxygen, but also in a trigonal planar configuration. Unlike silicates, boron minerals never contain boron with a coordination number greater than four. A typical motif is exemplified by the tetraborate anions of the common mineral borax. The formal negative charge of the tetrahedral borate center is balanced by metal cations in the minerals, such as the sodium (Na+) in borax. The tourmaline group of borate-silicates is also a very important boron-bearing mineral group, and a number of borosilicates are also known to exist naturally.
Boranes
Boranes are chemical compounds of boron and hydrogen, with the generic formula of BxHy. These compounds do not occur in nature. Many of the boranes readily oxidise on contact with air, some violently. The parent member BH3 is called borane, but it is known only in the gaseous state, and dimerises to form diborane, B2H6. The larger boranes all consist of boron clusters that are polyhedral, some of which exist as isomers. For example, isomers of B20H26 are based on the fusion of two 10-atom clusters.
The most important boranes are diborane B2H6 and two of its pyrolysis products, pentaborane B5H9 and decaborane B10H14. A large number of anionic boron hydrides are known, e.g. [B12H12]2−.
The formal oxidation number in boranes is positive, and is based on the assumption that hydrogen is counted as −1 as in active metal hydrides. The mean oxidation number for the borons is then simply the ratio of hydrogen to boron in the molecule. For example, in diborane B2H6, the boron oxidation state is +3, but in decaborane B10H14, it is 7/5 or +1.4. In these compounds the oxidation state of boron is often not a whole number.
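As a worked illustration of this counting convention (each hydrogen taken as −1 and the molecule neutral overall), the mean boron oxidation state in a borane BxHy is simply y/x:

\[
\mathrm{B_2H_6}:\ \frac{6}{2} = +3, \qquad \mathrm{B_{10}H_{14}}:\ \frac{14}{10} = +1.4
\]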
Boron nitrides
The boron nitrides are notable for the variety of structures that they adopt. They exhibit structures analogous to various allotropes of carbon, including graphite, diamond, and nanotubes. In the diamond-like structure, called cubic boron nitride (tradename Borazon), boron atoms exist in the tetrahedral structure of carbon atoms in diamond, but one in every four B-N bonds can be viewed as a coordinate covalent bond, wherein two electrons are donated by the nitrogen atom, which acts as the Lewis base, to form a bond to the Lewis acidic boron(III) centre. Cubic boron nitride, among other applications, is used as an abrasive, as it has a hardness comparable with diamond (the two substances are able to produce scratches on each other). In the BN compound analogue of graphite, hexagonal boron nitride (h-BN), the positively charged boron and negatively charged nitrogen atoms in each plane lie adjacent to the oppositely charged atom in the next plane. Consequently, graphite and h-BN have very different properties, although both are lubricants, as these planes slip past each other easily. However, h-BN is a relatively poor electrical and thermal conductor in the planar directions.
Organoboron chemistry
A large number of organoboron compounds are known and many are useful in organic synthesis. Many are produced from hydroboration, which employs diborane, B2H6, a simple borane chemical, or carboboration. Organoboron(III) compounds are usually tetrahedral or trigonal planar, for example, tetraphenylborate, [B(C6H5)4]− vs. triphenylborane, B(C6H5)3. However, multiple boron atoms reacting with each other have a tendency to form novel dodecahedral (12-sided) and icosahedral (20-sided) structures composed completely of boron atoms, or with varying numbers of carbon heteroatoms.
Organoboron chemicals have been employed in uses as diverse as boron carbide (see below), a complex very hard ceramic composed of boron-carbon cluster anions and cations, to carboranes, carbon-boron cluster chemistry compounds that can be halogenated to form reactive structures including carborane acid, a superacid. As one example, carboranes form useful molecular moieties that add considerable amounts of boron to other biochemicals in order to synthesize boron-containing compounds for boron neutron capture therapy for cancer.
Compounds of B(I) and B(II)
As anticipated by its hydride clusters, boron forms a variety of stable compounds with formal oxidation state less than three. B2F4 and B4Cl4 are well characterized.
Binary metal-boron compounds, the metal borides, contain boron in negative oxidation states. Illustrative is magnesium diboride (MgB2). Each boron atom has a formal −1 charge and magnesium is assigned a formal charge of +2. In this material, the boron centers are trigonal planar with an extra double bond for each boron, forming sheets akin to the carbon in graphite. However, unlike hexagonal boron nitride, which lacks electrons in the plane of the covalent atoms, the delocalized electrons in magnesium diboride allow it to conduct electricity similar to isoelectronic graphite. In 2001, this material was found to be a high-temperature superconductor. It is a superconductor under active development. A project at CERN to make MgB2 cables has resulted in superconducting test cables able to carry 20,000 amperes for extremely high current distribution applications, such as the contemplated high luminosity version of the Large Hadron Collider.
Certain other metal borides find specialized applications as hard materials for cutting tools. Often the boron in borides has fractional oxidation states, such as −1/3 in calcium hexaboride (CaB6).
From the structural perspective, the most distinctive chemical compounds of boron are the hydrides. Included in this series are the cluster compounds dodecaborate ([B12H12]2−), decaborane (B10H14), and the carboranes such as C2B10H12. Characteristically such compounds contain boron with coordination numbers greater than four.
Isotopes
Boron has two naturally occurring and stable isotopes, 11B (80.1%) and 10B (19.9%). The mass difference results in a wide range of δ11B values, which are defined as the fractional difference between the 11B/10B ratio of a sample and that of a standard, traditionally expressed in parts per thousand; in natural waters δ11B ranges from −16 to +59. There are 13 known isotopes of boron; the shortest-lived isotope is 7B, which decays through proton emission and alpha decay with a half-life of 3.5×10−22 s. Isotopic fractionation of boron is controlled by the exchange reactions of the boron species B(OH)3 and [B(OH)4]−. Boron isotopes are also fractionated during mineral crystallization, during H2O phase changes in hydrothermal systems, and during hydrothermal alteration of rock. The latter effect results in preferential removal of the [10B(OH)4]− ion onto clays. It results in solutions enriched in 11B(OH)3 and therefore may be responsible for the large 11B enrichment in seawater relative to both oceanic crust and continental crust; this difference may act as an isotopic signature.
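The delta notation mentioned above is conventionally defined relative to a boric acid reference standard (commonly NIST SRM 951), with values quoted in parts per thousand:

\[
\delta^{11}\mathrm{B} = \left( \frac{\left({}^{11}\mathrm{B}/{}^{10}\mathrm{B}\right)_{\mathrm{sample}}}{\left({}^{11}\mathrm{B}/{}^{10}\mathrm{B}\right)_{\mathrm{standard}}} - 1 \right) \times 1000
\]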
The exotic 17B exhibits a nuclear halo, i.e. its radius is appreciably larger than that predicted by the liquid drop model.
The 10B isotope is useful for capturing thermal neutrons (see neutron cross section#Typical cross sections). The nuclear industry enriches natural boron to nearly pure 10B. The less-valuable by-product, depleted boron, is nearly pure 11B.
Commercial isotope enrichment
Because of its high neutron cross-section, boron-10 is often used to control fission in nuclear reactors as a neutron-capturing substance. Several industrial-scale enrichment processes have been developed; however, only the fractionated vacuum distillation of the dimethyl ether adduct of boron trifluoride (DME-BF3) and column chromatography of borates are being used.
Enriched boron (boron-10)
Enriched boron or 10B is used both in radiation shielding and as the primary nuclide in neutron capture therapy of cancer. In the latter ("boron neutron capture therapy" or BNCT), a compound containing 10B is incorporated into a pharmaceutical which is selectively taken up by a malignant tumor and tissues near it. The patient is then treated with a beam of low-energy neutrons at a relatively low neutron radiation dose. The neutrons, however, trigger energetic and short-range secondary alpha particle and lithium-7 heavy ion radiation that are products of the boron-neutron nuclear reaction, and this ion radiation additionally bombards the tumor, especially from inside the tumor cells.
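The capture reaction that underlies the therapy, with the products named above, can be written in the same style as the other reactions in this article (the total energy released, roughly 2–3 MeV, is shared between the two short-range ions):
10B + n (thermal) → 7Li + 4He (α)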
In nuclear reactors, 10B is used for reactivity control and in emergency shutdown systems. It can serve either function in the form of borosilicate control rods or as boric acid. In pressurized water reactors, 10B boric acid is added to the reactor coolant when the plant is shut down for refueling. It is then slowly filtered out over many months as fissile material is used up and the fuel becomes less reactive.
In future crewed interplanetary spacecraft, 10B has a theoretical role as structural material (as boron fibers or BN nanotube material) which would also serve a special role in the radiation shield. One of the difficulties in dealing with cosmic rays, which are mostly high energy protons, is that some secondary radiation from interaction of cosmic rays and spacecraft materials is high energy spallation neutrons. Such neutrons can be moderated by materials high in light elements, such as polyethylene, but the moderated neutrons continue to be a radiation hazard unless actively absorbed in the shielding. Among light elements that absorb thermal neutrons, 6Li and 10B appear as potential spacecraft structural materials which serve both for mechanical reinforcement and radiation protection.
Depleted boron (boron-11)
Radiation-hardened semiconductors
Cosmic radiation will produce secondary neutrons if it hits spacecraft structures. Those neutrons will be captured in 10B, if it is present in the spacecraft's semiconductors, producing a gamma ray, an alpha particle, and a lithium ion. Those resultant decay products may then irradiate nearby semiconductor "chip" structures, causing data loss (bit flipping, or single event upset). In radiation-hardened semiconductor designs, one countermeasure is to use depleted boron, which is greatly enriched in 11B and contains almost no 10B. This is useful because 11B is largely immune to radiation damage. Depleted boron is a byproduct of the nuclear industry (see above).
Proton-boron fusion
11B is also a candidate as a fuel for aneutronic fusion. When struck by a proton with energy of about 500 keV, it produces three alpha particles and 8.7 MeV of energy. Most other fusion reactions involving hydrogen and helium produce penetrating neutron radiation, which weakens reactor structures and induces long-term radioactivity, thereby endangering operating personnel. The alpha particles from 11B fusion can be turned directly into electric power, and all radiation stops as soon as the reactor is turned off.
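Using the figures quoted above, the reaction producing three alpha particles can be summarized as:
p + 11B → 3 4He + 8.7 MeV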
NMR spectroscopy
Both 10B and 11B possess nuclear spin. The nuclear spin of 10B is 3 and that of 11B is 3/2. These isotopes are, therefore, of use in nuclear magnetic resonance spectroscopy, and spectrometers specially adapted to detecting the boron-11 nucleus are available commercially. The 10B and 11B nuclei also cause splitting in the resonances of attached nuclei.
Occurrence
Boron is rare in the Universe and solar system due to trace formation in the Big Bang and in stars. It is formed in minor amounts in cosmic ray spallation nucleosynthesis and may be found uncombined in cosmic dust and meteoroid materials.
In the high-oxygen environment of Earth, boron is always found fully oxidized to borate. Boron does not appear on Earth in elemental form. Extremely small traces of elemental boron were detected in lunar regolith.
Although boron is a relatively rare element in the Earth's crust, representing only 0.001% of the crust mass, it can be highly concentrated by the action of water, in which many borates are soluble.
It is found naturally combined in compounds such as borax and boric acid (sometimes found in volcanic spring waters). About a hundred borate minerals are known.
On 5 September 2017, scientists reported that the Curiosity rover detected boron, an essential ingredient for life on Earth, on the planet Mars. Such a finding, along with previous discoveries that water may have been present on ancient Mars, further supports the possible early habitability of Gale Crater on Mars.
Production
Economically important sources of boron are the minerals colemanite, rasorite (kernite), ulexite and tincal. Together these constitute 90% of mined boron-containing ore. The largest global borax deposits known, many still untapped, are in Central and Western Turkey, including the provinces of Eskişehir, Kütahya and Balıkesir. Global proven boron mineral mining reserves exceed one billion metric tonnes, against a yearly production of about four million tonnes.
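Taken at face value, these figures correspond to a large static reserve-to-production ratio (a rough indicator only, since it ignores demand growth and future discoveries):

\[
\frac{1\times 10^{9}\ \mathrm{t}}{4\times 10^{6}\ \mathrm{t/yr}} \approx 250\ \mathrm{years}
\]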
Turkey and the United States are the largest producers of boron products. Turkey produces about half of the global yearly demand through Eti Mine Works, a Turkish state-owned mining and chemicals company focusing on boron products. It holds a government monopoly on the mining of borate minerals in Turkey, which possesses 72% of the world's known deposits. In 2012, it held a 47% share of production of global borate minerals, ahead of its main competitor, Rio Tinto Group.
Almost a quarter (23%) of global boron production comes from the single Rio Tinto Borax Mine (also known as the U.S. Borax Boron Mine) near Boron, California.
Market trend
The average cost of crystalline elemental boron is US$5/g. Elemental boron is chiefly used in making boron fibers, where it is deposited by chemical vapor deposition on a tungsten core (see below). Boron fibers are used in lightweight composite applications, such as high strength tapes. This use is a very small fraction of total boron use. Boron is introduced into semiconductors as boron compounds, by ion implantation.
Estimated global consumption of boron (almost entirely as boron compounds) was about 4 million tonnes of B2O3 in 2012. As compounds such as borax and kernite its cost was US$377/tonne in 2019. Boron mining and refining capacities are considered to be adequate to meet expected levels of growth through the next decade.
The form in which boron is consumed has changed in recent years. The use of ores like colemanite has declined following concerns over arsenic content. Consumers have moved toward the use of refined borates and boric acid that have a lower pollutant content.
Increasing demand for boric acid has led a number of producers to invest in additional capacity. Turkey's state-owned Eti Mine Works opened a new boric acid plant with the production capacity of 100,000 tonnes per year at Emet in 2003. Rio Tinto Group increased the capacity of its boron plant from 260,000 tonnes per year in 2003 to 310,000 tonnes per year by May 2005, with plans to grow this to 366,000 tonnes per year in 2006. Chinese boron producers have been unable to meet rapidly growing demand for high quality borates. This has led to imports of sodium tetraborate (borax) growing by a hundredfold between 2000 and 2005 and boric acid imports increasing by 28% per year over the same period.
The rise in global demand has been driven by high growth rates in glass fiber, fiberglass and borosilicate glassware production. A rapid increase in the manufacture of reinforcement-grade boron-containing fiberglass in Asia has offset the development of boron-free reinforcement-grade fiberglass in Europe and the US. The recent rises in energy prices may lead to greater use of insulation-grade fiberglass, with consequent growth in boron consumption. Roskill Consulting Group forecasts that world demand for boron will grow by 3.4% per year to reach 21 million tonnes by 2010. The highest growth in demand is expected to be in Asia, where demand could rise by an average 5.7% per year.
Applications
Nearly all boron ore extracted from the Earth is destined for refinement into boric acid and sodium tetraborate pentahydrate. In the United States, 70% of the boron is used for the production of glass and ceramics.
The major global industrial-scale use of boron compounds (about 46% of end-use) is in production of glass fiber for boron-containing insulating and structural fiberglasses, especially in Asia. Boron is added to the glass as borax pentahydrate or boron oxide, to influence the strength or fluxing qualities of the glass fibers. Another 10% of global boron production is for borosilicate glass as used in high strength glassware. About 15% of global boron is used in boron ceramics, including super-hard materials discussed below. Agriculture consumes 11% of global boron production, and bleaches and detergents about 6%.
Elemental boron fiber
Boron fibers (boron filaments) are high-strength, lightweight materials that are used chiefly for advanced aerospace structures as a component of composite materials, as well as limited production consumer and sporting goods such as golf clubs and fishing rods. The fibers can be produced by chemical vapor deposition of boron on a tungsten filament.
Boron fibers and sub-millimeter sized crystalline boron springs are produced by laser-assisted chemical vapor deposition. Translation of the focused laser beam allows production of even complex helical structures. Such structures show good mechanical properties (elastic modulus 450 GPa, fracture strain 3.7%, fracture stress 17 GPa) and can be applied as reinforcement of ceramics or in micromechanical systems.
Boronated fiberglass
Fiberglass is a fiber reinforced polymer made of plastic reinforced by glass fibers, commonly woven into a mat. The glass fibers used in the material are made of various types of glass depending upon the fiberglass use. These glasses all contain silica or silicate, with varying amounts of oxides of calcium, magnesium, and sometimes boron. The boron is present as borosilicate, borax, or boron oxide, and is added to increase the strength of the glass, or as a fluxing agent to decrease the melting temperature of silica, which is too high to be easily worked in its pure form to make glass fibers.
The highly boronated glasses used in fiberglass include E-glass (named for "Electrical" use, but now the most common fiberglass for general use). E-glass is an alumino-borosilicate glass with less than 1% w/w alkali oxides, mainly used for glass-reinforced plastics. Other common high-boron glasses include C-glass, an alkali-lime glass with high boron oxide content, used for glass staple fibers and insulation, and D-glass, a borosilicate glass, named for its low dielectric constant.
Not all fiberglasses contain boron, but on a global scale, most of the fiberglass used does contain it. Because of the ubiquitous use of fiberglass in construction and insulation, boron-containing fiberglasses consume half the global production of boron, and are the single largest commercial boron market.
Borosilicate glass
Borosilicate glass, which is typically 12–15% B2O3, 80% SiO2, and 2% Al2O3, has a low coefficient of thermal expansion, giving it good resistance to thermal shock. Schott AG's "Duran" and Corning's trademarked Pyrex are two major brand names for this glass, used both in laboratory glassware and in consumer cookware and bakeware, chiefly for this resistance.
Boron carbide ceramic
Several boron compounds are known for their extreme hardness and toughness.
Boron carbide is a ceramic material which is obtained by decomposing B2O3 with carbon in an electric furnace:
2 B2O3 + 7 C → B4C + 6 CO
Boron carbide's structure is only approximately B4C, and it shows a clear depletion of carbon from this suggested stoichiometric ratio. This is due to its very complex structure. The substance can be seen with empirical formula B12C3 (i.e., with B12 icosahedra being a motif), but with less carbon, as the suggested C3 units are replaced with C-B-C chains, and some smaller (B6) octahedra are present as well (see the boron carbide article for structural analysis). The repeating polymer plus semi-crystalline structure of boron carbide gives it great structural strength per weight. It is used in tank armor, bulletproof vests, and numerous other structural applications.
Boron carbide's ability to absorb neutrons without forming long-lived radionuclides (especially when doped with extra boron-10) makes the material attractive as an absorbent for neutron radiation arising in nuclear power plants. Nuclear applications of boron carbide include shielding, control rods and shut-down pellets. Within control rods, boron carbide is often powdered, to increase its surface area.
High-hardness and abrasive compounds
Boron carbide and cubic boron nitride powders are widely used as abrasives. Boron nitride is a material isoelectronic to carbon. Similar to carbon, it has both hexagonal (soft graphite-like h-BN) and cubic (hard, diamond-like c-BN) forms. h-BN is used as a high-temperature component and lubricant. c-BN, also known under the commercial name borazon, is a superior abrasive. Its hardness is only slightly lower than that of diamond, but its chemical stability is superior. Heterodiamond (also called BCN) is another diamond-like boron compound.
Metallurgy
Boron is added to boron steels at the level of a few parts per million to increase hardenability. Higher percentages are added to steels used in the nuclear industry due to boron's neutron absorption ability.
Boron can also increase the surface hardness of steels and alloys through boriding. Additionally, metal borides are used for coating tools through chemical vapor deposition or physical vapor deposition. Implantation of boron ions into metals and alloys, through ion implantation or ion beam deposition, results in a spectacular increase in surface resistance and microhardness. Laser alloying has also been successfully used for the same purpose. These borides are an alternative to diamond-coated tools, and their (treated) surfaces have similar properties to those of the bulk boride.
For example, rhenium diboride can be produced at ambient pressures, but is rather expensive because of rhenium. The hardness of ReB2 exhibits considerable anisotropy because of its hexagonal layered structure. Its value is comparable to that of tungsten carbide, silicon carbide, titanium diboride or zirconium diboride.
Similarly, AlMgB14 + TiB2 composites possess high hardness and wear resistance and are used in either bulk form or as coatings for components exposed to high temperatures and wear loads.
Detergent formulations and bleaching agents
Borax is used in various household laundry and cleaning products, including the "20 Mule Team Borax" laundry booster and "Boraxo" powdered hand soap. It is also present in some tooth bleaching formulas.
Sodium perborate serves as a source of active oxygen in many detergents, laundry detergents, cleaning products, and laundry bleaches. However, despite its name, "Borateem" laundry bleach no longer contains any boron compounds, using sodium percarbonate instead as a bleaching agent.
Insecticides
Boric acid is used as an insecticide, notably against ants, fleas, and cockroaches.
Semiconductors
Boron is a useful dopant for such semiconductors as silicon, germanium, and silicon carbide. Having one fewer valence electron than the host atom, it donates a hole, resulting in p-type conductivity. The traditional method of introducing boron into semiconductors is via atomic diffusion at high temperatures. This process uses either solid (B2O3), liquid (BBr3), or gaseous boron sources (B2H6 or BF3). However, after the 1970s, it was mostly replaced by ion implantation, which relies mostly on BF3 as a boron source. Boron trichloride gas is also an important chemical in the semiconductor industry; however, it is used not for doping but rather for plasma etching of metals and their oxides. Triethylborane is also injected into vapor deposition reactors as a boron source. Examples are the plasma deposition of boron-containing hard carbon films, silicon nitride–boron nitride films, and the doping of diamond film with boron.
Magnets
Boron is a component of neodymium magnets (Nd2Fe14B), which are among the strongest types of permanent magnets. These magnets are found in a variety of electromechanical and electronic devices, such as magnetic resonance imaging (MRI) medical imaging systems, and in compact and relatively small motors and actuators. As examples, computer HDDs (hard disk drives), CD (compact disc) and DVD (digital versatile disc) players rely on neodymium magnet motors to deliver intense rotary power in a remarkably compact package. In mobile phones, 'Neo' magnets provide the magnetic field which allows tiny speakers to deliver appreciable audio power.
Shielding and neutron absorber in nuclear reactors
Boron shielding is used as a control for nuclear reactors, taking advantage of its high cross-section for neutron capture.
In pressurized water reactors a variable concentration of boric acid in the cooling water is used as a neutron poison to compensate for the variable reactivity of the fuel. When new fuel rods are inserted, the concentration of boric acid is maximal, and it is reduced over the operating lifetime.
Other nonmedical uses
Because of its distinctive green flame, amorphous boron is used in pyrotechnic flares.
In the 1950s, there were several studies of the use of boranes as energy-increasing "Zip fuel" additives for jet fuel.
Starch- and casein-based adhesives contain sodium tetraborate decahydrate (Na2B4O7·10 H2O).
Some anti-corrosion systems contain borax.
Sodium borates are used as a flux for soldering silver and gold and with ammonium chloride for welding ferrous metals. They are also fire retarding additives to plastics and rubber articles.
Boric acid (also known as orthoboric acid) H3BO3 is used in the production of textile fiberglass and flat panel displays and in many PVAc- and PVOH-based adhesives.
Triethylborane is a substance which ignites the JP-7 fuel of the Pratt & Whitney J58 turbojet/ramjet engines powering the Lockheed SR-71 Blackbird. It was also used to ignite the F-1 engines of the Saturn V rocket used by NASA's Apollo and Skylab programs from 1967 until 1973. Today SpaceX uses it to ignite the engines of its Falcon 9 rockets. Triethylborane is suitable for this because of its pyrophoric properties, especially the fact that it burns with a very high temperature. Triethylborane is also an industrial initiator in radical reactions, where it is effective even at low temperatures.
Borates are used as environmentally benign wood preservatives.
Pharmaceutical and biological applications
Boron plays a role in pharmaceutical and biological applications as it is found in various bacteria-produced antibiotics, such as boromycins, aplasmomycins, borophycins, and tartrolons. These antibiotics have shown inhibitory effects on certain bacteria, fungi, and protozoa growth. Boron is also being studied for its potential medicinal applications, including its incorporation into biologically active molecules for therapies like boron neutron capture therapy for brain tumors. Some boron-containing biomolecules may act as signaling molecules interacting with cell surfaces, suggesting a role in cellular communication.
Boric acid has antiseptic, antifungal, and antiviral properties and, for these reasons, is applied as a water clarifier in swimming pool water treatment. Mild solutions of boric acid have been used as eye antiseptics.
Bortezomib (marketed as Velcade and Cytomib) contains boron as an active element. It belongs to a new class of drugs, the proteasome inhibitors, and is used for treating myeloma and one form of lymphoma (it is currently in experimental trials against other types of lymphoma). The boron atom in bortezomib binds the catalytic site of the 26S proteasome with high affinity and specificity.
A number of potential boronated pharmaceuticals using boron-10 have been prepared for use in boron neutron capture therapy (BNCT).
Some boron compounds show promise in treating arthritis, though none have as yet been generally approved for the purpose.
Tavaborole (marketed as Kerydin) is an aminoacyl-tRNA synthetase inhibitor which is used to treat toenail fungus. It gained FDA approval in July 2014.
Dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies or red blood cells, which allows for positron emission tomography (PET) imaging of cancer and hemorrhages, respectively. A Human-Derived, Genetic, Positron-emitting and Fluorescent (HD-GPF) reporter system uses a human protein, PSMA, which is non-immunogenic, together with a small molecule that is both positron-emitting (via boron-bound 18F) and fluorescent, for dual-modality PET and fluorescence imaging of genome-modified cells, e.g. cancer, CRISPR/Cas9, or CAR T-cells, in an entire mouse. The dual-modality small molecule targeting PSMA was tested in humans and found the location of primary and metastatic prostate cancer, allowed fluorescence-guided removal of cancer, and detected single cancer cells in tissue margins.
In boron neutron capture therapy (BNCT) for malignant brain tumors, boron is being researched for use in selectively targeting and destroying tumor cells. The goal is to deliver higher concentrations of the non-radioactive boron isotope (10B) to the tumor cells than to the surrounding normal tissues. When these 10B-containing cells are irradiated with low-energy thermal neutrons, they undergo nuclear capture reactions, releasing high linear energy transfer (LET) particles such as α-particles and lithium-7 nuclei within a limited path length. These high-LET particles can destroy the adjacent tumor cells without causing significant harm to nearby normal cells. Boron acts as a selective agent due to its ability to absorb thermal neutrons and produce short-range physical effects primarily affecting the targeted tissue region. This binary approach allows for precise tumor cell killing while sparing healthy tissues. The effective delivery of boron involves administering boron compounds or carriers capable of accumulating selectively in tumor cells compared to surrounding tissue. BSH and BPA have been used clinically, but research continues to identify more optimal carriers. Accelerator-based neutron sources have also been developed recently as an alternative to reactor-based sources, leading to improved efficiency and enhanced clinical outcomes in BNCT. By employing the properties of boron isotopes and targeted irradiation techniques, BNCT offers a potential approach to treating malignant brain tumors by selectively killing cancer cells while minimizing the damage caused by traditional radiation therapies.
BNCT has shown promising results in clinical trials for various other malignancies, including glioblastoma, head and neck cancer, cutaneous melanoma, hepatocellular carcinoma, lung cancer, and extramammary Paget's disease. The treatment involves a nuclear reaction between nonradioactive boron-10 isotope and low-energy thermal or high-energy epithermal neutrons to generate α particles and lithium nuclei that selectively destroy DNA in tumor cells. The primary challenge lies in developing efficient boron agents with higher content and specific targeting properties tailored for BNCT. Integration of tumor-targeting strategies with BNCT could potentially establish it as a practical personalized treatment option for different types of cancers. Ongoing research explores new boron compounds, optimization strategies, theranostic agents, and radiobiological advances to overcome limitations and cost-effectively improve patient outcomes.
Research areas
Magnesium diboride is an important superconducting material with the transition temperature of 39 K. MgB2 wires are produced with the powder-in-tube process and applied in superconducting magnets.
Amorphous boron is used as a melting point depressant in nickel-chromium braze alloys.
Hexagonal boron nitride forms atomically thin layers, which have been used to enhance the electron mobility in graphene devices. It also forms nanotubular structures (BNNTs), which have high strength, high chemical stability, and high thermal conductivity, among its list of desirable properties.
Boron has multiple applications in nuclear fusion research. It is commonly used for conditioning the walls in fusion reactors by depositing boron coatings on plasma-facing components and walls to reduce the release of hydrogen and impurities from the surfaces. It is also being used for the dissipation of energy in the fusion plasma boundary to suppress excessive energy bursts and heat fluxes to the walls.
Biological role
Boron is an essential plant nutrient, required primarily for maintaining the integrity of cell walls. However, high soil concentrations of greater than 1.0 ppm lead to marginal and tip necrosis in leaves as well as poor overall growth performance. Levels as low as 0.8 ppm produce these same symptoms in plants that are particularly sensitive to boron in the soil. Nearly all plants, even those somewhat tolerant of soil boron, will show at least some symptoms of boron toxicity when soil boron content is greater than 1.8 ppm. When this content exceeds 2.0 ppm, few plants will perform well and some may not survive.
It is thought that boron plays several essential roles in animals, including humans, but the exact physiological role is poorly understood. A small human trial published in 1987 reported on postmenopausal women first made boron deficient and then repleted with 3 mg/day. Boron supplementation markedly reduced urinary calcium excretion and elevated the serum concentrations of 17 beta-estradiol and testosterone.
Boron is not classified as an essential human nutrient because research has not established a clear biological function for it. Still, studies suggest that boron may exert beneficial effects on reproduction and development, calcium metabolism, bone formation, brain function, insulin and energy substrate metabolism, immunity, and steroid hormone (including estrogen) and vitamin D function, among other functions. The U.S. Food and Nutrition Board (FNB) found the existing data insufficient to derive a Recommended Dietary Allowance (RDA), Adequate Intake (AI), or Estimated Average Requirement (EAR) for boron. The U.S. Food and Drug Administration (FDA) has not established a Daily Value for boron for food and dietary supplement labeling purposes. While low boron status can be detrimental to health, probably increasing the risk of osteoporosis, poor immune function, and cognitive decline, high boron levels are associated with cell damage and toxicity. The exact mechanism by which boron exerts its physiological effects is not fully understood, but may involve interactions with adenosine monophosphate (AMP) and S-adenosyl methionine (SAM-e), two compounds involved in important cellular functions. Furthermore, boron appears to inhibit cyclic ADP-ribose, thereby affecting the release of calcium ions from the endoplasmic reticulum and affecting various biological processes. Some studies suggest that boron may reduce levels of inflammatory biomarkers.
In humans, boron is usually consumed with food that contains boron, such as fruits, leafy vegetables, and nuts. Foods that are particularly rich in boron include avocados, dried fruits such as raisins, peanuts, pecans, prune juice, grape juice, wine and chocolate powder. According to 2-day food records from the respondents to the Third National Health and Nutrition Examination Survey (NHANES III), adult dietary intake was recorded at 0.9 to 1.4 mg/day.
In 2013, a hypothesis suggested that boron and molybdenum may have catalyzed the production of RNA on Mars, with life being transported to Earth via a meteorite around 3 billion years ago.
There exist several known boron-containing natural antibiotics. The first one found was boromycin, isolated from streptomyces in the 1960s. Others are tartrolons, a group of antibiotics discovered in the 1990s from culture broth of the myxobacterium Sorangium cellulosum.
Congenital endothelial dystrophy type 2, a rare form of corneal dystrophy, is linked to mutations in SLC4A11 gene that encodes a transporter reportedly regulating the intracellular concentration of boron.
Analytical quantification
For determination of boron content in food or materials, the colorimetric curcumin method is used. Boron is converted to boric acid or borates, and on reaction with curcumin in acidic solution, a red-colored boron-chelate complex, rosocyanine, is formed.
Health issues and toxicity
Elemental boron, boron oxide, boric acid, borates, and many organoboron compounds are relatively nontoxic to humans and animals (with toxicity similar to that of table salt). The LD50 (dose at which there is 50% mortality) for animals is about 6 g per kg of body weight. Substances with an LD50 above 2 g/kg are considered nontoxic. An intake of 4 g/day of boric acid was reported without incident, but more than this is considered toxic in more than a few doses. Intakes of more than 0.5 grams per day for 50 days cause minor digestive and other problems suggestive of toxicity. Dietary supplementation of boron may be helpful for bone growth, wound healing, and antioxidant activity, and an insufficient amount of boron in the diet may result in boron deficiency.
Single medical doses of 20 g of boric acid for neutron capture therapy have been used without undue toxicity.
Boric acid is more toxic to insects than to mammals, and is routinely used as an insecticide.
The boranes (boron hydrogen compounds) and similar gaseous compounds are quite poisonous. Boron itself is not an intrinsically poisonous element; rather, the toxicity of these compounds depends on structure (for another example of this phenomenon, see phosphine). The boranes are also highly flammable and require special care when handling; some combinations of boranes and other compounds are highly explosive. Sodium borohydride presents a fire hazard owing to its reducing nature and the liberation of hydrogen on contact with acid. Boron halides are corrosive.
Boron is necessary for plant growth, but an excess of boron is toxic to plants; such excess occurs particularly in acidic soil. Toxicity presents as a yellowing from the tip inwards of the oldest leaves and as black spots in barley leaves, but it can be confused with other stresses such as magnesium deficiency in other plants.
See also
Allotropes of boron
Boron deficiency
Boron oxide
Boron nitride
Boron neutron capture therapy
Boronic acid
Hydroboration-oxidation reaction
Suzuki coupling
References
External links
Boron at The Periodic Table of Videos (University of Nottingham)
J. B. Calvert: Boron, 2004, private website (archived version)
Chemical elements
Metalloids
Neutron poisons
Pyrotechnic fuels
Rocket fuels
Nuclear fusion fuels
Dietary minerals
Reducing agents
Articles containing video clips
Chemical elements with rhombohedral structure
|
https://en.wikipedia.org/wiki/Bromine
|
Bromine is a chemical element with the symbol Br and atomic number 35. It is a volatile red-brown liquid at room temperature that evaporates readily to form a similarly coloured vapour. Its properties are intermediate between those of chlorine and iodine. Isolated independently by two chemists, Carl Jacob Löwig (in 1825) and Antoine Jérôme Balard (in 1826), its name was derived from the Ancient Greek word bromos, meaning "stench", referring to its sharp and pungent smell.
Elemental bromine is very reactive and thus does not occur as a free element in nature. Instead, it can be isolated from colourless soluble crystalline mineral halide salts analogous to table salt, a property it shares with the other halogens. While it is rather rare in the Earth's crust, the high solubility of the bromide ion (Br−) has caused its accumulation in the oceans. Commercially the element is easily extracted from brine evaporation ponds, mostly in the United States and Israel. The mass of bromine in the oceans is about one three-hundredth that of chlorine.
At standard conditions for temperature and pressure it is a liquid; the only other element that is liquid under these conditions is mercury. At high temperatures, organobromine compounds readily dissociate to yield free bromine atoms, a process that stops free radical chemical chain reactions. This effect makes organobromine compounds useful as fire retardants, and more than half the bromine produced worldwide each year is put to this purpose. The same property causes ultraviolet sunlight to dissociate volatile organobromine compounds in the atmosphere to yield free bromine atoms, causing ozone depletion. As a result, many organobromine compounds—such as the pesticide methyl bromide—are no longer used. Bromine compounds are still used in well drilling fluids, in photographic film, and as an intermediate in the manufacture of organic chemicals.
Large amounts of bromide salts are toxic from the action of soluble bromide ions, causing bromism. However, bromine is beneficial for human eosinophils, and is an essential trace element for collagen development in all animals. Hundreds of known organobromine compounds are generated by terrestrial and marine plants and animals, and some serve important biological roles. As a pharmaceutical, the simple bromide ion (Br−) has inhibitory effects on the central nervous system, and bromide salts were once a major medical sedative, before replacement by shorter-acting drugs. They retain niche uses as antiepileptics.
History
Bromine was discovered independently by two chemists, Carl Jacob Löwig and Antoine Balard, in 1825 and 1826, respectively.
Löwig isolated bromine from a mineral water spring from his hometown Bad Kreuznach in 1825. Löwig used a solution of the mineral salt saturated with chlorine and extracted the bromine with diethyl ether. After evaporation of the ether, a brown liquid remained. With this liquid as a sample of his work he applied for a position in the laboratory of Leopold Gmelin in Heidelberg. The publication of the results was delayed and Balard published his results first.
Balard found bromine chemicals in the ash of seaweed from the salt marshes of Montpellier. The seaweed was used to produce iodine, but also contained bromine. Balard distilled the bromine from a solution of seaweed ash saturated with chlorine. The properties of the resulting substance were intermediate between those of chlorine and iodine; thus he tried to prove that the substance was iodine monochloride (ICl), but after failing to do so he was sure that he had found a new element and named it muride, derived from the Latin word muria ("brine").
After the French chemists Louis Nicolas Vauquelin, Louis Jacques Thénard, and Joseph-Louis Gay-Lussac approved the experiments of the young pharmacist Balard, the results were presented at a lecture of the Académie des Sciences and published in Annales de Chimie et Physique. In his publication, Balard stated that he changed the name from muride to brôme on the proposal of M. Anglada. The name brôme (bromine) derives from the Greek word for "stench". Other sources claim that the French chemist and physicist Joseph-Louis Gay-Lussac suggested the name brôme for the characteristic smell of the vapors. Bromine was not produced in large quantities until 1858, when the discovery of salt deposits in Stassfurt enabled its production as a by-product of potash.
Apart from some minor medical applications, the first commercial use was the daguerreotype. In 1840, bromine was discovered to have some advantages over the previously used iodine vapor to create the light sensitive silver halide layer in daguerreotypy.
Potassium bromide and sodium bromide were used as anticonvulsants and sedatives in the late 19th and early 20th centuries, but were gradually superseded by chloral hydrate and then by the barbiturates. In the early years of the First World War, bromine compounds such as xylyl bromide were used as poison gas.
Properties
Bromine is the third halogen, being a nonmetal in group 17 of the periodic table. Its properties are thus similar to those of fluorine, chlorine, and iodine, and tend to be intermediate between those of the two neighbouring halogens, chlorine, and iodine. Bromine has the electron configuration [Ar]4s²3d¹⁰4p⁵, with the seven electrons in the fourth and outermost shell acting as its valence electrons. Like all halogens, it is thus one electron short of a full octet, and is hence a strong oxidising agent, reacting with many elements in order to complete its outer shell. Corresponding to periodic trends, it is intermediate in electronegativity between chlorine and iodine (F: 3.98, Cl: 3.16, Br: 2.96, I: 2.66), and is less reactive than chlorine and more reactive than iodine. It is also a weaker oxidising agent than chlorine, but a stronger one than iodine. Conversely, the bromide ion is a weaker reducing agent than iodide, but a stronger one than chloride. These similarities led to chlorine, bromine, and iodine together being classified as one of the original triads of Johann Wolfgang Döbereiner, whose work foreshadowed the periodic law for chemical elements. It is intermediate in atomic radius between chlorine and iodine, and this leads to many of its atomic properties being similarly intermediate in value between chlorine and iodine, such as first ionisation energy, electron affinity, enthalpy of dissociation of the X2 molecule (X = Cl, Br, I), ionic radius, and X–X bond length. The volatility of bromine accentuates its very penetrating, choking, and unpleasant odour.
All four stable halogens experience intermolecular van der Waals forces of attraction, and their strength increases together with the number of electrons among all homonuclear diatomic halogen molecules. Thus, the melting and boiling points of bromine are intermediate between those of chlorine and iodine. As a result of the increasing molecular weight of the halogens down the group, the density and heats of fusion and vaporisation of bromine are again intermediate between those of chlorine and iodine, although all their heats of vaporisation are fairly low (leading to high volatility) thanks to their diatomic molecular structure. The halogens darken in colour as the group is descended: fluorine is a very pale yellow gas, chlorine is greenish-yellow, and bromine is a reddish-brown volatile liquid that melts at −7.2 °C and boils at 58.8 °C. (Iodine is a shiny black solid.) This trend occurs because the wavelengths of visible light absorbed by the halogens increase down the group. Specifically, the colour of a halogen, such as bromine, results from the electron transition between the highest occupied antibonding π molecular orbital and the lowest vacant antibonding σ molecular orbital. The colour fades at low temperatures so that solid bromine at −195 °C is pale yellow.
Like solid chlorine and iodine, solid bromine crystallises in the orthorhombic crystal system, in a layered arrangement of Br2 molecules. The Br–Br distance is 227 pm (close to the gaseous Br–Br distance of 228 pm) and the Br···Br distance between molecules is 331 pm within a layer and 399 pm between layers (compare the van der Waals radius of bromine, 195 pm). This structure means that bromine is a very poor conductor of electricity, with a conductivity of around 5 × 10⁻¹³ Ω⁻¹ cm⁻¹ just below the melting point, although this is higher than the essentially undetectable conductivity of chlorine.
At a pressure of 55 GPa (roughly 540,000 times atmospheric pressure) bromine undergoes an insulator-to-metal transition. At 75 GPa it changes to a face-centered orthorhombic structure. At 100 GPa it changes to a body centered orthorhombic monatomic form.
Isotopes
Bromine has two stable isotopes, 79Br and 81Br. These are its only two natural isotopes, with 79Br making up 51% of natural bromine and 81Br making up the remaining 49%. Both have nuclear spin 3/2− and thus may be used for nuclear magnetic resonance, although 81Br is more favourable. The roughly 1:1 distribution of the two isotopes in nature is helpful in identification of bromine-containing compounds using mass spectroscopy. Other bromine isotopes are all radioactive, with half-lives too short to occur in nature. Of these, the most important are 80Br (t1/2 = 17.7 min), 80mBr (t1/2 = 4.421 h), and 82Br (t1/2 = 35.28 h), which may be produced from the neutron activation of natural bromine. The most stable bromine radioisotope is 77Br (t1/2 = 57.04 h). The primary decay mode of isotopes lighter than 79Br is electron capture to isotopes of selenium; that of isotopes heavier than 81Br is beta decay to isotopes of krypton; and 80Br may decay by either mode to stable 80Se or 80Kr. Bromine isotopes from Br-87 and heavier undergo beta decay with neutron emission and are of practical importance because they are fission products; Br-87, with a half-life of 55 s, is notable as the longest-lived delayed neutron emitter.
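As a rough illustration of what these half-lives mean in practice, the sketch below computes the surviving fraction N(t)/N0 = 2^(−t/t½) after 24 hours for the radioisotopes quoted above; only the half-lives from the text are used.

```python
# Surviving fraction of a radioisotope after time t, using N(t)/N0 = 2**(-t / half_life).
half_lives_hours = {
    "80Br": 17.7 / 60,   # 17.7 min expressed in hours
    "82Br": 35.28,
    "77Br": 57.04,
}

def surviving_fraction(t_hours, half_life_hours):
    return 2.0 ** (-t_hours / half_life_hours)

for isotope, t_half in half_lives_hours.items():
    print(f"{isotope}: {surviving_fraction(24.0, t_half):.3%} remaining after 24 h")
```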
Chemistry and compounds
Bromine is intermediate in reactivity between chlorine and iodine, and is one of the most reactive elements. Bond energies to bromine tend to be lower than those to chlorine but higher than those to iodine, and bromine is a weaker oxidising agent than chlorine but a stronger one than iodine. This can be seen from the standard electrode potentials of the X2/X− couples (F, +2.866 V; Cl, +1.395 V; Br, +1.087 V; I, +0.615 V; At, approximately +0.3 V). Bromination often leads to higher oxidation states than iodination but lower or equal oxidation states to chlorination. Bromine tends to react with compounds including M–M, M–H, or M–C bonds to form M–Br bonds.
Hydrogen bromide
The simplest compound of bromine is hydrogen bromide, HBr. It is mainly used in the production of inorganic bromides and alkyl bromides, and as a catalyst for many reactions in organic chemistry. Industrially, it is mainly produced by the reaction of hydrogen gas with bromine gas at 200–400 °C with a platinum catalyst. However, reduction of bromine with red phosphorus is a more practical way to produce hydrogen bromide in the laboratory:
2 P + 6 H2O + 3 Br2 → 6 HBr + 2 H3PO3
H3PO3 + H2O + Br2 → 2 HBr + H3PO4
At room temperature, hydrogen bromide is a colourless gas, like all the hydrogen halides apart from hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the large and only mildly electronegative bromine atom; however, weak hydrogen bonding is present in solid crystalline hydrogen bromide at low temperatures, similar to the hydrogen fluoride structure, before disorder begins to prevail as the temperature is raised. Aqueous hydrogen bromide is known as hydrobromic acid, which is a strong acid (pKa = −9) because the hydrogen bonds to bromine are too weak to inhibit dissociation. The HBr/H2O system also involves many hydrates HBr·nH2O for n = 1, 2, 3, 4, and 6, which are essentially salts of bromide anions and hydronium cations. Hydrobromic acid forms an azeotrope with boiling point 124.3 °C at 47.63 g HBr per 100 g solution; thus hydrobromic acid cannot be concentrated beyond this point by distillation.
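As a worked conversion, the azeotropic composition quoted above (47.63 g HBr per 100 g of solution) corresponds to a mole fraction of roughly 0.17 HBr. The sketch below assumes standard molar masses for HBr and water; nothing beyond the composition in the text is used.

```python
# Convert the hydrobromic acid azeotrope composition (47.63 wt% HBr) into a mole fraction.
M_HBR = 80.91   # g/mol (H 1.008 + Br 79.904)
M_H2O = 18.015  # g/mol

mass_hbr = 47.63           # grams of HBr per 100 g of solution
mass_h2o = 100.0 - mass_hbr

mol_hbr = mass_hbr / M_HBR
mol_h2o = mass_h2o / M_H2O
x_hbr = mol_hbr / (mol_hbr + mol_h2o)

print(f"Azeotrope: x(HBr) = {x_hbr:.3f}, about 1 HBr per {mol_h2o / mol_hbr:.1f} water molecules")
```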
Unlike hydrogen fluoride, anhydrous liquid hydrogen bromide is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into H2Br+ and HBr2− ions – the latter, in any case, are much less stable than the bifluoride ions (HF2−) due to the very weak hydrogen bonding between hydrogen and bromine, though its salts with very large and weakly polarising cations such as Cs+ and NR4+ (R = Me, Et, Bu) may still be isolated. Anhydrous hydrogen bromide is a poor solvent, only able to dissolve small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides.
Other binary bromides
Nearly all elements in the periodic table form binary bromides. The exceptions are decidedly in the minority and stem in each case from one of three causes: extreme inertness and reluctance to participate in chemical reactions (the noble gases, with the exception of xenon in the very unstable XeBr); extreme nuclear instability hampering chemical investigation before decay and transmutation (many of the heaviest elements beyond bismuth); and having an electronegativity higher than bromine's (oxygen, nitrogen, fluorine, and chlorine), so that the resultant binary compounds are formally not bromides but rather oxides, nitrides, fluorides, or chlorides of bromine. (Nonetheless, nitrogen tribromide is named as a bromide as it is analogous to the other nitrogen trihalides.)
Bromination of metals with Br2 tends to yield lower oxidation states than chlorination with Cl2 when a variety of oxidation states is available. Bromides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydrobromic acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen bromide gas. These methods work best when the bromide product is stable to hydrolysis; otherwise, the possibilities include high-temperature oxidative bromination of the element with bromine or hydrogen bromide, high-temperature bromination of a metal oxide or other halide by bromine, a volatile metal bromide, carbon tetrabromide, or an organic bromide. For example, niobium(V) oxide reacts with carbon tetrabromide at 370 °C to form niobium(V) bromide. Another method is halogen exchange in the presence of excess "halogenating reagent", for example:
FeCl3 + BBr3 (excess) → FeBr3 + BCl3
When a lower bromide is wanted, either a higher halide may be reduced using hydrogen or a metal as a reducing agent, or thermal decomposition or disproportionation may be used, as follows:
3 WBr5 + Al → 3 WBr4 + AlBr3
EuBr3 + ½ H2 → EuBr2 + HBr
2 TaBr4 → TaBr3 + TaBr5
Most metal bromides with the metal in low oxidation states (+1 to +3) are ionic. Nonmetals tend to form covalent molecular bromides, as do metals in high oxidation states from +3 and above. Both ionic and covalent bromides are known for metals in oxidation state +3 (e.g. scandium bromide is mostly ionic, but aluminium bromide is not). Silver bromide is very insoluble in water and is thus often used as a qualitative test for bromine.
Bromine halides
The halogens form many binary, diamagnetic interhalogen compounds with stoichiometries XY, XY3, XY5, and XY7 (where X is heavier than Y), and bromine is no exception. Bromine forms a monofluoride and monochloride, as well as a trifluoride and pentafluoride. Some cationic and anionic derivatives are also characterised. Apart from these, some pseudohalides are also known, such as cyanogen bromide (BrCN), bromine thiocyanate (BrSCN), and bromine azide (BrN3).
The pale-brown bromine monofluoride (BrF) is unstable at room temperature, disproportionating quickly and irreversibly into bromine, bromine trifluoride, and bromine pentafluoride. It thus cannot be obtained pure. It may be synthesised by the direct reaction of the elements, or by the comproportionation of bromine and bromine trifluoride at high temperatures. Bromine monochloride (BrCl), a red-brown gas, quite readily dissociates reversibly into bromine and chlorine at room temperature and thus also cannot be obtained pure, though it can be made by the reversible direct reaction of its elements in the gas phase or in carbon tetrachloride. Bromine monofluoride in ethanol readily leads to the monobromination of the aromatic compounds PhX (para-bromination occurs for X = Me, Bu, OMe, Br; meta-bromination occurs for the deactivating X = –CO2Et, –CHO, –NO2); this is due to heterolytic fission of the Br–F bond, leading to rapid electrophilic bromination by Br+.
At room temperature, bromine trifluoride (BrF3) is a straw-coloured liquid. It may be formed by directly fluorinating bromine at room temperature and is purified through distillation. It reacts violently with water and explodes on contact with flammable materials, but is a less powerful fluorinating reagent than chlorine trifluoride. It reacts vigorously with boron, carbon, silicon, arsenic, antimony, iodine, and sulfur to give fluorides, and will also convert most metals and many metal compounds to fluorides; as such, it is used to oxidise uranium to uranium hexafluoride in the nuclear power industry. Refractory oxides tend to be only partially fluorinated, but here the derivatives KBrF4 and BrF2SbF6 remain reactive. Bromine trifluoride is a useful nonaqueous ionising solvent, since it readily dissociates to form BrF2+ and BrF4− and thus conducts electricity.
Bromine pentafluoride (BrF5) was first synthesised in 1930. It is produced on a large scale by direct reaction of bromine with excess fluorine at temperatures higher than 150 °C, and on a small scale by the fluorination of potassium bromide at 25 °C. It also reacts violently with water and is a very strong fluorinating agent, although chlorine trifluoride is still stronger.
Polybromine compounds
Although dibromine is a strong oxidising agent with a high first ionisation energy, very strong oxidisers such as peroxydisulfuryl fluoride (S2O6F2) can oxidise it to form the cherry-red Br2+ cation. A few other bromine cations are known, namely the brown Br3+ and dark brown Br5+. The tribromide anion, Br3−, has also been characterised; it is analogous to triiodide.
Bromine oxides and oxoacids
Bromine oxides are not as well-characterised as chlorine oxides or iodine oxides, as they are all fairly unstable: it was once thought that they could not exist at all. Dibromine monoxide is a dark-brown solid which, while reasonably stable at −60 °C, decomposes at its melting point of −17.5 °C; it is useful in bromination reactions and may be made from the low-temperature decomposition of bromine dioxide in a vacuum. It oxidises iodine to iodine pentoxide and benzene to 1,4-benzoquinone; in alkaline solutions, it gives the hypobromite anion.
So-called "bromine dioxide", a pale yellow crystalline solid, may be better formulated as bromine perbromate, BrOBrO. It is thermally unstable above −40 °C, violently decomposing to its elements at 0 °C. Dibromine trioxide, syn-BrOBrO, is also known; it is the anhydride of hypobromous acid and bromic acid. It is an orange crystalline solid which decomposes above −40 °C; if heated too rapidly, it explodes around 0 °C. A few other unstable radical oxides are also known, as are some poorly characterised oxides, such as dibromine pentoxide, tribromine octoxide, and bromine trioxide.
The four oxoacids, hypobromous acid (HOBr), bromous acid (HOBrO), bromic acid (HOBrO2), and perbromic acid (HOBrO3), are better studied due to their greater stability, though they are only so in aqueous solution. When bromine dissolves in aqueous solution, the following reactions occur:
Br2 + H2O ⇌ HOBr + H+ + Br− (K = 7.2 × 10⁻⁹ mol² l⁻²)
Br2 + 2 OH− ⇌ OBr− + H2O + Br− (K = 2 × 10⁸ mol⁻¹ l)
Hypobromous acid is unstable to disproportionation. The hypobromite ions thus formed disproportionate readily to give bromide and bromate:
3 BrO− ⇌ 2 Br− + BrO3− (K = 10¹⁵)
Bromous acid and bromites are very unstable, although the strontium and barium bromites are known. More important are the bromates, which are prepared on a small scale by oxidation of bromide by aqueous hypochlorite, and are strong oxidising agents. Unlike chlorates, which very slowly disproportionate to chloride and perchlorate, the bromate anion is stable to disproportionation in both acidic and alkaline aqueous solutions. Bromic acid is a strong acid. Bromides and bromates may comproportionate to bromine as follows:
BrO3− + 5 Br− + 6 H+ → 3 Br2 + 3 H2O
There were many failed attempts to obtain perbromates and perbromic acid, leading to some rationalisations as to why they should not exist, until 1968, when the perbromate anion (BrO4−) was first synthesised from the radioactive beta decay of an unstable radioactive selenate ion. Today, perbromates are produced by the oxidation of alkaline bromate solutions by fluorine gas. Excess bromate and fluoride are precipitated as silver bromate and calcium fluoride, and the perbromic acid solution may be purified. The perbromate ion is fairly inert at room temperature but is thermodynamically extremely oxidising, with extremely strong oxidising agents needed to produce it, such as fluorine or xenon difluoride. The Br–O bond in the perbromate ion is fairly weak, which corresponds to the general reluctance of the 4p elements arsenic, selenium, and bromine to attain their group oxidation state, as they come after the scandide contraction characterised by the poor shielding afforded by the radial-nodeless 3d orbitals.
Organobromine compounds
Like the other carbon–halogen bonds, the C–Br bond is a common functional group that forms part of core organic chemistry. Formally, compounds with this functional group may be considered organic derivatives of the bromide anion. Due to the difference of electronegativity between bromine (2.96) and carbon (2.55), the carbon atom in a C–Br bond is electron-deficient and thus electrophilic. The reactivity of organobromine compounds resembles but is intermediate between the reactivity of organochlorine and organoiodine compounds. For many applications, organobromides represent a compromise of reactivity and cost.
Organobromides are typically produced by additive or substitutive bromination of other organic precursors. Bromine itself can be used, but due to its toxicity and volatility, safer brominating reagents are normally used, such as N-bromosuccinimide. The principal reactions for organobromides include dehydrobromination, Grignard reactions, reductive coupling, and nucleophilic substitution.
Organobromides are the most common organohalides in nature, even though the concentration of bromide is only 0.3% of that for chloride in sea water, because of the easy oxidation of bromide to the equivalent of Br+, a potent electrophile. The enzyme bromoperoxidase catalyzes this reaction. The oceans are estimated to release 1–2 million tons of bromoform and 56,000 tons of bromomethane annually.
An old qualitative test for the presence of the alkene functional group is that alkenes turn brown aqueous bromine solutions colourless, forming a bromohydrin with some of the dibromoalkane also produced. The reaction passes through a short-lived strongly electrophilic bromonium intermediate. This is an example of a halogen addition reaction.
Occurrence and production
Bromine is significantly less abundant in the crust than fluorine or chlorine, comprising only 2.5 parts per million of the Earth's crustal rocks, and then only as bromide salts. It is the forty-sixth most abundant element in Earth's crust. It is significantly more abundant in the oceans, resulting from long-term leaching. There, it makes up 65 parts per million, corresponding to a ratio of about one bromine atom for every 660 chlorine atoms. Salt lakes and brine wells may have higher bromine concentrations: for example, the Dead Sea contains 0.4% bromide ions. It is from these sources that bromine extraction is mostly economically feasible.
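The quoted figure of about one bromine atom for every 660 chlorine atoms can be reproduced by dividing the mass concentrations by the respective atomic masses. The sketch below assumes a typical seawater chloride concentration of about 19,000 mg/kg, a value not given in this article.

```python
# Rough check of the ~1:660 Br:Cl atom ratio in seawater.
BR_PPM = 65.0        # mg of bromine per kg of seawater (from the text)
CL_PPM = 19_000.0    # mg of chloride per kg of seawater (assumed typical value)
M_BR, M_CL = 79.904, 35.453   # g/mol

br_moles = BR_PPM / M_BR
cl_moles = CL_PPM / M_CL
print(f"Cl:Br atom ratio = {cl_moles / br_moles:.0f} : 1")   # roughly 660 : 1
```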
The main sources of bromine production are Israel and Jordan. The element is liberated by halogen exchange, using chlorine gas to oxidise Br− to Br2. This is then removed with a blast of steam or air, and is then condensed and purified. Today, bromine is transported in large-capacity metal drums or lead-lined tanks that can hold hundreds of kilograms or even tonnes of bromine. The bromine industry is about one-hundredth the size of the chlorine industry. Laboratory production is unnecessary because bromine is commercially available and has a long shelf life.
Applications
A wide variety of organobromine compounds are used in industry. Some are prepared from bromine and others are prepared from hydrogen bromide, which is obtained by burning hydrogen in bromine.
Flame retardants
Brominated flame retardants represent a commodity of growing importance, and make up the largest commercial use of bromine. When the brominated material burns, the flame retardant produces hydrobromic acid which interferes in the radical chain reaction of the oxidation reaction of the fire. The mechanism is that the highly reactive hydrogen radicals, oxygen radicals, and hydroxy radicals react with hydrobromic acid to form less reactive bromine radicals (i.e., free bromine atoms). Bromine atoms may also react directly with other radicals to help terminate the free radical chain-reactions that characterise combustion.
To make brominated polymers and plastics, bromine-containing compounds can be incorporated into the polymer during polymerisation. One method is to include a relatively small amount of brominated monomer during the polymerisation process. For example, vinyl bromide can be used in the production of polyethylene, polyvinyl chloride or polypropylene. Specific highly brominated molecules can also be added that participate in the polymerisation process. For example, tetrabromobisphenol A can be added to polyesters or epoxy resins, where it becomes part of the polymer. Epoxies used in printed circuit boards are normally made from such flame retardant resins, indicated by the FR in the abbreviation of the products (FR-4 and FR-2). In some cases, the bromine-containing compound may be added after polymerisation. For example, decabromodiphenyl ether can be added to the final polymers.
A number of gaseous or highly volatile brominated halomethane compounds are non-toxic and make superior fire suppressant agents by this same mechanism, and are particularly effective in enclosed spaces such as submarines, airplanes, and spacecraft. However, they are expensive and their production and use has been greatly curtailed due to their effect as ozone-depleting agents. They are no longer used in routine fire extinguishers, but retain niche uses in aerospace and military automatic fire suppression applications. They include bromochloromethane (Halon 1011, CH2BrCl), bromochlorodifluoromethane (Halon 1211, CBrClF2), and bromotrifluoromethane (Halon 1301, CBrF3).
Other uses
Silver bromide is used, either alone or in combination with silver chloride and silver iodide, as the light sensitive constituent of photographic emulsions.
Ethylene bromide was an additive in gasolines containing lead anti-knock agents. It scavenges lead by forming volatile lead bromide, which is exhausted from the engine. This application accounted for 77% of US bromine use in 1966. It has declined since the 1970s due to environmental regulations (see below).
Brominated vegetable oil (BVO), a complex mixture of plant-derived triglycerides that have been reacted to contain atoms of the element bromine bonded to the molecules, is used primarily to help emulsify citrus-flavored soft drinks, preventing them from separating during distribution.
Poisonous bromomethane was widely used as a pesticide to fumigate soil and to fumigate housing, by the tenting method. Ethylene bromide was similarly used. These volatile organobromine compounds are all now regulated as ozone-depleting agents. The Montreal Protocol on Substances that Deplete the Ozone Layer scheduled the phase-out of the ozone-depleting chemical by 2005, and organobromide pesticides are no longer used (in housing fumigation they have been replaced by compounds such as sulfuryl fluoride, which contains neither the chlorine nor the bromine organics that harm ozone). Before the Montreal Protocol, in 1991 for example, an estimated 35,000 tonnes of the chemical were used to control nematodes, fungi, weeds and other soil-borne diseases.
In pharmacology, inorganic bromide compounds, especially potassium bromide, were frequently used as general sedatives in the 19th and early 20th century. Bromides in the form of simple salts are still used as anticonvulsants in both veterinary and human medicine, although the latter use varies from country to country. For example, the U.S. Food and Drug Administration (FDA) does not approve bromide for the treatment of any disease, and it was removed from over-the-counter sedative products like Bromo-Seltzer, in 1975. Commercially available organobromine pharmaceuticals include the vasodilator nicergoline, the sedative brotizolam, the anticancer agent pipobroman, and the antiseptic merbromin. Otherwise, organobromine compounds are rarely pharmaceutically useful, in contrast to the situation for organofluorine compounds. Several drugs are produced as the bromide (or equivalents, hydrobromide) salts, but in such cases bromide serves as an innocuous counterion of no biological significance.
Other uses of organobromine compounds include high-density drilling fluids, dyes (such as Tyrian purple and the indicator bromothymol blue), and pharmaceuticals. Bromine itself, as well as some of its compounds, is used in water treatment and is the precursor of a variety of inorganic compounds with an enormous number of applications (e.g. silver bromide for photography). Zinc–bromine batteries are hybrid flow batteries used for stationary electrical power backup and storage, from household scale to industrial scale.
Bromine is used in cooling towers (in place of chlorine) for controlling bacteria, algae, fungi, and zebra mussels.
Because it has antiseptic qualities similar to those of chlorine, bromine can be used in the same manner as chlorine as a disinfectant or antimicrobial in applications such as swimming pools. However, bromine is usually not used outdoors for these applications because it is more expensive than chlorine and lacks a stabilizer to protect it from the sun. For indoor pools it can be a good option, as it is effective over a wider pH range. It is also more stable in a heated pool or hot tub.
Biological role and toxicity
A 2014 study suggests that bromine (in the form of bromide ion) is a necessary cofactor in the biosynthesis of collagen IV, making the element essential to basement membrane architecture and tissue development in animals. Nevertheless, no clear deprivation symptoms or syndromes have been documented in mammals. In other biological functions, bromine may be non-essential but still beneficial when it takes the place of chlorine. For example, in the presence of hydrogen peroxide, H2O2, formed by the eosinophil, and either chloride or bromide ions, eosinophil peroxidase provides a potent mechanism by which eosinophils kill multicellular parasites (such as the nematode worms involved in filariasis) and some bacteria (such as tuberculosis bacteria). Eosinophil peroxidase is a haloperoxidase that preferentially uses bromide over chloride for this purpose, generating hypobromite (hypobromous acid), although the use of chloride is possible.
α-Haloesters are generally thought of as highly reactive and consequently toxic intermediates in organic synthesis. Nevertheless, mammals, including humans, cats, and rats, appear to biosynthesize traces of an α-bromoester, 2-octyl 4-bromo-3-oxobutanoate, which is found in their cerebrospinal fluid and appears to play a yet unclarified role in inducing REM sleep. Neutrophil myeloperoxidase can use H2O2 and Br− to brominate deoxycytidine, which could result in DNA mutations. Marine organisms are the main source of organobromine compounds, and it is in these organisms that bromine is more firmly shown to be essential. More than 1600 such organobromine compounds were identified by 1999. The most abundant is methyl bromide (CH3Br), of which an estimated 56,000 tonnes is produced by marine algae each year. The essential oil of the Hawaiian alga Asparagopsis taxiformis consists of 80% bromoform. Most of such organobromine compounds in the sea are made by the action of a unique algal enzyme, vanadium bromoperoxidase.
The bromide anion is not very toxic: a normal daily intake is 2 to 8 milligrams. However, high levels of bromide chronically impair the membrane of neurons, which progressively impairs neuronal transmission, leading to toxicity, known as bromism. Bromide has an elimination half-life of 9 to 12 days, which can lead to excessive accumulation. Doses of 0.5 to 1 gram per day of bromide can lead to bromism. Historically, the therapeutic dose of bromide is about 3 to 5 grams of bromide, thus explaining why chronic toxicity (bromism) was once so common. While significant and sometimes serious disturbances occur to neurologic, psychiatric, dermatological, and gastrointestinal functions, death from bromism is rare. Bromism is caused by a neurotoxic effect on the brain which results in somnolence, psychosis, seizures and delirium.
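The tendency of bromide to accumulate follows directly from its long elimination half-life. Assuming simple first-order elimination and one dose per day, the steady-state accumulation factor is 1/(1 − 2^(−τ/t½)); the sketch below applies this to the 9–12 day half-life quoted above. It is an illustration of the kinetics only, not a model of any actual dosing regimen.

```python
# Illustrative first-order accumulation of bromide under once-daily dosing.
# Steady-state accumulation factor: R = 1 / (1 - 2**(-tau / t_half)), with dosing interval tau.
def accumulation_factor(t_half_days, tau_days=1.0):
    return 1.0 / (1.0 - 2.0 ** (-tau_days / t_half_days))

for t_half in (9.0, 12.0):   # elimination half-life range quoted in the text (days)
    print(f"t1/2 = {t_half:4.1f} d -> steady-state level of about {accumulation_factor(t_half):.1f} x a single dose")
```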
Elemental bromine is toxic and causes chemical burns on human flesh. Inhaling bromine gas results in similar irritation of the respiratory tract, causing coughing, choking, shortness of breath, and death if inhaled in large enough amounts. Chronic exposure may lead to frequent bronchial infections and a general deterioration of health. As a strong oxidising agent, bromine is incompatible with most organic and inorganic compounds. Caution is required when transporting bromine; it is commonly carried in steel tanks lined with lead, supported by strong metal frames. The Occupational Safety and Health Administration (OSHA) of the United States has set a permissible exposure limit (PEL) for bromine at a time-weighted average (TWA) of 0.1 ppm. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of TWA 0.1 ppm and a short-term limit of 0.3 ppm. The exposure to bromine immediately dangerous to life and health (IDLH) is 3 ppm. Bromine is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
References
General and cited references
Chemical elements
Diatomic nonmetals
Gases with color
Halogens
Oxidizing agents
Reactive nonmetals
|
https://en.wikipedia.org/wiki/Barium
|
Barium is a chemical element with the symbol Ba and atomic number 56. It is the fifth element in group 2 and is a soft, silvery alkaline earth metal. Because of its high chemical reactivity, barium is never found in nature as a free element.
The most common minerals of barium are baryte (barium sulfate, BaSO4) and witherite (barium carbonate, BaCO3). The name barium originates from the alchemical derivative "baryta", from the Greek word for 'heavy'. Baric is the adjectival form of barium. Barium was identified as a new element in 1772, but not reduced to a metal until 1808 with the advent of electrolysis.
Barium has few industrial applications. Historically, it was used as a getter for vacuum tubes and in oxide form as the emissive coating on indirectly heated cathodes. It is a component of YBCO (high-temperature superconductors) and electroceramics, and is added to steel and cast iron to reduce the size of carbon grains within the microstructure. Barium compounds are added to fireworks to impart a green color. Barium sulfate is used as an insoluble additive to oil well drilling fluid. In a purer form, it is used as an X-ray radiocontrast agent for imaging the human gastrointestinal tract. Water-soluble barium compounds are poisonous and have been used as rodenticides.
Characteristics
Physical properties
Barium is a soft, silvery-white metal, with a slight golden shade when ultrapure. The silvery-white color of barium metal rapidly vanishes upon oxidation in air yielding a dark gray layer containing the oxide. Barium has a medium specific weight and high electrical conductivity. Because barium is difficult to purify, many of its properties have not been accurately determined.
At room temperature and pressure, barium metal adopts a body-centered cubic structure, with a barium–barium distance of 503 picometers, expanding with heating at a rate of approximately 1.8×10⁻⁵/°C. It is a very soft metal with a Mohs hardness of 1.25. Its melting temperature is intermediate between those of the lighter strontium and heavier radium; however, its boiling point exceeds that of strontium. The density (3.62 g/cm3) is again intermediate between those of strontium (2.36 g/cm3) and radium (≈5 g/cm3).
Chemical reactivity
Barium is chemically similar to magnesium, calcium, and strontium, but even more reactive. It is usually found in the +2 oxidation state. Most exceptions are in a few rare and unstable molecular species that are only characterised in the gas phase such as BaF, but in 2018 a barium(I) species was reported in a graphite intercalation compound. Reactions with chalcogens are highly exothermic (release energy); the reaction with oxygen or air occurs at room temperature. For this reason, metallic barium is often stored under oil or in an inert atmosphere. Reactions with other nonmetals, such as carbon, nitrogen, phosphorus, silicon, and hydrogen, are generally exothermic and proceed upon heating. Reactions with water and alcohols are very exothermic and release hydrogen gas:
Ba + 2 ROH → Ba(OR)2 + H2↑ (R is an alkyl group or a hydrogen atom)
Barium reacts with ammonia to form complexes such as Ba(NH3)6.
The metal is readily attacked by acids. Sulfuric acid is a notable exception because passivation stops the reaction by forming the insoluble barium sulfate on the surface. Barium combines with several other metals, including aluminium, zinc, lead, and tin, forming intermetallic phases and alloys.
Compounds
Barium salts are typically white when solid and colorless when dissolved. They are denser than the strontium or calcium analogs, except for the halides.
Barium hydroxide ("baryta") was known to alchemists, who produced it by heating barium carbonate. Unlike calcium hydroxide, it absorbs very little CO2 in aqueous solutions and is therefore insensitive to atmospheric fluctuations. This property is used in calibrating pH equipment.
Volatile barium compounds burn with a green to pale green flame, which is an efficient test to detect a barium compound. The color results from spectral lines at 455.4, 493.4, 553.6, and 611.1 nm.
Organobarium compounds are a growing field of knowledge: recently discovered are dialkylbariums and alkylhalobariums.
Isotopes
Barium found in the Earth's crust is a mixture of seven primordial nuclides, barium-130, 132, and 134 through 138. Barium-130 undergoes very slow radioactive decay to xenon-130 by double beta plus decay, with a half-life of (0.5–2.7)×10²¹ years (about 10¹¹ times the age of the universe). Its abundance is ≈0.1% that of natural barium. Theoretically, barium-132 can similarly undergo double beta decay to xenon-132; this decay has not been detected. The radioactivity of these isotopes is so weak that they pose no danger to life.
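The comparison with the age of the universe is a simple ratio; the sketch below assumes an age of about 1.38 × 10¹⁰ years, which is not stated in this article, and divides the quoted half-life range by it.

```python
# Check that a half-life of (0.5-2.7)e21 years is of order 1e11 times the age of the universe.
AGE_OF_UNIVERSE_YR = 1.38e10   # assumed value, not from the article

for half_life_yr in (0.5e21, 2.7e21):
    ratio = half_life_yr / AGE_OF_UNIVERSE_YR
    print(f"{half_life_yr:.1e} yr is about {ratio:.1e} x the age of the universe")
```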
Of the stable isotopes, barium-138 composes 71.7% of all barium; other isotopes have decreasing abundance with decreasing mass number.
In total, barium has 40 known isotopes, ranging in mass between 114 and 153. The most stable artificial radioisotope is barium-133 with a half-life of approximately 10.51 years. Five other isotopes have half-lives longer than a day. Barium also has 10 meta states, of which barium-133m1 is the most stable with a half-life of about 39 hours.
History
Alchemists in the early Middle Ages knew about some barium minerals. Smooth pebble-like stones of mineral baryte were found in volcanic rock near Bologna, Italy, and so were called "Bologna stones". Alchemists were attracted to them because after exposure to light they would glow for years. The phosphorescent properties of baryte heated with organics were described by V. Casciorolus in 1602.
Carl Scheele determined that baryte contained a new element in 1772, but could not isolate barium, only barium oxide. Johan Gottlieb Gahn also isolated barium oxide two years later in similar studies. Oxidized barium was at first called "barote" by Guyton de Morveau, a name that was changed by Antoine Lavoisier to baryte (in French) or baryta (in Latin). Also in the 18th century, English mineralogist William Withering noted a heavy mineral in the lead mines of Cumberland, now known to be witherite. Barium was first isolated by electrolysis of molten barium salts in 1808 by Sir Humphry Davy in England. Davy, by analogy with calcium, named "barium" after baryta, with the "-ium" ending signifying a metallic element. Robert Bunsen and Augustus Matthiessen obtained pure barium by electrolysis of a molten mixture of barium chloride and ammonium chloride.
The production of pure oxygen in the Brin process was a large-scale application of barium peroxide in the 1880s, before it was replaced by electrolysis and fractional distillation of liquefied air in the early 1900s. In this process barium oxide reacts with air at elevated temperature to form barium peroxide, which decomposes at still higher temperatures, releasing oxygen:
2 BaO + O2 ⇌ 2 BaO2
Barium sulfate was first applied as a radiocontrast agent in X-ray imaging of the digestive system in 1908.
Occurrence and production
The abundance of barium is 0.0425% in the Earth's crust and 13 μg/L in sea water. The primary commercial source of barium is baryte (also called barytes or heavy spar), a barium sulfate mineral, with deposits in many parts of the world. Another commercial source, far less important than baryte, is witherite, barium carbonate. The main deposits are located in Britain, Romania, and the former USSR.
The baryte reserves are estimated at between 0.7 and 2 billion tonnes. Production peaked at 8.3 million tonnes in 1981, but only 7–8% of that was used for barium metal or compounds. Baryte production has risen since the second half of the 1990s, from 5.6 million tonnes in 1996 to 7.6 in 2005 and 7.8 in 2011. China accounts for more than 50% of this output, followed by India (14% in 2011), Morocco (8.3%), the US (8.2%), Turkey (2.5%), and Iran and Kazakhstan (2.6% each).
The mined ore is washed, crushed, classified, and separated from quartz. If the quartz penetrates too deeply into the ore, or the iron, zinc, or lead content is abnormally high, then froth flotation is used. The product is a 98% pure baryte (by mass); the purity should be no less than 95%, with a minimal content of iron and silicon dioxide. It is then reduced by carbon to barium sulfide:
BaSO4 + 2 C → BaS + 2 CO2
The water-soluble barium sulfide is the starting point for other compounds: treating BaS with oxygen produces the sulfate, with nitric acid the nitrate, with aqueous carbon dioxide the carbonate, and so on. The nitrate can be thermally decomposed to yield the oxide. Barium metal is produced by reduction with aluminium at high temperature. The intermetallic compound BaAl4 is produced first:
3 BaO + 14 Al → 3 BaAl4 + Al2O3
BaAl4 is an intermediate that reacts with barium oxide to produce the metal. Note that not all of the barium is reduced.
8 BaO + BaAl4 → 7 Ba↓ + 2 BaAl2O4
The remaining barium oxide reacts with the formed aluminium oxide:
BaO + Al2O3 → BaAl2O4
and the overall reaction is
4 BaO + 2 Al → 3 Ba↓ + BaAl2O4
Barium vapor is condensed and packed into molds in an atmosphere of argon. This method is used commercially, yielding ultrapure barium. Commonly sold barium is about 99% pure, with main impurities being strontium and calcium (up to 0.8% and 0.25%) and other contaminants contributing less than 0.1%.
A similar reaction with silicon at elevated temperature yields barium and barium metasilicate. Electrolysis is not used because barium readily dissolves in molten halides and the product is rather impure.
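From the overall reduction 4 BaO + 2 Al → 3 Ba↓ + BaAl2O4 given above, the ideal aluminium and barium oxide demand per kilogram of barium metal follows directly from the stoichiometry. The sketch below uses standard molar masses and, as noted in the text, ignores the fact that not all of the barium is reduced in practice.

```python
# Ideal stoichiometric demand for the overall reduction 4 BaO + 2 Al -> 3 Ba + BaAl2O4.
M_BA, M_AL, M_O = 137.33, 26.98, 16.00   # g/mol

kg_ba = 1.0                        # basis: 1 kg of barium metal
mol_ba = kg_ba * 1000 / M_BA       # moles of Ba produced

mol_al = mol_ba * 2 / 3            # 2 Al per 3 Ba
mol_bao = mol_ba * 4 / 3           # 4 BaO per 3 Ba

al_kg = mol_al * M_AL / 1000
bao_kg = mol_bao * (M_BA + M_O) / 1000
print(f"Per {kg_ba:.0f} kg Ba: about {al_kg:.2f} kg Al and {bao_kg:.2f} kg BaO (ideal stoichiometry)")
```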
Gemstone
The barium mineral, benitoite (barium titanium silicate), occurs as a very rare blue fluorescent gemstone, and is the official state gem of California.
Barium in seawater
Barium exists in seawater as the Ba2+ ion with an average oceanic concentration of 109 nmol/kg. Barium also exists in the ocean as BaSO4, or barite. Barium has a nutrient-like profile with a residence time of 10,000 years.
Barium shows a relatively consistent concentration in upper ocean seawater, excepting regions of high river inputs and regions with strong upwelling. There is little depletion of barium concentrations in the upper ocean for an ion with a nutrient-like profile, thus lateral mixing is important. Barium isotopic values show basin-scale balances instead of local or short-term processes.
Applications
Metal and alloys
Barium, as a metal or when alloyed with aluminium, is used to remove unwanted gases (gettering) from vacuum tubes, such as TV picture tubes. Barium is suitable for this purpose because of its low vapor pressure and reactivity towards oxygen, nitrogen, carbon dioxide, and water; it can even partly remove noble gases by dissolving them in the crystal lattice. This application is gradually disappearing due to the rising popularity of the tubeless LCD, LED, and plasma sets.
Other uses of elemental barium are minor and include an additive to silumin (aluminium–silicon alloys) that refines their structure, as well as
bearing alloys;
lead–tin soldering alloys – to increase the creep resistance;
alloy with nickel for spark plugs;
additive to steel and cast iron as an inoculant;
alloys with calcium, manganese, silicon, and aluminium as high-grade steel deoxidizers.
Barium sulfate and baryte
Barium sulfate (the mineral baryte, BaSO4) is important to the petroleum industry as a drilling fluid in oil and gas wells. The precipitate of the compound (called "blanc fixe", from the French for "permanent white") is used in paints and varnishes; as a filler in ringing ink, plastics, and rubbers; as a paper coating pigment; and in nanoparticles, to improve physical properties of some polymers, such as epoxies.
Barium sulfate has a low toxicity and relatively high density of ca. 4.5 g/cm3 (and thus opacity to X-rays). For this reason it is used as a radiocontrast agent in X-ray imaging of the digestive system ("barium meals" and "barium enemas"). Lithopone, a pigment that contains barium sulfate and zinc sulfide, is a permanent white with good covering power that does not darken when exposed to sulfides.
Other barium compounds
Other compounds of barium find only niche applications, limited by the toxicity of Ba2+ ions (barium carbonate is a rat poison), which is not a problem for the insoluble BaSO4.
Barium oxide coating on the electrodes of fluorescent lamps facilitates the release of electrons.
Because of its high atomic density, barium carbonate increases the refractive index and luster of glass and reduces leaks of X-rays from cathode ray tube (CRT) TV sets.
Barium, typically as barium nitrate, imparts a yellow or "apple" green color to fireworks; for a brilliant green, barium monochloride is used.
Barium peroxide is a catalyst in the aluminothermic reaction (thermite) for welding rail tracks. It is also used as a green flare in tracer ammunition and as a bleaching agent.
Barium titanate is a promising electroceramic.
Barium fluoride is used for optics in infrared applications because of its wide transparency range of 0.15–12 micrometers.
YBCO was the first high-temperature superconductor that could be cooled by liquid nitrogen, with a transition temperature of about 93 K, exceeding the boiling point of nitrogen (77 K).
Ferrite, a type of sintered ceramic composed of iron oxide (Fe2O3) and barium oxide (BaO), is both electrically nonconductive and ferrimagnetic, and can be temporarily or permanently magnetized.
Palaeoceanography
The lateral mixing of barium is caused by water mass mixing and ocean circulation. Global ocean circulation reveals a strong correlation between dissolved barium and silicic acid. The large-scale ocean circulation combined with remineralization of barium show a similar correlation between dissolved barium and ocean alkalinity.
Dissolved barium's correlation with silicic acid can be seen both vertically and spatially. Particulate barium shows a strong correlation with particulate organic carbon (POC). Barium is becoming more popular as a basis for palaeoceanographic proxies. With both dissolved and particulate barium's links to silicic acid and POC, it can be used to determine historical variations in the biological pump, the carbon cycle, and global climate.
The barium particulate barite (BaSO4), as one of many proxies, can be used to provide a host of historical information on processes in different oceanic settings (water column, sediments, and hydrothermal sites). In each setting there are differences in isotopic and elemental composition of the barite particulate. Barite in the water column, known as marine or pelagic barite, reveals information on seawater chemistry variation over time. Barite in sediments, known as diagenetic or cold seeps barite, gives information about sedimentary redox processes. Barite formed via hydrothermal activity at hydrothermal vents, known as hydrothermal barite, reveals alterations in the condition of the earth's crust around those vents.
Toxicity
Because of the high reactivity of the metal, toxicological data are available only for compounds. Soluble barium compounds are poisonous. In low doses, barium ions act as a muscle stimulant, and higher doses affect the nervous system, causing cardiac irregularities, tremors, weakness, anxiety, shortness of breath, and paralysis. This toxicity may be caused by Ba2+ blocking potassium ion channels, which are critical to the proper function of the nervous system. Other organs damaged by water-soluble barium compounds (i.e., barium ions) are the eyes, immune system, heart, respiratory system, and skin causing, for example, blindness and sensitization.
Barium is not carcinogenic and does not bioaccumulate. Inhaled dust containing insoluble barium compounds can accumulate in the lungs, causing a benign condition called baritosis. The insoluble sulfate is nontoxic and is not classified as a dangerous good in transport regulations.
To avoid a potentially vigorous chemical reaction, barium metal is kept in an argon atmosphere or under mineral oils. Contact with air is dangerous and may cause ignition. Moisture, friction, heat, sparks, flames, shocks, static electricity, and exposure to oxidizers and acids should be avoided. Anything that may come into contact with barium should be electrically grounded.
See also
Han purple and Han blue – synthetic barium copper silicate pigments developed and used in ancient and imperial China
References
External links
Barium at The Periodic Table of Videos (University of Nottingham)
Elementymology & Elements Multidict
3-D Holographic Display Using Strontium Barium Niobate
Chemical elements
Alkaline earth metals
Toxicology
Reducing agents
Chemical elements with body-centered cubic structure
|
https://en.wikipedia.org/wiki/Berkelium
|
Berkelium is a transuranic radioactive chemical element with the symbol Bk and atomic number 97. It is a member of the actinide and transuranium element series. It is named after the city of Berkeley, California, the location of the Lawrence Berkeley National Laboratory (then the University of California Radiation Laboratory) where it was discovered in December 1949. Berkelium was the fifth transuranium element discovered after neptunium, plutonium, curium and americium.
The major isotope of berkelium, 249Bk, is synthesized in minute quantities in dedicated high-flux nuclear reactors, mainly at the Oak Ridge National Laboratory in Tennessee, United States, and at the Research Institute of Atomic Reactors in Dimitrovgrad, Russia. The longest-lived and second-most important isotope, 247Bk, can be synthesized via irradiation of 244Cm with high-energy alpha particles.
Just over one gram of berkelium has been produced in the United States since 1967. There is no practical application of berkelium outside scientific research which is mostly directed at the synthesis of heavier transuranium elements and superheavy elements. A 22-milligram batch of berkelium-249 was prepared during a 250-day irradiation period and then purified for a further 90 days at Oak Ridge in 2009. This sample was used to synthesize the new element tennessine for the first time in 2009 at the Joint Institute for Nuclear Research, Russia, after it was bombarded with calcium-48 ions for 150 days. This was the culmination of the Russia–US collaboration on the synthesis of the heaviest elements on the periodic table.
Berkelium is a soft, silvery-white, radioactive metal. The berkelium-249 isotope emits low-energy electrons and thus is relatively safe to handle. It decays with a half-life of 330 days to californium-249, which is a strong emitter of ionizing alpha particles. This gradual transformation is an important consideration when studying the properties of elemental berkelium and its chemical compounds, since the formation of californium brings not only chemical contamination, but also free-radical effects and self-heating from the emitted alpha particles.
Characteristics
Physical
Berkelium is a soft, silvery-white, radioactive actinide metal. In the periodic table, it is located to the right of the actinide curium, to the left of the actinide californium and below the lanthanide terbium with which it shares many similarities in physical and chemical properties. Its density of 14.78 g/cm3 lies between those of curium (13.52 g/cm3) and californium (15.1 g/cm3), as does its melting point of 986 °C, below that of curium (1340 °C) but higher than that of californium (900 °C). Berkelium is relatively soft and has one of the lowest bulk moduli among the actinides, at about 20 GPa (2×10¹⁰ Pa).
Berkelium(III) ions show two sharp fluorescence peaks at 652 nanometers (red light) and 742 nanometers (deep red – near-infrared) due to internal transitions of the f-electron shell. The relative intensity of these peaks depends on the excitation power and temperature of the sample. This emission can be observed, for example, after dispersing berkelium ions in a silicate glass, by melting the glass in the presence of berkelium oxide or halide.
Between 70 K and room temperature, berkelium behaves as a Curie–Weiss paramagnetic material with an effective magnetic moment of 9.69 Bohr magnetons (µB) and a Curie temperature of 101 K. This magnetic moment is almost equal to the theoretical value of 9.72 µB calculated within the simple atomic L–S coupling model. Upon cooling to about 34 K, berkelium undergoes a transition to an antiferromagnetic state. The enthalpy of dissolution in hydrochloric acid at standard conditions is −600 kJ/mol, from which the standard enthalpy of formation (ΔfH°) of aqueous Bk3+ ions is obtained as −601 kJ/mol. The standard electrode potential Bk3+/Bk0 is −2.01 V. The ionization potential of a neutral berkelium atom is 6.23 eV.
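The theoretical moment of 9.72 µB quoted above follows from the Landé formula applied to the 5f8 ground term of Bk3+ (7F6, i.e. S = 3, L = 3, J = 6): μeff = gJ·√(J(J+1)) µB. The short sketch below reproduces that number; the term assignment is the standard one for an f8 ion (the same as Tb3+) and is stated here as an assumption rather than taken from this article.

```python
# Effective magnetic moment of Bk3+ (5f^8, ground term 7F6) in the simple L-S coupling model.
from math import sqrt

S, L, J = 3, 3, 6   # ground term 7F6 of an f^8 ion (same as Tb3+)

g_J = 1 + (J * (J + 1) + S * (S + 1) - L * (L + 1)) / (2 * J * (J + 1))
mu_eff = g_J * sqrt(J * (J + 1))   # in Bohr magnetons

print(f"g_J = {g_J:.3f}, mu_eff = {mu_eff:.2f} Bohr magnetons")   # ~9.72, close to the measured 9.69
```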
Allotropes
At ambient conditions, berkelium assumes its most stable α form, which has a hexagonal symmetry, space group P63/mmc and lattice parameters of 341 pm and 1107 pm. The crystal has a double-hexagonal close packing structure with the layer sequence ABAC and so is isotypic (having a similar structure) with α-lanthanum and the α-forms of actinides beyond curium. This crystal structure changes with pressure and temperature. When compressed at room temperature to 7 GPa, α-berkelium transforms to the β modification, which has a face-centered cubic (fcc) symmetry and space group Fm3m. This transition occurs without change in volume, but the enthalpy increases by 3.66 kJ/mol. Upon further compression to 25 GPa, berkelium transforms to an orthorhombic γ-berkelium structure similar to that of α-uranium. This transition is accompanied by a 12% volume decrease and delocalization of the electrons in the 5f electron shell. No further phase transitions are observed up to 57 GPa.
Upon heating, α-berkelium transforms into another phase with an fcc lattice (but slightly different from β-berkelium), space group Fm3m and the lattice constant of 500 pm; this fcc structure is equivalent to the closest packing with the sequence ABC. This phase is metastable and will gradually revert to the original α-berkelium phase at room temperature. The temperature of the phase transition is believed to be quite close to the melting point.
Chemical
Like all actinides, berkelium dissolves in various aqueous inorganic acids, liberating gaseous hydrogen and converting into the Bk(III) state. This trivalent oxidation state (+3) is the most stable, especially in aqueous solutions, but tetravalent (+4), pentavalent (+5) and possibly divalent (+2) berkelium compounds are also known. The existence of divalent berkelium salts is uncertain and has only been reported in mixed lanthanum(III) chloride-strontium chloride melts. A similar behavior is observed for the lanthanide analogue of berkelium, terbium. Aqueous solutions of Bk3+ ions are green in most acids. The color of Bk4+ ions is yellow in hydrochloric acid and orange-yellow in sulfuric acid. Berkelium does not react rapidly with oxygen at room temperature, possibly due to the formation of a protective oxide surface layer. However, it reacts with molten metals, hydrogen, halogens, chalcogens and pnictogens to form various binary compounds.
Isotopes
Nineteen isotopes and six nuclear isomers (excited states of an isotope) of berkelium have been characterized, with mass numbers ranging from 233 to 253 (except 235 and 237). All of them are radioactive. The longest half-lives are observed for 247Bk (1,380 years), 248Bk (over 300 years) and 249Bk (330 days); the half-lives of the other isotopes range from microseconds to several days. The isotope which is the easiest to synthesize is berkelium-249. It emits mostly soft β-particles, which are inconvenient for detection. Its alpha radiation is rather weak (about 1.45 × 10^-3% with respect to the β-radiation), but is sometimes used to detect this isotope. The second important berkelium isotope, berkelium-247, is an alpha-emitter, as are most actinide isotopes.
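The 330-day half-life translates into a very high specific activity, which is the main practical constraint when working with weighable samples. The following is a back-of-the-envelope sketch in Python, using only the half-life and mass number given above; the printed figure is an estimate rounded from these inputs, not a referenced value.

from math import log

AVOGADRO = 6.022e23          # atoms per mole
HALF_LIFE_S = 330 * 86400    # 330 days expressed in seconds
MOLAR_MASS = 249             # g/mol for berkelium-249

decay_constant = log(2) / HALF_LIFE_S                 # per second
atoms_per_gram = AVOGADRO / MOLAR_MASS
specific_activity = decay_constant * atoms_per_gram   # decays per second per gram (Bq/g)

print(f"{specific_activity:.2e} Bq/g")  # roughly 6e13 Bq/g, i.e. about 1.6 kCi per gram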
Occurrence
All berkelium isotopes have a half-life far too short to be primordial. Therefore, any primordial berkelium − that is, berkelium present on the Earth during its formation − has decayed by now.
On Earth, berkelium is mostly concentrated in certain areas that were used for atmospheric nuclear weapons tests between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster, the Three Mile Island accident and the 1968 Thule Air Base B-52 crash. Analysis of the debris at the testing site of the United States' first thermonuclear weapon, Ivy Mike (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides, including berkelium. For reasons of military secrecy, this result was not published until 1956.
Among the berkelium isotopes, nuclear reactors produce mostly berkelium-249. During storage and before fuel disposal, most of it beta decays to californium-249. The latter has a half-life of 351 years, which is relatively long compared to the half-lives of other isotopes produced in the reactor, and is therefore undesirable in the disposal products.
The transuranium elements from americium to fermium, including berkelium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
Berkelium is also one of the elements that have reportedly been detected in Przybylski's Star.
History
Although very small amounts of berkelium were possibly produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in December 1949 by Glenn T. Seaborg, Albert Ghiorso, Stanley Gerald Thompson, and Kenneth Street Jr. They used the 60-inch cyclotron at the University of California, Berkeley. Similar to the nearly simultaneous discovery of americium (element 95) and curium (element 96) in 1944, the new elements berkelium and californium (element 98) were both produced in 1949–1950.
The name choice for element 97 followed the previous tradition of the Californian group of drawing an analogy between a newly discovered actinide and the lanthanide element positioned above it in the periodic table. Previously, americium had been named after a continent, like its analogue europium, and curium honored the scientists Marie and Pierre Curie, just as the lanthanide above it, gadolinium, was named after Johan Gadolin, an explorer of the rare-earth elements. Thus the discovery report by the Berkeley group reads: "It is suggested that element 97 be given the name berkelium (symbol Bk) after the city of Berkeley in a manner similar to that used in naming its chemical homologue terbium (atomic number 65) whose name was derived from the town of Ytterby, Sweden, where the rare earth minerals were first found." This tradition ended with berkelium, though, as the naming of the next discovered actinide, californium, was not related to its lanthanide analogue dysprosium but simply reflected the place of its discovery.
The most difficult steps in the synthesis of berkelium were its separation from the final products and the production of sufficient quantities of americium for the target material. First, americium (241Am) nitrate solution was coated on a platinum foil, the solution was evaporated and the residue converted by annealing to americium dioxide (AmO2). This target was irradiated with 35 MeV alpha particles for 6 hours in the 60-inch cyclotron at the Lawrence Radiation Laboratory, University of California, Berkeley. The (α,2n) reaction induced by the irradiation yielded the 243Bk isotope and two free neutrons:
^{241}_{95}Am + ^{4}_{2}He -> ^{243}_{97}Bk + 2^{1}_{0}n
After the irradiation, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The product was centrifuged and re-dissolved in nitric acid. To separate berkelium from the unreacted americium, this solution was added to a mixture of ammonium persulfate and ammonium sulfate and heated to convert all the dissolved americium into the oxidation state +6. Unoxidized residual americium was precipitated by the addition of hydrofluoric acid as americium(III) fluoride (AmF3). This step yielded a mixture of the accompanying product curium and the expected element 97 in the form of trifluorides. The mixture was converted to the corresponding hydroxides by treating it with potassium hydroxide, and after centrifugation, was dissolved in perchloric acid.
Further separation was carried out in the presence of a citric acid/ammonium buffer solution in a weakly acidic medium (pH ≈ 3.5), using ion exchange at elevated temperature. The chromatographic separation behavior of element 97 was unknown at the time, but was anticipated by analogy with terbium. The first results were disappointing, because no alpha-particle emission signature could be detected in the elution product. Only with further analysis, searching for characteristic X-rays and conversion electron signals, was a berkelium isotope eventually detected. Its mass number was uncertain between 243 and 244 in the initial report, but was later established as 243.
Synthesis and extraction
Preparation of isotopes
Berkelium is produced by bombarding the lighter actinides uranium (238U) or plutonium (239Pu) with neutrons in a nuclear reactor. In the more common case of uranium fuel, plutonium is produced first by neutron capture (the so-called (n,γ) reaction or neutron activation) followed by beta-decay:
^{238}_{92}U ->[\ce{(n,\gamma)}] ^{239}_{92}U ->[\beta^-][23.5 \ \ce{min}] ^{239}_{93}Np ->[\beta^-][2.3565 \ \ce{d}] ^{239}_{94}Pu (the times are half-lives)
Plutonium-239 is further irradiated by a source that has a high neutron flux, several times higher than a conventional nuclear reactor, such as the 85-megawatt High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory in Tennessee, USA. The higher flux promotes capture reactions involving not one but several neutrons, converting 239Pu to 244Cm and then to 249Cm:
^{239}_{94}Pu ->[\ce{4(n,\gamma)}] ^{243}_{94}Pu ->[\beta^-] ^{243}_{95}Am ->[\ce{(n,\gamma)}] ^{244}_{95}Am ->[\beta^-] ^{244}_{96}Cm ->[\ce{5(n,\gamma)}] ^{249}_{96}Cm
Curium-249 has a short half-life of 64 minutes, and thus its further conversion to 250Cm has a low probability. Instead, it transforms by beta-decay into 249Bk:
^{249}_{96}Cm ->[{\beta^-}][64.15 \ \ce{min}] ^{249}_{97}Bk ->[\beta^-][330 \ \ce{d}] ^{249}_{98}Cf
The thus-produced 249Bk has a long half-life of 330 days and thus can capture another neutron. However, the product, 250Bk, again has a relatively short half-life of 3.212 hours and thus does not yield any heavier berkelium isotopes. It instead decays to the californium isotope 250Cf:
^{249}_{97}Bk ->[\ce{(n,\gamma)}] ^{250}_{97}Bk ->[\beta^-][3.212 \ \ce{h}] ^{250}_{98}Cf
Although 247Bk is the most stable isotope of berkelium, its production in nuclear reactors is very difficult because its potential progenitor 247Cm has never been observed to undergo beta decay. Thus, 249Bk is the most accessible isotope of berkelium, which is still available only in small quantities (only 0.66 grams were produced in the US over the period 1967–1983) at a high price on the order of 185 USD per microgram. It is the only berkelium isotope available in bulk quantities, and thus the only berkelium isotope whose properties can be extensively studied.
The isotope 248Bk was first obtained in 1956 by bombarding a mixture of curium isotopes with 25 MeV α-particles. Although its direct detection was hindered by strong signal interference with 245Bk, the existence of the new isotope was proven by the growth of its decay product 248Cf, which had been previously characterized. The half-life of 248Bk was initially estimated to be on the order of hours, though later work in 1965 gave a half-life in excess of 300 years (which may be due to an isomeric state). Berkelium-247 was produced during the same year by irradiating 244Cm with alpha-particles.
Berkelium-242 was synthesized in 1979 by bombarding 235U with 11B, 238U with 10B, 232Th with 14N or 232Th with 15N. It converts by electron capture to 242Cm with a half-life of a few minutes. A search for an initially suspected isotope 241Bk was then unsuccessful; 241Bk has since been synthesized.
Separation
The fact that berkelium readily assumes the oxidation state +4 in solids and is relatively stable in this state in liquids greatly assists the separation of berkelium from many other actinides. These are inevitably produced in relatively large amounts during the nuclear synthesis and often favor the +3 state. This fact was not yet known in the initial experiments, which used a more complex separation procedure. Various inorganic oxidation agents can be applied to the solutions to convert berkelium to the +4 state, such as bromates (BrO3−), bismuthates (BiO3−), chromates (CrO42− and Cr2O72−), silver(I) thiolate, lead(IV) oxide (PbO2), ozone (O3) or photochemical oxidation procedures. More recently, it has been discovered that some organic and bio-inspired molecules, such as the chelator 3,4,3-LI(1,2-HOPO), can also oxidize Bk(III) and stabilize Bk(IV) under mild conditions. Berkelium(IV) is then extracted with ion exchange, extraction chromatography or liquid-liquid extraction using HDEHP (bis-(2-ethylhexyl) phosphoric acid), amines, tributyl phosphate or various other reagents. These procedures separate berkelium from most trivalent actinides and lanthanides, except for the lanthanide cerium (lanthanides are absent in the irradiation target but are created in various nuclear fission decay chains).
A more detailed procedure adopted at the Oak Ridge National Laboratory was as follows: the initial mixture of actinides is processed with ion exchange using lithium chloride reagent, then precipitated as hydroxides, filtered and dissolved in nitric acid. It is then treated with high-pressure elution from cation exchange resins, and the berkelium phase is oxidized and extracted using one of the procedures described above. Reduction of the thus-obtained Bk(IV) to the +3 oxidation state yields a solution, which is nearly free from other actinides (but contains cerium). Berkelium and cerium are then separated with another round of ion-exchange treatment.
Bulk metal preparation
In order to characterize chemical and physical properties of solid berkelium and its compounds, a program was initiated in 1952 at the Material Testing Reactor, Arco, Idaho, US. It resulted in preparation of an eight-gram plutonium-239 target and in the first production of macroscopic quantities (0.6 micrograms) of berkelium by Burris B. Cunningham and Stanley Gerald Thompson in 1958, after a continuous reactor irradiation of this target for six years. This irradiation method was and still is the only way of producing weighable amounts of the element, and most solid-state studies of berkelium have been conducted on microgram or submicrogram-sized samples.
The world's major irradiation sources are the 85-megawatt High Flux Isotope Reactor at the Oak Ridge National Laboratory in Tennessee, USA, and the SM-2 loop reactor at the Research Institute of Atomic Reactors (NIIAR) in Dimitrovgrad, Russia, which are both dedicated to the production of transcurium elements (atomic number greater than 96). These facilities have similar power and flux levels, and are expected to have comparable production capacities for transcurium elements, although the quantities produced at NIIAR are not publicly reported. In a "typical processing campaign" at Oak Ridge, tens of grams of curium are irradiated to produce decigram quantities of californium, milligram quantities of berkelium-249 and einsteinium, and picogram quantities of fermium. In total, just over one gram of berkelium-249 has been produced at Oak Ridge since 1967.
The first berkelium metal sample, weighing 1.7 micrograms, was prepared in 1971 by the reduction of berkelium(III) fluoride with lithium vapor at 1000 °C; the fluoride was suspended on a tungsten wire above a tantalum crucible containing molten lithium. Later, metal samples weighing up to 0.5 milligrams were obtained with this method.
Similar results are obtained with berkelium(IV) fluoride. Berkelium metal can also be produced by the reduction of berkelium(IV) oxide with thorium or lanthanum.
Compounds
Oxides
Two oxides of berkelium are known, with the berkelium oxidation states of +3 (Bk2O3) and +4 (BkO2). Berkelium(IV) oxide is a brown solid, while berkelium(III) oxide is a yellow-green solid with a melting point of 1920 °C and is formed from BkO2 by reduction with molecular hydrogen:
2 BkO2 + H2 -> Bk2O3 + H2O
Upon heating to 1200 °C, the Bk2O3 oxide undergoes a phase change; it undergoes another phase change at 1750 °C. Such three-phase behavior is typical for the actinide sesquioxides. Berkelium(II) oxide, BkO, has been reported as a brittle gray solid, but its exact chemical composition remains uncertain.
Halides
In halides, berkelium assumes the oxidation states +3 and +4. The +3 state is the most stable, especially in solutions, while tetravalent berkelium halides are known only in the solid phase. The coordination of the berkelium atom in its trivalent fluoride and chloride is tricapped trigonal prismatic, with a coordination number of 9. In the trivalent bromide, it is bicapped trigonal prismatic (coordination number 8) or octahedral (coordination number 6), and in the iodide it is octahedral.
Berkelium(IV) fluoride (BkF4) is a yellow-green ionic solid and is isotypic with uranium tetrafluoride or zirconium tetrafluoride. Berkelium(III) fluoride (BkF3) is also a yellow-green solid, but it has two crystalline structures. The most stable phase at low temperatures is isotypic with yttrium(III) fluoride, while upon heating to between 350 and 600 °C, it transforms to the structure found in lanthanum trifluoride.
Visible amounts of berkelium(III) chloride (BkCl3) were first isolated and characterized in 1962, and weighed only 3 billionths of a gram. It can be prepared by introducing hydrogen chloride vapors into an evacuated quartz tube containing berkelium oxide at a temperature of about 500 °C. This green solid has a melting point of 600 °C and is isotypic with uranium(III) chloride. Upon heating to nearly its melting point, BkCl3 converts into an orthorhombic phase.
Two forms of berkelium(III) bromide are known: one with berkelium having coordination number 6, and one with coordination number 8. The latter is less stable and transforms to the former phase upon heating to about 350 °C. An important phenomenon for radioactive solids has been studied on these two crystal forms: the structure of fresh and aged 249BkBr3 samples was probed by X-ray diffraction over a period longer than 3 years, so that various fractions of berkelium-249 had beta decayed to californium-249. No change in structure was observed upon the 249BkBr3–249CfBr3 transformation. However, other differences were noted for 249BkBr3 and 249CfBr3. For example, the latter could be reduced with hydrogen to 249CfBr2, but the former could not; this result was reproduced on individual 249BkBr3 and 249CfBr3 samples, as well as on samples containing both bromides. The ingrowth of californium in berkelium occurs at a rate of 0.22% per day and is an intrinsic obstacle in studying berkelium properties. Besides chemical contamination, 249Cf, being an alpha emitter, brings undesirable self-damage of the crystal lattice and the resulting self-heating. The chemical effect can, however, be avoided by performing measurements as a function of time and extrapolating the obtained results.
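The quoted ingrowth rate is simply the decay constant of berkelium-249 expressed per day. A short consistency check in Python, using only the 330-day half-life given earlier (the small deviation from 0.22% reflects rounding of the published figures):

from math import log, exp

HALF_LIFE_DAYS = 330
decay_constant = log(2) / HALF_LIFE_DAYS        # per day, about 0.0021

# initial ingrowth rate of californium-249 in a fresh berkelium-249 sample
print(f"{100 * decay_constant:.2f}% per day")   # about 0.21% per day

# fraction of the sample converted to californium after one year of storage
print(f"{100 * (1 - exp(-decay_constant * 365)):.0f}%")  # roughly 54%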
Other inorganic compounds
The pnictides of berkelium-249 of the type BkX are known for the elements nitrogen, phosphorus, arsenic and antimony. They crystallize in the rock-salt structure and are prepared by the reaction of either berkelium hydride or metallic berkelium with these elements at elevated temperature (about 600 °C) under high vacuum.
Berkelium(III) sulfide, Bk2S3, is prepared by either treating berkelium oxide with a mixture of hydrogen sulfide and carbon disulfide vapors at 1130 °C, or by directly reacting metallic berkelium with elemental sulfur. These procedures yield brownish-black crystals.
Berkelium(III) and berkelium(IV) hydroxides are both stable in 1 molar solutions of sodium hydroxide. Berkelium(III) phosphate (BkPO4) has been prepared as a solid, which shows strong fluorescence under excitation with green light. Berkelium hydrides are produced by reacting the metal with hydrogen gas at temperatures of about 250 °C. They are non-stoichiometric, with the nominal formula BkH2+x (0 < x < 1). Several other salts of berkelium are known, including an oxysulfide (Bk2O2S) and hydrated nitrate, chloride, sulfate and oxalate salts. Thermal decomposition of the hydrated sulfate at about 600 °C in an argon atmosphere (to avoid oxidation to BkO2) yields crystals of berkelium(III) oxysulfate (Bk2O2SO4). This compound is thermally stable to at least 1000 °C in an inert atmosphere.
Organoberkelium compounds
Berkelium forms a trigonal (η5–C5H5)3Bk metallocene complex with three cyclopentadienyl rings, which can be synthesized by reacting berkelium(III) chloride with molten beryllocene (Be(C5H5)2) at about 70 °C. It has an amber color and a density of 2.47 g/cm3. The complex is stable to heating to at least 250 °C, and sublimates without melting at about 350 °C. The high radioactivity of berkelium gradually destroys the compound (within a period of weeks). One cyclopentadienyl ring in (η5–C5H5)3Bk can be substituted by chlorine to yield the dimeric chloride [Bk(C5H5)2Cl]2. The optical absorption spectra of this compound are very similar to those of (η5–C5H5)3Bk.
Applications
There is currently no use for any isotope of berkelium outside basic scientific research. Berkelium-249 is a common target nuclide to prepare still heavier transuranium elements and superheavy elements, such as lawrencium, rutherfordium and bohrium. It is also useful as a source of the isotope californium-249, which is used for studies on the chemistry of californium in preference to the more radioactive californium-252 that is produced in neutron bombardment facilities such as the HFIR.
A 22 milligram batch of berkelium-249 was prepared in a 250-day irradiation and then purified for 90 days at Oak Ridge in 2009. This target yielded the first 6 atoms of tennessine at the Joint Institute for Nuclear Research (JINR), Dubna, Russia, after bombarding it with calcium ions in the U400 cyclotron for 150 days. This synthesis was a culmination of the Russia-US collaboration between JINR and Lawrence Livermore National Laboratory on the synthesis of elements 113 to 118 which was initiated in 1989.
Nuclear fuel cycle
The nuclear fission properties of berkelium differ from those of the neighboring actinides curium and californium and suggest that berkelium would perform poorly as a fuel in a nuclear reactor. Specifically, berkelium-249 has a moderately large neutron capture cross section of 710 barns for thermal neutrons and a resonance integral of 1200 barns, but a very low fission cross section for thermal neutrons. In a thermal reactor, much of it will therefore be converted to berkelium-250, which quickly decays to californium-250. In principle, berkelium-249 can sustain a nuclear chain reaction in a fast breeder reactor. Its critical mass is relatively high at 192 kg; it can be reduced with a water or steel reflector, but would still exceed the world production of this isotope.
Berkelium-247 can maintain a chain reaction both in a thermal-neutron and in a fast-neutron reactor; however, its production is rather complex, and the available amounts are far below its critical mass, which is about 75.7 kg for a bare sphere, 41.2 kg with a water reflector and 35.2 kg with a steel reflector (30 cm thick).
Health issues
Little is known about the effects of berkelium on the human body, and analogies with other elements may not be drawn because of different radiation products (electrons for berkelium and alpha particles, neutrons, or both for most other actinides). The low energy of the electrons emitted from berkelium-249 (less than 126 keV) hinders its detection, due to signal interference with other decay processes, but also makes this isotope relatively harmless to humans as compared to other actinides. However, berkelium-249 transforms with a half-life of only 330 days to the strong alpha-emitter californium-249, which is rather dangerous and has to be handled in a glovebox in a dedicated laboratory.
Most available berkelium toxicity data originate from research on animals. Upon ingestion by rats, only about 0.01% of berkelium ends up in the bloodstream. From there, about 65% goes to the bones, where it remains for about 50 years, 25% to the lungs (biological half-life about 20 years), 0.035% to the testicles or 0.01% to the ovaries, where berkelium remains indefinitely. The balance of about 10% is excreted. In all these organs berkelium might promote cancer, and in the skeleton its radiation can damage red blood cells. The maximum permissible amount of berkelium-249 in the human skeleton is 0.4 nanograms.
References
Bibliography
External links
Berkelium at The Periodic Table of Videos (University of Nottingham)
|
https://en.wikipedia.org/wiki/Bauxite
|
Bauxite is a sedimentary rock with a relatively high aluminium content. It is the world's main source of aluminium and gallium. Bauxite consists mostly of the aluminium minerals gibbsite (Al(OH)3), boehmite (γ-AlO(OH)) and diaspore (α-AlO(OH)), mixed with the two iron oxides goethite (FeO(OH)) and haematite (Fe2O3), the aluminium clay mineral kaolinite (Al2Si2O5(OH)4) and small amounts of anatase (TiO2) and ilmenite (FeTiO3 or FeO.TiO2).
Bauxite appears dull in luster and is reddish-brown, white, or tan.
In 1821, the French geologist Pierre Berthier discovered bauxite near the village of Les Baux in Provence, southern France.
Formation
Numerous classification schemes have been proposed for bauxite, but no consensus has been reached.
Vadász (1951) distinguished lateritic bauxites (silicate bauxites) from karst bauxite ores (carbonate bauxites):
The carbonate bauxites occur predominantly in Europe, Guyana, Suriname, and Jamaica above carbonate rocks (limestone and dolomite), where they were formed by lateritic weathering and residual accumulation of intercalated clay layers – dispersed clays which were concentrated as the enclosing limestones gradually dissolved during chemical weathering.
The lateritic bauxites are found mostly in the countries of the tropics. They were formed by lateritization of various silicate rocks such as granite, gneiss, basalt, syenite, and shale. In comparison with the iron-rich laterites, the formation of bauxites depends even more on intense weathering conditions in a location with very good drainage. This enables the dissolution of the kaolinite and the precipitation of the gibbsite. Zones with highest aluminium content are frequently located below a ferruginous surface layer. The aluminium hydroxide in the lateritic bauxite deposits is almost exclusively gibbsite.
In the case of Jamaica, recent analysis of the soils showed elevated levels of cadmium, suggesting that the bauxite originates from Miocene volcanic ash deposits from episodes of significant volcanism in Central America.
Production and reserves
Australia is the largest producer of bauxite, followed by Guinea and China. Bauxite is usually strip mined because it is almost always found near the surface of the terrain, with little or no overburden. Increased aluminium recycling, which requires less electric power than producing aluminium from ores, will considerably extend the world's bauxite reserves.
Aluminium production
Approximately 70% to 80% of the world's dry bauxite production is processed first into alumina and then into aluminium by electrolysis. Bauxite rocks are typically classified according to their intended commercial application: metallurgical, abrasive, cement, chemical, and refractory.
Bauxite ore is usually heated in a pressure vessel along with a sodium hydroxide solution at a temperature of 150 to 200 °C. At these temperatures, the aluminium is dissolved as sodium aluminate (the Bayer process). The aluminium compounds in the bauxite may be present as gibbsite (Al(OH)3), boehmite (AlOOH) or diaspore (AlOOH); the different forms of the aluminium component will dictate the extraction conditions. The undissolved waste remaining after the aluminium compounds are extracted, the bauxite tailings, contains iron oxides, silica, calcia, titania and some unreacted alumina. After separation of the residue by filtering, pure gibbsite is precipitated when the liquid is cooled and then seeded with fine-grained aluminium hydroxide. The gibbsite is usually converted into aluminium oxide, Al2O3, by heating in rotary kilns or fluid flash calciners to a temperature in excess of 1,000 °C. This aluminium oxide is dissolved at a temperature of about 960 °C in molten cryolite. Next, this molten substance can yield metallic aluminium by passing an electric current through it in the process of electrolysis, which is called the Hall–Héroult process, named after its American and French discoverers.
Prior to the invention of this process, and prior to the Deville process, aluminium ore was refined by heating ore along with elemental sodium or potassium in a vacuum. The method was complicated and consumed materials that were themselves expensive at that time. This made early elemental aluminium more expensive than gold.
Maritime safety
As a bulk cargo, bauxite is a Group A cargo that may liquefy if excessively moist. Liquefaction and the free surface effect can cause the cargo to shift rapidly inside the hold and make the ship unstable, potentially sinking the ship. One vessel suspected to have been sunk in this way was the MS Bulk Jupiter in 2015. One method which can demonstrate this effect is the "can test", in which a sample of the material is placed in a cylindrical can and struck against a surface many times. If a moist slurry forms in the can, then there is a likelihood for the cargo to liquefy; although conversely, even if the sample remains dry it does not conclusively prove that it will remain that way, or that it is safe for loading.
Source of gallium
Bauxite is the main source of the rare metal gallium.
During the processing of bauxite to alumina in the Bayer process, gallium accumulates in the sodium hydroxide liquor. From this it can be extracted by a variety of methods. The most recent is the use of ion-exchange resin. Achievable extraction efficiencies critically depend on the original concentration in the feed bauxite. At a typical feed concentration of 50 ppm, about 15 percent of the contained gallium is extractable. The remainder reports to the red mud and aluminium hydroxide streams.
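These figures imply a simple mass balance. The Python sketch below is purely illustrative: the 50 ppm feed grade and 15% recovery are the values quoted above, while the tonnage passed in is an arbitrary example input.

def recoverable_gallium(bauxite_tonnes, feed_ppm=50, recovery=0.15):
    # total gallium contained in the feed, in grams (1 ppm = 1 g per tonne)
    contained_g = bauxite_tonnes * feed_ppm
    # fraction actually extractable from the Bayer liquor
    return contained_g * recovery

# per tonne of bauxite: 50 g of gallium contained, about 7.5 g recoverable
print(recoverable_gallium(1))
# for a hypothetical 1,000,000-tonne refinery feed: about 7.5 tonnes of gallium
print(recoverable_gallium(1_000_000) / 1e6, "tonnes")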
Bauxite is also a potential source for vanadium.
See also
Bauxite, Arkansas
Rio Tinto Alcan
United Company RUSAL
MS Bulk Jupiter
References
Further reading
Bárdossy, G. (1982): Karst Bauxites: Bauxite deposits on carbonate rocks. Elsevier Sci. Publ. 441 p.
Bárdossy, G. and Aleva, G.J.J. (1990): Lateritic Bauxites. Developments in Economic Geology 27, Elsevier Sci. Publ. 624 p.
Grant, C.; Lalor, G.; Vutchkov, M. (2005). Comparison of bauxites from Jamaica, the Dominican Republic and Suriname. Journal of Radioanalytical and Nuclear Chemistry, Vol. 266, No. 3, pp. 385–388.
Hanilçi, N. (2013). Geological and geochemical evolution of the Bolkardaği bauxite deposits, Karaman, Turkey: Transformation from shale to bauxite. Journal of Geochemical Exploration
External links
USGS Minerals Information: Bauxite
Mineral Information Institute
|
https://en.wikipedia.org/wiki/Book
|
A book is a medium for recording information in the form of writing or images, typically composed of many pages (made of papyrus, parchment, vellum, or paper) bound together and protected by a cover. It can also be a handwritten or printed work of fiction or nonfiction, usually on sheets of paper fastened or bound together within covers. The technical term for this physical arrangement is codex (plural, codices). In the history of hand-held physical supports for extended written compositions or records, the codex replaces its predecessor, the scroll. A single sheet in a codex is a leaf and each side of a leaf is a page.
As an intellectual object, a book is prototypically a composition of such great length that it takes a considerable investment of time to compose and a considerable, though not as extensive, investment of time to read. In a restricted sense, a book is a self-sufficient section or part of a longer composition, a usage reflecting that, in antiquity, long works had to be written on several scrolls and each scroll had to be identified by the book it contained. Each part of Aristotle's Physics is called a book. In an unrestricted sense, a book is the compositional whole of which such sections, whether called books or chapters or parts, are parts.
The intellectual content in a physical book need not be a composition, nor even be called a book. Books can consist only of drawings, engravings or photographs, crossword puzzles or cut-out dolls. In a physical book, the pages can be left blank or can feature an abstract set of lines to support entries, such as in an account book, appointment book, autograph book, notebook, diary or sketchbook. Some physical books are made with pages thick and sturdy enough to support other physical objects, like a scrapbook or photograph album. Books may be distributed in electronic form as ebooks and other formats.
Although in ordinary academic parlance a monograph is understood to be a specialist academic work, rather than a reference work on a scholarly subject, in library and information science monograph denotes more broadly any non-serial publication complete in one volume (book) or a finite number of volumes (even a novel like Proust's seven-volume In Search of Lost Time), in contrast to serial publications like a magazine, journal or newspaper. An avid reader or collector of books is a bibliophile or, colloquially, "bookworm". Books are traded at both regular stores and specialized bookstores, and people can read borrowed books, often for free, at libraries. Google has estimated that by 2010, approximately 130,000,000 titles had been published.
In some wealthier nations, the sale of printed books has decreased because of the increased usage of e-books. However, in most countries, printed books continue to outsell their digital counterparts due to many people still preferring to read in a traditional way. The 21st century has also seen a rapid rise in the popularity of audiobooks, which are recordings of books being read aloud.
Etymology
The word book comes from Old English bōc, which in turn comes from the Germanic root *bōk-, cognate to 'beech'. In Slavic languages such as Russian, Bulgarian and Macedonian, the word for 'letter' (bukva) is cognate with 'beech'. In Russian, Serbian and Macedonian, the word bukvar' or bukvar refers to a primary school textbook that helps young children master the techniques of reading and writing. It is thus conjectured that the earliest Indo-European writings may have been carved on beech wood. The Latin word codex, meaning a book in the modern sense (bound and with separate leaves), originally meant 'block of wood'.
History
Antiquity
When writing systems were created in ancient civilizations, a variety of objects, such as stone, clay, tree bark, metal sheets, and bones, were used for writing; these are studied in epigraphy.
Tablet
A tablet is a physically robust writing medium, suitable for casual transport and writing. Clay tablets were flattened and mostly dry pieces of clay that could be easily carried, and impressed with a stylus. They were used as a writing medium, especially for writing in cuneiform, throughout the Bronze Age and well into the Iron Age. Wax tablets were pieces of wood covered in a coating of wax thick enough to record the impressions of a stylus. They were the normal writing material in schools, in accounting, and for taking notes. They had the advantage of being reusable: the wax could be melted, and reformed into a blank.
The custom of binding several wax tablets together (Roman pugillares) is a possible precursor of modern bound (codex) books. The etymology of the word codex (block of wood) also suggests that it may have developed from wooden wax tablets.
Scroll
Scrolls can be made from papyrus, a thick paper-like material made by weaving the stems of the papyrus plant, then pounding the woven sheet with a hammer-like tool until it is flattened. Papyrus was used for writing in Ancient Egypt, perhaps as early as the First Dynasty, although the first evidence is from the account books of King Neferirkare Kakai of the Fifth Dynasty (about 2400 BC). Papyrus sheets were glued together to form a scroll. Tree bark such as lime and other materials were also used.
According to Herodotus (History 5:58), the Phoenicians brought writing and papyrus to Greece around the 10th or 9th century BC. The Greek words for papyrus as a writing material (biblion) and book (biblos) come from the Phoenician port town Byblos, through which papyrus was exported to Greece. From Greek we also derive the word tome (tomos), which originally meant a slice or piece and from there began to denote "a roll of papyrus". Tomus was used by the Latins with exactly the same meaning as volumen (see also below the explanation by Isidore of Seville).
Whether made from papyrus, parchment, or paper, scrolls were the dominant form of book in the Hellenistic, Roman, Chinese, Hebrew, and Macedonian cultures. The Romans and Etruscans also made 'books' out of folded linen called in Latin Libri lintei, the only extant example of which is the Etruscan Liber Linteus. The more modern codex book format form took over the Roman world by late antiquity, but the scroll format persisted much longer in Asia.
Codex
Isidore of Seville (died 636) explained the then-current relation between a codex, book, and scroll in his Etymologiae (VI.13): "A codex is composed of many books; a book is of one scroll. It is called codex by way of metaphor from the trunks (codex) of trees or vines, as if it were a wooden stock, because it contains in itself a multitude of books, as it were of branches". Modern usage differs.
A codex (in modern usage) is the first information repository that modern people would recognize as a "book": leaves of uniform size bound in some manner along one edge, and typically held between two covers made of some more robust material. The first written mention of the codex as a form of book is from Martial, in his Apophoreta CLXXXIV at the end of the first century, where he praises its compactness. However, the codex never gained much popularity in the pagan Hellenistic world, and only within the Christian community did it gain widespread use. This change happened gradually during the 3rd and 4th centuries, and the reasons for adopting the codex form of the book are several: the format is more economical, as both sides of the writing material can be used; and it is portable, searchable, and easy to conceal. A book is much easier to read, to find a page that you want, and to flip through. A scroll is more awkward to use. The Christian authors may also have wanted to distinguish their writings from the pagan and Judaic texts written on scrolls. In addition, some metal books were made, that required smaller pages of metal, instead of an impossibly long, unbending scroll of metal. A book can also be easily stored in more compact places, or side by side in a tight library or shelf space.
Manuscripts
The fall of the Roman Empire in the 5th century AD saw the decline of the culture of ancient Rome. Papyrus became difficult to obtain due to lack of contact with Egypt, and parchment, which had been used for centuries, became the main writing material. Parchment is a material made from processed animal skin and used, mainly in the past, for writing on, especially in the Middle Ages. Parchment is most commonly made of calfskin, sheepskin, or goatskin. It was historically used for writing documents, notes, or the pages of a book, and first came into use around the 200s BC. Parchment is limed, scraped and dried under tension. It is not tanned, and is thus different from leather. This makes it more suitable for writing on, but leaves it very reactive to changes in relative humidity and makes it revert to rawhide if overly wet.
Monasteries carried on the Latin writing tradition in the Western Roman Empire. Cassiodorus, in the monastery of Vivarium (established around 540), stressed the importance of copying texts. St. Benedict of Nursia, in his Rule of Saint Benedict (completed around the middle of the 6th century) later also promoted reading. The Rule of Saint Benedict (Ch. XLVIII), which set aside certain times for reading, greatly influenced the monastic culture of the Middle Ages and is one of the reasons why the clergy were the predominant readers of books. The tradition and style of the Roman Empire still dominated, but slowly the peculiar medieval book culture emerged.
Before the invention and adoption of the printing press, almost all books were copied by hand, which made books expensive and comparatively rare. Smaller monasteries usually had only a few dozen books, medium-sized perhaps a few hundred. By the 9th century, larger collections held around 500 volumes and even at the end of the Middle Ages, the papal library in Avignon and Paris library of the Sorbonne held only around 2,000 volumes.
The scriptorium of the monastery was usually located over the chapter house. Artificial light was forbidden for fear it might damage the manuscripts. There were five types of scribes:
Calligraphers, who dealt in fine book production
Copyists, who dealt with basic production and correspondence
Correctors, who collated and compared a finished book with the manuscript from which it had been produced
Illuminators, who painted illustrations
Rubricators, who painted in the red letters
The bookmaking process was long and laborious. The parchment had to be prepared, then the unbound pages were planned and ruled with a blunt tool or lead, after which the text was written by the scribe, who usually left blank areas for illustration and rubrication. Finally, the book was bound by the bookbinder.
Different types of ink were known in antiquity, usually prepared from soot and gum, and later also from gall nuts and iron vitriol. This gave writing a brownish black color, but black or brown were not the only colors used. There are texts written in red or even gold, and different colors were used for illumination. For very luxurious manuscripts the whole parchment was colored purple, and the text was written on it with gold or silver (for example, Codex Argenteus).
Irish monks introduced spacing between words in the 7th century. This facilitated reading, as these monks tended to be less familiar with Latin. However, the use of spaces between words did not become commonplace before the 12th century. It has been argued that the use of spacing between words shows the transition from semi-vocalized reading into silent reading.
The first books used parchment or vellum (calfskin) for the pages. The book covers were made of wood and covered with leather. Because dried parchment tends to assume the form it had before processing, the books were fitted with clasps or straps. During the later Middle Ages, when public libraries appeared, up to the 18th century, books were often chained to a bookshelf or a desk to prevent theft. These chained books are called libri catenati.
At first, books were copied mostly in monasteries, one at a time. With the rise of universities in the 13th century, the Manuscript culture of the time led to an increase in the demand for books, and a new system for copying books appeared. The books were divided into unbound leaves (pecia), which were lent out to different copyists, so the speed of book production was considerably increased. The system was maintained by secular stationers guilds, which produced both religious and non-religious material.
Judaism has kept the art of the scribe alive up to the present. According to Jewish tradition, the Torah scroll placed in a synagogue must be written by hand on parchment and a printed book would not do, though the congregation may use printed prayer books and printed copies of the Scriptures are used for study outside the synagogue. A sofer "scribe" is a highly respected member of many Jewish communities.
Middle East
People of various religious (Jews, Christians, Zoroastrians, Muslims) and ethnic backgrounds (Syriac, Coptic, Persian, Arab etc.) in the Middle East also produced and bound books in the Islamic Golden Age (mid 8th century to 1258), developing advanced techniques in Islamic calligraphy, miniatures and bookbinding. A number of cities in the medieval Islamic world had book production centers and book markets. Yaqubi (died 897) says that in his time Baghdad had over a hundred booksellers. Book shops were often situated around the town's principal mosque, as in Marrakesh, Morocco, which has a street named Kutubiyyin ('booksellers' in English); the famous Koutoubia Mosque is so named because of its location on this street.
The medieval Muslim world also used a method of reproducing reliable copies of a book in large quantities known as check reading, in contrast to the traditional method of a single scribe producing only a single copy of a single manuscript. In the check reading method, only "authors could authorize copies, and this was done in public sessions in which the copyist read the copy aloud in the presence of the author, who then certified it as accurate." With this check-reading system, "an author might produce a dozen or more copies from a single reading," and with two or more readings, "more than one hundred copies of a single book could easily be produced." By using as writing material the relatively cheap paper instead of parchment or papyrus the Muslims, in the words of Pedersen "accomplished a feat of crucial significance not only to the history of the Islamic book, but also to the whole world of books".
Wood block printing
In woodblock printing, a relief image of an entire page was carved into blocks of wood, inked, and used to print copies of that page. This method originated in China, in the Han dynasty (before 220 AD), as a method of printing on textiles and later paper, and was widely used throughout East Asia. The oldest dated book printed by this method is The Diamond Sutra (868 AD). The method (called woodcut when used in art) arrived in Europe in the early 14th century. Books (known as block-books), as well as playing-cards and religious pictures, began to be produced by this method. Creating an entire book was a painstaking process, requiring a hand-carved block for each page; and the wood blocks tended to crack, if stored for long. The monks or people who wrote them were paid highly.
Movable type and incunabula
The Chinese inventor Bi Sheng made movable type of earthenware around the mid-11th century, but there are no known surviving examples of his printing. Around 1450, in what is commonly regarded as an independent invention, Johannes Gutenberg invented movable type in Europe, along with innovations in casting the type based on a matrix and hand mould. This invention gradually made books less expensive to produce and more widely available.
Early printed books, single sheets and images which were created before 1501 in Europe are known as incunables or incunabula. "A man born in 1453, the year of the fall of Constantinople, could look back from his fiftieth year on a lifetime in which about eight million books had been printed, more perhaps than all the scribes of Europe had produced since Constantine founded his city in AD 330."
19th century to 21st centuries
Steam-powered printing presses became popular in the early 19th century. These machines could print 1,100 sheets per hour, but workers could only set 2,000 letters per hour. Monotype and Linotype typesetting machines were introduced in the late 19th century. They could set more than 6,000 letters per hour and an entire line of type at once. There have been numerous improvements in the printing press since then. In addition, the conditions for freedom of the press have been improved through the gradual relaxation of restrictive censorship laws (see also intellectual property, public domain and copyright). By the mid-20th century, European book production had risen to over 200,000 titles per year.
Throughout the 20th century, libraries have faced an ever-increasing rate of publishing, sometimes called an information explosion. The advent of electronic publishing and the internet means that much new information is not printed in paper books, but is made available online through a digital library, on CD-ROM, in the form of ebooks or other online media. An on-line book is an ebook that is available online through the internet. Though many books are produced digitally, most digital versions are not available to the public, and there is no decline in the rate of paper publishing. There is an effort, however, to convert books that are in the public domain into a digital medium for unlimited redistribution and infinite availability. This effort is spearheaded by Project Gutenberg combined with Distributed Proofreaders. There have also been new developments in the process of publishing books. Technologies such as POD or "print on demand", which make it possible to print as few as one book at a time, have made self-publishing (and vanity publishing) much easier and more affordable. On-demand publishing has allowed publishers, by avoiding the high costs of warehousing, to keep low-selling books in print rather than declaring them out of print.
Indian manuscripts
An image of the goddess Saraswati dated 132 AD, excavated from Kankali Tila, depicts her holding in her left hand a bound and tied palm-leaf or birch-bark manuscript. In India, bound manuscripts made of birch bark or palm leaf have existed side by side since antiquity. The text in palm leaf manuscripts was inscribed with a knife pen on rectangular cut and cured palm leaf sheets; colouring was then applied to the surface and wiped off, leaving the ink in the incised grooves. Each sheet typically had a hole through which a string could pass, and with these the sheets were tied together with a string to bind like a book.
Mesoamerican codices
The codices of pre-Columbian Mesoamerica (Mexico and Central America) had the same form as the European codex, but were instead made with long folded strips of either fig bark (amatl) or plant fibers, often with a layer of whitewash applied before writing. New World codices were written as late as the 16th century (see Maya codices and Aztec codices). Those written before the Spanish conquests seem all to have been single long sheets folded concertina-style, sometimes written on both sides of the local amatl paper.
Modern manufacturing
The methods used for the printing and binding of books continued fundamentally unchanged from the 15th century into the early 20th century. While there was more mechanization, a book printer in 1900 had much in common with Gutenberg. Gutenberg's invention was the use of movable metal types, assembled into words, lines, and pages and then printed by letterpress to create multiple copies. Modern paper books are printed on papers designed specifically for printed books. Traditionally, book papers are off-white or low-white papers (easier to read), are opaque to minimize the show-through of text from one side of the page to the other and are (usually) made to tighter caliper or thickness specifications, particularly for case-bound books. Different paper qualities are used depending on the type of book: Machine finished coated papers, woodfree uncoated papers, coated fine papers and special fine papers are common paper grades.
Today, the majority of books are printed by offset lithography. When a book is printed, the pages are laid out on the plate so that after the printed sheet is folded the pages will be in the correct sequence. Books tend to be manufactured nowadays in a few standard sizes. The sizes of books are usually specified as "trim size": the size of the page after the sheet has been folded and trimmed. The standard sizes result from sheet sizes (therefore machine sizes) which became popular 200 or 300 years ago, and have come to dominate the industry. British conventions in this regard prevail throughout the English-speaking world, except for the US. The European book manufacturing industry works to a completely different set of standards.
Processes
Layout
Modern bound books are organized according to a particular format called the book's layout. Although there is great variation in layout, modern books tend to adhere to a set of rules with regard to what the parts of the layout are and what their content usually includes. A basic layout will include a front cover, a back cover and the book's content which is called its body copy or content pages. The front cover often bears the book's title (and subtitle, if any) and the name of its author or editor(s). The inside front cover page is usually left blank in both hardcover and paperback books. The next section, if present, is the book's front matter, which includes all textual material after the front cover but not part of the book's content such as a foreword, a dedication, a table of contents and publisher data such as the book's edition or printing number and place of publication. Between the body copy and the back cover goes the end matter which would include any indices, sets of tables, diagrams, glossaries or lists of cited works (though an edited book with several authors usually places cited works at the end of each authored chapter). The inside back cover page, like that inside the front cover, is usually blank. The back cover is the usual place for the book's ISBN and maybe a photograph of the author(s)/ editor(s), perhaps with a short introduction to them. Also here often appear plot summaries, barcodes and excerpted reviews of the book.
The body of the books is usually divided into parts, chapters, sections and sometimes subsections that are composed of at least a paragraph or more.
Printing
Some books, particularly those with shorter runs (i.e. with fewer copies) will be printed on sheet-fed offset presses, but most books are now printed on web presses, which are fed by a continuous roll of paper, and can consequently print more copies in a shorter time. As the production line circulates, a complete "book" is collected together in one stack of pages, and another machine carries out the folding, pleating, and stitching of the pages into bundles of signatures (sections of pages) ready to go into the gathering line. The pages of a book are printed two at a time, not as one complete book. Excess numbers are printed to make up for any spoilage due to make-readies or test pages to assure final print quality.
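The arrangement of pages on each sheet, known as imposition, is what makes the folded and gathered signatures read in the correct order. The Python sketch below illustrates the idea for a simple saddle-stitched signature; it is a schematic example only, not the imposition scheme of any particular press, and it ignores page orientation, creep and trim.

def saddle_stitch_pairs(page_count):
    # Returns the left/right page pairs for the front and back of each sheet,
    # outermost sheet first; page_count must be a multiple of 4.
    assert page_count % 4 == 0
    sides = []
    for sheet in range(page_count // 4):
        front = (page_count - 2 * sheet, 2 * sheet + 1)      # e.g. (8, 1) for an 8-page booklet
        back = (2 * sheet + 2, page_count - 2 * sheet - 1)   # e.g. (2, 7)
        sides.append((front, back))
    return sides

# An 8-page signature: sheet 1 carries pages 8|1 and 2|7, sheet 2 carries 6|3 and 4|5
print(saddle_stitch_pairs(8))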
A make-ready is the preparatory work carried out by the pressmen to get the printing press up to the required quality of impression. Included in make-ready is the time taken to mount the plate onto the machine, clean up any mess from the previous job, and get the press up to speed. As soon as the pressman decides that the printing is correct, all the make-ready sheets will be discarded, and the press will start making books. Similar make readies take place in the folding and binding areas, each involving spoilage of paper.
Binding
After the signatures are folded and gathered, they move into the bindery. In the middle of last century there were still many trade binders—stand-alone binding companies which did no printing, specializing in binding alone. At that time, because of the dominance of letterpress printing, typesetting and printing took place in one location, and binding in a different factory. When type was all metal, a typical book's worth of type would be bulky, fragile and heavy. The less it was moved in this condition the better: so printing would be carried out in the same location as the typesetting. Printed sheets on the other hand could easily be moved. Now, because of increasing computerization of preparing a book for the printer, the typesetting part of the job has flowed upstream, where it is done either by separately contracting companies working for the publisher, by the publishers themselves, or even by the authors. Mergers in the book manufacturing industry mean that it is now unusual to find a bindery which is not also involved in book printing (and vice versa).
If the book is a hardback its path through the bindery will involve more points of activity than if it is a paperback. Unsewn binding is now increasingly common. The signatures of a book can also be held together by "Smyth sewing" using needles, "McCain sewing", using drilled holes often used in schoolbook binding, or "notch binding", where gashes about an inch long are made at intervals through the fold in the spine of each signature. The rest of the binding process is similar in all instances. Sewn and notch bound books can be bound as either hardbacks or paperbacks.
Finishing
"Making cases" happens off-line and prior to the book's arrival at the binding line. In the most basic case-making, two pieces of cardboard are placed onto a glued piece of cloth with a space between them into which is glued a thinner board cut to the width of the spine of the book. The overlapping edges of the cloth (about 5/8" all round) are folded over the boards, and pressed down to adhere. After case-making the stack of cases will go to the foil stamping area for adding decorations and type.
Digital printing
Recent developments in book manufacturing include the development of digital printing. Book pages are printed, in much the same way as an office copier works, using toner rather than ink. Each book is printed in one pass, not as separate signatures. Digital printing has permitted the manufacture of much smaller quantities than offset, in part because of the absence of make readies and of spoilage. One might think of a web press as printing quantities over 2000, quantities from 250 to 2000 being printed on sheet-fed presses, and digital presses doing quantities below 250. These numbers are of course only approximate and will vary from supplier to supplier, and from book to book depending on its characteristics. Digital printing has opened up the possibility of print-on-demand, where no books are printed until after an order is received from a customer.
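The run-length thresholds mentioned above amount to a rough decision rule, which the following Python sketch makes explicit; the cut-off values are the approximate figures from this paragraph and, as noted, vary between suppliers and titles.

def suggest_press(copies):
    # approximate rule of thumb based on the run lengths discussed above
    if copies > 2000:
        return "web press"
    if copies >= 250:
        return "sheet-fed offset press"
    return "digital press (including print-on-demand)"

for quantity in (50, 500, 5000):
    print(quantity, "->", suggest_press(quantity))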
Ebook
In the 2000s, due to the rise in availability of affordable handheld computing devices, the opportunity to share texts through electronic means became an appealing option for media publishers. Thus the "ebook" emerged. The term ebook is a contraction of "electronic book" and refers to a book-length publication in digital form. An ebook is usually made available through the internet, but also on CD-ROM and other media. Ebooks may be read either on a general-purpose computing device with a backlit display, such as a traditional computer, a smartphone, or a tablet computer, or on a portable e-ink display device known as an ebook reader, such as the Sony Reader, Barnes & Noble Nook, Kobo eReader, or the Amazon Kindle. Ebook readers attempt to mimic the experience of reading a print book by using e-ink technology, whose displays, like paper, are read by reflected ambient light rather than a backlight.
Audiobooks
Audiobooks, or recordings of people reading books aloud, were first created in 1932 in the United States. The first audiobooks were produced by the American Foundation for the Blind on vinyl records, each side of which could hold 15 minutes of recording. The first recorded pieces were some of William Shakespeare's plays, the Constitution of the United States, and the novel As the Earth Turns by Gladys Hasty Carroll. Over the course of the 20th century, with the arrival of cassette tapes and compact discs, audiobooks began to be sold by booksellers, who often had dedicated sections, and publishers created divisions within their companies dedicated to audiobooks. By the turn of the millennium, audiobooks were distributed digitally on dedicated devices and began to feature different narrators for different parts. Some companies, such as the Amazon subsidiary Audible, work exclusively in audiobooks, and, while their merits relative to print remain debated, sales of audiobooks have continued to grow strongly.
Design
Book design is the art of incorporating the content, style, format, design, and sequence of the various components of a book into a coherent whole. In the words of Jan Tschichold, book design "though largely forgotten today, methods and rules upon which it is impossible to improve have been developed over centuries. To produce perfect books these rules have to be brought back to life and applied." Richard Hendel describes book design as "an arcane subject" and refers to the need for a context to understand what that means. Many different creators can contribute to book design, including graphic designers, artists and editors.
Sizes
The size of a modern book is based on the printing area of a common flatbed press. The pages of type were arranged and clamped in a frame, so that when printed on a sheet of paper the full size of the press, the pages would be right side up and in order when the sheet was folded, and the folded edges trimmed. Each fold doubles the number of leaves; this arithmetic is sketched after the size lists below.
The most common book sizes are:
Quarto (4to): the sheet of paper is folded twice, forming four leaves (eight pages) approximately 11–13 inches (c. 30 cm) tall
Octavo (8vo): the most common size for current hardcover books. The sheet is folded three times into eight leaves (16 pages) up to about 9 inches (c. 23 cm) tall.
Duodecimo (12mo): a size between 8vo and 16mo, up to about 7 inches (c. 18 cm) tall
Sextodecimo (16mo): the sheet is folded four times, forming 16 leaves (32 pages) up to about 6 inches (c. 15 cm) tall
Sizes smaller than 16mo are:
24mo: up to about 5 inches (c. 13 cm) tall.
32mo: up to 5 inches (c. 12 cm) tall.
48mo: up to 4 inches (c. 10 cm) tall.
64mo: up to 3 inches (c. 8 cm) tall.
Small books can be called booklets.
Sizes larger than quarto are:
Folio: up to 15 inches (c. 38 cm) tall.
Elephant Folio: up to 23 inches (c. 58 cm) tall.
Atlas Folio: up to 25 inches (c. 63 cm) tall.
Double Elephant Folio: up to 50 inches (c. 127 cm) tall.
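As the format names suggest, each additional fold of the sheet doubles the number of leaves, and each leaf carries two pages. A minimal sketch of that arithmetic (formats such as 12mo, which are not simple power-of-two foldings, are omitted):

```python
def leaves_and_pages(folds: int) -> tuple[int, int]:
    leaves = 2 ** folds   # each fold doubles the number of leaves
    pages = 2 * leaves    # each leaf has a front and a back page
    return leaves, pages

for name, folds in [("folio", 1), ("quarto (4to)", 2),
                    ("octavo (8vo)", 3), ("sextodecimo (16mo)", 4)]:
    print(name, leaves_and_pages(folds))
# octavo (8vo) (8, 16) -- matching "eight leaves (16 pages)" above
```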
The largest extant medieval manuscript in the world is the Codex Gigas, which measures 92 × 50 × 22 cm. The world's largest book is made of stone and is in Kuthodaw Pagoda (Burma).
Types
By content
A common separation by content is between fiction and non-fiction books. This simple separation can be found in most collections, libraries, and bookstores. There are other types, such as books of sheet music.
Fiction
Many of the books published today are "fiction", meaning that they contain invented material, and are creative literature. Other literary forms such as poetry are included in the broad category. Most fiction is additionally categorized by literary form and genre.
The novel is the most common form of fiction book. Novels are stories that typically feature a plot, setting, themes and characters. Stories and narrative are not restricted to any topic; a novel can be whimsical, serious or controversial. The novel has had a tremendous impact on entertainment and publishing markets. Novella is a term sometimes used for prose fiction of typically between 17,500 and 40,000 words, and novelette for between 7,500 and 17,500 words. A short story may be any length up to 10,000 words, but these word limits vary.
Comic books or graphic novels are books in which the story is illustrated. The characters and narrators use speech or thought bubbles to express verbal language.
Non-fiction
Non-fiction books are in principle based on fact, on subjects such as history, politics, social and cultural issues, as well as autobiographies and memoirs. Nearly all academic literature is non-fiction. A reference book is a general type of non-fiction book which provides information as opposed to telling a story, essay, commentary, or otherwise supporting a point of view.
An almanac is a very general reference book, usually one-volume, with lists of data and information on many topics. An encyclopedia is a book or set of books designed to have more in-depth articles on many topics. A book listing words, their etymology, meanings, and other information is called a dictionary. A book which is a collection of maps is an atlas. A more specific reference book with tables or lists of data and information about a certain topic, often intended for professional use, is often called a handbook. Books which try to list references and abstracts in a certain broad area may be called an index, such as Engineering Index, or abstracts such as chemical abstracts and biological abstracts.
Books with technical information on how to do something or how to use some equipment are called instruction manuals. Other popular how-to books include cookbooks and home improvement books.
Students typically store and carry textbooks and schoolbooks for study purposes.
Unpublished
Many types of book are private, often filled in by the owner, for a variety of personal records. Elementary school pupils often use workbooks, which are published with spaces or blanks to be filled by them for study or homework. In US higher education, it is common for a student to take an exam using a blue book.
There is a large set of books that are made only to write private ideas, notes, and accounts. These books are rarely published and are typically destroyed or remain private. Notebooks are blank papers to be written in by the user. Students and writers commonly use them for taking notes. Scientists and other researchers use lab notebooks to record their notes. They often feature spiral coil bindings at the edge so that pages may easily be torn out.
Address books, phone books, and calendar/appointment books are commonly used on a daily basis for recording appointments, meetings and personal contact information. Books for recording periodic entries by the user, such as daily information about a journey, are called logbooks or logs. A similar book for writing the owner's daily private personal events, information, and ideas is called a diary or personal journal. Businesses use accounting books such as journals and ledgers to record financial data in a practice called bookkeeping (now usually held on computers rather than in hand-written form).
Other
There are several other types of books which are not commonly found under this system. Albums are books for holding a group of items belonging to a particular theme, such as a set of photographs, card collections, and memorabilia. One common example is stamp albums, which are used by many hobbyists to protect and organize their collections of postage stamps. Such albums are often made using removable plastic pages held inside a ringed binder or other similar holder. Picture books are books for children with pictures on every page and less text (or even no text).
Hymnals are books with collections of musical hymns that can typically be found in churches. Prayerbooks or missals are books that contain written prayers and are commonly carried by monks, nuns, and other devoted followers or clergy. Lap books are a learning tool created by students.
Decodable readers and leveling
A leveled book collection is a set of books organized in levels of difficulty, from easy books appropriate for an emergent reader to longer, more complex books suitable for advanced readers. Decodable readers or books are a specialized type of leveled book that uses only decodable text, including controlled lists of words, sentences and stories consistent with the letters and phonics that have been taught to the emergent reader. New sounds and letters are added to higher-level decodable books as instruction progresses, allowing for higher levels of accuracy, comprehension and fluency.
By physical format
Hardcover books have a stiff binding. Paperback books have cheaper, flexible covers which tend to be less durable. An alternative to paperback is the glossy cover, otherwise known as a dust cover, found on magazines and comic books. Spiral-bound books are bound by spirals made of metal or plastic. Examples of spiral-bound books include teachers' manuals and puzzle books (crosswords, sudoku).
Publishing is a process for producing pre-printed books, magazines, and newspapers for the reader/user to buy.
Publishers may produce low-cost, pre-publication copies known as galleys or 'bound proofs' for promotional purposes, such as generating reviews in advance of publication. Galleys are usually made as cheaply as possible, since they are not intended for sale.
Dummy books
Dummy books (or faux books) are designed to imitate real books in appearance in order to deceive people. Some are complete volumes with empty pages, others are hollow, and in other cases a whole panel is carved with spines and painted to look like a row of books; the titles may also be fictitious.
There are many reasons to have dummy books on display: to give visitors the impression of a vast wealth of information in the owner's possession and to inflate the owner's apparent wealth, to conceal something, for shop displays, or simply for decorative purposes.
In the early 19th century, Lloyd Hesketh Bamford-Hesketh of Gwrych Castle, North Wales, was known for the vast collection of books in his library. Later that century, however, the public became aware that parts of his library were a fabrication: dummy books had been built and then locked behind glass doors to stop people from trying to access them. From this a proverb was born, "Like Hesky's library, all outside".
Libraries
Private or personal libraries made up of non-fiction and fiction books (as opposed to the state or institutional records kept in archives) first appeared in classical Greece. In the ancient world, maintaining a library was usually (but not exclusively) the privilege of a wealthy individual. These libraries could have been either private or public, i.e. for people who were interested in using them. The difference from a modern public library is that they were usually not funded from public sources. It is estimated that in the city of Rome at the end of the 3rd century there were around 30 public libraries. Public libraries also existed in other cities of the ancient Mediterranean region (for example, the Library of Alexandria). Later, in the Middle Ages, monasteries and universities also had libraries that could be accessible to the general public. Typically not the whole collection was available to the public; the books could not be borrowed, and were often chained to reading stands to prevent theft.
The modern public library began to develop around the 15th century, when individuals started to donate books to towns. The growth of a public library system in the United States started in the late 19th century and was much helped by donations from Andrew Carnegie. This reflected classes in a society: the poor or the middle class had to access most books through a public library or by other means, while the rich could afford to have a private library built in their homes. In the United States, the Boston Public Library 1852 Report of the Trustees established the justification for the public library as a tax-supported institution intended to extend educational opportunity and provide for general culture.
The advent of paperback books in the 20th century led to an explosion of popular publishing. Paperback books made owning books affordable for many people. Paperback books often included works from genres that had previously been published mostly in pulp magazines. As a result of the low cost of such books and the spread of bookstores filled with them (in addition to the creation of a smaller market of extremely cheap used paperbacks), owning a private library ceased to be a status symbol for the rich.
The development of libraries has prompted innovations to help store and organize books on shelves. In library and booksellers' catalogues, it is common to include an abbreviation such as "Crown 8vo" to indicate the paper size from which the book is made. When rows of books are lined on a book holder, bookends are sometimes needed to keep them from slanting.
Identification and classification
During the 20th century, librarians were concerned about keeping track of the many books being added yearly to the Gutenberg Galaxy. Through a global society called the International Federation of Library Associations and Institutions (IFLA), they devised a series of tools including the International Standard Bibliographic Description (ISBD). Each book is specified by an International Standard Book Number, or ISBN, which is unique to every edition of every book produced by participating publishers, worldwide. It is administered by the International ISBN Agency. An ISBN has four parts: the first part is the country code, the second the publisher code, and the third the title code. The last part is a check digit, and can take values from 0–9 and X (10). The EAN barcode numbers for books are derived from the ISBN by prefixing 978 (for "Bookland") and calculating a new check digit.
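A minimal sketch of the check-digit arithmetic described above, assuming a Python setting; the sample number is for illustration only:

```python
def isbn10_check_digit(first_nine: str) -> str:
    """ISBN-10: the weighted sum (weights 10 down to 2) plus the check digit must be
    divisible by 11; a check value of 10 is written as 'X'."""
    total = sum((10 - i) * int(d) for i, d in enumerate(first_nine))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

def isbn10_to_ean13(isbn10: str) -> str:
    """Derive the Bookland EAN-13 by prefixing 978 and recomputing the check digit
    (EAN-13 weights alternate 1 and 3; the total must be divisible by 10)."""
    body = "978" + isbn10.replace("-", "")[:9]           # drop the old check digit
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(body))
    return body + str((10 - total % 10) % 10)

print(isbn10_check_digit("030640615"))   # -> 2
print(isbn10_to_ean13("0-306-40615-2"))  # -> 9780306406157
```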
Commercial publishers in industrialized countries generally assign ISBNs to their books, so buyers may presume that the ISBN is part of a total international system, with no exceptions. However, many government publishers, in industrial as well as developing countries, do not participate fully in the ISBN system, and publish books which do not have ISBNs. A large or public collection requires a catalogue. Codes called "call numbers" relate the books to the catalogue, and determine their locations on the shelves. Call numbers are based on a Library classification system. The call number is placed on the spine of the book, normally a short distance before the bottom, and inside. Institutional or national standards, such as ANSI/NISO Z39.41 – 1997, establish the correct way to place information (such as the title, or the name of the author) on book spines, and on "shelvable" book-like objects, such as containers for DVDs, video tapes and software.
One of the earliest and most widely known systems of cataloguing books is the Dewey Decimal System. Another widely known system is the Library of Congress Classification system. Both systems are biased towards subjects which were well represented in US libraries when they were developed, and hence have problems handling new subjects, such as computing, or subjects relating to other cultures. Information about books and authors can be stored in databases such as online general-interest book databases. Metadata, which means "data about data", is information about a book. Metadata about a book may include its title, ISBN or other classification number (see above), the names of contributors (author, editor, illustrator) and publisher, its date and size, the language of the text, its subject matter, etc.
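The metadata fields listed above map naturally onto a simple record. A minimal sketch, assuming Python; the field names are illustrative and do not follow any particular metadata standard such as MARC or Dublin Core:

```python
from dataclasses import dataclass, field

@dataclass
class BookMetadata:
    # "Data about data": descriptive fields, not the text of the book itself.
    title: str
    contributors: list[str] = field(default_factory=list)  # author, editor, illustrator...
    publisher: str = ""
    isbn: str = ""              # or another classification number
    date: str = ""
    pages: int = 0
    language: str = ""
    subject: str = ""

record = BookMetadata(title="An Example Title",
                      contributors=["Example Author"],
                      language="en", subject="bibliography")
print(record.title, record.contributors)
```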
Classification systems
Bliss bibliographic classification (BC)
Chinese Library Classification (CLC)
Colon Classification
Dewey Decimal Classification (DDC)
Harvard-Yenching Classification
Library of Congress Classification (LCC)
New Classification Scheme for Chinese Libraries
Universal Decimal Classification (UDC)
Uses
Aside from the primary purpose of reading them, books are also used for other ends:
A book can be an artistic artifact, a piece of art; this is sometimes known as an artists' book.
A book may be evaluated by a reader or professional writer to create a book review.
A book may be read by a group of people to use as a spark for social or academic discussion, as in a book club.
A book may be studied by students as the subject of a writing and analysis exercise in the form of a book report.
Books are sometimes used for their exterior appearance to decorate a room, such as a study.
Marketing
Once the book is published, it is put on the market by distributors and bookstores, while its promotion relies on reviews and reports in various media. Book marketing is regulated by law in many countries.
Secondary spread
In recent years, books have enjoyed a second life through reading aloud: public readings of published works, given with the assistance of professional readers (often well-known actors) and in close collaboration with writers, publishers, booksellers, librarians, figures of the literary world and artists.
Many individual or collective practices exist to increase the number of readers of a book. Among them:
leaving books in public places, sometimes combined with use of the Internet, known as bookcrossing;
provision of free books in third places, like bars or cafes;
itinerant or temporary libraries;
free public libraries in the area.
Industry evolution
The book chain in this form has hardly changed since the eighteenth century, but it has not always been this way. The role of the author asserted itself only gradually over time, and copyright dates only from the nineteenth century. For many centuries, especially before the invention of printing, anyone might freely copy a book that passed through their hands, adding their own comments if desired. Similarly, the trades of bookseller and publisher emerged only with the invention of printing, which made the book an industrial product requiring structures of production and marketing.
The rise of the Internet, e-readers, tablets, and projects like Wikipedia and Project Gutenberg is likely to change the book industry for years to come.
Paper and conservation
Paper was first made in China as early as 200 BC, and reached Europe through Muslim territories. At first it was made of rags, but the Industrial Revolution changed paper-making practices, allowing paper to be made from wood pulp. Papermaking in Europe began in the 11th century, although vellum was also common there as page material up until the beginning of the 16th century, vellum being the more expensive and durable option. Printers or publishers would often issue the same publication on both materials, to cater to more than one market.
Paper made from wood pulp became popular in the early 20th century, because it was cheaper than linen or abaca cloth-based papers. Pulp-based paper made books less expensive to the general public. This paved the way for huge leaps in the rate of literacy in industrialised nations, and enabled the spread of information during the Second Industrial Revolution.
Pulp paper, however, contains acid which eventually destroys the paper from within. Earlier techniques for making paper used limestone rollers, which neutralized the acid in the pulp. Books printed between 1850 and 1950 are primarily at risk; more recent books are often printed on acid-free or alkaline paper. Libraries today have to consider mass deacidification of their older collections in order to prevent decay.
Stability of the climate is critical to the long-term preservation of paper and book material. Good air circulation helps keep the climate stable, and the HVAC system should be up to date and functioning efficiently. Light is detrimental to collections, so care should be taken to control light exposure. General housekeeping issues, including pest control, should also be addressed. In addition to these measures, a library must be prepared for disasters it cannot control: time and effort should be given to creating a concise and effective emergency management plan to counteract any damage incurred through "acts of God".
See also
Outline of books
Alphabet book
Artist's book
Audiobook
Bibliodiversity
Book burning
Booksellers
Lists of books
Miniature book
Open access book
Society for the History of Authorship, Reading and Publishing (SHARP)
Citations
Bibliography
"Book", in International Encyclopedia of Information and Library Science ("IEILS"), Editors: John Feather, Paul Sturges, 2003, Routledge,
Further reading
Tim Parks (August 2017), "The Books We Don't Understand", The New York Review of Books
External links
Information on Old Books, Smithsonian Libraries
"Manuscripts, Books, and Maps: The Printing Press and a Changing World"
Documents
Paper products
Media formats
|
https://en.wikipedia.org/wiki/Bauhaus
|
The Staatliches Bauhaus, commonly known as the Bauhaus, was a German art school operational from 1919 to 1933 that combined crafts and the fine arts. The school became famous for its approach to design, which attempted to unify individual artistic vision with the principles of mass production and an emphasis on function. Along with the doctrine of functionalism, the Bauhaus initiated the conceptual understanding of architecture and design.
The Bauhaus was founded by architect Walter Gropius in Weimar. It was grounded in the idea of creating a Gesamtkunstwerk ("comprehensive artwork") in which all the arts would eventually be brought together. The Bauhaus style later became one of the most influential currents in modern design, modernist architecture, and architectural education. The Bauhaus movement had a profound influence on subsequent developments in art, architecture, graphic design, interior design, industrial design, and typography. Staff at the Bauhaus included prominent artists such as Paul Klee, Wassily Kandinsky, and László Moholy-Nagy at various points.
The school existed in three German cities—Weimar, from 1919 to 1925; Dessau, from 1925 to 1932; and Berlin, from 1932 to 1933—under three different architect-directors: Walter Gropius from 1919 to 1928; Hannes Meyer from 1928 to 1930; and Ludwig Mies van der Rohe from 1930 until 1933, when the school was closed by its own leadership under pressure from the Nazi regime, having been painted as a centre of communist intellectualism. Internationally, former key figures of Bauhaus were successful in the United States and became known as the avant-garde for the International Style.
The changes of venue and leadership resulted in a constant shifting of focus, technique, instructors, and politics. For example, the pottery shop was discontinued when the school moved from Weimar to Dessau, even though it had been an important revenue source; when Mies van der Rohe took over the school in 1930, he transformed it into a private school and would not allow any supporters of Hannes Meyer to attend it.
Term and concept
Bauhaus is sometimes mistakenly described as a style, which it is not. However, several specific features are identified in its forms and shapes: simple geometric shapes like rectangles and spheres, without elaborate decorations. Buildings, furniture, and fonts often feature rounded corners and sometimes rounded walls. Other buildings are characterized by rectangular features, for example protruding balconies with flat, chunky railings facing the street, and long banks of windows. Furniture often uses chrome metal tubes that curve at corners. Some outlines can be understood as a tool for creating an ideal form, which is the basis of the architectural concept.
Bauhaus and German modernism
After Germany's defeat in World War I and the establishment of the Weimar Republic, a renewed liberal spirit allowed an upsurge of radical experimentation in all the arts, which had been suppressed by the old regime. Many Germans of left-wing views were influenced by the cultural experimentation that followed the Russian Revolution, such as constructivism. Such influences can be overstated: Gropius did not share these radical views, and said that Bauhaus was entirely apolitical. Just as important was the influence of the 19th-century English designer William Morris (1834–1896), who had argued that art should meet the needs of society and that there should be no distinction between form and function. Thus, the Bauhaus style, also known as the International Style, was marked by the absence of ornamentation and by harmony between the function of an object or a building and its design.
However, the most important influence on Bauhaus was modernism, a cultural movement whose origins lay as early as the 1880s, and which had already made its presence felt in Germany before the World War, despite the prevailing conservatism. The design innovations commonly associated with Gropius and the Bauhaus—the radically simplified forms, the rationality and functionality, and the idea that mass production was reconcilable with the individual artistic spirit—were already partly developed in Germany before the Bauhaus was founded. The German national designers' organization Deutscher Werkbund was formed in 1907 by Hermann Muthesius to harness the new potentials of mass production, with a mind towards preserving Germany's economic competitiveness with England. In its first seven years, the Werkbund came to be regarded as the authoritative body on questions of design in Germany, and was copied in other countries. Many fundamental questions of craftsmanship versus mass production, the relationship of usefulness and beauty, the practical purpose of formal beauty in a commonplace object, and whether or not a single proper form could exist, were argued out among its 1,870 members (by 1914).
German architectural modernism was known as Neues Bauen. Beginning in June 1907, Peter Behrens' pioneering industrial design work for the German electrical company AEG successfully integrated art and mass production on a large scale. He designed consumer products, standardized parts, created clean-lined designs for the company's graphics, developed a consistent corporate identity, built the modernist landmark AEG Turbine Factory, and made full use of newly developed materials such as poured concrete and exposed steel. Behrens was a founding member of the Werkbund, and both Walter Gropius and Adolf Meyer worked for him in this period.
The Bauhaus was founded at a time when the German zeitgeist had turned from emotional Expressionism to the matter-of-fact New Objectivity. An entire group of working architects, including Erich Mendelsohn, Bruno Taut and Hans Poelzig, turned away from fanciful experimentation and towards rational, functional, sometimes standardized building. Beyond the Bauhaus, many other significant German-speaking architects in the 1920s responded to the same aesthetic issues and material possibilities as the school. They also responded to the promise of a "minimal dwelling" written into the new Weimar Constitution. Ernst May, Bruno Taut and Martin Wagner, among others, built large housing blocks in Frankfurt and Berlin. The acceptance of modernist design into everyday life was the subject of publicity campaigns, well-attended public exhibitions like the Weissenhof Estate, films, and sometimes fierce public debate.
Bauhaus and Vkhutemas
The Vkhutemas, the Russian state art and technical school founded in 1920 in Moscow, has been compared to Bauhaus. Founded a year after the Bauhaus school, Vkhutemas has close parallels to the German Bauhaus in its intent, organization and scope. The two schools were the first to train artist-designers in a modern manner. Both schools were state-sponsored initiatives to merge traditional craft with modern technology, with a basic course in aesthetic principles, courses in color theory, industrial design, and architecture. Vkhutemas was a larger school than the Bauhaus, but it was less publicised outside the Soviet Union and consequently, is less familiar in the West.
With the internationalism of modern architecture and design, there were many exchanges between the Vkhutemas and the Bauhaus. The second Bauhaus director Hannes Meyer attempted to organise an exchange between the two schools, while Hinnerk Scheper of the Bauhaus collaborated with various Vkhutein members on the use of colour in architecture. In addition, El Lissitzky's book Russia: an Architecture for World Revolution, published in German in 1930, featured several illustrations of Vkhutemas/Vkhutein projects.
History of the Bauhaus
Weimar
The school was founded by Walter Gropius in Weimar on 1 April 1919, as a merger of the Grand-Ducal Saxon Academy of Fine Art and the Grand Ducal Saxon School of Arts and Crafts for a newly affiliated architecture department. Its roots lay in the arts and crafts school founded by the Grand Duke of Saxe-Weimar-Eisenach in 1906, and directed by Belgian Art Nouveau architect Henry van de Velde. When van de Velde was forced to resign in 1915 because he was Belgian, he suggested Gropius, Hermann Obrist, and August Endell as possible successors. In 1919, after delays caused by World War I and a lengthy debate over who should head the institution and the socio-economic meanings of a reconciliation of the fine arts and the applied arts (an issue which remained a defining one throughout the school's existence), Gropius was made the director of a new institution integrating the two called the Bauhaus.
In the pamphlet for an April 1919 exhibition entitled Exhibition of Unknown Architects, Gropius, still very much under the influence of William Morris and the British Arts and Crafts Movement, proclaimed his goal as being "to create a new guild of craftsmen, without the class distinctions which raise an arrogant barrier between craftsman and artist." Gropius's neologism Bauhaus references both building and the Bauhütte, a premodern guild of stonemasons. The early intention was for the Bauhaus to be a combined architecture school, crafts school, and academy of the arts. Swiss painter Johannes Itten, German-American painter Lyonel Feininger, and German sculptor Gerhard Marcks, along with Gropius, comprised the faculty of the Bauhaus in 1919. By the following year their ranks had grown to include German painter, sculptor, and designer Oskar Schlemmer who headed the theatre workshop, and Swiss painter Paul Klee, joined in 1922 by Russian painter Wassily Kandinsky. A tumultuous year at the Bauhaus, 1922 also saw the move of Dutch painter Theo van Doesburg to Weimar to promote De Stijl ("The Style"), and a visit to the Bauhaus by Russian Constructivist artist and architect El Lissitzky.
From 1919 to 1922 the school was shaped by the pedagogical and aesthetic ideas of Johannes Itten, who taught the Vorkurs or "preliminary course" that was the introduction to the ideas of the Bauhaus. Itten was heavily influenced in his teaching by the ideas of Franz Cižek and Friedrich Wilhelm August Fröbel. He was also influenced in respect to aesthetics by the work of the Der Blaue Reiter group in Munich, as well as the work of Austrian Expressionist Oskar Kokoschka. The influence of German Expressionism favoured by Itten was analogous in some ways to the fine arts side of the ongoing debate. This influence culminated with the addition of Der Blaue Reiter founding member Wassily Kandinsky to the faculty and ended when Itten resigned in late 1923. Itten was replaced by the Hungarian designer László Moholy-Nagy, who rewrote the Vorkurs with a leaning towards the New Objectivity favoured by Gropius, which was analogous in some ways to the applied arts side of the debate. Although this shift was an important one, it did not represent a radical break from the past so much as a small step in a broader, more gradual socio-economic movement that had been going on at least since 1907, when van de Velde had argued for a craft basis for design while Hermann Muthesius had begun implementing industrial prototypes.
Gropius was not necessarily against Expressionism, and in fact, himself in the same 1919 pamphlet proclaiming this "new guild of craftsmen, without the class snobbery", described "painting and sculpture rising to heaven out of the hands of a million craftsmen, the crystal symbol of the new faith of the future." By 1923, however, Gropius was no longer evoking images of soaring Romanesque cathedrals and the craft-driven aesthetic of the "Völkisch movement", instead declaring "we want an architecture adapted to our world of machines, radios and fast cars." Gropius argued that a new period of history had begun with the end of the war. He wanted to create a new architectural style to reflect this new era. His style in architecture and consumer goods was to be functional, cheap and consistent with mass production. To these ends, Gropius wanted to reunite art and craft to arrive at high-end functional products with artistic merit. The Bauhaus issued a magazine called Bauhaus and a series of books called "Bauhausbücher". Since the Weimar Republic lacked the number of raw materials available to the United States and Great Britain, it had to rely on the proficiency of a skilled labour force and an ability to export innovative and high-quality goods. Therefore, designers were needed and so was a new type of art education. The school's philosophy stated that the artist should be trained to work with the industry.
Weimar was in the German state of Thuringia, and the Bauhaus school received state support from the Social Democrat-controlled Thuringian state government. The school in Weimar experienced political pressure from conservative circles in Thuringian politics, increasingly so after 1923 as political tension rose. One condition placed on the Bauhaus in this new political environment was the exhibition of work undertaken at the school. This condition was met in 1923 with the Bauhaus' exhibition of the experimental Haus am Horn. The Ministry of Education placed the staff on six-month contracts and cut the school's funding in half. The Bauhaus issued a press release on 26 December 1924, setting the closure of the school for the end of March 1925. At this point it had already been looking for alternative sources of funding. After the Bauhaus moved to Dessau, a school of industrial design with teachers and staff less antagonistic to the conservative political regime remained in Weimar. This school was eventually known as the Technical University of Architecture and Civil Engineering, and in 1996 changed its name to Bauhaus-University Weimar.
Dessau
The Bauhaus moved to Dessau in 1925 and new facilities there were inaugurated in late 1926. Gropius's design for the Dessau facilities was a return to the futuristic Gropius of 1914 that had more in common with the International style lines of the Fagus Factory than the stripped down Neo-classical of the Werkbund pavilion or the Völkisch Sommerfeld House. During the Dessau years, there was a remarkable change in direction for the school. According to Elaine Hochman, Gropius had approached the Dutch architect Mart Stam to run the newly founded architecture program, and when Stam declined the position, Gropius turned to Stam's friend and colleague in the ABC group, Hannes Meyer.
Meyer became director when Gropius resigned in February 1928, and brought the Bauhaus its two most significant building commissions, both of which still exist: five apartment buildings in the city of Dessau, and the Bundesschule des Allgemeinen Deutschen Gewerkschaftsbundes (ADGB Trade Union School) in Bernau bei Berlin. Meyer favoured measurements and calculations in his presentations to clients, along with the use of off-the-shelf architectural components to reduce costs. This approach proved attractive to potential clients. The school turned its first profit under his leadership in 1929.
But Meyer also generated a great deal of conflict. As a radical functionalist, he had no patience with the aesthetic program and forced the resignations of Herbert Bayer, Marcel Breuer, and other long-time instructors. Even though Meyer shifted the orientation of the school further to the left than it had been under Gropius, he didn't want the school to become a tool of left-wing party politics. He prevented the formation of a student communist cell, and in the increasingly dangerous political atmosphere, this became a threat to the existence of the Dessau school. Dessau mayor Fritz Hesse fired him in the summer of 1930. The Dessau city council attempted to convince Gropius to return as head of the school, but Gropius instead suggested Ludwig Mies van der Rohe. Mies was appointed in 1930 and immediately interviewed each student, dismissing those that he deemed uncommitted. He halted the school's manufacture of goods so that the school could focus on teaching, and appointed no new faculty other than his close confidant Lilly Reich. By 1931, the Nazi Party was becoming more influential in German politics. When it gained control of the Dessau city council, it moved to close the school.
Berlin
In late 1932, Mies rented a derelict factory in Berlin (Birkbusch Street 49) to use as the new Bauhaus with his own money. The students and faculty rehabilitated the building, painting the interior white. The school operated for ten months without further interference from the Nazi Party. In 1933, the Gestapo closed down the Berlin school. Mies protested the decision, eventually speaking to the head of the Gestapo, who agreed to allow the school to re-open. However, shortly after receiving a letter permitting the opening of the Bauhaus, Mies and the other faculty agreed to voluntarily shut down the school.
Although neither the Nazi Party nor Adolf Hitler had a cohesive architectural policy before they came to power in 1933, Nazi writers like Wilhelm Frick and Alfred Rosenberg had already labelled the Bauhaus "un-German" and criticized its modernist styles, deliberately generating public controversy over issues like flat roofs. Increasingly through the early 1930s, they characterized the Bauhaus as a front for communists and social liberals. Indeed, when Meyer was fired in 1930, a number of communist students loyal to him moved to the Soviet Union.
Even before the Nazis came to power, political pressure on Bauhaus had increased. The Nazi movement, from nearly the start, denounced the Bauhaus for its "degenerate art", and the Nazi regime was determined to crack down on what it saw as the foreign, probably Jewish, influences of "cosmopolitan modernism". Despite Gropius's protestations that as a war veteran and a patriot his work had no subversive political intent, the Berlin Bauhaus was pressured to close in April 1933. Emigrants did succeed, however, in spreading the concepts of the Bauhaus to other countries, including the "New Bauhaus" of Chicago: Mies decided to emigrate to the United States for the directorship of the School of Architecture at the Armour Institute (now Illinois Institute of Technology) in Chicago and to seek building commissions. The simple engineering-oriented functionalism of stripped-down modernism, however, did lead to some Bauhaus influences living on in Nazi Germany. When Hitler's chief engineer, Fritz Todt, began opening the new autobahns (highways) in 1935, many of the bridges and service stations were "bold examples of modernism", and among those submitting designs was Mies van der Rohe.
Architectural output
The paradox of the early Bauhaus was that, although its manifesto proclaimed that the aim of all creative activity was building, the school did not offer classes in architecture until 1927. During the years under Gropius (1919–1927), he and his partner Adolf Meyer observed no real distinction between the output of his architectural office and the school. So the built output of Bauhaus architecture in these years is the output of Gropius: the Sommerfeld house in Berlin, the Otte house in Berlin, the Auerbach house in Jena, and the competition design for the Chicago Tribune Tower, which brought the school much attention. The definitive 1926 Bauhaus building in Dessau is also attributed to Gropius. Apart from contributions to the 1923 Haus am Horn, student architectural work amounted to un-built projects, interior finishes, and craft work like cabinets, chairs and pottery.
In the next two years under Meyer, the architectural focus shifted away from aesthetics and towards functionality. There were major commissions: one from the city of Dessau for five tightly designed "Laubenganghäuser" (apartment buildings with balcony access), which are still in use today, and another for the Bundesschule des Allgemeinen Deutschen Gewerkschaftsbundes (ADGB Trade Union School) in Bernau bei Berlin. Meyer's approach was to research users' needs and scientifically develop the design solution.
Mies van der Rohe repudiated Meyer's politics, his supporters, and his architectural approach. As opposed to Gropius's "study of essentials", and Meyer's research into user requirements, Mies advocated a "spatial implementation of intellectual decisions", which effectively meant an adoption of his own aesthetics. Neither Mies van der Rohe nor his Bauhaus students saw any projects built during the 1930s.
The popular conception of the Bauhaus as the source of extensive Weimar-era worker housing is not accurate. Two projects, the apartment building project in Dessau and the Törten row housing also in Dessau, fall in that category, but developing worker housing was not the first priority of either Gropius or Mies. It was the Bauhaus contemporaries Bruno Taut, Hans Poelzig and particularly Ernst May, as the city architects of Berlin, Dresden and Frankfurt respectively, who are rightfully credited with the thousands of socially progressive housing units built in Weimar Germany. The housing Taut built in south-west Berlin during the 1920s, close to the U-Bahn stop Onkel Toms Hütte, is still occupied.
Impact
The Bauhaus had a major impact on art and architecture trends in Western Europe, Canada, the United States and Israel in the decades following its demise, as many of the artists involved fled, or were exiled by the Nazi regime. In 1996, four of the major sites associated with Bauhaus in Germany were inscribed on the UNESCO World Heritage List (with two more added in 2017).
In 1928, the Hungarian painter Alexander Bortnyik founded a school of design in Budapest called Műhely, which means "the studio". Located on the seventh floor of a house on Nagymezo Street, it was meant to be the Hungarian equivalent to the Bauhaus. The literature sometimes refers to it—in an oversimplified manner—as "the Budapest Bauhaus". Bortnyik was a great admirer of László Moholy-Nagy and had met Walter Gropius in Weimar between 1923 and 1925. Moholy-Nagy himself taught at the Műhely. Victor Vasarely, a pioneer of op art, studied at this school before settling in Paris in 1930.
Walter Gropius, Marcel Breuer, and Moholy-Nagy re-assembled in Britain during the mid-1930s and lived and worked in the Isokon housing development in Lawn Road in London before the war caught up with them. Gropius and Breuer went on to teach at the Harvard Graduate School of Design and worked together before their professional split. Their collaboration produced, among other projects, the Aluminum City Terrace in New Kensington, Pennsylvania and the Alan I W Frank House in Pittsburgh. The Harvard School was enormously influential in America in the late 1930s and the 1940s, producing such students as Philip Johnson, I. M. Pei, Lawrence Halprin and Paul Rudolph, among many others.
In the late 1930s, Mies van der Rohe re-settled in Chicago, enjoyed the sponsorship of the influential Philip Johnson, and became one of the world's pre-eminent architects. Moholy-Nagy also went to Chicago and founded the New Bauhaus school under the sponsorship of industrialist and philanthropist Walter Paepcke. This school became the Institute of Design, part of the Illinois Institute of Technology. Printmaker and painter Werner Drewes was also largely responsible for bringing the Bauhaus aesthetic to America and taught at both Columbia University and Washington University in St. Louis. Herbert Bayer, sponsored by Paepcke, moved to Aspen, Colorado in support of Paepcke's Aspen projects at the Aspen Institute. In 1953, Max Bill, together with Inge Aicher-Scholl and Otl Aicher, founded the Ulm School of Design (German: Hochschule für Gestaltung – HfG Ulm) in Ulm, Germany, a design school in the tradition of the Bauhaus. The school is notable for its inclusion of semiotics as a field of study. The school closed in 1968, but the "Ulm Model" concept continues to influence international design education. Another series of projects at the school were the Bauhaus typefaces, mostly realized in the decades afterward.
The influence of the Bauhaus on design education was significant. One of the main objectives of the Bauhaus was to unify art, craft, and technology, and this approach was incorporated into its curriculum. The structure of the Bauhaus Vorkurs (preliminary course) reflected a pragmatic approach to integrating theory and application. In their first year, students learnt the basic elements and principles of design and colour theory, and experimented with a range of materials and processes. This approach to design education became a common feature of architectural and design schools in many countries. For example, the Shillito Design School in Sydney stands as a unique link between Australia and the Bauhaus. The colour and design syllabus of the Shillito Design School was firmly underpinned by the theories and ideologies of the Bauhaus. Its first-year foundational course mimicked the Vorkurs and focused on the elements and principles of design plus colour theory and application. The school's founder, Phyllis Shillito, who opened the school in 1962 and closed it in 1980, firmly believed that "A student who has mastered the basic principles of design, can design anything from a dress to a kitchen stove". In Britain, largely under the influence of painter and teacher William Johnstone, Basic Design, a Bauhaus-influenced art foundation course, was introduced at Camberwell School of Art and the Central School of Art and Design, whence it spread to all art schools in the country, becoming universal by the early 1960s.
One of the most important contributions of the Bauhaus is in the field of modern furniture design. The characteristic Cantilever chair and Wassily Chair designed by Marcel Breuer are two examples. (Breuer eventually lost a legal battle in Germany with Dutch architect/designer Mart Stam over patent rights to the cantilever chair design. Although Stam had worked on the design of the Bauhaus's 1923 exhibit in Weimar, and guest-lectured at the Bauhaus later in the 1920s, he was not formally associated with the school, and he and Breuer had worked independently on the cantilever concept, leading to the patent dispute.) The most profitable product of the Bauhaus was its wallpaper.
The physical plant at Dessau survived World War II and was operated as a design school with some architectural facilities by the German Democratic Republic. This included live stage productions in the Bauhaus theater under the name of Bauhausbühne ("Bauhaus Stage"). After German reunification, a reorganized school continued in the same building, with no essential continuity with the Bauhaus under Gropius in the early 1920s. In 1979 Bauhaus-Dessau College started to organize postgraduate programs with participants from all over the world. This effort has been supported by the Bauhaus-Dessau Foundation which was founded in 1974 as a public institution.
Later evaluation of the Bauhaus design credo was critical of its flawed recognition of the human element, an acknowledgment of "the dated, unattractive aspects of the Bauhaus as a projection of utopia marked by mechanistic views of human nature…Home hygiene without home atmosphere."
Subsequent examples which have continued the philosophy of the Bauhaus include Black Mountain College, Hochschule für Gestaltung in Ulm and Domaine de Boisbuchet.
The White City
The White City (Hebrew: העיר הלבנה) refers to a collection of over 4,000 buildings built in the Bauhaus or International Style in Tel Aviv from the 1930s by German Jewish architects who emigrated to the British Mandate of Palestine after the rise of the Nazis. Tel Aviv has the largest number of buildings in the Bauhaus/International Style of any city in the world. Preservation, documentation, and exhibitions have brought attention to Tel Aviv's collection of 1930s architecture. In 2003, the United Nations Educational, Scientific and Cultural Organization (UNESCO) proclaimed Tel Aviv's White City a World Cultural Heritage site, as "an outstanding example of new town planning and architecture in the early 20th century." The citation recognized the unique adaptation of modern international architectural trends to the cultural, climatic, and local traditions of the city. Bauhaus Center Tel Aviv organizes regular architectural tours of the city.
Centenary year, 2019
To mark the centenary of the founding of the Bauhaus, several events, festivals, and exhibitions were held around the world in 2019. The international opening festival at the Berlin Academy of the Arts from 16 to 24 January concentrated on "the presentation and production of pieces by contemporary artists, in which the aesthetic issues and experimental configurations of the Bauhaus artists continue to be inspiringly contagious". Original Bauhaus, The Centenary Exhibition at the Berlinische Galerie (6 September 2019 to 27 January 2020) presented 1,000 original artefacts from the Bauhaus-Archiv's collection and recounted the history behind the objects.
The New European Bauhaus
In September 2020, President of the European Commission Ursula von der Leyen introduced the New European Bauhaus (NEB) initiative during her State of the Union address. The NEB is a creative and interdisciplinary movement that connects the European Green Deal to everyday life. It is a platform for experimentation that aims to unite citizens, experts, businesses and institutions in imagining and designing a sustainable, aesthetic and inclusive future.
Sport and physical activity were an essential part of the original Bauhaus approach. Hannes Meyer, the second director of Bauhaus Dessau, ensured that one day a week was solely devoted to sport and gymnastics. In 1930, Meyer employed two physical education teachers. The Bauhaus school even applied for public funds to enhance its playing field. The inclusion of sport and physical activity in the Bauhaus curriculum had various purposes. First, as Meyer put it, sport combatted a “one-sided emphasis on brainwork.” In addition, Bauhaus instructors believed that students could better express themselves if they actively experienced the space, rhythms and movements of the body. The Bauhaus approach also considered physical activity an important contributor to wellbeing and community spirit. Sport and physical activity were essential to the interdisciplinary Bauhaus movement that developed revolutionary ideas and continues to shape our environments today.
Bauhaus staff and students
People who were educated, or who taught or worked in other capacities, at the Bauhaus.
Gallery
See also
Art Deco architecture
Bauhaus Archive
Bauhaus Center Tel Aviv
Bauhaus Dessau Foundation
Bauhaus Museum, Tel Aviv
Bauhaus Museum, Weimar
Bauhaus World Heritage Site
Constructivist architecture
Expressionist architecture
Form follows function
Haus am Horn
IIT Institute of Design
International style (architecture)
Lucia Moholy
Max-Liebling House, Tel Aviv
Modern architecture
Neues Sehen (New Vision)
New Objectivity (architecture)
Swiss Style (design)
Ulm School of Design
Vkhutemas
Women of the Bauhaus
Explanatory footnotes
The closure, and the response of Mies van der Rohe, is fully documented in Elaine Hochman's Architects of Fortune.
Google honored Bauhaus for its 100th anniversary on 12 April 2019 with a Google Doodle.
Citations
General and cited references
Olaf Thormann: Bauhaus Saxony. arnoldsche Art Publishers, 2019.
Further reading
External links
Bauhaus Everywhere — Google Arts & Culture
Collection: Artists of the Bauhaus from the University of Michigan Museum of Art
1919 establishments in Germany
1933 disestablishments in Germany
Architecture in Germany
Architecture schools
Art movements
Design schools in Germany
Expressionist architecture
German architectural styles
Graphic design
Industrial design
Modernist architecture
Bauhaus, Dessau
Visual arts education
Bauhaus
Weimar culture
World Heritage Sites in Germany
|
https://en.wikipedia.org/wiki/Biostatistics
|
Biostatistics (also known as biometry) is a branch of statistics that applies statistical methods to a wide range of topics in biology. It encompasses the design of biological experiments, the collection and analysis of data from those experiments and the interpretation of the results.
History
Biostatistics and genetics
Biostatistical modeling forms an important part of numerous modern biological theories. Genetics studies have, since their beginning, used statistical concepts to understand observed experimental results, and some geneticists have even contributed statistical advances by developing new methods and tools. Gregor Mendel started genetics studies by investigating segregation patterns in families of peas and used statistics to explain the collected data. In the early 1900s, after the rediscovery of Mendel's work on Mendelian inheritance, there were gaps in understanding between genetics and evolutionary Darwinism. Francis Galton tried to expand Mendel's discoveries with human data and proposed a different model in which fractions of the heredity come from each ancestor, composing an infinite series. He called this the theory of the "Law of Ancestral Heredity". His ideas were strongly disputed by William Bateson, who followed Mendel's conclusion that genetic inheritance comes exclusively from the parents, half from each of them. This led to a vigorous debate between the biometricians, who supported Galton's ideas, such as Raphael Weldon, Arthur Dukinfield Darbishire and Karl Pearson, and the Mendelians, who supported Bateson's (and Mendel's) ideas, such as Charles Davenport and Wilhelm Johannsen. Later, biometricians could not reproduce Galton's conclusions in different experiments, and Mendel's ideas prevailed. By the 1930s, models built on statistical reasoning had helped to resolve these differences and to produce the neo-Darwinian modern evolutionary synthesis.
Resolving these differences also made it possible to define the concept of population genetics and brought genetics and evolution together. The three leading figures in the establishment of population genetics and this synthesis all relied on statistics and developed its use in biology.
Ronald Fisher, working alongside statistician Betty Allan, developed several basic statistical methods in support of his work studying the crop experiments at Rothamsted Research, published in Fisher's books Statistical Methods for Research Workers (1925) and The Genetical Theory of Natural Selection (1930), as well as in Allan's scientific papers. Fisher went on to make many contributions to genetics and statistics. These include the analysis of variance (ANOVA), the concept of the p-value, Fisher's exact test and Fisher's equation for population dynamics. He is credited with the sentence "Natural selection is a mechanism for generating an exceedingly high degree of improbability".
Sewall G. Wright developed F-statistics and methods of computing them, and defined the inbreeding coefficient.
J. B. S. Haldane's book, The Causes of Evolution, reestablished natural selection as the premier mechanism of evolution by explaining it in terms of the mathematical consequences of Mendelian genetics. He also developed the theory of primordial soup.
These and other biostatisticians, mathematical biologists, and statistically inclined geneticists helped bring together evolutionary biology and genetics into a consistent, coherent whole that could begin to be quantitatively modeled.
In parallel to this overall development, the pioneering work of D'Arcy Thompson in On Growth and Form also helped to add quantitative discipline to biological study.
Despite the fundamental importance and frequent necessity of statistical reasoning, there may nonetheless have been a tendency among biologists to distrust or deprecate results which are not qualitatively apparent. One anecdote describes Thomas Hunt Morgan banning the Friden calculator from his department at Caltech, saying "Well, I am like a guy who is prospecting for gold along the banks of the Sacramento River in 1849. With a little intelligence, I can reach down and pick up big nuggets of gold. And as long as I can do that, I'm not going to let any people in my department waste scarce resources in placer mining."
Research planning
Any research in the life sciences is proposed to answer a scientific question. To answer this question with high certainty, accurate results are needed. The correct definition of the main hypothesis and the research plan will reduce errors when making decisions about a phenomenon. The research plan might include the research question, the hypothesis to be tested, the experimental design, data collection methods, data analysis perspectives and the costs involved. It is essential to carry out the study based on the three basic principles of experimental statistics: randomization, replication, and local control.
Research question
The research question will define the objective of a study. The research will be guided by the question, so it needs to be concise, while at the same time focused on interesting and novel topics that may improve science and knowledge in that field. To define the way the scientific question should be asked, an exhaustive literature review might be necessary, so that the research can add value to the scientific community.
Hypothesis definition
Once the aim of the study is defined, the possible answers to the research question can be proposed, transforming this question into a hypothesis. The main proposition is called the null hypothesis (H0) and is usually based on prior knowledge about the topic or on an obvious occurrence of the phenomenon, sustained by a deep literature review. It can be described as the standard expected answer for the data under the situation being tested. In general, H0 assumes no association between treatments. On the other hand, the alternative hypothesis is the denial of H0. It assumes some degree of association between the treatment and the outcome. The hypothesis is thus sustained by the research question and its expected and unexpected answers.
As an example, consider groups of similar animals (mice, for example) under two different diet systems. The research question would be: what is the best diet? In this case, H0 would be that there is no difference between the two diets in mice metabolism (H0: μ1 = μ2) and the alternative hypothesis would be that the diets have different effects over animals metabolism (H1: μ1 ≠ μ2).
The hypothesis is defined by the researcher, according to his or her interest in answering the main question. Besides that, there can be more than one alternative hypothesis. It can assume not only differences across observed parameters, but also the degree of those differences (i.e. higher or lower).
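Returning to the two-diet example above, a hypothesis test of H0: μ1 = μ2 against H1: μ1 ≠ μ2 can be sketched with an independent two-sample t-test in Python; the metabolic measurements below are invented purely for illustration, and the choice of test assumes approximately normal data with similar variances.

```python
# Hypothetical two-diet comparison (illustrative data only).
from scipy import stats

diet_a = [21.3, 19.8, 22.1, 20.5, 21.9, 20.2]  # metabolic measure, group 1
diet_b = [23.4, 22.8, 24.1, 23.0, 22.5, 23.9]  # metabolic measure, group 2

# Two-sided independent-samples t-test of H0: mu1 == mu2.
t_stat, p_value = stats.ttest_ind(diet_a, diet_b)

alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```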
Sampling
Usually, a study aims to understand the effect of a phenomenon over a population. In biology, a population is defined as all the individuals of a given species, in a specific area at a given time. In biostatistics, this concept is extended to a variety of possible collections of study. Thus, in biostatistics, a population is not only the individuals, but the total of one specific component of their organisms, such as the whole genome, all the sperm cells of an animal, or the total leaf area of a plant, for example.
It is not possible to take measurements from all the elements of a population. Because of that, the sampling process is very important for statistical inference. Sampling is defined as randomly obtaining a representative part of the entire population, in order to make posterior inferences about the population. The sample should capture as much of the variability across the population as possible. The sample size is determined by several factors, ranging from the scope of the research to the resources available. In clinical research, the trial type, such as inferiority, equivalence, or superiority, is key in determining sample size.
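As a rough illustration of sample-size determination, the sketch below applies the normal-approximation formula for comparing two means; the effect size, standard deviation and error rates are placeholder values, and this is only one of several approaches used in practice.

```python
# Approximate sample size per group for a two-sample comparison of means,
# using the normal-approximation formula; all inputs are illustrative.
from math import ceil
from scipy.stats import norm

alpha = 0.05      # type I error rate (two-sided)
power = 0.80      # desired power (1 - beta)
sigma = 2.0       # assumed common standard deviation
delta = 1.5       # smallest difference in means worth detecting

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

n_per_group = ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)
print(f"Approximately {n_per_group} subjects per group")
```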
Experimental design
Experimental designs uphold the basic principles of experimental statistics. There are three basic experimental designs for randomly allocating treatments to all plots of the experiment: the completely randomized design, the randomized block design, and factorial designs. Treatments can be arranged in many ways inside the experiment. In agriculture, the correct experimental design is the root of a good study, and the arrangement of treatments within the study is essential because the environment largely affects the plots (plants, livestock, microorganisms). These main arrangements can be found in the literature under names such as "lattices", "incomplete blocks", "split plot", "augmented blocks", and many others. All of the designs might include control plots, determined by the researcher, to provide an error estimate during inference.
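The random allocation behind the two simplest designs can be sketched in a few lines; the treatment names, block labels and plot counts below are arbitrary assumptions for illustration.

```python
# Random allocation of treatments to plots, illustrating two basic designs.
import random

random.seed(1)
treatments = ["A", "B", "C"]

# Completely randomized design: 4 replicates per treatment, shuffled over 12 plots.
crd = treatments * 4
random.shuffle(crd)
print("CRD plot order:", crd)

# Randomized block design: each block receives every treatment once, in random order.
for block in ["block1", "block2", "block3", "block4"]:
    order = random.sample(treatments, k=len(treatments))
    print(block, order)
```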
In clinical studies, the samples are usually smaller than in other biological studies, and in most cases, the environment effect can be controlled or measured. It is common to use randomized controlled clinical trials, where results are usually compared with observational study designs such as case–control or cohort.
Data collection
Data collection methods must be considered in research planning, because they highly influence the sample size and the experimental design.
Data collection varies according to the type of data. For qualitative data, collection can be done with structured questionnaires or by observation, considering the presence or intensity of disease and using score criteria to categorize levels of occurrence. For quantitative data, collection is done by measuring numerical information using instruments.
In agriculture and biology studies, yield data and its components can be obtained by metric measures. However, pest and disease injuries in plants are obtained by observation, considering score scales for levels of damage. Especially in genetic studies, modern methods for data collection in the field and the laboratory should be considered, such as high-throughput platforms for phenotyping and genotyping. These tools allow bigger experiments, making it possible to evaluate many plots in less time than a human-based method of data collection alone.
Finally, all collected data of interest must be stored in an organized data frame for further analysis.
Analysis and data interpretation
Descriptive tools
Data can be represented through tables or graphical representations, such as line charts, bar charts, histograms and scatter plots. Also, measures of central tendency and variability can be very useful to describe an overview of the data. Some examples follow:
Frequency tables
One type of table is the frequency table, which consists of data arranged in rows and columns, where the frequency is the number of occurrences or repetitions of the data. Frequency can be:
Absolute: represents the number of times that a given value appears;
Relative: obtained by dividing the absolute frequency by the total number;
In the next example, we have the number of genes in ten operons of the same organism.
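Such a frequency table can be computed directly, as in the sketch below; the gene counts per operon are invented solely to illustrate absolute and relative frequencies.

```python
# Absolute and relative frequencies of genes per operon (illustrative data).
from collections import Counter

genes_per_operon = [3, 4, 3, 5, 3, 4, 2, 3, 5, 4]  # ten operons (hypothetical counts)

absolute = Counter(genes_per_operon)
total = len(genes_per_operon)

print("genes  absolute  relative")
for value in sorted(absolute):
    print(f"{value:5d}  {absolute[value]:8d}  {absolute[value] / total:8.2f}")
```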
Line graph
Line graphs represent the variation of a value over another metric, such as time. In general, values are represented in the vertical axis, while the time variation is represented in the horizontal axis.
Bar chart
A bar chart is a graph that shows categorical data as bars presenting heights (vertical bar) or widths (horizontal bar) proportional to represent values. Bar charts provide an image that could also be represented in a tabular format.
In the bar chart example, we have the birth rate in Brazil for the December months from 2010 to 2016. The sharp fall in December 2016 reflects the impact of the Zika virus outbreak on the birth rate in Brazil.
Histograms
The histogram (or frequency distribution) is a graphical representation of a dataset tabulated and divided into uniform or non-uniform classes. It was first introduced by Karl Pearson.
Scatter plot
A scatter plot is a mathematical diagram that uses Cartesian coordinates to display values of a dataset. A scatter plot shows the data as a set of points, each one presenting the value of one variable determining the position on the horizontal axis and another variable on the vertical axis. They are also called scatter graph, scatter chart, scattergram, or scatter diagram.
Mean
The arithmetic mean is the sum of a collection of values (x1 + x2 + ... + xn) divided by the number of items in the collection (n): x̄ = (x1 + x2 + ... + xn) / n.
Median
The median is the value in the middle of a dataset.
Mode
The mode is the value of a set of data that appears most often.
Box plot
A box plot is a method for graphically depicting groups of numerical data. The maximum and minimum values are represented by the lines (whiskers), and the interquartile range (IQR) represents the middle 25–75% of the data. Outliers may be plotted as circles.
Correlation coefficients
Although correlations between two different kinds of data can be suggested by graphs, such as a scatter plot, it is necessary to validate this through numerical information. For this reason, correlation coefficients are required. They provide a numerical value that reflects the strength of an association.
Pearson correlation coefficient
The Pearson correlation coefficient is a measure of association between two variables, X and Y. This coefficient, usually represented by ρ (rho) for the population and r for the sample, assumes values between −1 and 1, where ρ = 1 represents a perfect positive correlation, ρ = −1 represents a perfect negative correlation, and ρ = 0 means no linear correlation.
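The sample coefficient r can be computed directly from paired observations, as in the sketch below; the x and y values are arbitrary illustrative numbers.

```python
# Sample Pearson correlation coefficient r for paired observations (illustrative).
import numpy as np
from scipy.stats import pearsonr

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

r, p_value = pearsonr(x, y)          # r and the p-value of the test of rho = 0
print(f"r = {r:.3f}, p = {p_value:.4f}")

# Equivalent computation from the definition: covariance over the product of SDs.
r_manual = np.cov(x, y, ddof=1)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))
print(f"r (manual) = {r_manual:.3f}")
```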
Inferential statistics
Inferential statistics is used to make inferences about an unknown population, by estimation and/or hypothesis testing. In other words, it is desirable to obtain parameters to describe the population of interest, but since the data are limited, it is necessary to make use of a representative sample in order to estimate them. With that, it is possible to test previously defined hypotheses and apply the conclusions to the entire population. The standard error of the mean is a measure of variability that is crucial for making inferences.
Hypothesis testing
Hypothesis testing is essential for making inferences about populations, aiming to answer research questions, as outlined in the "Research planning" section. Authors have defined four steps to be set:
The hypothesis to be tested: as stated earlier, we have to work with the definition of a null hypothesis (H0), that is going to be tested, and an alternative hypothesis. But they must be defined before the experiment implementation.
Significance level and decision rule: A decision rule depends on the level of significance, or in other words, the acceptable error rate (α). It is easier to think of this as defining a critical value against which the test statistic is compared to determine statistical significance. So, α also has to be predefined before the experiment.
Experiment and statistical analysis: This is when the experiment is really implemented following the appropriate experimental design, data is collected and the more suitable statistical tests are evaluated.
Inference: made when the null hypothesis is rejected or not rejected, based on the evidence provided by the comparison of p-values and α. Note that the failure to reject H0 only means that there is not enough evidence to support its rejection, not that this hypothesis is true.
Confidence intervals
A confidence interval is a range of values that contains the true parameter value with a given level of confidence. The first step is to compute the best unbiased estimate of the population parameter. The upper value of the interval is obtained by adding to this estimate the product of the standard error of the mean and the critical value associated with the confidence level. The calculation of the lower value is similar, but a subtraction is applied instead of a sum.
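A 95% confidence interval for a population mean can be sketched as below; the data are hypothetical, and the t critical value plays the role of the multiplier applied to the standard error.

```python
# Approximate 95% confidence interval for a mean (illustrative data).
import numpy as np
from scipy import stats

sample = np.array([5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7])
n = len(sample)

mean = sample.mean()
sem = sample.std(ddof=1) / np.sqrt(n)          # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)          # two-sided 95% critical value

lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"95% CI: ({lower:.3f}, {upper:.3f})")
```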
Statistical considerations
Power and statistical error
When testing a hypothesis, two types of statistical error are possible: type I error and type II error. The type I error, or false positive, is the incorrect rejection of a true null hypothesis, and the type II error, or false negative, is the failure to reject a false null hypothesis. The significance level denoted by α is the type I error rate and should be chosen before performing the test. The type II error rate is denoted by β, and the statistical power of the test is 1 − β.
p-value
The p-value is the probability of obtaining results as extreme as or more extreme than those observed, assuming the null hypothesis (H0) is true. It is also called the calculated probability. It is common to confuse the p-value with the significance level (α), but α is a predefined threshold for calling results significant. If p is less than α, the null hypothesis (H0) is rejected.
Multiple testing
In multiple tests of the same hypothesis, the probability of the occurrence of false positives (the familywise error rate) increases, and strategies are used to control this. This is commonly achieved by using a more stringent threshold to reject null hypotheses. The Bonferroni correction defines an acceptable global significance level, denoted by α*, and each test is individually compared with a value of α = α*/m. This ensures that the familywise error rate across all m tests is less than or equal to α*. When m is large, the Bonferroni correction may be overly conservative. An alternative to the Bonferroni correction is to control the false discovery rate (FDR). The FDR controls the expected proportion of the rejected null hypotheses (the so-called discoveries) that are false (incorrect rejections). This procedure ensures that, for independent tests, the false discovery rate is at most q*. Thus, the FDR is less conservative than the Bonferroni correction and has more power, at the cost of more false positives.
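Both corrections can be sketched in a few lines, as below; the p-values are invented, and production analyses would normally rely on a tested library routine (for example statsmodels' multipletests) rather than this minimal version.

```python
# Bonferroni and Benjamini-Hochberg (FDR) corrections for a set of p-values.
import numpy as np

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205])  # illustrative
alpha = 0.05
m = len(pvals)

# Bonferroni: compare each p-value with alpha / m.
bonferroni_rejected = pvals < alpha / m

# Benjamini-Hochberg: find the largest k with p_(k) <= (k / m) * alpha,
# then reject the hypotheses with the k smallest p-values.
order = np.argsort(pvals)
ranked = pvals[order]
below = ranked <= (np.arange(1, m + 1) / m) * alpha
k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
bh_rejected = np.zeros(m, dtype=bool)
bh_rejected[order[:k]] = True

print("Bonferroni rejections:", bonferroni_rejected.sum())
print("BH (FDR) rejections:  ", bh_rejected.sum())
```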
Mis-specification and robustness checks
The main hypothesis being tested (e.g., no association between treatments and outcomes) is often accompanied by other technical assumptions (e.g., about the form of the probability distribution of the outcomes) that are also part of the null hypothesis. When the technical assumptions are violated in practice, then the null may be frequently rejected even if the main hypothesis is true. Such rejections are said to be due to model mis-specification. Verifying whether the outcome of a statistical test does not change when the technical assumptions are slightly altered (so-called robustness checks) is the main way of combating mis-specification.
Model selection criteria
Model selection criteria select the model that best approximates the true model. The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are examples of asymptotically efficient criteria.
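For a model fitted by least squares with Gaussian errors, AIC and BIC can be computed from the residual sum of squares, as in the sketch below; the simulated data and the competing polynomial degrees are purely illustrative.

```python
# Comparing models of increasing complexity with AIC and BIC (illustrative data).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
y = 1.5 * x + 2 + rng.normal(scale=2.0, size=x.size)

n = y.size
for degree in (1, 2, 3):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    rss = np.sum(resid ** 2)
    k = degree + 2                      # polynomial coefficients + error variance
    log_lik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    aic = 2 * k - 2 * log_lik
    bic = k * np.log(n) - 2 * log_lik
    print(f"degree {degree}: AIC = {aic:.1f}, BIC = {bic:.1f}")
```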
Developments and big data
Recent developments have made a large impact on biostatistics. Two important changes have been the ability to collect data on a high-throughput scale, and the ability to perform much more complex analyses using computational techniques. This comes from developments in areas such as sequencing technologies, Bioinformatics and Machine learning (Machine learning in bioinformatics).
Use in high-throughput data
New biomedical technologies like microarrays, next-generation sequencers (for genomics) and mass spectrometry (for proteomics) generate enormous amounts of data, allowing many tests to be performed simultaneously. Careful analysis with biostatistical methods is required to separate the signal from the noise. For example, a microarray could be used to measure many thousands of genes simultaneously, determining which of them have different expression in diseased cells compared to normal cells. However, only a fraction of genes will be differentially expressed.
Multicollinearity often occurs in high-throughput biostatistical settings. Due to high intercorrelation between the predictors (such as gene expression levels), the information of one predictor might be contained in another one. It could be that only 5% of the predictors are responsible for 90% of the variability of the response. In such a case, one could apply the biostatistical technique of dimension reduction (for example via principal component analysis). Classical statistical techniques like linear or logistic regression and linear discriminant analysis do not work well for high dimensional data (i.e. when the number of observations n is smaller than the number of features or predictors p: n < p). As a matter of fact, one can get quite high R²-values despite very low predictive power of the statistical model. These classical statistical techniques (esp. least squares linear regression) were developed for low dimensional data (i.e. where the number of observations n is much larger than the number of predictors p: n >> p). In cases of high dimensionality, one should always consider an independent validation test set and the corresponding residual sum of squares (RSS) and R² of the validation test set, not those of the training set.
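A minimal dimension-reduction sketch with principal component analysis is given below; the simulated expression-like matrix (far more predictors than samples) and the choice of five components are arbitrary assumptions.

```python
# Dimension reduction of a "wide" (n < p) expression-like matrix with PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
X = rng.normal(size=(30, 2000))        # 30 samples, 2000 simulated expression features

pca = PCA(n_components=5)
scores = pca.fit_transform(X)          # samples projected onto 5 principal components

print("reduced shape:", scores.shape)
print("variance explained:", np.round(pca.explained_variance_ratio_, 3))
```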
Often, it is useful to pool information from multiple predictors together. For example, Gene Set Enrichment Analysis (GSEA) considers the perturbation of whole (functionally related) gene sets rather than of single genes. These gene sets might be known biochemical pathways or otherwise functionally related genes. The advantage of this approach is that it is more robust: It is more likely that a single gene is found to be falsely perturbed than it is that a whole pathway is falsely perturbed. Furthermore, one can integrate the accumulated knowledge about biochemical pathways (like the JAK-STAT signaling pathway) using this approach.
Bioinformatics advances in databases, data mining, and biological interpretation
The development of biological databases enables the storage and management of biological data, with the possibility of ensuring access for users around the world. They are useful for researchers depositing data, retrieving information and files (raw or processed) originating from other experiments, or indexing scientific articles, as in PubMed. Another possibility is to search for a desired term (a gene, a protein, a disease, an organism, and so on) and check all results related to this search. There are databases dedicated to SNPs (dbSNP), to knowledge on gene characterization and pathways (KEGG), and to the description of gene function, classifying it by cellular component, molecular function and biological process (Gene Ontology). In addition to databases that contain specific molecular information, there are others that are ample in the sense that they store information about an organism or group of organisms. As an example of a database directed towards just one organism, but that contains much data about it, there is the Arabidopsis thaliana genetic and molecular database – TAIR. Phytozome, in turn, stores the assemblies and annotation files of dozens of plant genomes, also containing visualization and analysis tools. Moreover, there is an interconnection between some databases for information exchange/sharing, and a major initiative was the International Nucleotide Sequence Database Collaboration (INSDC), which relates data from DDBJ, EMBL-EBI, and NCBI.
Nowadays, the increase in the size and complexity of molecular datasets has led to the use of powerful statistical methods provided by computer science algorithms developed in the field of machine learning. Therefore, data mining and machine learning allow the detection of patterns in data with a complex structure, such as biological data, by using methods of supervised and unsupervised learning, regression, cluster detection and association rule mining, among others. To indicate some of them, self-organizing maps and k-means are examples of clustering algorithms; neural network implementations and support vector machine models are examples of common machine learning algorithms.
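As one example of the unsupervised methods mentioned above, a k-means clustering sketch is given below; the simulated two-dimensional data and the choice of three clusters are illustrative only.

```python
# k-means clustering of simulated two-dimensional data into three groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Three simulated clusters around different centres.
X = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),
    rng.normal(loc=(4, 4), scale=0.5, size=(50, 2)),
    rng.normal(loc=(0, 4), scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(kmeans.labels_))
```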
Collaborative work among molecular biologists, bioinformaticians, statisticians and computer scientists is important to perform an experiment correctly, going from planning, passing through data generation and analysis, and ending with biological interpretation of the results.
Use of computationally intensive methods
On the other hand, the advent of modern computer technology and relatively cheap computing resources have enabled computer-intensive biostatistical methods like bootstrapping and re-sampling methods.
In recent times, random forests have gained popularity as a method for performing statistical classification. Random forest techniques generate a panel of decision trees. Decision trees have the advantage that you can draw them and interpret them (even with a basic understanding of mathematics and statistics). Random Forests have thus been used for clinical decision support systems.
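A minimal random-forest classification sketch is shown below; the simulated dataset and the parameter choices (100 trees, a 70/30 train-test split) are assumptions for illustration only.

```python
# Random forest classification on a simulated dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(forest.score(X_test, y_test), 3))
```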
Applications
Public health
Public health, including epidemiology, health services research, nutrition, environmental health and health care policy and management. In these medical contexts, it is important to consider the design and analysis of clinical trials. One example is the assessment of the severity state of a patient with a prognosis of a disease outcome.
With new technologies and knowledge of genetics, biostatistics is now also used for systems medicine, which consists of a more personalized medicine. For this, data from different sources are integrated, including conventional patient data, clinico-pathological parameters, molecular and genetic data, as well as data generated by additional new-omics technologies.
Quantitative genetics
The study of Population genetics and Statistical genetics in order to link variation in genotype with variation in phenotype. In other words, it is desirable to discover the genetic basis of a measurable trait, a quantitative trait, that is under polygenic control. A genome region that is responsible for a continuous trait is called a Quantitative trait locus (QTL). The study of QTLs became feasible by using molecular markers and measuring traits in populations, but their mapping requires obtaining a population from an experimental cross, like an F2 or Recombinant inbred strains/lines (RILs). To scan for QTL regions in a genome, a gene map based on linkage has to be built. Some of the best-known QTL mapping algorithms are Interval Mapping, Composite Interval Mapping, and Multiple Interval Mapping.
However, QTL mapping resolution is impaired by the amount of recombination assayed, a problem for species in which it is difficult to obtain large offspring. Furthermore, allele diversity is restricted to individuals originating from contrasting parents, which limits studies of allele diversity when we have a panel of individuals representing a natural population. For this reason, the Genome-wide association study was proposed in order to identify QTLs based on linkage disequilibrium, that is, the non-random association between traits and molecular markers. It was leveraged by the development of high-throughput SNP genotyping.
In animal and plant breeding, the use of markers in selection for breeding, mainly molecular markers, contributed to the development of marker-assisted selection. While QTL mapping is limited due to resolution, GWAS does not have enough power for rare variants of small effect that are also influenced by the environment. So, the concept of Genomic Selection (GS) arose in order to use all molecular markers in the selection and allow the prediction of the performance of candidates in this selection. The proposal is to genotype and phenotype a training population and develop a model that can obtain the genomic estimated breeding values (GEBVs) of individuals belonging to a genotyped but not phenotyped population, called the testing population. This kind of study could also include a validation population, following the concept of cross-validation, in which the real phenotype results measured in this population are compared with the phenotype results based on the prediction, which is used to check the accuracy of the model.
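The core idea, predicting GEBVs of genotyped but unphenotyped individuals from a phenotyped training population, can be sketched with a ridge-regression model as below; the simulated marker matrix, effect sizes and penalty are all illustrative assumptions, and real applications use dedicated models such as GBLUP or Bayesian alternatives.

```python
# Genomic-selection-style prediction sketch: ridge regression of phenotype on markers.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_train, n_test, n_markers = 200, 50, 1000

markers_train = rng.integers(0, 3, size=(n_train, n_markers))   # 0/1/2 allele counts
markers_test = rng.integers(0, 3, size=(n_test, n_markers))
true_effects = rng.normal(scale=0.1, size=n_markers)

pheno_train = markers_train @ true_effects + rng.normal(scale=1.0, size=n_train)

model = Ridge(alpha=50.0).fit(markers_train, pheno_train)
gebv_test = model.predict(markers_test)          # genomic estimated breeding values

true_test = markers_test @ true_effects
print("prediction accuracy (correlation):", round(np.corrcoef(gebv_test, true_test)[0, 1], 3))
```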
As a summary, some points about the application of quantitative genetics are:
This has been used in agriculture to improve crops (Plant breeding) and livestock (Animal breeding).
In biomedical research, this work can assist in finding candidate gene alleles that can cause or influence predisposition to diseases in human genetics.
Expression data
Studies of the differential expression of genes from RNA-Seq data, as for RT-qPCR and microarrays, demand comparisons of conditions. The goal is to identify genes which have a significant change in abundance between different conditions. Experiments are then designed appropriately, with replicates for each condition/treatment, randomization and blocking, when necessary. In RNA-Seq, the quantification of expression uses the information of mapped reads that are summarized in some genetic unit, such as exons that are part of a gene sequence. While microarray results can be approximated by a normal distribution, RNA-Seq count data are better explained by other distributions. The first distribution used was the Poisson, but it underestimates the sample error, leading to false positives. Currently, biological variation is considered by methods that estimate a dispersion parameter of a negative binomial distribution. Generalized linear models are used to perform the tests for statistical significance and, as the number of genes is high, multiple testing corrections have to be considered. Some examples of other analyses of genomics data come from microarray or proteomics experiments, often concerning diseases or disease stages.
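A highly simplified per-gene test in this spirit is sketched below using a negative binomial GLM from statsmodels; the counts, design and fixed dispersion are invented, and dedicated tools (e.g. edgeR, DESeq2) estimate dispersions and perform the testing far more carefully.

```python
# Minimal per-gene differential-expression test with a negative binomial GLM.
import numpy as np
import statsmodels.api as sm

counts = np.array([110, 95, 120, 300, 280, 310])   # one gene, 3 control + 3 treated samples
condition = np.array([0, 0, 0, 1, 1, 1])            # 0 = control, 1 = treatment
design = sm.add_constant(condition)                 # intercept + condition effect

# The dispersion (alpha) is fixed here for simplicity; real tools estimate it from all genes.
model = sm.GLM(counts, design, family=sm.families.NegativeBinomial(alpha=0.1))
result = model.fit()

print(result.params)                                # with the log link, the condition coefficient is a log fold change
print("p-value for condition effect:", result.pvalues[1])
```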
Other studies
Ecology, ecological forecasting
Biological sequence analysis
Systems biology for gene network inference or pathways analysis.
Clinical research and pharmaceutical development
Population dynamics, especially in regard to fisheries science.
Phylogenetics and evolution
Pharmacodynamics
Pharmacokinetics
Neuroimaging
Tools
There are many tools that can be used to perform statistical analysis of biological data. Most of them are also useful in other areas of knowledge, covering a large number of applications (listed alphabetically). Here are brief descriptions of some of them:
ASReml: Another software developed by VSNi that can be used also in R environment as a package. It is developed to estimate variance components under a general linear mixed model using restricted maximum likelihood (REML). Models with fixed effects and random effects and nested or crossed ones are allowed. Gives the possibility to investigate different variance-covariance matrix structures.
CycDesigN: A computer package developed by VSNi that helps researchers create experimental designs and analyze data coming from a design present in one of the classes handled by CycDesigN. These classes include resolvable, non-resolvable, partially replicated and crossover designs. It also includes less commonly used designs, such as the Latinized ones, e.g. the t-Latinized design.
Orange: A programming interface for high-level data processing, data mining and data visualization. It includes tools for gene expression and genomics.
R: An open source environment and programming language dedicated to statistical computing and graphics. It is an implementation of S language maintained by CRAN. In addition to its functions to read data tables, take descriptive statistics, develop and evaluate models, its repository contains packages developed by researchers around the world. This allows the development of functions written to deal with the statistical analysis of data that comes from specific applications. In the case of Bioinformatics, for example, there are packages located in the main repository (CRAN) and in others, as Bioconductor. It is also possible to use packages under development that are shared in hosting-services as GitHub.
SAS: A data analysis software widely used, going through universities, services and industry. Developed by a company with the same name (SAS Institute), it uses SAS language for programming.
PLA 3.0: A biostatistical analysis software package for regulated environments (e.g. drug testing) which supports Quantitative Response Assays (Parallel-Line, Parallel-Logistics, Slope-Ratio) and Dichotomous Assays (Quantal Response, Binary Assays). It also supports weighting methods for combination calculations and the automatic data aggregation of independent assay data.
Weka: A Java software suite for machine learning and data mining, including tools and methods for visualization, clustering, regression, association rules, and classification. There are tools for cross-validation and bootstrapping, and a module for algorithm comparison. Weka can also be run from other programming languages such as Perl or R.
Python (programming language): image analysis, deep learning, machine learning
SQL databases
NoSQL
NumPy: numerical Python
SciPy
SageMath
LAPACK: linear algebra
MATLAB
Apache Hadoop
Apache Spark
Amazon Web Services
Scope and training programs
Almost all educational programmes in biostatistics are at postgraduate level. They are most often found in schools of public health, affiliated with schools of medicine, forestry, or agriculture, or as a focus of application in departments of statistics.
In the United States, where several universities have dedicated biostatistics departments, many other top-tier universities integrate biostatistics faculty into statistics or other departments, such as epidemiology. Thus, departments carrying the name "biostatistics" may exist under quite different structures. For instance, relatively new biostatistics departments have been founded with a focus on bioinformatics and computational biology, whereas older departments, typically affiliated with schools of public health, will have more traditional lines of research involving epidemiological studies and clinical trials as well as bioinformatics. In larger universities around the world, where both a statistics and a biostatistics department exist, the degree of integration between the two departments may range from the bare minimum to very close collaboration. In general, the difference between a statistics program and a biostatistics program is twofold: (i) statistics departments will often host theoretical/methodological research which are less common in biostatistics programs and (ii) statistics departments have lines of research that may include biomedical applications but also other areas such as industry (quality control), business and economics and biological areas other than medicine.
Specialized journals
Biostatistics
International Journal of Biostatistics
Journal of Epidemiology and Biostatistics
Biostatistics and Public Health
Biometrics
Biometrika
Biometrical Journal
Communications in Biometry and Crop Science
Statistical Applications in Genetics and Molecular Biology
Statistical Methods in Medical Research
Pharmaceutical Statistics
Statistics in Medicine
See also
Bioinformatics
Epidemiological method
Epidemiology
Group size measures
Health indicator
Mathematical and theoretical biology
References
External links
The International Biometric Society
The Collection of Biostatistics Research Archive
Guide to Biostatistics (MedPageToday.com)
Biomedical Statistics
Bioinformatics
|
https://en.wikipedia.org/wiki/Braille
|
Braille ( , ) is a tactile writing system used by people who are visually impaired. It can be read either on embossed paper or by using refreshable braille displays that connect to computers and smartphone devices. Braille can be written using a slate and stylus, a braille writer, an electronic braille notetaker or with the use of a computer connected to a braille embosser.
Braille is named after its creator, Louis Braille, a Frenchman who lost his sight as a result of a childhood accident. In 1824, at the age of fifteen, he developed the braille code based on the French alphabet as an improvement on night writing. He published his system, which subsequently included musical notation, in 1829. The second revision, published in 1837, was the first binary form of writing developed in the modern era.
Braille characters are formed using a combination of six raised dots arranged in a 3 × 2 matrix, called the braille cell. The number and arrangement of these dots distinguishes one character from another. Since the various braille alphabets originated as transcription codes for printed writing, the mappings (sets of character designations) vary from language to language, and even within one; in English Braille there are 3 levels of braille: uncontracted braille, a letter-by-letter transcription used for basic literacy; contracted braille, an addition of abbreviations and contractions used as a space-saving mechanism; and grade 3, various non-standardized personal stenography that is less commonly used.
In addition to braille text (letters, punctuation, contractions), it is also possible to create embossed illustrations and graphs, with the lines either solid or made of series of dots, arrows, and bullets that are larger than braille dots. A full braille cell includes six raised dots arranged in two columns, each column having three dots. The dot positions are identified by numbers from one to six. There are 64 possible combinations, including no dots at all for a word space. Dot configurations can be used to represent a letter, digit, punctuation mark, or even a word.
Early braille education is crucial to literacy, education and employment among the blind. Despite the evolution of new technologies, including screen reader software that reads information aloud, braille provides blind people with access to spelling, punctuation and other aspects of written language less accessible through audio alone.
While some have suggested that audio-based technologies will decrease the need for braille, technological advancements such as braille displays have continued to make braille more accessible and available. Braille users highlight that braille remains as essential as print is to the sighted.
History
Braille was based on a tactile code, now known as night writing, developed by Charles Barbier. (The name "night writing" was later given to it when it was considered as a means for soldiers to communicate silently at night and without a light source, but Barbier's writings do not use this term and suggest that it was originally designed as a simpler form of writing and for the visually impaired.) In Barbier's system, sets of 12 embossed dots were used to encode 36 different sounds. Braille identified three major defects of the code: first, the symbols represented phonetic sounds and not letters of the alphabet; thus the code was unable to render the orthography of the words. Second, the 12-dot symbols could not easily fit beneath the pad of the reading finger. This required the reading finger to move in order to perceive the whole symbol, which slowed the reading process. (This was because Barbier's system was based only on the number of dots in each of two 6-dot columns but not the pattern of the dots.) Third, the code did not include symbols for numerals or punctuation. Braille's solution was to use 6-dot cells and to assign a specific pattern to each letter of the alphabet. Braille also developed symbols for representing numerals and punctuation.
At first, Braille was a one-to-one transliteration of the French alphabet, but soon various abbreviations (contractions) and even logograms were developed, creating a system much more like shorthand.
Today, there are braille codes for over 133 languages.
In English, some variations in the braille codes have traditionally existed among English-speaking countries. In 1991, work to standardize the braille codes used in the English-speaking world began. Unified English Braille (UEB) has been adopted in all seven member countries of the International Council on English Braille (ICEB) as well as Nigeria.
For blind readers, Braille is an independent writing system, rather than a code of printed orthography.
Derivation
Braille is derived from the Latin alphabet, albeit indirectly. In Braille's original system, the dot patterns were assigned to letters according to their position within the alphabetic order of the French alphabet of the time, with accented letters and w sorted at the end.
Unlike print, which consists of mostly arbitrary symbols, the braille alphabet follows a logical sequence. The first ten letters of the alphabet, a–j, use the upper four dot positions: (black dots in the table below). These stand for the ten digits 1–9 and 0 in an alphabetic numeral system similar to Greek numerals (as well as derivations of it, including Hebrew numerals, Cyrillic numerals, Abjad numerals, also Hebrew gematria and Greek isopsephy).
Though the dots are assigned in no obvious order, the cells with the fewest dots are assigned to the first three letters (and lowest digits), abc = 123 (), and to the three vowels in this part of the alphabet, aei (), whereas the even digits, 4, 6, 8, 0 (), are corners/right angles.
The next ten letters, k–t, are identical to a–j respectively, apart from the addition of a dot at position 3 (red dots in the bottom left corner of the cell in the table below):
{| class="wikitable" style="text-align:center"
|+ Derivation (colored dots) of the 26 braille letters of the basic Latin alphabet from the 10 numeric digits (black dots)
|-
|||||||||||||||||||
|-
|a/1||b/2||c/3||d/4||e/5||f/6||g/7||h/8||i/9||j/0
|-
|||||||||||||||||||
|-
|k||l||m||n||o||p||q||r||s||t
|-
||||||||||| colspan="4" rowspan="2" | ||
|-
|u||v||x||y||z||w
|}
The next ten letters (the next "decade") are the same again, but with dots also at both position 3 and position 6 (green dots in the bottom row of the cell in the table above). Here w was initially left out as not being a part of the official French alphabet at the time of Braille's life; the French braille order is u v x y z ç é à è ù ().
The next ten letters, ending in w, are the same again, except that for this series position 6 (purple dot in the bottom right corner of the cell in the table above) is used without a dot at position 3. In French braille these are the letters â ê î ô û ë ï ü œ w (). W had been tacked onto the end of 39 letters of the French alphabet to accommodate English.
The a–j series shifted down by one dot space () is used for punctuation. Letters a and c , which only use dots in the top row, were shifted two places for the apostrophe and hyphen: . (These are also the decade diacritics, at left in the table below, of the second and third decade.)
In addition, there are ten patterns that are based on the first two letters () with their dots shifted to the right; these were assigned to non-French letters (ì ä ò ), or serve non-letter functions: (superscript; in English the accent mark), (currency prefix), (capital, in English the decimal point), (number sign), (emphasis mark), (symbol prefix).
{| class="wikitable noresize" styel="text-align:center"
|+ The 64 modern braille cells
!colspan=2| decade || ||colspan=10| numeric sequence || ||colspan=2| shift right
|-
!1st
| ||
|
|
|
|
|
|
|
|
|
| ||
|
|
|-
!2nd
| ||
|
|
|
|
|
|
|
|
|
| ||
|
|
|-
!3rd
| ||
|
|
|
|
|
|
|
|
|
| ||
|
|
|-
!4th
| ||
|
|
|
|
|
|
|
|
|
| ||
|
|
|-
!5th
! shiftdown
|
|
|
|
|
|
|
|
|
|
| ||
|
|
|}
The first four decades are similar in that the decade dots are applied to the numeric sequence as a logical "inclusive OR" operation, whereas the fifth decade applies a "shift down" operation to the numeric sequence.
Originally there had been nine decades. The fifth through ninth used dashes as well as dots, but proved to be impractical and were soon abandoned. These could be replaced with what we now know as the number sign (), though that only caught on for the digits (old 5th decade → modern 1st decade). The dash occupying the top row of the original sixth decade was simply dropped, producing the modern fifth decade. (See 1829 braille.)
Assignment
Historically, there have been three principles in assigning the values of a linear script (print) to Braille: Using Louis Braille's original French letter values; reassigning the braille letters according to the sort order of the print alphabet being transcribed; and reassigning the letters to improve the efficiency of writing in braille.
Under international consensus, most braille alphabets follow the French sorting order for the 26 letters of the basic Latin alphabet, and there have been attempts at unifying the letters beyond these 26 (see international braille), though differences remain, for example, in German Braille. This unification avoids the chaos of each nation reordering the braille code to match the sorting order of its print alphabet, as happened in Algerian Braille, where braille codes were numerically reassigned to match the order of the Arabic alphabet and bear little relation to the values used in other countries (compare modern Arabic Braille, which uses the French sorting order), and as happened in an early American version of English Braille, where the letters w, x, y, z were reassigned to match English alphabetical order. A convention sometimes seen for letters beyond the basic 26 is to exploit the physical symmetry of braille patterns iconically, for example, by assigning a reversed n to ñ or an inverted s to sh. (See Hungarian Braille and Bharati Braille, which do this to some extent.)
A third principle was to assign braille codes according to frequency, with the simplest patterns (quickest ones to write with a stylus) assigned to the most frequent letters of the alphabet. Such frequency-based alphabets were used in Germany and the United States in the 19th century (see American Braille), but with the invention of the braille typewriter their advantage disappeared, and none are attested in modern use; they had the disadvantage that the resulting small number of dots in a text interfered with following the alignment of the letters, and consequently made texts more difficult to read than Braille's more arbitrary letter assignment. Finally, there are braille scripts that do not order the codes numerically at all, such as Japanese Braille and Korean Braille, which are based on more abstract principles of syllable composition.
Texts are sometimes written in a script of eight dots per cell rather than six, enabling them to encode a greater number of symbols. (See Gardner–Salinas braille codes.) Luxembourgish Braille has adopted eight-dot cells for general use; for example, it adds a dot below each letter to derive its capital variant.
Form
Braille was the first writing system with binary encoding. The system as devised by Braille consists of two parts:
Character encoding that mapped characters of the French alphabet to tuples of six bits (the dots).
The physical representation of those six-bit characters with raised dots in a braille cell.
Within an individual cell, the dot positions are arranged in two columns of three positions. A raised dot can appear in any of the six positions, producing 64 (2⁶) possible patterns, including one in which there are no raised dots. For reference purposes, a pattern is commonly described by listing the positions where dots are raised, the positions being universally numbered, from top to bottom, as 1 to 3 on the left and 4 to 6 on the right. For example, dot pattern 1-3-4 describes a cell with three dots raised, at the top and bottom in the left column and at the top of the right column: that is, the letter m. The lines of horizontal braille text are separated by a space, much like visible printed text, so that the dots of one line can be differentiated from the braille text above and below. Different assignments of braille codes (or code pages) are used to map the character sets of different printed scripts to the six-bit cells. Braille assignments have also been created for mathematical and musical notation. However, because the six-dot braille cell allows only 64 (2⁶) patterns, including space, the characters of a braille script commonly have multiple values, depending on their context. That is, character mapping between print and braille is not one-to-one. For example, the character corresponds in print to both the letter d and the digit 4.
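Since each of the six dot positions is either raised or flat, a cell can be treated as a six-bit value. The sketch below maps a set of dot numbers to the corresponding character in the Unicode braille patterns block (starting at U+2800), in which dot n corresponds to bit n − 1 of the offset; the helper function name is ours, not part of any braille standard.

```python
# Map a set of raised dot positions (1-6, extendable to 8) to a Unicode braille character.
def dots_to_braille(dots):
    # In the Unicode braille patterns block, dot n sets bit (n - 1) of the offset from U+2800.
    offset = 0
    for dot in dots:
        offset |= 1 << (dot - 1)
    return chr(0x2800 + offset)

print(dots_to_braille({1, 3, 4}))   # the letter m (dots 1-3-4)
print(dots_to_braille({1}))         # the letter a (dot 1)
print(dots_to_braille(set()))       # the empty cell (word space)
```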
In addition to simple encoding, many braille alphabets use contractions to reduce the size of braille texts and to increase reading speed. (See Contracted braille.)
Writing braille
Braille may be produced by hand using a slate and stylus in which each dot is created from the back of the page, writing in mirror image, or it may be produced on a braille typewriter or Perkins Brailler, or an electronic Brailler or braille notetaker. Braille users with access to smartphones may also activate the on-screen braille input keyboard, to type braille symbols on to their device by placing their fingers on to the screen according to the dot configuration of the symbols they wish to form. These symbols are automatically translated into print on the screen. The different tools that exist for writing braille allow the braille user to select the method that is best for a given task. For example, the slate and stylus is a portable writing tool, much like the pen and paper for the sighted. Errors can be erased using a braille eraser or can be overwritten with all six dots (). Interpoint refers to braille printing that is offset, so that the paper can be embossed on both sides, with the dots on one side appearing between the divots that form the dots on the other.
Using a computer or other electronic device, Braille may be produced with a braille embosser (printer) or a refreshable braille display (screen).
Eight-dot braille
Braille has been extended to an 8-dot code, particularly for use with braille embossers and refreshable braille displays. In 8-dot braille the additional dots are added at the bottom of the cell, giving a matrix 4 dots high by 2 dots wide. The additional dots are given the numbers 7 (for the lower-left dot) and 8 (for the lower-right dot). Eight-dot braille has the advantages that the case of an individual letter is directly coded in the cell containing the letter and that all the printable ASCII characters can be represented in a single cell. All 256 (2⁸) possible combinations of 8 dots are encoded by the Unicode standard. Braille with six dots is frequently stored as Braille ASCII.
Letters
The first 25 braille letters, up through the first half of the 3rd decade, transcribe a–z (skipping w). In English Braille, the rest of that decade is rounded out with the ligatures and, for, of, the, and with. Omitting dot 3 from these forms the 4th decade, the ligatures ch, gh, sh, th, wh, ed, er, ou, ow and the letter w.
(See English Braille.)
Formatting
Various formatting marks affect the values of the letters that follow them. They have no direct equivalent in print. The most important in English Braille are:
That is, is read as capital 'A', and as the digit '1'.
Punctuation
Basic punctuation marks in English Braille include:
is both the question mark and the opening quotation mark. Its reading depends on whether it occurs before a word or after.
is used for both opening and closing parentheses. Its placement relative to spaces and other characters determines its interpretation.
Punctuation varies from language to language. For example, French Braille uses for its question mark and swaps the quotation marks and parentheses (to and ); it uses the period () for the decimal point, as in print, and the decimal point () to mark capitalization.
Contractions
Braille contractions are words and affixes that are shortened so that they take up fewer cells. In English Braille, for example, the word afternoon is written with just three letters, , much like stenoscript. There are also several abbreviation marks that create what are effectively logograms. The most common of these is dot 5, which combines with the first letter of words. With the letter m, the resulting word is mother. There are also ligatures ("contracted" letters), which are single letters in braille but correspond to more than one letter in print. The letter and, for example, is used to write words with the sequence a-n-d in them, such as hand.
Page dimensions
Most braille embossers support between 34 and 40 cells per line, and 25 lines per page.
A manually operated Perkins braille typewriter supports a maximum of 42 cells per line (its margins are adjustable), and typical paper allows 25 lines per page.
A large interlining Stainsby has 36 cells per line and 18 lines per page.
An A4-sized Marburg braille frame, which allows interpoint braille (dots on both sides of the page, offset so they do not interfere with each other), has 30 cells per line and 27 lines per page.
Braille writing machine
A Braille writing machine is a typewriter with six keys that allows the user to write braille on a regular hard copy page.
The first Braille typewriter to gain general acceptance was invented by Frank Haven Hall (Superintendent of the Illinois School for the Blind), and was presented to the public in 1892.
The Stainsby Brailler, developed by Henry Stainsby in 1903, is a mechanical writer with a sliding carriage that moves over an aluminium plate as it embosses Braille characters. An improved version was introduced around 1933.
In 1951 David Abraham, a woodworking teacher at the Perkins School for the Blind, produced a more advanced Braille typewriter, the Perkins Brailler.
Braille printers or embossers were produced in the 1950s.
In 1960 Robert Mann, a teacher at MIT, wrote DOTSYS, software that allowed automatic braille translation, and another group created an embossing device called the "M.I.T. Braillemboss". The Mitre Corporation team of Robert Gildea, Jonathan Millen, Reid Gerhart and Joseph Sullivan (now president of Duxbury Systems) developed DOTSYS III, the first braille translator written in a portable programming language. DOTSYS III was developed for the Atlanta Public Schools as a public domain program.
In 1991 Ernest Bate developed the Mountbatten Brailler, an electronic machine used to type braille on braille paper, giving it a number of additional features such as word processing, audio feedback and embossing. This version was improved in 2008 with a quiet writer that had an erase key.
In 2011 David S. Morgan produced the first SMART Brailler machine, with an added text-to-speech function, which allowed digital capture of the data entered.
Braille reading
Braille is traditionally read in hardcopy form, such as with paper books written in braille, documents produced in paper braille (such as restaurant menus), and braille labels or public signage. It can also be read on a refreshable braille display either as a stand-alone electronic device or connected to a computer or smartphone. Refreshable braille displays convert what is visually shown on a computer or smartphone screen into braille through a series of pins that rise and fall to form braille symbols. Currently more than 1% of all printed books have been translated into hardcopy braille.
The fastest braille readers apply a light touch and read braille with two hands, although reading braille with one hand is also possible. Although the finger can read only one braille character at a time, the brain chunks braille at a higher level, processing words a digraph, root or suffix at a time. The processing largely takes place in the visual cortex.
Literacy
Children who are blind miss out on fundamental parts of early and advanced education if not provided with the necessary tools, such as access to educational materials in braille. Children who are blind or visually impaired can begin learning foundational braille skills from a very young age to become fluent braille readers as they get older. Sighted children are naturally exposed to written language on signs, on TV and in the books they see. Blind children require the same early exposure to literacy, through access to braille rich environments and opportunities to explore the world around them. Print-braille books, for example, present text in both print and braille and can be read by sighted parents to blind children (and vice versa), allowing blind children to develop an early love for reading even before formal reading instruction begins.
Adults who experience sight loss later in life or who did not have the opportunity to learn it when they were younger can also learn braille. In most cases, adults who learn braille were already literate in print before vision loss and so instruction focuses more on developing the tactile and motor skills needed to read braille.
While different countries publish statistics on how many readers in a given organization request braille, these numbers only provide a partial picture of braille literacy statistics. For example, this data does not survey the entire population of braille readers or always include readers who are no longer in the school system (adults) or readers who request electronic braille materials. Therefore, there are currently no reliable statistics on braille literacy rates, as described in a publication in the Journal of Visual Impairment and Blindness. Regardless of the precise percentage of braille readers, there is consensus that braille should be provided to all those who benefit from it.
Numerous factors influence access to braille literacy, including school budget constraints, technology advancements such as screen-reader software, access to qualified instruction, and different philosophical views over how blind children should be educated.
In the USA, a key turning point for braille literacy was the passage of the Rehabilitation Act of 1973, an act of Congress that moved thousands of children from specialized schools for the blind into mainstream public schools. Because only a small percentage of public schools could afford to train and hire braille-qualified teachers, braille literacy has declined since the law took effect. Braille literacy rates have improved slightly since the bill was passed, in part because of pressure from consumers and advocacy groups that has led 27 states to pass legislation mandating that children who are legally blind be given the opportunity to learn braille.
In 1998 there were 57,425 legally blind students registered in the United States, but only 10% (5,461) of them used braille as their primary reading medium.
Early Braille education is crucial to literacy for a blind or low-vision child. A study conducted in the state of Washington found that people who learned braille at an early age did just as well, if not better than their sighted peers in several areas, including vocabulary and comprehension. In the preliminary adult study, while evaluating the correlation between adult literacy skills and employment, it was found that 44% of the participants who had learned to read in braille were unemployed, compared to the 77% unemployment rate of those who had learned to read using print. Currently, among the estimated 85,000 blind adults in the United States, 90% of those who are braille-literate are employed. Among adults who do not know braille, only 33% are employed. Statistically, history has proven that braille reading proficiency provides an essential skill set that allows blind or low-vision children to compete with their sighted peers in a school environment and later in life as they enter the workforce.
Regardless of the specific percentage of braille readers, proponents point out the importance of increasing access to braille for all those who can benefit from it.
Braille transcription
Although it is possible to transcribe print by simply substituting the equivalent braille character for its printed equivalent, in English such a character-by-character transcription (known as uncontracted braille) is typically used by beginners or those who only engage in short reading tasks (such as reading household labels).
Braille characters are much larger than their printed equivalents, and the standard 11" by 11.5" (28 cm × 30 cm) page has room for only 25 lines of 43 characters. To reduce space and increase reading speed, most braille alphabets and orthographies use ligatures, abbreviations, and contractions. Virtually all English braille books in hardcopy (paper) format are transcribed in contracted braille: The Library of Congress's Instruction Manual for Braille Transcribing runs to over 300 pages, and braille transcribers must pass certification tests.
Uncontracted braille was previously known as grade 1 braille, and contracted braille was previously known as grade 2 braille. Uncontracted braille is a direct transliteration of print words (one-to-one correspondence); hence, the word "about" would contain all the same letters in uncontracted braille as it does in inkprint. Contracted braille includes short forms to save space; hence, for example, the letters "ab" when standing alone represent the word "about" in English contracted braille. In English, some braille users only learn uncontracted braille, particularly if braille is being used for shorter reading tasks such as reading household labels. However, those who plan to use braille for education, employment or longer texts typically go on to learn contracted braille.
The system of contractions in English Braille begins with a set of 23 words contracted to single characters. Thus the word but is contracted to the single letter b, can to c, do to d, and so on. Even this simple rule creates issues requiring special cases; for example, d is, specifically, an abbreviation of the verb do; the noun do representing the note of the musical scale is a different word and must be spelled out.
Portions of words may be contracted, and many rules govern this process. For example, the character with dots 2-3-5 (the letter "f" lowered in the Braille cell) stands for "ff" when used in the middle of a word. At the beginning of a word, this same character stands for the word "to"; the character is written in braille with no space following it. (This contraction was removed in the Unified English Braille Code.) At the end of a word, the same character represents an exclamation point.
Some contractions resemble one another more closely than their print equivalents do. For example, the contraction for "letter" differs from the contraction for "little" only by one dot in the second letter. This can cause greater confusion between the braille spellings of these words and can hinder the learning process of contracted braille.
The contraction rules take into account the linguistic structure of the word; thus, contractions are generally not to be used when their use would alter the usual braille form of a base word to which a prefix or suffix has been added. Some portions of the transcription rules are not fully codified and rely on the judgment of the transcriber. Thus, when the contraction rules permit the same word to be written in more than one way, preference is given to "the contraction that more nearly approximates correct pronunciation".
"Grade 3 braille" is a variety of non-standardized systems that include many additional shorthand-like contractions. They are not used for publication, but by individuals for their personal convenience.
Braille translation software
When people produce braille, this is called braille transcription. When computer software produces braille, the software is called a braille translator. Braille translation software exists for almost all of the common languages of the world and for many technical areas, such as mathematics (mathematical notation, handled for example by WIMATS), music (musical notation), and tactile graphics.
Braille reading techniques
Since braille is one of the few writing systems where tactile perception is used, as opposed to visual perception, a braille reader must develop new skills. One important skill for braille readers is the ability to apply smooth and even pressure when running one's fingers along the words. There are many different styles and techniques used for the understanding and development of braille, although a study by B. F. Holland suggests that no specific technique is superior to any other.
Another study by Lowenfield & Abel shows that braille can be read "the fastest and best... by students who read using the index fingers of both hands". Another important reading skill emphasized in this study is to finish reading the end of a line with the right hand and to find the beginning of the next line with the left hand simultaneously.
International uniformity
When Braille was first adapted to languages other than French, many schemes were adopted, including mapping the native alphabet to the alphabetical order of French – e.g. in English W, which was not in the French alphabet at the time, is mapped to braille X, X to Y, Y to Z, and Z to the first French-accented letter – or completely rearranging the alphabet such that common letters are represented by the simplest braille patterns. Consequently, mutual intelligibility was greatly hindered by this state of affairs. In 1878, the International Congress on Work for the Blind, held in Paris, proposed an international braille standard, where braille codes for different languages and scripts would be based, not on the order of a particular alphabet, but on phonetic correspondence and transliteration to Latin.
This unified braille has been applied to the languages of India and Africa, Arabic, Vietnamese, Hebrew, Russian, and Armenian, as well as nearly all Latin-script languages. In Greek, for example, γ (g) is written as Latin g, despite the fact that it has the alphabetic position of c; Hebrew ב (b), the second letter of the alphabet and cognate with the Latin letter b, is sometimes pronounced /b/ and sometimes /v/, and is written b or v accordingly; Russian ц (ts) is written as c, which is the usual letter for /ts/ in those Slavic languages that use the Latin alphabet; and Arabic ف (f) is written as f, despite being historically p and occurring in that part of the Arabic alphabet (between historic o and q).
Other braille conventions
Besides the simple mapping of a language's alphabetical order onto the original French order, other systems for assigning values to braille patterns are also in use. Some braille alphabets start with unified braille and then diverge significantly based on the phonology of the target language, while others diverge even further.
In the various Chinese systems, traditional braille values are used for initial consonants and the simple vowels. In both Mandarin and Cantonese Braille, however, characters have different readings depending on whether they are placed in syllable-initial (onset) or syllable-final (rime) position. For instance, the cell for Latin k represents Cantonese k (g in Yale and other modern romanizations) when initial, but aak when final, while the cell for Latin j represents Cantonese initial j but final oei.
Novel systems of braille mapping include Korean, which adopts separate syllable-initial and syllable-final forms for its consonants, explicitly grouping braille cells into syllabic groups in the same way as hangul. Japanese, meanwhile, combines independent vowel dot patterns and modifier consonant dot patterns into a single braille cell – an abugida representation of each Japanese mora.
Uses
Braille is read by people who are blind, deafblind or who have low vision, both those born with a visual impairment and those who experience sight loss later in life. Braille may also be used by print-impaired people who, although they may be fully sighted, are unable to read print because of a physical disability. Even individuals with low vision may find that they benefit from braille, depending on their level of vision or the context (for example, when lighting or colour contrast is poor). Braille is used for both short and long reading tasks. Examples of short reading tasks include braille labels for identifying household items (or cards in a wallet), reading elevator buttons, and accessing phone numbers, recipes, grocery lists and other personal notes. Examples of longer reading tasks include using braille to access educational materials, novels and magazines. People with access to a refreshable braille display can also use braille for reading email and ebooks, browsing the internet and accessing other electronic documents. It is also possible to adapt or purchase playing cards and board games in braille.
In India there are instances where acts of parliament have been published in braille, such as the Right to Information Act. Sylheti Braille is used in Northeast India.
In Canada, passenger safety information in braille and tactile seat row markers are required aboard planes, trains, large ferries, and interprovincial buses pursuant to the Canadian Transportation Agency's regulations.
In the United States, the Americans with Disabilities Act of 1990 requires various building signage to be in braille.
In the United Kingdom, it is required that medicines have the name of the medicine in Braille on the labeling.
Currency
The current series of Canadian banknotes has a tactile feature consisting of raised dots that indicate the denomination, allowing bills to be easily identified by blind or low vision people. It does not use standard braille numbers to identify the value. Instead, the number of full braille cells, which can be simply counted by both braille readers and non-braille readers alike, is an indicator of the value of the bill.
Mexican bank notes, Australian bank notes, Indian rupee notes, Israeli new shekel notes and Russian ruble notes also have special raised symbols to make them identifiable by persons who are blind or have low vision.
Euro coins were designed in cooperation with organisations representing blind people, and as a result they incorporate many features allowing them to be distinguished by touch alone. In addition, their visual appearance is designed to make them easy to tell apart for persons who cannot read the inscriptions on the coins. "A good design for the blind and partially sighted is a good design for everybody" was the principle behind the cooperation of the European Central Bank and the European Blind Union during the design phase of the first series Euro banknotes in the 1990s. As a result, the design of the first euro banknotes included several characteristics which aid both the blind and partially sighted to confidently use the notes.
Australia introduced a tactile feature on its five-dollar banknote in 2016.
In the United Kingdom, the front of the £10 polymer note (the side with raised print), has two clusters of raised dots in the top left hand corner, and the £20 note has three. This tactile feature helps blind and partially sighted people identify the value of the note.
In 2003 the US Mint introduced the commemorative Alabama state quarter, which depicts Helen Keller, who was born in the state, on the obverse and includes her name in both English script and braille. This appears to be the first known use of braille on United States coinage, although the feature is not standard on all coins of this type.
Unicode
The Braille set was added to the Unicode Standard in version 3.0 (1999).
Most braille embossers and refreshable braille displays do not use the Unicode code points, but instead reuse the 8-bit code points that are assigned to standard ASCII for braille ASCII. (Thus, for simple material, the same bitstream may be interpreted equally as visual letter forms for sighted readers or their exact semantic equivalent in tactile patterns for blind readers. However some codes have quite different tactile versus visual interpretations and most are not even defined in Braille ASCII.)
Some embossers have proprietary control codes for 8-dot braille or for full graphics mode, where dots may be placed anywhere on the page without leaving any space between braille cells so that continuous lines can be drawn in diagrams, but these are rarely used and are not standard.
The Unicode standard encodes 6-dot and 8-dot braille glyphs according to their binary appearance, rather than following their assigned numeric order. Dot 1 corresponds to the least significant bit of the low byte of the Unicode scalar value, and dot 8 to the high bit of that byte.
The Unicode block for braille is U+2800 ... U+28FF. The mapping of patterns to characters is language-dependent; even for English there are multiple conventions (see American Braille and English Braille).
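As an illustration of this bit layout, the following Python sketch (an illustrative helper, not part of any braille standard or library) computes the Unicode character for a given set of raised dots:

```python
# Map a set of raised dots (numbered 1-8) to the corresponding Unicode braille character.
# Dot n corresponds to bit (n - 1) of the offset from U+2800.
def dots_to_char(dots):
    offset = 0
    for d in dots:
        offset |= 1 << (d - 1)
    return chr(0x2800 + offset)

print(dots_to_char({1}))          # U+2801, braille pattern for the letter "a"
print(dots_to_char({1, 2, 3}))    # U+2807, braille pattern for the letter "l"
print(dots_to_char(range(1, 9)))  # U+28FF, all eight dots raised
```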
Observation
Every year on 4 January, World Braille Day is observed internationally to commemorate the birth of Louis Braille and to recognize his efforts. Although the event is not considered a public holiday, it has been recognized by the United Nations as an official day of celebration since 2019.
Braille devices
A variety of contemporary electronic devices operating in braille serve the needs of blind people, such as refreshable braille displays and braille e-book readers, which use different technologies for transmitting graphic information of different types (pictures, maps, graphs, texts, etc.).
See also
("the Braille man of India")
List of binary codes
List of international common standards
Notes
References
External links
L'association Valentin Haüy (in French)
Acting for the autonomy of blind and partially sighted persons (Corporate brochure) (Microsoft Word file, in English)
Alternate Text Production Center of the California Community Colleges.
Braille Part 1 Text To Speech For The Visually Impaired YouTube
Braille information and advice – Sense UK
Braille at Omniglot
1824 introductions
Assistive technology
Augmentative and alternative communication
Character sets
Digital typography
French inventions
Latin-script representations
Writing systems introduced in the 19th century
|
https://en.wikipedia.org/wiki/Bijection
|
A bijection is a function that is both injective (one-to-one) and surjective (onto). In other words, every element in the codomain of the function is mapped to by exactly one element in the domain of the function.
Equivalently, a bijection is a binary relation between two sets, such that each element of one set is paired with exactly one element of the other set, and each element of the other set is paired with exactly one element of the first set.
A bijection is also called a bijective function, a one-to-one correspondence, or an invertible function.
The term one-to-one correspondence must not be confused with one-to-one function, which refers to an injective function.
A bijection from a set X to a set Y has an inverse function from Y to X. There exists a bijection between two sets if and only if they have the same cardinal number, which, in the case of finite sets is simply the number of their elements.
A bijective function from a set to itself is also called a permutation, and the set of all permutations of a set forms its symmetric group.
Some bijections with further properties have received specific names, which include automorphisms, isomorphisms, homeomorphisms, diffeomorphisms, permutation groups, and most geometric transformations. Galois correspondences are bijections between sets of mathematical objects of apparently very different nature.
Definition
For a pairing between X and Y (where Y need not be different from X) to be a bijection, four properties must hold:
1. each element of X must be paired with at least one element of Y,
2. no element of X may be paired with more than one element of Y,
3. each element of Y must be paired with at least one element of X, and
4. no element of Y may be paired with more than one element of X.
Satisfying properties (1) and (2) means that a pairing is a function with domain X. It is more common to see properties (1) and (2) written as a single statement: Every element of X is paired with exactly one element of Y. Functions which satisfy property (3) are said to be "onto Y " and are called surjections (or surjective functions). Functions which satisfy property (4) are said to be "one-to-one functions" and are called injections (or injective functions). With this terminology, a bijection is a function which is both a surjection and an injection, or using other words, a bijection is a function which is both "one-to-one" and "onto".
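As an illustration of properties (1)–(4), the following Python sketch (the function name and the example pairings are purely illustrative) checks whether a pairing, represented as a dictionary, is a bijection between two finite sets:

```python
def is_bijection(pairing, X, Y):
    """Return True if `pairing` (a dict) is a bijection from the set X to the set Y."""
    # Properties (1) and (2): the pairing is a function defined on all of X.
    if set(pairing) != set(X):
        return False
    values = list(pairing.values())
    # Property (4): no element of Y is paired twice (injective).
    # Property (3): every element of Y is paired with something (surjective).
    return len(values) == len(set(values)) and set(values) == set(Y)

print(is_bijection({1: "a", 2: "b", 3: "c"}, {1, 2, 3}, {"a", "b", "c"}))  # True
print(is_bijection({1: "a", 2: "a", 3: "c"}, {1, 2, 3}, {"a", "c"}))       # False (not injective)
```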
Examples
Batting line-up of a baseball or cricket team
Consider the batting line-up of a baseball or cricket team (or any list of all the players of any sports team where every player holds a specific spot in a line-up). The set X will be the players on the team (of size nine in the case of baseball) and the set Y will be the positions in the batting order (1st, 2nd, 3rd, etc.) The "pairing" is given by which player is in what position in this order. Property (1) is satisfied since each player is somewhere in the list. Property (2) is satisfied since no player bats in two (or more) positions in the order. Property (3) says that for each position in the order, there is some player batting in that position and property (4) states that two or more players are never batting in the same position in the list.
Seats and students of a classroom
In a classroom there are a certain number of seats. A bunch of students enter the room and the instructor asks them to be seated. After a quick look around the room, the instructor declares that there is a bijection between the set of students and the set of seats, where each student is paired with the seat they are sitting in. What the instructor observed in order to reach this conclusion was that:
Every student was in a seat (there was no one standing),
No student was in more than one seat,
Every seat had someone sitting there (there were no empty seats), and
No seat had more than one student in it.
The instructor was able to conclude that there were just as many seats as there were students, without having to count either set.
More mathematical examples
For any set X, the identity function 1X: X → X, 1X(x) = x is bijective.
The function f: R → R, f(x) = 2x + 1 is bijective, since for each y there is a unique x = (y − 1)/2 such that f(x) = y. More generally, any linear function over the reals, f: R → R, f(x) = ax + b (where a is non-zero) is a bijection. Each real number y is obtained from (or paired with) the real number x = (y − b)/a.
The function f: R → (−π/2, π/2), given by f(x) = arctan(x) is bijective, since each real number x is paired with exactly one angle y in the interval (−π/2, π/2) so that tan(y) = x (that is, y = arctan(x)). If the codomain (−π/2, π/2) was made larger to include an integer multiple of π/2, then this function would no longer be onto (surjective), since there is no real number which could be paired with the multiple of π/2 by this arctan function.
The exponential function, g: R → R, g(x) = e^x, is not bijective: for instance, there is no x in R such that g(x) = −1, showing that g is not onto (surjective). However, if the codomain is restricted to the positive real numbers, then g would be bijective; its inverse (see below) is the natural logarithm function ln.
The function h: R → R+, h(x) = x^2 is not bijective: for instance, h(−1) = h(1) = 1, showing that h is not one-to-one (injective). However, if the domain is restricted to the nonnegative real numbers, then h would be bijective; its inverse is the positive square root function.
By the Schröder–Bernstein theorem, given any two sets X and Y, and two injective functions f: X → Y and g: Y → X, there exists a bijective function h: X → Y.
Inverses
A bijection f with domain X (indicated by f: X → Y in functional notation) also defines a converse relation starting in Y and going to X (by turning the arrows around). The process of "turning the arrows around" for an arbitrary function does not, in general, yield a function, but properties (3) and (4) of a bijection say that this inverse relation is a function with domain Y. Moreover, properties (1) and (2) then say that this inverse function is a surjection and an injection, that is, the inverse function exists and is also a bijection. Functions that have inverse functions are said to be invertible. A function is invertible if and only if it is a bijection.
Stated in concise mathematical notation, a function f: X → Y is bijective if and only if it satisfies the condition
for every y in Y there is a unique x in X with y = f(x).
Continuing with the baseball batting line-up example, the function that is being defined takes as input the name of one of the players and outputs the position of that player in the batting order. Since this function is a bijection, it has an inverse function which takes as input a position in the batting order and outputs the player who will be batting in that position.
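In code, a bijection stored as a Python dictionary can be inverted by swapping keys and values; the sketch below uses made-up player names to mirror the line-up example:

```python
# f: player name -> batting position (a bijection).
batting_position = {"Suzuki": 1, "Ramirez": 2, "Ortiz": 3}

# The inverse function: batting position -> player, obtained by swapping keys and values.
player_at = {pos: name for name, pos in batting_position.items()}

print(player_at[3])                    # Ortiz
print(batting_position[player_at[3]])  # 3, since f undoes its inverse
```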
Composition
The composition g ∘ f of two bijections f: X → Y and g: Y → Z is a bijection, whose inverse is given by (g ∘ f)⁻¹ = f⁻¹ ∘ g⁻¹.
Conversely, if the composition g ∘ f of two functions is bijective, it only follows that f is injective and g is surjective.
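A quick numerical check of the rule (g ∘ f)⁻¹ = f⁻¹ ∘ g⁻¹ stated above, using two simple real-valued bijections as a sketch:

```python
# f(x) = 2x + 1 and g(x) = x - 3 are bijections from the reals to the reals.
f = lambda x: 2 * x + 1
g = lambda x: x - 3
f_inv = lambda y: (y - 1) / 2
g_inv = lambda y: y + 3

x = 5.0
y = g(f(x))                  # apply the composition g o f
print(f_inv(g_inv(y)) == x)  # True: f_inv o g_inv undoes g o f
```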
Cardinality
If X and Y are finite sets, then there exists a bijection between the two sets X and Y if and only if X and Y have the same number of elements. Indeed, in axiomatic set theory, this is taken as the definition of "same number of elements" (equinumerosity), and generalising this definition to infinite sets leads to the concept of cardinal number, a way to distinguish the various sizes of infinite sets.
Properties
A function f: R → R is bijective if and only if its graph meets every horizontal and vertical line exactly once.
If X is a set, then the bijective functions from X to itself, together with the operation of functional composition (∘), form a group, the symmetric group of X, which is denoted variously by S(X), SX, or X! (X factorial).
Bijections preserve cardinalities of sets: for a subset A of the domain with cardinality |A| and subset B of the codomain with cardinality |B|, one has the following equalities:
|f(A)| = |A| and |f−1(B)| = |B|.
If X and Y are finite sets with the same cardinality, and f: X → Y, then the following are equivalent:
f is a bijection.
f is a surjection.
f is an injection.
For a finite set S, there is a bijection between the set of possible total orderings of the elements and the set of bijections from S to S. That is to say, the number of permutations of elements of S is the same as the number of total orderings of that set—namely, n!.
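This count can be verified for a small set with Python's standard library (the particular set chosen is arbitrary):

```python
import itertools
import math

S = {"a", "b", "c", "d"}
# Each permutation of S corresponds to one bijection from S to itself
# (equivalently, to one total ordering of its elements).
num_bijections = sum(1 for _ in itertools.permutations(S))

print(num_bijections)          # 24
print(math.factorial(len(S)))  # 24, i.e. n! with n = 4
```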
Category theory
Bijections are precisely the isomorphisms in the category Set of sets and set functions. However, the bijections are not always the isomorphisms for more complex categories. For example, in the category Grp of groups, the morphisms must be homomorphisms since they must preserve the group structure, so the isomorphisms are group isomorphisms which are bijective homomorphisms.
Generalization to partial functions
The notion of one-to-one correspondence generalizes to partial functions, where they are called partial bijections, although partial bijections are only required to be injective. The reason for this relaxation is that a (proper) partial function is already undefined for a portion of its domain; thus there is no compelling reason to constrain its inverse to be a total function, i.e. defined everywhere on its domain. The set of all partial bijections on a given base set is called the symmetric inverse semigroup.
Another way of defining the same notion is to say that a partial bijection from A to B is any relation
R (which turns out to be a partial function) with the property that R is the graph of a bijection f:A′→B′, where A′ is a subset of A and B′ is a subset of B.
When the partial bijection is on the same set, it is sometimes called a one-to-one partial transformation. An example is the Möbius transformation simply defined on the complex plane, rather than its completion to the extended complex plane.
Gallery
See also
Ax–Grothendieck theorem
Bijection, injection and surjection
Bijective numeration
Bijective proof
Category theory
Multivalued function
Notes
References
This topic is a basic concept in set theory and can be found in any text which includes an introduction to set theory. Almost all texts that deal with an introduction to writing proofs will include a section on set theory, so the topic may be found in any of these:
External links
Earliest Uses of Some of the Words of Mathematics: entry on Injection, Surjection and Bijection has the history of Injection and related terms.
Functions and mappings
Basic concepts in set theory
Mathematical relations
Types of functions
|
https://en.wikipedia.org/wiki/Biochemistry
|
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles and methods have been combined with problem-solving approaches from engineering to manipulate living systems, in order to produce useful tools for research, industrial processes, and diagnosis and control of disease: the discipline of biotechnology.
History
At its most comprehensive definition, biochemistry can be seen as a study of the components and composition of living things and how they come together to become life. In this sense, the history of biochemistry may therefore go back as far as the ancient Greeks. However, biochemistry as a specific scientific discipline began sometime in the 19th century, or a little earlier, depending on which aspect of biochemistry is being focused on. Some have argued that the beginning of biochemistry may have been the discovery of the first enzyme, diastase (now called amylase), in 1833 by Anselme Payen, while others considered Eduard Buchner's first demonstration of a complex biochemical process, alcoholic fermentation, in cell-free extracts in 1897 to be the birth of biochemistry. Some might also point as its beginning to the influential 1842 work by Justus von Liebig, Animal chemistry, or, Organic chemistry in its applications to physiology and pathology, which presented a chemical theory of metabolism, or even earlier to the 18th century studies on fermentation and respiration by Antoine Lavoisier. Many other pioneers in the field who helped to uncover the layers of complexity of biochemistry have been proclaimed founders of modern biochemistry. Emil Fischer, who studied the chemistry of proteins, and F. Gowland Hopkins, who studied enzymes and the dynamic nature of biochemistry, represent two examples of early biochemists.
The term "biochemistry" was first used when Vinzenz Kletzinsky (1826–1882) had his "Compendium der Biochemie" printed in Vienna in 1858; it derived from a combination of biology and chemistry. In 1877, Felix Hoppe-Seyler used the term ( in German) as a synonym for physiological chemistry in the foreword to the first issue of Zeitschrift für Physiologische Chemie (Journal of Physiological Chemistry) where he argued for the setting up of institutes dedicated to this field of study. The German chemist Carl Neuberg however is often cited to have coined the word in 1903, while some credited it to Franz Hofmeister.
It was once generally believed that life and its materials had some essential property or substance (often referred to as the "vital principle") distinct from any found in non-living matter, and it was thought that only living beings could produce the molecules of life. In 1828, Friedrich Wöhler published a paper on his serendipitous urea synthesis from potassium cyanate and ammonium sulfate; some regarded that as a direct overthrow of vitalism and the establishment of organic chemistry. However, the Wöhler synthesis has sparked controversy as some reject the death of vitalism at his hands. Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, dual polarisation interferometry, NMR spectroscopy, radioisotopic labeling, electron microscopy and molecular dynamics simulations. These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle), and led to an understanding of biochemistry on a molecular level.
Another significant historic event in biochemistry is the discovery of the gene, and its role in the transfer of information in the cell. In the 1950s, James D. Watson, Francis Crick, Rosalind Franklin and Maurice Wilkins were instrumental in solving DNA structure and suggesting its relationship with the genetic transfer of information. In 1958, George Beadle and Edward Tatum received the Nobel Prize for work in fungi showing that one gene produces one enzyme. In 1988, Colin Pitchfork was the first person convicted of murder with DNA evidence, which led to the growth of forensic science. More recently, Andrew Z. Fire and Craig C. Mello received the 2006 Nobel Prize for discovering the role of RNA interference (RNAi) in the silencing of gene expression.
Starting materials: the chemical elements of life
Around two dozen chemical elements are essential to various kinds of biological life. Most rare elements on Earth are not needed by life (exceptions being selenium and iodine), while a few common ones (aluminum and titanium) are not used. Most organisms share element needs, but there are a few differences between plants and animals. For example, ocean algae use bromine, but land plants and animals do not seem to need any. All animals require sodium, but it is not an essential element for plants. Plants need boron and silicon, but animals may not (or may need ultra-small amounts).
Just six elements—carbon, hydrogen, nitrogen, oxygen, calcium and phosphorus—make up almost 99% of the mass of living cells, including those in the human body (see composition of the human body for a complete list). In addition to the six major elements that compose most of the human body, humans require smaller amounts of possibly 18 more.
Biomolecules
The 4 main classes of molecules in biochemistry (often called biomolecules) are carbohydrates, lipids, proteins, and nucleic acids. Many biological molecules are polymers: in this terminology, monomers are relatively small molecules that are linked together to create large macromolecules known as polymers. When monomers are linked together to synthesize a biological polymer, they undergo a process called dehydration synthesis. Different macromolecules can assemble in larger complexes, often needed for biological activity.
Carbohydrates
Two of the main functions of carbohydrates are energy storage and providing structure. One of the common sugars known as glucose is a carbohydrate, but not all carbohydrates are sugars. There are more carbohydrates on Earth than any other known type of biomolecule; they are used to store energy and genetic information, as well as play important roles in cell to cell interactions and communications.
The simplest type of carbohydrate is a monosaccharide, which among other properties contains carbon, hydrogen, and oxygen, mostly in a ratio of 1:2:1 (generalized formula CnH2nOn, where n is at least 3). Glucose (C6H12O6) is one of the most important carbohydrates; others include fructose (C6H12O6), the sugar commonly associated with the sweet taste of fruits, and deoxyribose (C5H10O4), a component of DNA. A monosaccharide can switch between acyclic (open-chain) form and a cyclic form. The open-chain form can be turned into a ring of carbon atoms bridged by an oxygen atom created from the carbonyl group of one end and the hydroxyl group of another. The cyclic molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose.
In these cyclic forms, the ring usually has 5 or 6 atoms. These forms are called furanoses and pyranoses, respectively—by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the carbon-carbon double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the hydroxyl on carbon 1 and the oxygen on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a 7-atom ring called heptoses are rare.
Two monosaccharides can be joined by a glycosidic or ester bond into a disaccharide through a dehydration reaction during which a molecule of water is released. The reverse reaction in which the glycosidic bond of a disaccharide is broken into two monosaccharides is termed hydrolysis. The best-known disaccharide is sucrose or ordinary sugar, which consists of a glucose molecule and a fructose molecule joined. Another important disaccharide is lactose found in milk, consisting of a glucose molecule and a galactose molecule. Lactose may be hydrolysed by lactase, and deficiency in this enzyme results in lactose intolerance.
When a few (around three to six) monosaccharides are joined, it is called an oligosaccharide (oligo- meaning "few"). These molecules tend to be used as markers and signals, as well as having some other uses. Many monosaccharides joined form a polysaccharide. They can be joined in one long linear chain, or they may be branched. Two of the most common polysaccharides are cellulose and glycogen, both consisting of repeating glucose monomers. Cellulose is an important structural component of plant's cell walls and glycogen is used as a form of energy storage in animals.
Sugar can be characterized by having reducing or non-reducing ends. A reducing end of a carbohydrate is a carbon atom that can be in equilibrium with the open-chain aldehyde (aldose) or keto form (ketose). If the joining of monomers takes place at such a carbon atom, the free hydroxy group of the pyranose or furanose form is exchanged with an OH-side-chain of another sugar, yielding a full acetal. This prevents opening of the chain to the aldehyde or keto form and renders the modified residue non-reducing. Lactose contains a reducing end at its glucose moiety, whereas the galactose moiety forms a full acetal with the C4-OH group of glucose. Saccharose does not have a reducing end because of full acetal formation between the aldehyde carbon of glucose (C1) and the keto carbon of fructose (C2).
Lipids
Lipids comprise a diverse range of molecules, and the term is to some extent a catchall for relatively water-insoluble or nonpolar compounds of biological origin, including waxes, fatty acids, fatty-acid-derived phospholipids, sphingolipids, glycolipids, and terpenoids (e.g., retinoids and steroids). Some lipids are linear, open-chain aliphatic molecules, while others have ring structures. Some are aromatic (with a cyclic [ring] and planar [flat] structure) while others are not. Some are flexible, while others are rigid.
Lipids are usually made from one molecule of glycerol combined with other molecules. In triglycerides, the main group of bulk lipids, there is one molecule of glycerol and three fatty acids. Fatty acids are considered the monomer in that case, and may be saturated (no double bonds in the carbon chain) or unsaturated (one or more double bonds in the carbon chain).
Most lipids have some polar character in addition to being largely nonpolar. In general, the bulk of their structure is nonpolar or hydrophobic ("water-fearing"), meaning that it does not interact well with polar solvents like water. Another part of their structure is polar or hydrophilic ("water-loving") and will tend to associate with polar solvents like water. This makes them amphiphilic molecules (having both hydrophobic and hydrophilic portions). In the case of cholesterol, the polar group is a mere –OH (hydroxyl or alcohol). In the case of phospholipids, the polar groups are considerably larger and more polar, as described below.
Lipids are an integral part of our daily diet. Most oils and milk products that we use for cooking and eating like butter, cheese, ghee etc. are composed of fats. Vegetable oils are rich in various polyunsaturated fatty acids (PUFA). Lipid-containing foods undergo digestion within the body and are broken into fatty acids and glycerol, which are the final degradation products of fats and lipids. Lipids, especially phospholipids, are also used in various pharmaceutical products, either as co-solubilisers (e.g. in parenteral infusions) or else as drug carrier components (e.g. in a liposome or transfersome).
Proteins
Proteins are very large molecules—macro-biopolymers—made from monomers called amino acids. An amino acid consists of an alpha carbon atom attached to an amino group, –NH2, a carboxylic acid group, –COOH (although these exist as –NH3+ and –COO− under physiologic conditions), a simple hydrogen atom, and a side chain commonly denoted as "–R". The side chain "R" is different for each amino acid, of which there are 20 standard ones. It is this "R" group that makes each amino acid different, and the properties of the side-chains greatly influence the overall three-dimensional conformation of a protein. Some amino acids have functions by themselves or in a modified form; for instance, glutamate functions as an important neurotransmitter. Amino acids can be joined via a peptide bond. In this dehydration synthesis, a water molecule is removed and the peptide bond connects the nitrogen of one amino acid's amino group to the carbon of the other's carboxylic acid group. The resulting molecule is called a dipeptide, and short stretches of amino acids (usually, fewer than thirty) are called peptides or polypeptides. Longer stretches merit the title proteins. As an example, the important blood serum protein albumin contains 585 amino acid residues.
Proteins can have structural and/or functional roles. For instance, movements of the proteins actin and myosin ultimately are responsible for the contraction of skeletal muscle. One property many proteins have is that they specifically bind to a certain molecule or class of molecules—they may be extremely selective in what they bind. Antibodies are an example of proteins that attach to one specific type of molecule. Antibodies are composed of heavy and light chains. Two heavy chains would be linked to two light chains through disulfide linkages between their amino acids. Antibodies are specific through variation based on differences in the N-terminal domain.
The enzyme-linked immunosorbent assay (ELISA), which uses antibodies, is one of the most sensitive tests modern medicine uses to detect various biomolecules. Probably the most important proteins, however, are the enzymes. Virtually every reaction in a living cell requires an enzyme to lower the activation energy of the reaction. These molecules recognize specific reactant molecules called substrates; they then catalyze the reaction between them. By lowering the activation energy, the enzyme speeds up that reaction by a rate of 1011 or more; a reaction that would normally take over 3,000 years to complete spontaneously might take less than a second with an enzyme. The enzyme itself is not used up in the process and is free to catalyze the same reaction with a new set of substrates. Using various modifiers, the activity of the enzyme can be regulated, enabling control of the biochemistry of the cell as a whole.
The structure of proteins is traditionally described in a hierarchy of four levels. The primary structure of a protein consists of its linear sequence of amino acids; for instance, "alanine-glycine-tryptophan-serine-glutamate-asparagine-glycine-lysine-...". Secondary structure is concerned with local morphology (morphology being the study of structure). Some combinations of amino acids will tend to curl up in a coil called an α-helix or into a sheet called a β-sheet; hemoglobin, for example, contains several α-helices. Tertiary structure is the entire three-dimensional shape of the protein. This shape is determined by the sequence of amino acids. In fact, a single amino acid change can alter the entire structure. The beta chain of hemoglobin contains 146 amino acid residues; substitution of the glutamate residue at position 6 with a valine residue changes the behavior of hemoglobin so much that it results in sickle-cell disease. Finally, quaternary structure is concerned with the structure of a protein with multiple peptide subunits, like hemoglobin with its four subunits. Not all proteins have more than one subunit.
Ingested proteins are usually broken up into single amino acids or dipeptides in the small intestine and then absorbed. They can then be joined to form new proteins. Intermediate products of glycolysis, the citric acid cycle, and the pentose phosphate pathway can be used to form all twenty amino acids, and most bacteria and plants possess all the necessary enzymes to synthesize them. Humans and other mammals, however, can synthesize only half of them. They cannot synthesize isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. Because they must be ingested, these are the essential amino acids. Mammals do possess the enzymes to synthesize alanine, asparagine, aspartate, cysteine, glutamate, glutamine, glycine, proline, serine, and tyrosine, the nonessential amino acids. While they can synthesize arginine and histidine, they cannot produce them in sufficient amounts for young, growing animals, and so these are often considered essential amino acids.
If the amino group is removed from an amino acid, it leaves behind a carbon skeleton called an α-keto acid. Enzymes called transaminases can easily transfer the amino group from one amino acid (making it an α-keto acid) to another α-keto acid (making it an amino acid). This is important in the biosynthesis of amino acids, as for many of the pathways, intermediates from other biochemical pathways are converted to the α-keto acid skeleton, and then an amino group is added, often via transamination. The amino acids may then be linked together to form a protein.
A similar process is used to break down proteins: the protein is first hydrolyzed into its component amino acids. Free ammonia (NH3), existing as the ammonium ion (NH4+) in blood, is toxic to life forms. A suitable method for excreting it must therefore exist. Different tactics have evolved in different animals, depending on the animals' needs. Unicellular organisms release the ammonia into the environment. Likewise, bony fish can release the ammonia into the water, where it is quickly diluted. In general, mammals convert the ammonia into urea, via the urea cycle.
In order to determine whether two proteins are related, or in other words to decide whether they are homologous or not, scientists use sequence-comparison methods. Methods like sequence alignments and structural alignments are powerful tools that help scientists identify homologies between related molecules. The relevance of finding homologies among proteins goes beyond forming an evolutionary pattern of protein families. By finding how similar two protein sequences are, we acquire knowledge about their structure and therefore their function.
Nucleic acids
Nucleic acid, so called because of its prevalence in cellular nuclei, is the generic name for this family of biopolymers. They are complex, high-molecular-weight biochemical macromolecules that can convey genetic information in all living cells and viruses. The monomers are called nucleotides, and each consists of three components: a nitrogenous heterocyclic base (either a purine or a pyrimidine), a pentose sugar, and a phosphate group.
The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). The phosphate group and the sugar of each nucleotide bond with each other to form the backbone of the nucleic acid, while the sequence of nitrogenous bases stores the information. The most common nitrogenous bases are adenine, cytosine, guanine, thymine, and uracil. The nitrogenous bases of each strand of a nucleic acid will form hydrogen bonds with certain other nitrogenous bases in a complementary strand of nucleic acid (similar to a zipper). Adenine binds with thymine and uracil, thymine binds only with adenine, and cytosine and guanine can bind only with one another. An adenine–thymine or adenine–uracil pair forms two hydrogen bonds, while a cytosine–guanine pair forms three.
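The complementary pairing rule for DNA can be expressed in a few lines of Python; the function below is an illustrative sketch, not a routine from any bioinformatics library:

```python
# Watson-Crick pairing in DNA: A pairs with T, C pairs with G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement_strand(strand):
    """Return the complementary DNA strand, read in the same direction."""
    return "".join(PAIR[base] for base in strand)

print(complement_strand("ATCGGC"))  # TAGCCG
```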
Aside from the genetic material of the cell, nucleic acids often play a role as second messengers, as well as forming the base molecule for adenosine triphosphate (ATP), the primary energy-carrier molecule found in all living organisms. Also, the nitrogenous bases possible in the two nucleic acids are different: adenine, cytosine, and guanine occur in both RNA and DNA, while thymine occurs only in DNA and uracil occurs in RNA.
Metabolism
Carbohydrates as energy source
Glucose is an energy source in most life forms. For instance, polysaccharides are broken down into their monomers by enzymes (glycogen phosphorylase removes glucose residues from glycogen, a polysaccharide). Disaccharides like lactose or sucrose are cleaved into their two component monosaccharides.
Glycolysis (anaerobic)
Glucose is mainly metabolized by a very important ten-step pathway called glycolysis, the net result of which is to break down one molecule of glucose into two molecules of pyruvate. This also produces a net two molecules of ATP, the energy currency of cells, along with two reducing equivalents from the conversion of NAD+ (nicotinamide adenine dinucleotide, oxidized form) to NADH (nicotinamide adenine dinucleotide, reduced form). This does not require oxygen; if no oxygen is available (or the cell cannot use oxygen), NAD+ is regenerated by converting the pyruvate to lactate (lactic acid) (e.g. in humans) or to ethanol plus carbon dioxide (e.g. in yeast). Other monosaccharides like galactose and fructose can be converted into intermediates of the glycolytic pathway.
Aerobic
In aerobic cells with sufficient oxygen, as in most human cells, the pyruvate is further metabolized. It is irreversibly converted to acetyl-CoA, giving off one carbon atom as the waste product carbon dioxide, generating another reducing equivalent as NADH. The two molecules of acetyl-CoA (from one molecule of glucose) then enter the citric acid cycle, producing two molecules of ATP, six more NADH molecules and two reduced (ubi)quinones (via FADH2 as enzyme-bound cofactor), and releasing the remaining carbon atoms as carbon dioxide. The produced NADH and quinol molecules then feed into the enzyme complexes of the respiratory chain, an electron transport system transferring the electrons ultimately to oxygen and conserving the released energy in the form of a proton gradient over a membrane (inner mitochondrial membrane in eukaryotes). Thus, oxygen is reduced to water and the original electron acceptors NAD+ and quinone are regenerated. This is why humans breathe in oxygen and breathe out carbon dioxide. The energy released from transferring the electrons from high-energy states in NADH and quinol is conserved first as a proton gradient and converted to ATP via ATP synthase. This generates an additional 28 molecules of ATP (24 from the 8 NADH + 4 from the 2 quinols), totaling 32 molecules of ATP conserved per degraded glucose (two from glycolysis + two from the citrate cycle). It is clear that using oxygen to completely oxidize glucose provides an organism with far more energy than any oxygen-independent metabolic feature, and this is thought to be the reason why complex life appeared only after Earth's atmosphere accumulated large amounts of oxygen.
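The ATP bookkeeping quoted above can be tallied explicitly. The sketch below simply reproduces the figures used in this paragraph; published yields per NADH and per quinol vary between textbooks, so this is an illustration of the arithmetic rather than a definitive accounting:

```python
# Figures as quoted in the paragraph above, per molecule of glucose.
atp_glycolysis    = 2  # substrate-level phosphorylation in glycolysis
atp_citrate_cycle = 2  # substrate-level phosphorylation in the citric acid cycle
nadh_counted      = 8  # NADH molecules feeding the respiratory chain
quinols_counted   = 2  # reduced quinones (via FADH2)

atp_per_nadh   = 3  # yield implied by the text (24 ATP / 8 NADH)
atp_per_quinol = 2  # yield implied by the text (4 ATP / 2 quinols)

oxidative_atp = nadh_counted * atp_per_nadh + quinols_counted * atp_per_quinol
total_atp = atp_glycolysis + atp_citrate_cycle + oxidative_atp
print(oxidative_atp, total_atp)  # 28 32
```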
Gluconeogenesis
In vertebrates, vigorously contracting skeletal muscles (during weightlifting or sprinting, for example) do not receive enough oxygen to meet the energy demand, and so they shift to anaerobic metabolism, converting glucose to lactate.
Gluconeogenesis is the formation of glucose from noncarbohydrate sources, such as fats and proteins. It occurs mainly when glycogen supplies in the liver are exhausted. The pathway is essentially a reversal of glycolysis from pyruvate to glucose and can draw on many sources, such as amino acids, glycerol and intermediates of the Krebs cycle. Large-scale protein and fat catabolism usually occurs during starvation or certain endocrine disorders. The liver regenerates glucose using this process. Gluconeogenesis is not quite the opposite of glycolysis, and actually requires three times the amount of energy gained from glycolysis (six molecules of ATP are used, compared to the two gained in glycolysis). Analogous to the above reactions, the glucose produced can then undergo glycolysis in tissues that need energy, be stored as glycogen (or starch in plants), or be converted to other monosaccharides or joined into di- or oligosaccharides. The combined pathway of glycolysis during exercise, lactate's crossing via the bloodstream to the liver, subsequent gluconeogenesis and release of glucose into the bloodstream is called the Cori cycle.
Relationship to other "molecular-scale" biological sciences
Researchers in biochemistry use specific techniques native to biochemistry, but increasingly combine these with techniques and ideas developed in the fields of genetics, molecular biology, and biophysics. There is not a defined line between these disciplines. Biochemistry studies the chemistry required for biological activity of molecules, molecular biology studies their biological activity, and genetics studies their heredity, which happens to be carried by their genome. One possible view of the relationships between the fields is outlined below:
Biochemistry is the study of the chemical substances and vital processes occurring in live organisms. Biochemists focus heavily on the role, function, and structure of biomolecules. The study of the chemistry behind biological processes and the synthesis of biologically active molecules are applications of biochemistry. Biochemistry studies life at the atomic and molecular level.
Genetics is the study of the effect of genetic differences in organisms. This can often be inferred by the absence of a normal component (e.g. one gene), through the study of "mutants" – organisms that lack one or more functional components with respect to the so-called "wild type" or normal phenotype. Genetic interactions (epistasis) can often confound simple interpretations of such "knockout" studies.
Molecular biology is the study of molecular underpinnings of the biological phenomena, focusing on molecular synthesis, modification, mechanisms and interactions. The central dogma of molecular biology, where genetic material is transcribed into RNA and then translated into protein, despite being oversimplified, still provides a good starting point for understanding the field. This concept has been revised in light of emerging novel roles for RNA.
Chemical biology seeks to develop new tools based on small molecules that allow minimal perturbation of biological systems while providing detailed information about their function. Further, chemical biology employs biological systems to create non-natural hybrids between biomolecules and synthetic devices (for example emptied viral capsids that can deliver gene therapy or drug molecules).
See also
Lists
Important publications in biochemistry (chemistry)
List of biochemistry topics
List of biochemists
List of biomolecules
See also
Fundamental Concepts And Processes In Biochemistry
Astrobiology
Biochemistry (journal)
Biological Chemistry (journal)
Biophysics
Chemical ecology
Computational biomodeling
Dedicated bio-based chemical
EC number
Hypothetical types of biochemistry
International Union of Biochemistry and Molecular Biology
Metabolome
Metabolomics
Molecular biology
Molecular medicine
Plant biochemistry
Proteolysis
Small molecule
Structural biology
TCA cycle
Notes
a. Fructose is not the only sugar found in fruits. Glucose and sucrose are also found in varying quantities in various fruits, and sometimes exceed the fructose present. For example, 32% of the edible portion of a date is glucose, compared with 24% fructose and 8% sucrose. However, peaches contain more sucrose (6.66%) than they do fructose (0.93%) or glucose (1.47%).
References
Cited literature
Further reading
Fruton, Joseph S. Proteins, Enzymes, Genes: The Interplay of Chemistry and Biology. Yale University Press: New Haven, 1999.
Keith Roberts, Martin Raff, Bruce Alberts, Peter Walter, Julian Lewis and Alexander Johnson, Molecular Biology of the Cell
4th Edition, Routledge, March, 2002, hardcover, 1616 pp.
3rd Edition, Garland, 1994,
2nd Edition, Garland, 1989,
Kohler, Robert. From Medical Chemistry to Biochemistry: The Making of a Biomedical Discipline. Cambridge University Press, 1982.
External links
The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Biochemistry, 5th ed. Full text of Berg, Tymoczko, and Stryer, courtesy of NCBI.
SystemsX.ch – The Swiss Initiative in Systems Biology
Full text of Biochemistry by Kevin and Indira, an introductory biochemistry textbook.
Biotechnology
Molecular biology
|
https://en.wikipedia.org/wiki/Bicycle
|
A bicycle, also called a pedal cycle, bike, push-bike or cycle, is a human-powered or motor-assisted, pedal-driven, single-track vehicle, having two wheels attached to a frame, one behind the other. A bicycle rider is called a cyclist, or bicyclist.
Bicycles were introduced in the 19th century in Europe. By the early 21st century there were more than 1 billion. These numbers far exceed the number of cars, both in total and ranked by the number of individual models produced. They are the principal means of transportation in many regions. They also provide a popular form of recreation, and have been adapted for use as children's toys, general fitness, military and police applications, courier services, bicycle racing, and bicycle stunts.
The basic shape and configuration of a typical upright or "safety bicycle" has changed little since the first chain-driven model was developed around 1885. However, many details have been improved, especially since the advent of modern materials and computer-aided design. These have allowed for a proliferation of specialized designs for many types of cycling. In the 21st century, electric bicycles have become popular.
The bicycle's invention has had an enormous effect on society, both in terms of culture and of advancing modern industrial methods. Several components that played a key role in the development of the automobile were initially invented for use in the bicycle, including ball bearings, pneumatic tires, chain-driven sprockets and tension-spoked wheels.
Etymology
The word bicycle first appeared in English print in The Daily News in 1868, to describe "Bysicles and trysicles" on the "Champs Elysées and Bois de Boulogne". The word was first used in 1847 in a French publication to describe an unidentified two-wheeled vehicle, possibly a carriage. The design of the bicycle was an advance on the velocipede, although the words were used with some degree of overlap for a time.
Other words for bicycle include "bike", "pushbike", "pedal cycle", or "cycle". In Unicode, the code point for "bicycle" is U+1F6B2. The HTML entity &#x1F6B2; produces 🚲.
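A quick check of this code point using Python's standard library:

```python
import html
import unicodedata

print(chr(0x1F6B2))                    # the bicycle character
print(unicodedata.name(chr(0x1F6B2)))  # BICYCLE
print(html.unescape("&#x1F6B2;"))      # same character via the HTML entity
```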
Although bike and cycle are used interchangeably to refer mostly to two types of two-wheelers, the terms still vary across the world. In India, for example, a cycle refers only to a two-wheeler using pedal power, whereas the term bike is used to describe a two-wheeler powered by an internal combustion engine or electric motor, i.e. a motorcycle or motorbike.
History
The "dandy horse", also called Draisienne or Laufmaschine ("running machine"), was the first human means of transport to use only two wheels in tandem and was invented by the German Baron Karl von Drais. It is regarded as the first bicycle and von Drais is seen as the "father of the bicycle", but it did not have pedals. Von Drais introduced it to the public in Mannheim in 1817 and in Paris in 1818. Its rider sat astride a wooden frame supported by two in-line wheels and pushed the vehicle along with his or her feet while steering the front wheel.
The first mechanically propelled, two-wheeled vehicle may have been built by Kirkpatrick MacMillan, a Scottish blacksmith, in 1839, although the claim is often disputed. He is also associated with the first recorded instance of a cycling traffic offense, when a Glasgow newspaper in 1842 reported an accident in which an anonymous "gentleman from Dumfries-shire... bestride a velocipede... of ingenious design" knocked over a little girl in Glasgow and was fined five shillings.
In the early 1860s, Frenchmen Pierre Michaux and Pierre Lallement took bicycle design in a new direction by adding a mechanical crank drive with pedals on an enlarged front wheel (the velocipede). This was the first bicycle design to enter mass production. Another French inventor named Douglas Grasso had a failed prototype of Pierre Lallement's bicycle several years earlier. Several inventions followed using rear-wheel drive, the best known being the rod-driven velocipede by Scotsman Thomas McCall in 1869. In that same year, bicycle wheels with wire spokes were patented by Eugène Meyer of Paris. The French vélocipède, made of iron and wood, developed into the "penny-farthing" (historically known as an "ordinary bicycle", a retronym, since there was then no other kind). It featured a tubular steel frame on which were mounted wire-spoked wheels with solid rubber tires. These bicycles were difficult to ride due to their high seat and poor weight distribution. In 1868 Rowley Turner, a sales agent of the Coventry Sewing Machine Company (which soon became the Coventry Machinists Company), brought a Michaux cycle to Coventry, England. His uncle, Josiah Turner, and business partner James Starley, used this as a basis for the 'Coventry Model' in what became Britain's first cycle factory.
The dwarf ordinary addressed some of these faults by reducing the front wheel diameter and setting the seat further back. This, in turn, required gearing—effected in a variety of ways—to efficiently use pedal power. Having to both pedal and steer via the front wheel remained a problem. Englishman J.K. Starley (nephew of James Starley), J.H. Lawson, and Shergold solved this problem by introducing the chain drive (originated by the unsuccessful "bicyclette" of Englishman Henry Lawson), connecting the frame-mounted cranks to the rear wheel. These models were known as safety bicycles, dwarf safeties, or upright bicycles for their lower seat height and better weight distribution, although without pneumatic tires the ride of the smaller-wheeled bicycle would be much rougher than that of the larger-wheeled variety. Starley's 1885 Rover, manufactured in Coventry, is usually described as the first recognizably modern bicycle. Soon the seat tube was added, creating the modern bike's double-triangle diamond frame.
Further innovations increased comfort and ushered in a second bicycle craze, the 1890s Golden Age of Bicycles. In 1888, Scotsman John Boyd Dunlop introduced the first practical pneumatic tire, which soon became universal. Willie Hume demonstrated the supremacy of Dunlop's tyres in 1889, winning the tyre's first-ever races in Ireland and then England. Soon after, the rear freewheel was developed, enabling the rider to coast. This refinement led to the 1890s invention of coaster brakes. Dérailleur gears and hand-operated Bowden cable-pull brakes were also developed during these years, but were only slowly adopted by casual riders.
The Svea Velocipede with vertical pedal arrangement and locking hubs was introduced in 1892 by the Swedish engineers Fredrik Ljungström and Birger Ljungström. It attracted attention at the World Fair and was produced in a few thousand units.
In the 1870s many cycling clubs flourished. They were popular in a time when there were no cars on the market and the principal mode of transportation was horse-drawn vehicles, such as the horse and buggy or the horsecar. Among the earliest clubs was The Bicycle Touring Club, which has operated since 1878. By the turn of the century, cycling clubs flourished on both sides of the Atlantic, and touring and racing became widely popular. The Raleigh Bicycle Company was founded in Nottingham, England in 1888. It became the biggest bicycle manufacturing company in the world, making over two million bikes per year.
Bicycles and horse buggies were the two mainstays of private transportation just prior to the automobile, and the grading of smooth roads in the late 19th century was stimulated by the widespread advertising, production, and use of these devices. More than 1 billion bicycles have been manufactured worldwide as of the early 21st century. Bicycles are the most common vehicle of any kind in the world, and the most numerous model of any kind of vehicle, whether human-powered or motor vehicle, is the Chinese Flying Pigeon, with numbers exceeding 500 million. The next most numerous vehicle, the Honda Super Cub motorcycle, has had more than 100 million units made, while the most-produced car, the Toyota Corolla, has reached 44 million and counting.
Uses
Bicycles are used for transportation, bicycle commuting, and utility cycling. They are also used professionally by mail carriers, paramedics, police, messengers, and general delivery services. Military uses of bicycles include communications, reconnaissance, troop movement, supply of provisions, and patrol, such as in bicycle infantries.
They are also used for recreational purposes, including bicycle touring, mountain biking, physical fitness, and play. Bicycle sports include racing, BMX racing, track racing, criterium, roller racing, sportives and time trials. Major multi-stage professional events are the Giro d'Italia, the Tour de France, the Vuelta a España, the Tour de Pologne, and the Volta a Portugal. They are also used for entertainment and pleasure in other ways, such as in organised mass rides, artistic cycling and freestyle BMX.
Technical aspects
The bicycle has undergone continual adaptation and improvement since its inception. These innovations have continued with the advent of modern materials and computer-aided design, allowing for a proliferation of specialized bicycle types, improved bicycle safety, and riding comfort.
Types
Bicycles can be categorized in many different ways: by function, by number of riders, by general construction, by gearing or by means of propulsion. The more common types include utility bicycles, mountain bicycles, racing bicycles, touring bicycles, hybrid bicycles, cruiser bicycles, and BMX bikes. Less common are tandems, low riders, tall bikes, fixed gear, folding models, amphibious bicycles, cargo bikes, recumbents and electric bicycles.
Unicycles, tricycles and quadracycles are not strictly bicycles, as they have respectively one, three and four wheels, but are often referred to informally as "bikes" or "cycles".
Dynamics
A bicycle stays upright while moving forward by being steered so as to keep its center of mass over the wheels. This steering is usually provided by the rider, but under certain conditions may be provided by the bicycle itself.
The combined center of mass of a bicycle and its rider must lean into a turn to successfully navigate it. This lean is induced by a method known as countersteering, which can be performed by the rider turning the handlebars directly with the hands or indirectly by leaning the bicycle.
Short-wheelbase or tall bicycles, when braking, can generate enough stopping force at the front wheel to flip longitudinally. The act of purposefully using this force to lift the rear wheel and balance on the front without tipping over is a trick known as a stoppie, endo, or front wheelie.
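A rough sense of when braking will pitch a bicycle forward can be obtained from a simple quasi-static model: the rear wheel begins to lift once deceleration exceeds g multiplied by the ratio of the horizontal distance from the front-wheel contact patch to the combined centre of mass over the height of that centre of mass. The Python sketch below is illustrative only; the geometry values are assumptions, not measurements from any cited source.

# Estimate the maximum braking deceleration before the rear wheel lifts
# (quasi-static pitch-over model for a rider plus bicycle).
g = 9.81       # gravitational acceleration, m/s^2
x_cm = 0.45    # assumed horizontal distance from front contact patch to centre of mass, m
h_cm = 1.10    # assumed height of the combined centre of mass above the ground, m

a_max = g * x_cm / h_cm   # deceleration at which the rear wheel unloads
print(f"Rear wheel lifts at about {a_max:.1f} m/s^2 ({a_max / g:.2f} g) of braking deceleration")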
Performance
The bicycle is extraordinarily efficient in both biological and mechanical terms. The bicycle is the most efficient human-powered means of transportation in terms of energy a person must expend to travel a given distance. From a mechanical viewpoint, up to 99% of the energy delivered by the rider into the pedals is transmitted to the wheels, although the use of gearing mechanisms may reduce this by 10–15%. In terms of the ratio of cargo weight a bicycle can carry to total weight, it is also an efficient means of cargo transportation.
A human traveling on a bicycle at low to medium speeds uses only about the power required to walk. Air drag, which is proportional to the square of speed, requires dramatically higher power outputs as speeds increase. If the rider is sitting upright, the rider's body creates about 75% of the total drag of the bicycle/rider combination. Drag can be reduced by seating the rider in a more aerodynamically streamlined position. Drag can also be reduced by covering the bicycle with an aerodynamic fairing. The fastest recorded unpaced speed on a flat surface is .
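Because drag force grows with the square of speed, the power needed to overcome it grows with the cube of speed. The Python sketch below illustrates that scaling using assumed, representative values for air density, drag area, mass and rolling resistance; none of the numbers are taken from the sources cited in this article.

# Illustrative power-versus-speed calculation for a rider on level ground.
# Aerodynamic power: 0.5 * rho * CdA * v**3; rolling power: Crr * m * g * v.
rho = 1.225   # air density, kg/m^3 (assumed)
cda = 0.40    # drag area of an upright rider plus bicycle, m^2 (assumed)
crr = 0.005   # rolling-resistance coefficient (assumed)
m = 85.0      # combined rider and bicycle mass, kg (assumed)
g = 9.81      # gravitational acceleration, m/s^2

for v_kmh in (15, 25, 35, 45):
    v = v_kmh / 3.6                      # convert km/h to m/s
    p_drag = 0.5 * rho * cda * v**3      # watts spent against air drag
    p_roll = crr * m * g * v             # watts spent against rolling resistance
    print(f"{v_kmh:2d} km/h: drag {p_drag:5.0f} W, rolling {p_roll:4.0f} W")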
In addition, the carbon dioxide generated in the production and transportation of the food required by the bicyclist, per mile traveled, is less than that generated by energy efficient motorcars.
Parts
Frame
The great majority of modern bicycles have a frame with upright seating that looks much like the first chain-driven bike. These upright bicycles almost always feature the diamond frame, a truss consisting of two triangles: the front triangle and the rear triangle. The front triangle consists of the head tube, top tube, down tube, and seat tube. The head tube contains the headset, the set of bearings that allows the fork to turn smoothly for steering and balance. The top tube connects the head tube to the seat tube at the top, and the down tube connects the head tube to the bottom bracket. The rear triangle consists of the seat tube and paired chain stays and seat stays. The chain stays run parallel to the chain, connecting the bottom bracket to the rear dropout, where the axle for the rear wheel is held. The seat stays connect the top of the seat tube (at or near the same point as the top tube) to the rear fork ends.
Historically, women's bicycle frames had a top tube that connected in the middle of the seat tube instead of the top, resulting in a lower standover height at the expense of compromised structural integrity, since this places a strong bending load in the seat tube, and bicycle frame members are typically weak in bending. This design, referred to as a step-through frame or as an open frame, allows the rider to mount and dismount in a dignified way while wearing a skirt or dress. While some women's bicycles continue to use this frame style, there is also a variation, the mixte, which splits the top tube laterally into two thinner top tubes that bypass the seat tube on each side and connect to the rear fork ends. The ease of stepping through is also appreciated by those with limited flexibility or other joint problems. Because of its persistent image as a "women's" bicycle, step-through frames are not common for larger frames.
Step-throughs were popular partly for practical reasons and partly for social mores of the day. For most of the history of bicycles' popularity women have worn long skirts, and the lower frame accommodated these better than the top-tube. Furthermore, it was considered "unladylike" for women to open their legs to mount and dismount—in more conservative times women who rode bicycles at all were vilified as immoral or immodest. These practices were akin to the older practice of riding horse sidesaddle.
Another style is the recumbent bicycle. These are inherently more aerodynamic than upright versions, as the rider may lean back onto a support and operate pedals that are on about the same level as the seat. The world's fastest bicycle is a recumbent bicycle but this type was banned from competition in 1934 by the Union Cycliste Internationale.
Historically, materials used in bicycles have followed a similar pattern as in aircraft, the goal being high strength and low weight. Since the late 1930s alloy steels have been used for frame and fork tubes in higher quality machines. By the 1980s aluminum welding techniques had improved to the point that aluminum tube could safely be used in place of steel. Since then aluminum alloy frames and other components have become popular due to their light weight, and most mid-range bikes are now principally aluminum alloy of some kind. More expensive bikes use carbon fibre due to its significantly lighter weight and profiling ability, allowing designers to make a bike both stiff and compliant by manipulating the lay-up. Virtually all professional racing bicycles now use carbon fibre frames, as they have the best strength-to-weight ratio. A typical modern carbon fiber frame can weigh less than .
Other exotic frame materials include titanium and advanced alloys. Bamboo, a natural composite material with high strength-to-weight ratio and stiffness, has been used for bicycles since 1894. Recent versions use bamboo for the primary frame with glued metal connections and parts, priced as exotic models.
Drivetrain and gearing
The drivetrain begins with pedals which rotate the cranks, which are held in axis by the bottom bracket. Most bicycles use a chain to transmit power to the rear wheel. A very small number of bicycles use a shaft drive to transmit power, or special belts. Hydraulic bicycle transmissions have been built, but they are currently inefficient and complex.
Since cyclists' legs are most efficient over a narrow range of pedaling speeds, or cadence, a variable gear ratio helps a cyclist to maintain an optimum pedalling speed while covering varied terrain. Some, mainly utility, bicycles use hub gears with between 3 and 14 ratios, but most use the generally more efficient dérailleur system, by which the chain is moved between different cogs called chainrings and sprockets to select a ratio. A dérailleur system normally has two dérailleurs, or mechs, one at the front to select the chainring and another at the back to select the sprocket. Most bikes have two or three chainrings, and from 5 to 11 sprockets on the back, with the number of theoretical gears calculated by multiplying front by back. In reality, many gears overlap or require the chain to run diagonally, so the number of usable gears is fewer.
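The theoretical gear count described above (front chainrings multiplied by rear sprockets) and the smaller number of genuinely distinct ratios can be illustrated with a short Python sketch. The tooth counts below are assumed, typical values rather than figures from this article.

# Theoretical versus roughly distinct gear ratios for an assumed 2x10 drivetrain.
chainrings = [34, 50]                                   # front tooth counts (assumed)
sprockets = [11, 12, 13, 14, 15, 17, 19, 21, 24, 28]    # rear tooth counts (assumed)

ratios = [front / rear for front in chainrings for rear in sprockets]
print(f"theoretical gears: {len(ratios)}")              # 2 x 10 = 20

# Treat ratios within about 3% of one another as overlapping duplicates.
distinct = []
for r in sorted(ratios):
    if not distinct or r / distinct[-1] > 1.03:
        distinct.append(r)
print(f"roughly distinct ratios: {len(distinct)}")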
An alternative to chain drive is to use a synchronous belt. These are toothed and work much the same way as a chain. Popular with commuters and long-distance cyclists, they require little maintenance. They cannot be shifted across a cassette of sprockets, and are used either as single speed or with a hub gear.
Different gears and ranges of gears are appropriate for different people and styles of cycling. Multi-speed bicycles allow gear selection to suit the circumstances: a cyclist could use a high gear when cycling downhill, a medium gear when cycling on a flat road, and a low gear when cycling uphill. In a lower gear every turn of the pedals leads to fewer rotations of the rear wheel. This allows the energy required to move the same distance to be distributed over more pedal turns, reducing fatigue when riding uphill, with a heavy load, or against strong winds. A higher gear allows a cyclist to make fewer pedal turns to maintain a given speed, but with more effort per turn of the pedals.
With a chain drive transmission, a chainring attached to a crank drives the chain, which in turn rotates the rear wheel via the rear sprocket(s) (cassette or freewheel). There are four gearing options: two-speed hub gear integrated with chain ring, up to 3 chain rings, up to 11 sprockets, hub gear built into rear wheel (3-speed to 14-speed). The most common options are either a rear hub or multiple chain rings combined with multiple sprockets (other combinations of options are possible but less common).
Steering
The handlebars connect to the stem that connects to the fork that connects to the front wheel, and the whole assembly connects to the bike and rotates about the steering axis via the headset bearings. Three styles of handlebar are common. Upright handlebars, the norm in Europe and elsewhere until the 1970s, curve gently back toward the rider, offering a natural grip and comfortable upright position. Drop handlebars "drop" as they curve forward and down, offering the cyclist best braking power from a more aerodynamic "crouched" position, as well as more upright positions in which the hands grip the brake lever mounts, the forward curves, or the upper flat sections for increasingly upright postures. Mountain bikes generally feature a 'straight handlebar' or 'riser bar' with varying degrees of backward sweep and upward rise, as well as wider widths, which can provide better handling due to increased leverage against the wheel.
Seating
Saddles also vary with rider preference, from the cushioned ones favored by short-distance riders to narrower saddles which allow more room for leg swings. Comfort depends on riding position. With comfort bikes and hybrids, cyclists sit high over the seat, their weight directed down onto the saddle, such that a wider and more cushioned saddle is preferable. For racing bikes where the rider is bent over, weight is more evenly distributed between the handlebars and saddle, the hips are flexed, and a narrower and harder saddle is more efficient. Differing saddle designs exist for male and female cyclists, accommodating the genders' differing anatomies and sit bone width measurements, although bikes typically are sold with saddles most appropriate for men. Suspension seat posts and seat springs provide comfort by absorbing shock but can add to the overall weight of the bicycle.
A recumbent bicycle has a reclined chair-like seat that some riders find more comfortable than a saddle, especially riders who suffer from certain types of seat, back, neck, shoulder, or wrist pain. Recumbent bicycles may have either under-seat or over-seat steering.
Brakes
Bicycle brakes may be rim brakes, in which friction pads are compressed against the wheel rims; hub brakes, where the mechanism is contained within the wheel hub; or disc brakes, where pads act on a rotor attached to the hub. Most road bicycles use rim brakes, but some use disc brakes. Disc brakes are more common for mountain bikes, tandems and recumbent bicycles than on other types of bicycles, due to their increased power, coupled with an increased weight and complexity.
With hand-operated brakes, force is applied to brake levers mounted on the handlebars and transmitted via Bowden cables or hydraulic lines to the friction pads, which apply pressure to the braking surface, causing friction which slows the bicycle down. A rear hub brake may be either hand-operated or pedal-actuated, as in the back pedal coaster brakes which were popular in North America until the 1960s.
Track bicycles do not have brakes, because all riders ride in the same direction around a track which does not necessitate sharp deceleration. Track riders are still able to slow down because all track bicycles are fixed-gear, meaning that there is no freewheel. Without a freewheel, coasting is impossible, so when the rear wheel is moving, the cranks are moving. To slow down, the rider applies resistance to the pedals, acting as a braking system which can be as effective as a conventional rear wheel brake, but not as effective as a front wheel brake.
Suspension
Bicycle suspension refers to the system or systems used to suspend the rider and all or part of the bicycle. This serves two purposes: to keep the wheels in continuous contact with the ground, improving control, and to isolate the rider and luggage from jarring due to rough surfaces, improving comfort.
Bicycle suspensions are used primarily on mountain bicycles, but are also common on hybrid bicycles, as they can help deal with problematic vibration from poor surfaces. Suspension is especially important on recumbent bicycles, since while an upright bicycle rider can stand on the pedals to achieve some of the benefits of suspension, a recumbent rider cannot.
Basic mountain bicycles and hybrids usually have front suspension only, whilst more sophisticated ones also have rear suspension. Road bicycles tend to have no suspension.
Wheels and tires
The wheel axle fits into fork ends in the frame and fork. A pair of wheels may be called a wheelset, especially in the context of ready-built "off the shelf", performance-oriented wheels.
Tires vary enormously depending on their intended purpose. Road bicycles use tires 18 to 25 millimeters wide, most often completely smooth, or slick, and inflated to high pressure to roll fast on smooth surfaces. Off-road tires are usually wider, and have treads for gripping in muddy conditions or metal studs for ice.
Groupset
Groupset generally refers to all of the components that make up a bicycle excluding the bicycle frame, fork, stem, wheels, tires, and rider contact points, such as the saddle and handlebars.
Accessories
Some components, which are often optional accessories on sports bicycles, are standard features on utility bicycles to enhance their usefulness, comfort, safety and visibility. Fenders with spoilers (mudflaps) protect the cyclist and moving parts from spray when riding through wet areas. In some countries (e.g. Germany, UK), fenders are called mudguards. The chainguards protect clothes from oil on the chain while preventing clothing from being caught between the chain and crankset teeth. Kick stands keep bicycles upright when parked, and bike locks deter theft. Front-mounted baskets, front or rear luggage carriers or racks, and panniers mounted above either or both wheels can be used to carry equipment or cargo. Pegs can be fastened to one, or both of the wheel hubs to either help the rider perform certain tricks, or allow a place for extra riders to stand, or rest. Parents sometimes add rear-mounted child seats, an auxiliary saddle fitted to the crossbar, or both to transport children. Bicycles can also be fitted with a hitch to tow a trailer for carrying cargo, a child, or both.
Toe-clips and toestraps and clipless pedals help keep the foot locked in the proper pedal position and enable cyclists to pull and push the pedals. Technical accessories include cyclocomputers for measuring speed, distance, heart rate, GPS data etc. Other accessories include lights, reflectors, mirrors, racks, trailers, bags, water bottles and cages, and bells. Bicycle lights, reflectors, and helmets are required by law in some geographic regions depending on the legal code. It is more common to see bicycles with bottle generators, dynamos, lights, fenders, racks and bells in Europe. Bicyclists also have specialized form-fitting and high-visibility clothing.
Children's bicycles may be outfitted with cosmetic enhancements such as bike horns, streamers, and spoke beads. Training wheels are sometimes used when learning to ride, but a dedicated balance bike teaches independent riding more effectively.
Bicycle helmets can reduce injury in the event of a collision or accident, and a suitable helmet is legally required of riders in many jurisdictions. Helmets may be classified as an accessory or as an item of clothing.
Bike trainers are used to enable cyclists to cycle while the bike remains stationary. They are frequently used to warm up before races or indoors when riding conditions are unfavorable.
Standards
A number of formal and industry standards exist for bicycle components to help make spare parts exchangeable and to maintain a minimum product safety.
The International Organization for Standardization (ISO) has a special technical committee for cycles, TC149, that has the scope of "Standardization in the field of cycles, their components and accessories with particular reference to terminology, testing methods and requirements for performance and safety, and interchangeability".
The European Committee for Standardization (CEN) also has a specific Technical Committee, TC333, that defines European standards for cycles. Their mandate states that EN cycle standards shall harmonize with ISO standards. Some CEN cycle standards were developed before ISO published their standards, leading to strong European influences in this area. European cycle standards tend to describe minimum safety requirements, while ISO standards have historically harmonized parts geometry.
Maintenance and repair
Like all devices with mechanical moving parts, bicycles require a certain amount of regular maintenance and replacement of worn parts. A bicycle is relatively simple compared with a car, so some cyclists choose to do at least part of the maintenance themselves. Some components are easy to handle using relatively simple tools, while other components may require specialist manufacturer-dependent tools.
Many bicycle components are available at several different price/quality points; manufacturers generally try to keep all components on any particular bike at about the same quality level, though at the very cheap end of the market there may be some skimping on less obvious components (e.g. bottom bracket).
There are several hundred assisted-service Community Bicycle Organizations worldwide. At a Community Bicycle Organization, laypeople bring in bicycles needing repair or maintenance; volunteers teach them how to do the required steps.
Full service is available from bicycle mechanics at a local bike shop.
In areas where it is available, some cyclists purchase roadside assistance from companies such as the Better World Club or the American Automobile Association.
Maintenance
The most basic maintenance item is keeping the tires correctly inflated; this can make a noticeable difference as to how the bike feels to ride. Bicycle tires usually have a marking on the sidewall indicating the pressure appropriate for that tire. Bicycles use much higher pressures than cars: car tires are normally in the range of , whereas bicycle tires are normally in the range of .
Another basic maintenance item is regular lubrication of the chain and pivot points for derailleurs and brake components. Most of the bearings on a modern bike are sealed and grease-filled and require little or no attention; such bearings will usually last for or more. The crank bearings require periodic maintenance, which involves removing, cleaning and repacking with the correct grease.
The chain and the brake blocks are the components which wear out most quickly, so these need to be checked from time to time, typically every or so. Most local bike shops will do such checks for free. Note that when a chain becomes badly worn it will also wear out the rear cogs/cassette and eventually the chain ring(s), so replacing a chain when only moderately worn will prolong the life of other components.
Over the longer term, tires do wear out, after ; a rash of punctures is often the most visible sign of a worn tire.
Repair
Very few bicycle components can actually be repaired; replacement of the failing component is the normal practice.
The most common roadside problem is a puncture. After removing the offending nail/tack/thorn/glass shard/etc., there are two approaches: either mend the puncture by the roadside, or replace the inner tube and then mend the puncture in the comfort of home. Some brands of tires are much more puncture-resistant than others, often incorporating one or more layers of Kevlar; the downside of such tires is that they may be heavier and/or more difficult to fit and remove.
Tools
There are specialized bicycle tools for use both in the shop and at the roadside. Many cyclists carry tool kits. These may include a tire patch kit (which, in turn, may contain any combination of a hand pump or CO2 pump, tire levers, spare tubes, self-adhesive patches, or tube-patching material, an adhesive, a piece of sandpaper or a metal grater (for roughing the tube surface to be patched) and sometimes even a block of French chalk), wrenches, hex keys, screwdrivers, and a chain tool. Special, thin wrenches are often required for maintaining various screw-fastened parts, specifically, the frequently lubricated ball-bearing "cones". There are also cycling-specific multi-tools that combine many of these implements into a single compact device. More specialized bicycle components may require more complex tools, including proprietary tools specific for a given manufacturer.
Social and historical aspects
The bicycle has had a considerable effect on human society, in both the cultural and industrial realms.
In daily life
Around the turn of the 20th century, bicycles reduced crowding in inner-city tenements by allowing workers to commute from more spacious dwellings in the suburbs. They also reduced dependence on horses. Bicycles allowed people to travel for leisure into the country, since bicycles were three times as energy efficient as walking and three to four times as fast.
In built-up cities around the world, urban planning uses cycling infrastructure like bikeways to reduce traffic congestion and air pollution. A number of cities around the world have implemented schemes known as bicycle sharing systems or community bicycle programs. The first of these was the White Bicycle plan in Amsterdam in 1965. It was followed by yellow bicycles in La Rochelle and green bicycles in Cambridge. These initiatives complement public transport systems and offer an alternative to motorized traffic to help reduce congestion and pollution. In Europe, especially in the Netherlands and parts of Germany and Denmark, bicycle commuting is common. In Copenhagen, a cyclists' organization runs a Cycling Embassy that promotes biking for commuting and sightseeing. The United Kingdom has a tax break scheme (IR 176) that allows employees to buy a new bicycle tax free to use for commuting.
In the Netherlands all train stations offer free bicycle parking, or a more secure parking place for a small fee, with the larger stations also offering bicycle repair shops. Cycling is so popular that parking capacity may be exceeded; in some places, such as Delft, it usually is. In Trondheim in Norway, the Trampe bicycle lift has been developed to encourage cyclists by giving assistance on a steep hill. Buses in many cities have bicycle carriers mounted on the front.
There are towns in some countries where bicycle culture has been an integral part of the landscape for generations, even without much official support. That is the case of Ílhavo, in Portugal.
In cities where bicycles are not integrated into the public transportation system, commuters often use bicycles as elements of a mixed-mode commute, where the bike is used to travel to and from train stations or other forms of rapid transit. Some students who commute several miles drive a car from home to a campus parking lot, then ride a bicycle to class. Folding bicycles are useful in these scenarios, as they are less cumbersome when carried aboard. Los Angeles removed a small amount of seating on some trains to make more room for bicycles and wheelchairs.
Some US companies, notably in the tech sector, are developing both innovative cycle designs and cycle-friendliness in the workplace. Foursquare, whose CEO Dennis Crowley "pedaled to pitch meetings ... [when he] was raising money from venture capitalists" on a two-wheeler, chose a new location for its New York headquarters "based on where biking would be easy". Parking in the office was also integral to HQ planning. Mitchell Moss, who runs the Rudin Center for Transportation Policy & Management at New York University, said in 2012: "Biking has become the mode of choice for the educated high tech worker".
Bicycles offer an important mode of transport in many developing countries. Until recently, bicycles have been a staple of everyday life throughout Asian countries. They are the most frequently used method of transport for commuting to work, school, shopping, and life in general. In Europe, bicycles are commonly used. They also offer a degree of exercise to keep individuals healthy.
Bicycles are also celebrated in the visual arts. An example of this is the Bicycle Film Festival, a film festival hosted all around the world.
Poverty alleviation
Female emancipation
The safety bicycle gave women unprecedented mobility, contributing to their emancipation in Western nations. As bicycles became safer and cheaper, more women had access to the personal freedom that bicycles embodied, and so the bicycle came to symbolize the New Woman of the late 19th century, especially in Britain and the United States. The bicycle craze in the 1890s also led to a movement for so-called rational dress, which helped liberate women from corsets and ankle-length skirts and other restrictive garments, substituting the then-shocking bloomers.
The bicycle was recognized by 19th-century feminists and suffragists as a "freedom machine" for women. American Susan B. Anthony said in a New York World interview on 2 February 1896: "I think it has done more to emancipate woman than any one thing in the world. I rejoice every time I see a woman ride by on a wheel. It gives her a feeling of self-reliance and independence the moment she takes her seat; and away she goes, the picture of untrammelled womanhood." In 1895 Frances Willard, the tightly laced president of the Woman's Christian Temperance Union, wrote A Wheel Within a Wheel: How I Learned to Ride the Bicycle, with Some Reflections by the Way, a 75-page illustrated memoir praising "Gladys", her bicycle, for its "gladdening effect" on her health and political optimism. Willard used a cycling metaphor to urge other suffragists to action.
In 1985, Georgena Terry started the first women-specific bicycle company. Her designs featured frame geometry and wheel sizes chosen to better fit women, with shorter top tubes and more suitable reach.
Economic implications
Bicycle manufacturing proved to be a training ground for other industries and led to the development of advanced metalworking techniques, both for the frames themselves and for special components such as ball bearings, washers, and sprockets. These techniques later enabled skilled metalworkers and mechanics to develop the components used in early automobiles and aircraft.
Wilbur and Orville Wright, a pair of businessmen, ran the Wright Cycle Company which designed, manufactured and sold their bicycles during the bike boom of the 1890s.
Bicycle manufacturing also pioneered industrial methods later adopted elsewhere, including mechanization and mass production (later copied and adopted by Ford and General Motors), vertical integration (also later copied and adopted by Ford), aggressive advertising (as much as 10% of all advertising in U.S. periodicals in 1898 was by bicycle makers), and lobbying for better roads (which had the side benefit of acting as advertising, and of improving sales by providing more places to ride), all first practiced by Pope. In addition, bicycle makers adopted the annual model change (later derided as planned obsolescence, and usually credited to General Motors), which proved very successful.
Early bicycles were an example of conspicuous consumption, being adopted by the fashionable elites. In addition, by serving as a platform for accessories, which could ultimately cost more than the bicycle itself, it paved the way for the likes of the Barbie doll.
Bicycles helped create, or enhance, new kinds of businesses, such as bicycle messengers, traveling seamstresses, riding academies, and racing rinks. Their board tracks were later adapted to early motorcycle and automobile racing. There were a variety of new inventions, such as spoke tighteners, and specialized lights, socks and shoes, and even cameras, such as the Eastman Company's Poco. Probably the best known and most widely used of these inventions, adopted well beyond cycling, is Charles Bennett's Bike Web, which came to be called the jock strap.
They also presaged a move away from public transit that would explode with the introduction of the automobile.
J. K. Starley's company became the Rover Cycle Company Ltd. in the late 1890s, and then renamed the Rover Company when it started making cars. Morris Motors Limited (in Oxford) and Škoda also began in the bicycle business, as did the Wright brothers. Alistair Craig, whose company eventually emerged to become the engine manufacturers Ailsa Craig, also started from manufacturing bicycles, in Glasgow in March 1885.
In general, U.S. and European cycle manufacturers used to assemble cycles from their own frames and components made by other companies, although very large companies (such as Raleigh) used to make almost every part of a bicycle (including bottom brackets, axles, etc.) In recent years, those bicycle makers have greatly changed their methods of production. Now, almost none of them produce their own frames.
Many newer or smaller companies only design and market their products; the actual production is done by Asian companies. For example, some 60% of the world's bicycles are now being made in China. Despite this shift in production, as nations such as China and India become more wealthy, their own use of bicycles has declined due to the increasing affordability of cars and motorcycles. One of the major reasons for the proliferation of Chinese-made bicycles in foreign markets is the lower cost of labor in China.
Amid the European financial crisis, the number of bicycles sold in Italy in 2011 (1.75 million) just passed the number of new cars sold.
Environmental impact
One of the profound economic implications of bicycle use is that it liberates the user from motor fuel consumption. (Ballantine, 1972) The bicycle is an inexpensive, fast, healthy and environmentally friendly mode of transport. Ivan Illich stated that bicycle use extended the usable physical environment for people, while alternatives such as cars and motorways degraded and confined people's environment and mobility. Currently, two billion bicycles are in use around the world. Children, students, professionals, laborers, civil servants and seniors are pedaling around their communities. They all experience the freedom and the natural opportunity for exercise that the bicycle easily provides. The bicycle also has the lowest carbon intensity of any mode of travel.
Manufacturing
The global bicycle market was worth $61 billion in 2011. Around 130 million bicycles were sold every year globally, with 66% of them made in China.
Legal requirements
Early in its development, as with automobiles, there were restrictions on the operation of bicycles. Along with advertising, and to gain free publicity, Albert A. Pope litigated on behalf of cyclists.
The 1968 Vienna Convention on Road Traffic of the United Nations considers a bicycle to be a vehicle, and a person controlling a bicycle (whether actually riding or not) is considered an operator. The traffic codes of many countries reflect these definitions and demand that a bicycle satisfy certain legal requirements before it can be used on public roads. In many jurisdictions, it is an offense to use a bicycle that is not in a roadworthy condition.
In some countries, bicycles must have functioning front and rear lights when ridden after dark.
Some countries require child and/or adult cyclists to wear helmets, as this may protect riders from head trauma. Countries which require adult cyclists to wear helmets include Spain, New Zealand and Australia. Mandatory helmet wearing is one of the most controversial topics in the cycling world, with proponents arguing that it reduces head injuries and thus is an acceptable requirement, while opponents argue that by making cycling seem more dangerous and cumbersome, it reduces cyclist numbers on the streets, creating an overall negative health effect (fewer people cycling for their own health, and the remaining cyclists being more exposed through a reversed safety in numbers effect).
Theft
Bicycles are popular targets for theft, due to their value and ease of resale. The number of bicycles stolen annually is difficult to quantify, as a large number of crimes are not reported. In a Montreal survey published in the International Journal of Sustainable Transportation, around 50% of participants had experienced a bicycle theft during their lifetime as active cyclists. Most bicycles have serial numbers that can be recorded to verify identity in case of theft.
See also
Bicycle and motorcycle geometry
Bicycle drum brake
Bicycle fender
Bicycle parking station
Bicycle-sharing system
Cyclability
Danish bicycle VIN-system
List of bicycle types
List of films about bicycles and cycling
Outline of bicycles
Outline of cycling
rattleCAD (software for bicycle design)
Twike
Velomobile
World Bicycle Day
Notes
References
Citations
Sources
General
Further reading
External links
A History of Bicycles and Other Cycles at the Canada Science and Technology Museum
19th-century inventions
Appropriate technology
Articles containing video clips
German inventions
Sustainable technologies
Sustainable transport
|
https://en.wikipedia.org/wiki/Biopolymer
|
Biopolymers are natural polymers produced by the cells of living organisms. Like other polymers, biopolymers consist of monomeric units that are covalently bonded in chains to form larger molecules. There are three main classes of biopolymers, classified according to the monomers used and the structure of the biopolymer formed: polynucleotides, polypeptides, and polysaccharides. Polynucleotides, such as RNA and DNA, are long polymers of nucleotides. Polypeptides include proteins and shorter polymers of amino acids; some major examples include collagen, actin, and fibrin. Polysaccharides are linear or branched chains of sugar carbohydrates; examples include starch, cellulose, and alginate. Other examples of biopolymers include natural rubbers (polymers of isoprene), suberin and lignin (complex polyphenolic polymers), cutin and cutan (complex polymers of long-chain fatty acids), melanin, and polyhydroxyalkanoates (PHAs).
In addition to their many essential roles in living organisms, biopolymers have applications in many fields including the food industry, manufacturing, packaging, and biomedical engineering.
Biopolymers versus synthetic polymers
A major defining difference between biopolymers and synthetic polymers can be found in their structures. All polymers are made of repetitive units called monomers. Biopolymers often have a well-defined structure, though this is not a defining characteristic (example: lignocellulose): The exact chemical composition and the sequence in which these units are arranged is called the primary structure, in the case of proteins. Many biopolymers spontaneously fold into characteristic compact shapes (see also "protein folding" as well as secondary structure and tertiary structure), which determine their biological functions and depend in a complicated way on their primary structures. Structural biology is the study of the structural properties of biopolymers. In contrast, most synthetic polymers have much simpler and more random (or stochastic) structures. This fact leads to a molecular mass distribution that is missing in biopolymers. In fact, as their synthesis is controlled by a template-directed process in most in vivo systems, all biopolymers of a type (say one specific protein) are all alike: they all contain similar sequences and numbers of monomers and thus all have the same mass. This phenomenon is called monodispersity in contrast to the polydispersity encountered in synthetic polymers. As a result, biopolymers have a dispersity of 1.
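The dispersity figure mentioned above (Đ = Mw/Mn, the ratio of the weight-average to the number-average molecular mass) can be computed from a list of chain masses, as in the short Python sketch below; the numbers are invented for illustration rather than taken from any cited measurement.

# Dispersity (Đ) = Mw / Mn. A monodisperse sample (all chains identical) gives Đ = 1.
def dispersity(masses):
    mn = sum(masses) / len(masses)                  # number-average molar mass
    mw = sum(m * m for m in masses) / sum(masses)   # weight-average molar mass
    return mw / mn

biopolymer_chains = [50_000] * 5                              # identical chains -> Đ = 1.0
synthetic_chains = [30_000, 45_000, 50_000, 70_000, 105_000]  # broad distribution -> Đ > 1

print(dispersity(biopolymer_chains))   # 1.0
print(dispersity(synthetic_chains))    # greater than 1, i.e. polydisperse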
Biopolymers versus biobased polymers
“Biopolymers” are usually not equal to “biobased polymers”. Biobased polymers are polymers chemically or biologically synthesized (fully or partially) from biomass monomers, such as polyesters (e.g., polyhydroxyalkanoates (PHAs) and polylactic acid (PLA)). In this respect, the only polymers that can be regarded as both biopolymers and biobased polymers are those that are biologically produced (by microbes) from biomass carbon sources (e.g., sugars and lipids), and examples of these include PHAs, bacterial cellulose, gellan gum, xanthan gum, and curdlan.
Conventions and nomenclature
Polypeptides
The convention for a polypeptide is to list its constituent amino acid residues as they occur from the amino terminus to the carboxylic acid terminus. The amino acid residues are always joined by peptide bonds. Protein, though used colloquially to refer to any polypeptide, refers to larger or fully functional forms and can consist of several polypeptide chains as well as single chains. Proteins can also be modified to include non-peptide components, such as saccharide chains and lipids.
Nucleic acids
The convention for a nucleic acid sequence is to list the nucleotides as they occur from the 5' end to the 3' end of the polymer chain, where 5' and 3' refer to the numbering of carbons around the ribose ring which participate in forming the phosphate diester linkages of the chain. Such a sequence is called the primary structure of the biopolymer.
Polysaccharides
Polysaccharides (sugar polymers) can be linear or branched and are typically joined with glycosidic bonds. The exact placement of the linkage can vary, and the orientation of the linking functional groups is also important, resulting in α- and β-glycosidic bonds with numbering definitive of the linking carbons' location in the ring. In addition, many saccharide units can undergo various chemical modifications, such as amination, and can even form parts of other molecules, such as glycoproteins.
Structural characterization
There are a number of biophysical techniques for determining sequence information. Protein sequence can be determined by Edman degradation, in which the N-terminal residues are hydrolyzed from the chain one at a time, derivatized, and then identified. Mass spectrometer techniques can also be used. Nucleic acid sequence can be determined using gel electrophoresis and capillary electrophoresis. Lastly, mechanical properties of these biopolymers can often be measured using optical tweezers or atomic force microscopy. Dual-polarization interferometry can be used to measure the conformational changes or self-assembly of these materials when stimulated by pH, temperature, ionic strength or other binding partners.
Common biopolymers
Collagen: Collagen is the primary structural protein in vertebrates and is the most abundant protein in mammals. Because of this, collagen is one of the most easily attainable biopolymers, and is used for many research purposes. Because of its mechanical structure, collagen has high tensile strength and is a non-toxic, easily absorbable, biodegradable, and biocompatible material. Therefore, it has been used for many medical applications such as in treatment for tissue infection, drug delivery systems, and gene therapy.
Silk fibroin: Silk fibroin (SF) is another protein-rich biopolymer that can be obtained from different silkworm species, such as the mulberry worm Bombyx mori. In contrast to collagen, SF has a lower tensile strength but has strong adhesive properties due to its insoluble and fibrous protein composition. In recent studies, silk fibroin has been found to possess anticoagulation and platelet-adhesion properties. Silk fibroin has additionally been found to support stem cell proliferation in vitro.
Gelatin: Gelatin is obtained from type I collagen consisting of cysteine, and is produced by the partial hydrolysis of collagen from the bones, tissues and skin of animals. There are two types of gelatin, Type A and Type B. Type A is derived by acid hydrolysis of collagen and has 18.5% nitrogen. Type B is derived by alkaline hydrolysis and contains 18% nitrogen and no amide groups. Elevated temperatures cause gelatin to melt and exist as coils, whereas lower temperatures result in a coil-to-helix transformation. Gelatin contains many functional groups, such as NH2, SH, and COOH, which allow it to be modified using nanoparticles and biomolecules. Gelatin is an extracellular matrix protein, which allows it to be applied in applications such as wound dressings, drug delivery and gene transfection.
Starch: Starch is an inexpensive, biodegradable biopolymer that is copious in supply. Nanofibers and microfibers can be added to the polymer matrix to increase the mechanical properties of starch, improving elasticity and strength. Without the fibers, starch has poor mechanical properties due to its sensitivity to moisture. Being biodegradable and renewable, starch is used for many applications including plastics and pharmaceutical tablets.
Cellulose: Cellulose is very structured, with stacked chains that result in stability and strength. The strength and stability come from the straighter shape of cellulose, caused by glucose monomers joined together by glycosidic bonds. The straight shape allows the molecules to pack closely. Cellulose is very common in applications because of its abundant supply, its biocompatibility, and its environmental friendliness. Cellulose is used vastly in the form of nano-fibrils called nano-cellulose. Nano-cellulose presented at low concentrations produces a transparent gel material. This material can be used for biodegradable, homogeneous, dense films that are very useful in the biomedical field.
Alginate: Alginate is the most copious marine natural polymer derived from brown seaweed. Alginate biopolymer applications range from packaging, textile and food industry to biomedical and chemical engineering. The first ever application of alginate was in the form of wound dressing, where its gel-like and absorbent properties were discovered. When applied to wounds, alginate produces a protective gel layer that is optimal for healing and tissue regeneration, and keeps a stable temperature environment. Additionally, there have been developments with alginate as a drug delivery medium, as drug release rate can easily be manipulated due to a variety of alginate densities and fibrous composition.
Biopolymer applications
The applications of biopolymers can be categorized under two main fields, which differ due to their biomedical and industrial use.
Biomedical
Because one of the main purposes of biomedical engineering is to mimic body parts and sustain normal body functions, biopolymers, with their biocompatible properties, are used extensively in tissue engineering, medical devices and the pharmaceutical industry. Many biopolymers can be used for regenerative medicine, tissue engineering, drug delivery, and overall medical applications due to their mechanical properties. They provide characteristics such as promotion of wound healing, catalysis of bioactivity, and non-toxicity. Compared to synthetic polymers, which can present various disadvantages like immunogenic rejection and toxicity after degradation, many biopolymers integrate better with the body, as they also possess more complex structures, similar to those in the human body.
More specifically, polypeptides like collagen and silk are biocompatible materials that are being used in ground-breaking research, as these are inexpensive and easily attainable materials. Gelatin polymer is often used for dressing wounds, where it acts as an adhesive. Scaffolds and films made with gelatin can hold drugs and other nutrients that can be supplied to a wound to aid healing.
As collagen is one of the more popular biopolymers used in biomedical science, here are some examples of their use:
Collagen based drug delivery systems: collagen films act like a barrier membrane and are used to treat tissue infections such as infected corneal tissue or liver cancer. Collagen films have also been used as gene delivery carriers, which can promote bone formation.
Collagen sponges: Collagen sponges are used as a dressing to treat burn victims and other serious wounds. Collagen based implants are used for cultured skin cells or drug carriers that are used for burn wounds and replacing skin.
Collagen as haemostat: When collagen interacts with platelets it causes a rapid coagulation of blood. This rapid coagulation produces a temporary framework so the fibrous stroma can be regenerated by host cells. Collagen based haemostat reduces blood loss in tissues and helps manage bleeding in organs such as the liver and spleen.
Chitosan is another popular biopolymer in biomedical research. Chitosan is derived from chitin, the main component in the exoskeleton of crustaceans and insects and the second most abundant biopolymer in the world. Chitosan has many excellent characteristics for biomedical science: it is biocompatible; it is highly bioactive, meaning it stimulates a beneficial response from the body; it can biodegrade, which can eliminate the need for a second surgery in implant applications; it can form gels and films; and it is selectively permeable. These properties allow for various biomedical applications of chitosan.
Chitosan as drug delivery: Chitosan is used mainly with drug targeting because it has potential to improve drug absorption and stability. In addition, chitosan conjugated with anticancer agents can also produce better anticancer effects by causing gradual release of free drug into cancerous tissue.
Chitosan as an anti-microbial agent: Chitosan is used to stop the growth of microorganisms. It performs antimicrobial functions against microorganisms such as algae, fungi and bacteria, including gram-positive bacteria, as well as various yeast species.
Chitosan composite for tissue engineering: Chitosan powder blended with alginate is used to form functional wound dressings. These dressings create a moist, biocompatible environment which aids in the healing process. This wound dressing is also biodegradable and has porous structures that allows cells to grow into the dressing. Furthermore, thiolated chitosans (see thiomers) are used for tissue engineering and wound healing, as these biopolymers are able to crosslink via disulfide bonds forming stable three-dimensional networks.
Industrial
Food: Biopolymers are being used in the food industry for things like packaging, edible encapsulation films and coating foods. Polylactic acid (PLA) is very common in the food industry due to its clear color and resistance to water. However, most polymers have a hydrophilic nature and start deteriorating when exposed to moisture. Biopolymers are also being used as edible films that encapsulate foods. These films can carry things like antioxidants, enzymes, probiotics, minerals, and vitamins. Food encapsulated with such biopolymer films can supply these substances to the body.
Packaging: The most common biopolymers used in packaging are polyhydroxyalkanoates (PHAs), polylactic acid (PLA), and starch. Starch and PLA are commercially available and biodegradable, making them a common choice for packaging. However, their barrier properties (either moisture-barrier or gas-barrier properties) and thermal properties are not ideal. Hydrophilic polymers are not water resistant and allow water to get through the packaging which can affect the contents of the package. Polyglycolic acid (PGA) is a biopolymer that has great barrier characteristics and is now being used to correct the barrier obstacles from PLA and starch.
Water purification: Chitosan has been used for water purification. It is used as a flocculant that only takes a few weeks or months rather than years to degrade in the environment. Chitosan purifies water by chelation. This is the process in which binding sites along the polymer chain bind with the metal ions in the water forming chelates. Chitosan has been shown to be an excellent candidate for use in storm and wastewater treatment.
As materials
Some biopolymers, such as PLA, naturally occurring zein, and poly-3-hydroxybutyrate, can be used as plastics, replacing the need for polystyrene- or polyethylene-based plastics.
Some plastics are now referred to as being 'degradable', 'oxy-degradable' or 'UV-degradable'. This means that they break down when exposed to light or air, but these plastics are still primarily (as much as 98 per cent) oil-based and are not currently certified as 'biodegradable' under the European Union directive on Packaging and Packaging Waste (94/62/EC). Biopolymers will break down, and some are suitable for domestic composting.
Biopolymers (also called renewable polymers) are produced from biomass for use in the packaging industry. Biomass comes from crops such as sugar beet, potatoes, or wheat: when used to produce biopolymers, these are classified as non food crops. These can be converted in the following pathways:
Sugar beet > Glycolic acid > Polyglycolic acid (PGA)
Starch > (fermentation) > Lactic acid > Polylactic acid (PLA)
Biomass > (fermentation) > Bioethanol > Ethene > Polyethylene
Many types of packaging can be made from biopolymers: food trays, blown starch pellets for shipping fragile goods, thin films for wrapping.
Environmental impacts
Biopolymers can be sustainable and renewable, because they are made from plant or animal materials which can be grown indefinitely. Since these materials come from agricultural crops, their use could create a sustainable industry. In contrast, the feedstocks for polymers derived from petrochemicals will eventually deplete. In addition, biopolymers have the potential to cut carbon emissions: the CO2 released when they degrade can be reabsorbed by crops grown to replace them, which makes them close to carbon neutral.
Almost all biopolymers are biodegradable in the natural environment: they are broken down into CO2 and water by microorganisms. Many are also compostable: under European Standard EN 13432 (2000), packaging that breaks down by at least 90% within six months in an industrial composting process can be marked with a 'compostable' symbol. An example of a compostable polymer is PLA film under 20 μm thick: thicker films do not qualify as compostable, even though they are "biodegradable". In Europe there is also a home composting standard and associated logo that enables consumers to identify packaging they can dispose of in their own compost heap.
See also
Biomaterials
Bioplastic
Biopolymers & Cell (journal)
Condensation polymers
Condensed tannins
DNA sequence
Melanin
Non food crops
Phosphoramidite
Polymer chemistry
Sequence-controlled polymers
Sequencing
Small molecules
Worm-like chain
References
External links
NNFCC: The UK's National Centre for Biorenewable Energy, Fuels and Materials
Bioplastics Magazine
Biopolymer group
What’s Stopping Bioplastic?
Biomolecules
Polymers
Molecular biology
Molecular genetics
Biotechnology products
Bioplastics
Biomaterials
|
https://en.wikipedia.org/wiki/Bicarbonate
|
In inorganic chemistry, bicarbonate (IUPAC-recommended nomenclature: hydrogencarbonate) is an intermediate form in the deprotonation of carbonic acid. It is a polyatomic anion with the chemical formula HCO3−.
Bicarbonate serves a crucial biochemical role in the physiological pH buffering system.
The term "bicarbonate" was coined in 1814 by the English chemist William Hyde Wollaston. The name lives on as a trivial name.
Chemical properties
The bicarbonate ion (hydrogencarbonate ion) is an anion with the empirical formula HCO3− and a molecular mass of 61.01 daltons; it consists of one central carbon atom surrounded by three oxygen atoms in a trigonal planar arrangement, with a hydrogen atom attached to one of the oxygens. It is isoelectronic with nitric acid (HNO3). The bicarbonate ion carries a negative one formal charge and is an amphiprotic species which has both acidic and basic properties. It is both the conjugate base of carbonic acid (H2CO3) and the conjugate acid of CO32−, the carbonate ion, as shown by these equilibrium reactions:
CO32− + 2 H2O ⇌ HCO3− + H2O + OH− ⇌ H2CO3 + 2 OH−
H2CO3 + 2 H2O ⇌ HCO3− + H3O+ + H2O ⇌ CO32− + 2 H3O+.
A bicarbonate salt forms when a positively charged ion attaches to the negatively charged oxygen atoms of the ion, forming an ionic compound. Many bicarbonates are soluble in water at standard temperature and pressure; in particular, sodium bicarbonate contributes to total dissolved solids, a common parameter for assessing water quality.
Physiological role
Bicarbonate (HCO3−) is a vital component of the pH buffering system of the human body (maintaining acid–base homeostasis). 70%–75% of CO2 in the body is converted into carbonic acid (H2CO3), which is the conjugate acid of HCO3− and can quickly turn into it.
With carbonic acid as the central intermediate species, bicarbonate – in conjunction with water, hydrogen ions, and carbon dioxide – forms this buffering system, which is maintained at the volatile equilibrium required to provide prompt resistance to pH changes in both the acidic and basic directions. This is especially important for protecting tissues of the central nervous system, where pH changes too far outside of the normal range in either direction could prove disastrous (see acidosis or alkalosis). Recently it has been also demonstrated that cellular bicarbonate metabolism can be regulated by mTORC1 signaling.
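The quantitative behavior of this buffer is commonly summarized by the Henderson–Hasselbalch equation; the form below is a standard textbook expression for the bicarbonate system in plasma, using an apparent pKa of about 6.1 and a CO2 solubility coefficient of about 0.03 mmol/(L·mmHg):
\mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{HCO_3^-}]}{[\mathrm{H_2CO_3}]} \approx 6.1 + \log_{10}\frac{[\mathrm{HCO_3^-}]}{0.03 \times P_{\mathrm{CO_2}}}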
Additionally, bicarbonate plays a key role in the digestive system. It raises the internal pH of the stomach, after highly acidic digestive juices have finished in their digestion of food. Bicarbonate also acts to regulate pH in the small intestine. It is released from the pancreas in response to the hormone secretin to neutralize the acidic chyme entering the duodenum from the stomach.
Bicarbonate in the environment
Bicarbonate is the dominant form of dissolved inorganic carbon in sea water, and in most fresh waters. As such it is an important sink in the carbon cycle.
Some plants like Chara utilize carbonate and produce calcium carbonate (CaCO3) as a result of biological metabolism.
In freshwater ecology, strong photosynthetic activity by freshwater plants in daylight releases gaseous oxygen into the water and at the same time produces bicarbonate ions. These shift the pH upward until in certain circumstances the degree of alkalinity can become toxic to some organisms or can make other chemical constituents such as ammonia toxic. In darkness, when no photosynthesis occurs, respiration processes release carbon dioxide, and no new bicarbonate ions are produced, resulting in a rapid fall in pH.
The flow of bicarbonate ions from rocks weathered by the carbonic acid in rainwater is an important part of the carbon cycle.
Other uses
The most common salt of the bicarbonate ion is sodium bicarbonate, NaHCO3, which is commonly known as baking soda. When heated or exposed to an acid such as acetic acid (vinegar), sodium bicarbonate releases carbon dioxide. This is used as a leavening agent in baking.
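These two reactions can be summarized by the standard equations for thermal decomposition and for the reaction with acetic acid:
2\,\mathrm{NaHCO_3} \xrightarrow{\ \Delta\ } \mathrm{Na_2CO_3} + \mathrm{H_2O} + \mathrm{CO_2}
\mathrm{NaHCO_3} + \mathrm{CH_3COOH} \rightarrow \mathrm{CH_3COONa} + \mathrm{H_2O} + \mathrm{CO_2}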
Ammonium bicarbonate is used in digestive biscuit manufacture.
Diagnostics
In diagnostic medicine, the blood value of bicarbonate is one of several indicators of the state of acid–base physiology in the body. It is measured, along with chloride, potassium, and sodium, to assess electrolyte levels in an electrolyte panel test (which has Current Procedural Terminology, CPT, code 80051).
The parameter standard bicarbonate concentration (SBCe) is the bicarbonate concentration in the blood at a PaCO2 of 40 mmHg (5.33 kPa), full oxygen saturation and 36 °C.
Bicarbonate compounds
Sodium bicarbonate
Potassium bicarbonate
Caesium bicarbonate
Magnesium bicarbonate
Calcium bicarbonate
Ammonium bicarbonate
Carbonic acid
See also
Carbon dioxide
Carbonate
Carbonic anhydrase
Hard water
Arterial blood gas test
References
External links
Amphoteric compounds
Anions
Bicarbonates
|
https://en.wikipedia.org/wiki/BASIC
|
BASIC (Beginners' All-purpose Symbolic Instruction Code) is a family of general-purpose, high-level programming languages designed for ease of use. The original version was created by John G. Kemeny and Thomas E. Kurtz at Dartmouth College in 1963. They wanted to enable students in non-scientific fields to use computers. At the time, nearly all computers required writing custom software, which only scientists and mathematicians tended to learn.
In addition to the program language, Kemeny and Kurtz developed the Dartmouth Time Sharing System (DTSS), which allowed multiple users to edit and run BASIC programs simultaneously on remote terminals. This general model became popular on minicomputer systems like the PDP-11 and Data General Nova in the late 1960s and early 1970s. Hewlett-Packard produced an entire computer line for this method of operation, introducing the HP2000 series in the late 1960s and continuing sales into the 1980s. Many early video games trace their history to one of these versions of BASIC.
The emergence of microcomputers in the mid-1970s led to the development of multiple BASIC dialects, including Microsoft BASIC in 1975. Due to the tiny main memory available on these machines, often 4 KB, a variety of Tiny BASIC dialects were also created. BASIC was available for almost any system of the era, and became the de facto programming language for home computer systems that emerged in the late 1970s. These PCs almost always had a BASIC interpreter installed by default, often in the machine's firmware or sometimes on a ROM cartridge.
BASIC declined in popularity in the 1990s, as more powerful microcomputers came to market and programming languages with advanced features (such as Pascal and C) became tenable on such computers. In 1991, Microsoft released Visual Basic, combining an updated version of BASIC with a visual forms builder. This reignited use of the language and "VB" remains a major programming language in the form of VB.NET, while a hobbyist scene for BASIC more broadly continues to exist.
Origin
John G. Kemeny was the math department chairman at Dartmouth College. Based largely on his reputation as an innovator in math teaching, in 1959 the school won an Alfred P. Sloan Foundation award for $500,000 to build a new department building. Thomas E. Kurtz had joined the department in 1956, and from the 1960s Kemeny and Kurtz agreed on the need for programming literacy among students outside the traditional STEM fields. Kemeny later noted that "Our vision was that every student on campus should have access to a computer, and any faculty member should be able to use a computer in the classroom whenever appropriate. It was as simple as that."
Kemeny and Kurtz had made two previous experiments with simplified languages, DARSIMCO (Dartmouth Simplified Code) and DOPE (Dartmouth Oversimplified Programming Experiment). These did not progress past a single freshman class. New experiments using Fortran and ALGOL followed, but Kurtz concluded these languages were too tricky for what they desired. As Kurtz noted, Fortran had numerous oddly-formed commands, notably an "almost impossible-to-memorize convention for specifying a loop: DO 100, I = 1, 10, 2. Is it '1, 10, 2' or '1, 2, 10', and is the comma after the line number required or not?"
Moreover, the lack of any sort of immediate feedback was a key problem; the machines of the era used batch processing and took a long time to complete a run of a program. While Kurtz was visiting MIT, John McCarthy suggested that time-sharing offered a solution; a single machine could divide up its processing time among many users, giving them the illusion of having a (slow) computer to themselves. Small programs would return results in a few seconds. This led to increasing interest in a system using time-sharing and a new language specifically for use by non-STEM students.
Kemeny wrote the first version of BASIC. The acronym BASIC comes from the name of an unpublished paper by Thomas Kurtz. The new language was heavily patterned on FORTRAN II; statements were one-to-a-line, numbers were used to indicate the target of loops and branches, and many of the commands were similar or identical to Fortran. However, the syntax was changed wherever it could be improved. For instance, the difficult-to-remember DO loop was replaced by the much easier to remember FOR ... TO construct, and the line number used in the DO was instead indicated by the matching NEXT statement (for example, NEXT I). Likewise, the cryptic IF statement of Fortran, whose syntax matched a particular instruction of the machine on which it was originally written, became the simpler IF ... THEN form. These changes made the language much less idiosyncratic while still having an overall structure and feel similar to the original FORTRAN.
The project received a $300,000 grant from the National Science Foundation, which was used to purchase a GE-225 computer for processing, and a Datanet-30 realtime processor to handle the Teletype Model 33 teleprinters used for input and output. A team of a dozen undergraduates worked on the project for about a year, writing both the DTSS system and the BASIC compiler. The first version of the BASIC language was released on 1 May 1964.
Initially, BASIC concentrated on supporting straightforward mathematical work, with matrix arithmetic support from its initial implementation as a batch language, and character string functionality being added by 1965. Usage in the university rapidly expanded, requiring the main CPU to be replaced by a GE-235, and still later by a GE-635. By the early 1970s there were hundreds of terminals connected to the machines at Dartmouth, some of them remotely.
Wanting use of the language to become widespread, its designers made the compiler available free of charge. In the 1960s, software became a chargeable commodity; until then, it was provided without charge as a service with expensive computers, usually available only to lease. They also made it available to high schools in the Hanover, New Hampshire, area and regionally throughout New England on Teletype Model 33 and Model 35 teleprinter terminals connected to Dartmouth via dial-up phone lines, and they put considerable effort into promoting the language. In the following years, as other dialects of BASIC appeared, Kemeny and Kurtz's original BASIC dialect became known as Dartmouth BASIC.
New Hampshire recognized the accomplishment in 2019 when it erected a highway historical marker in Hanover describing the creation of "the first user-friendly programming language".
Spread on time-sharing services
The emergence of BASIC took place as part of a wider movement towards time-sharing systems. First conceptualized during the late 1950s, the idea became so dominant in the computer industry by the early 1960s that its proponents were speaking of a future in which users would "buy time on the computer much the same way that the average household buys power and water from utility companies".
General Electric, having worked on the Dartmouth project, wrote their own underlying operating system and launched an online time-sharing system known as Mark I. It featured BASIC as one of its primary selling points. Other companies in the emerging field quickly followed suit; Tymshare introduced SUPER BASIC in 1968, CompuServe had a version on the DEC-10 at their launch in 1969, and by the early 1970s BASIC was largely universal on general-purpose mainframe computers. Even IBM eventually joined the club with the introduction of VS-BASIC in 1973.
Although time-sharing services with BASIC were successful for a time, the widespread success predicted earlier was not to be. The emergence of minicomputers during the same period, and especially low-cost microcomputers in the mid-1970s, allowed anyone to purchase and run their own systems rather than buy online time which was typically billed at dollars per minute.
Spread on minicomputers
BASIC, by its very nature of being small, was naturally suited to porting to the minicomputer market, which was emerging at the same time as the time-sharing services. These machines had small main memory, perhaps as little as 4 KB in modern terminology, and lacked high-performance storage like hard drives that make compilers practical. On these systems, BASIC was normally implemented as an interpreter rather than a compiler due to its lower requirement for working memory.
A particularly important example was HP Time-Shared BASIC, which, like the original Dartmouth system, used two computers working together to implement a time-sharing system. The first, a low-end machine in the HP 2100 series, was used to control user input and save and load their programs to tape or disk. The other, a high-end version of the same underlying machine, ran the programs and generated output. For a cost of about $100,000, one could own a machine capable of running between 16 and 32 users at the same time. The system, bundled as the HP 2000, was the first mini platform to offer time-sharing and was an immediate runaway success, catapulting HP to become the third-largest vendor in the minicomputer space, behind DEC and Data General (DG).
DEC, the leader in the minicomputer space since the mid-1960s, had initially ignored BASIC. This was due to their work with RAND Corporation, who had purchased a PDP-6 to run their JOSS language, which was conceptually very similar to BASIC. This led DEC to introduce a smaller, cleaned up version of JOSS known as FOCAL, which they heavily promoted in the late 1960s. However, with timesharing systems widely offering BASIC, and all of their competition in the minicomputer space doing the same, DEC's customers were clamoring for BASIC. After management repeatedly ignored their pleas, David H. Ahl took it upon himself to buy a BASIC for the PDP-8, which was a major success in the education market. By the early 1970s, FOCAL and JOSS had been forgotten and BASIC had become almost universal in the minicomputer market. DEC would go on to introduce their updated version, BASIC-PLUS, for use on the RSTS/E time-sharing operating system.
During this period a number of simple text-based games were written in BASIC, most notably Mike Mayfield's Star Trek. David Ahl collected these, some ported from FOCAL, and published them in an educational newsletter he compiled. He later collected a number of these into book form, 101 BASIC Computer Games, published in 1973. During the same period, Ahl was involved in the creation of a small computer for education use, an early personal computer. When management refused to support the concept, Ahl left DEC in 1974 to found the seminal computer magazine, Creative Computing. The book remained popular, and was re-published on several occasions.
Explosive growth: the home computer era
The introduction of the first microcomputers in the mid-1970s was the start of explosive growth for BASIC. It had the advantage that it was fairly well known to the young designers and computer hobbyists who took an interest in microcomputers, many of whom had seen BASIC on minis or mainframes. Despite Dijkstra's famous judgement in 1975, "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration", BASIC was one of the few languages that was both high-level enough to be usable by those without training and small enough to fit into the microcomputers of the day, making it the de facto standard programming language on early microcomputers.
The first microcomputer version of BASIC was co-written by Bill Gates, Paul Allen and Monte Davidoff for their newly formed company, Micro-Soft. This was released by MITS in punch tape format for the Altair 8800 shortly after the machine itself, immediately cementing BASIC as the primary language of early microcomputers. Members of the Homebrew Computer Club began circulating copies of the program, causing Gates to write his Open Letter to Hobbyists, complaining about this early example of software piracy.
Partially in response to Gates's letter, and partially to make an even smaller BASIC that would run usefully on 4 KB machines, Bob Albrecht urged Dennis Allison to write their own variation of the language. How to design and implement a stripped-down version of an interpreter for the BASIC language was covered in articles by Allison in the first three quarterly issues of the People's Computer Company newsletter published in 1975 and implementations with source code published in Dr. Dobb's Journal of Tiny BASIC Calisthenics & Orthodontia: Running Light Without Overbyte. This led to a wide variety of Tiny BASICs with added features or other improvements, with versions from Tom Pittman and Li-Chen Wang becoming particularly well known.
Micro-Soft, by this time Microsoft, ported their interpreter for the MOS 6502, which quickly became one of the most popular microprocessors of the 8-bit era. When new microcomputers began to appear, notably the "1977 trinity" of the TRS-80, Commodore PET and Apple II, they either included a version of the MS code, or quickly introduced new models with it. Ohio Scientific's personal computers also joined this trend at that time. By 1978, MS BASIC was a de facto standard and practically every home computer of the 1980s included it in ROM. Upon boot, a BASIC interpreter in direct mode was presented.
Commodore Business Machines included Commodore BASIC, based on Microsoft BASIC. The Apple II and TRS-80 each had two versions of BASIC, a smaller introductory version introduced with the initial releases of the machines and an MS-based version introduced as interest in the platforms increased. As new companies entered the field, additional versions were added that subtly changed the BASIC family. The Atari 8-bit family had its own Atari BASIC that was modified in order to fit on an 8 KB ROM cartridge. Sinclair BASIC was introduced in 1980 with the Sinclair ZX80, and was later extended for the Sinclair ZX81 and the Sinclair ZX Spectrum. The BBC published BBC BASIC, developed by Acorn Computers Ltd, incorporating many extra structured programming keywords and advanced floating-point operation features.
As the popularity of BASIC grew in this period, computer magazines published complete source code in BASIC for video games, utilities, and other programs. Given BASIC's straightforward nature, it was a simple matter to type in the code from the magazine and execute the program. Different magazines were published featuring programs for specific computers, though some BASIC programs were considered universal and could be used in machines running any variant of BASIC (sometimes with minor adaptations). Many books of type-in programs were also available, and in particular, Ahl published versions of the original 101 BASIC games converted into the Microsoft dialect and published it from Creative Computing as BASIC Computer Games. This book, and its sequels, provided hundreds of ready-to-go programs that could be easily converted to practically any BASIC-running platform. The book reached the stores in 1978, just as the home computer market was starting off, and it became the first million-selling computer book. Later packages, such as Learn to Program BASIC would also have gaming as an introductory focus. On the business-focused CP/M computers which soon became widespread in small business environments, Microsoft BASIC (MBASIC) was one of the leading applications.
In 1978, David Lien published the first edition of The BASIC Handbook: An Encyclopedia of the BASIC Computer Language, documenting keywords across over 78 different computers. By 1981, the second edition documented keywords from over 250 different computers, showcasing the explosive growth of the microcomputer era.
IBM PC and compatibles
When IBM was designing the IBM PC, they followed the paradigm of existing home computers in having a built-in BASIC interpreter. They sourced this from Microsoft – IBM Cassette BASIC – but Microsoft also produced several other versions of BASIC for MS-DOS/PC DOS including IBM Disk BASIC (BASIC D), IBM BASICA (BASIC A), GW-BASIC (a BASICA-compatible version that did not need IBM's ROM) and QBasic, all typically bundled with the machine. In addition they produced the Microsoft BASIC Compiler aimed at professional programmers. Turbo Pascal-publisher Borland published Turbo Basic 1.0 in 1985 (successor versions are still being marketed under the name PowerBASIC). On Unix-like systems, specialized implementations were created such as XBasic and X11-Basic. XBasic was ported to Microsoft Windows as XBLite, and cross-platform variants such as SmallBasic, yabasic, Bywater BASIC, nuBasic, MyBasic, Logic Basic, Liberty BASIC, and wxBasic emerged. FutureBASIC and Chipmunk Basic meanwhile targeted the Apple Macintosh.
These later variations introduced many extensions, such as improved string manipulation and graphics support, access to the file system and additional data types. More important were the facilities for structured programming, including additional control structures and proper subroutines supporting local variables. However, by the latter half of the 1980s, users were increasingly using pre-made applications written by others rather than learning programming themselves; while professional programmers now had a wide range of more advanced languages available on small computers. C and later C++ became the languages of choice for professional "shrink wrap" application development.
A niche that BASIC continued to fill was for hobbyist video game development, as game creation systems and readily available game engines were still in their infancy. The Atari ST had STOS BASIC while the Amiga had AMOS BASIC for this purpose. Microsoft first exhibited BASIC for game development with DONKEY.BAS for GW-BASIC, and later GORILLA.BAS and NIBBLES.BAS for Quick Basic. QBasic maintained an active game development community, which helped later spawn the QB64 and FreeBASIC implementations. In 2013 a game written in QBasic and compiled with QB64 for modern computers entitled Black Annex was released on Steam. Blitz Basic, Dark Basic, SdlBasic, Super Game System Basic, RCBasic, PlayBASIC, CoolBasic, AllegroBASIC, ethosBASIC, NaaLaa, GLBasic and Basic4GL further filled this demand, right up to the modern AppGameKit, Monkey 2 and Cerberus-X.
Visual Basic
In 1991, Microsoft introduced Visual Basic, an evolutionary development of QuickBASIC. It included constructs from that language such as block-structured control statements, parameterized subroutines and optional static typing as well as object-oriented constructs from other languages such as "With" and "For Each". The language retained some compatibility with its predecessors, such as the Dim keyword for declarations, "Gosub"/Return statements and optional line numbers which could be used to locate errors. An important driver for the development of Visual Basic was as the new macro language for Microsoft Excel, a spreadsheet program. To the surprise of many at Microsoft who still initially marketed it as a language for hobbyists, the language came into widespread use for small custom business applications shortly after the release of VB version 3.0, which is widely considered the first relatively stable version. Microsoft also spun it off as Visual Basic for Applications and Embedded Visual Basic.
While many advanced programmers still scoffed at its use, VB met the needs of small businesses efficiently as by that time, computers running Windows 3.1 had become fast enough that many business-related processes could be completed "in the blink of an eye" even using a "slow" language, as long as large amounts of data were not involved. Many small business owners found they could create their own small, yet useful applications in a few evenings to meet their own specialized needs. Eventually, during the lengthy lifetime of VB3, knowledge of Visual Basic had become a marketable job skill. Microsoft also produced VBScript in 1996 and Visual Basic .NET in 2001. The latter has essentially the same power as C# and Java but with syntax that reflects the original Basic language, and also features some cross-platform capability through implementations such as Mono-Basic. The IDE, with its event-driven GUI builder, was also influential on other tools, most notably Borland Software's Delphi for Object Pascal and its own descendants such as Lazarus.
Mainstream support for the final version 6.0 of the original Visual Basic ended on March 31, 2005, followed by extended support in March 2008. Owing to its persistent remaining popularity, third-party attempts to further support it, such as Rubberduck and ModernVB, exist. On February 2, 2017 Microsoft announced that development on VB.NET would no longer be in parallel with that of C#, and on March 11, 2020 it was announced that evolution of the VB.NET language had also concluded. Even so, the language was still supported and the third-party Mercury extension has since been produced. Meanwhile, competitors exist such as B4X, RAD Basic, twinBASIC, VisualFBEditor, InForm, Xojo, and Gambas.
Post-1990 versions and dialects
Many other BASIC dialects have also sprung up since 1990, including the open source QB64 and FreeBASIC, inspired by QBasic, and the Visual Basic-styled RapidQ, HBasic, Basic For Qt and Gambas. Modern commercial incarnations include PureBasic, PowerBASIC, Xojo, Monkey X and True BASIC (the direct successor to Dartmouth BASIC from a company controlled by Kurtz).
Several web-based simple BASIC interpreters also now exist, including Microsoft's Small Basic and Google's wwwBASIC. A number of compilers also exist that convert BASIC into JavaScript, such as JSBasic which re-implements Applesoft BASIC, Spider BASIC, and NS Basic.
Building from earlier efforts such as Mobile Basic and CellularBASIC, many dialects are now available for smartphones and tablets. Through the Apple App Store for iOS, options include Hand BASIC, Learn BASIC, Smart Basic based on Minimal BASIC, Basic! by miSoft, and BASIC by Anastasia Kovba. The Google Play store for Android meanwhile has the touchscreen-focused Touch Basic, B4A, the RFO BASIC! interpreter based on Dartmouth Basic, and adaptations of SmallBasic, BBC Basic, Tiny Basic, X11-Basic, and NS Basic.
On game consoles, an application for the Nintendo 3DS and Nintendo DSi called Petit Computer allows for programming in a slightly modified version of BASIC with DS button support. A version has also been released for Nintendo Switch, which has also been supplied a version of the Fuze Code System, a BASIC variant first implemented as a custom Raspberry Pi machine. Previously BASIC was made available on consoles as Family BASIC (for the Nintendo Famicom) and PSX Chipmunk Basic (for the original PlayStation), while yabasic was ported to the PlayStation 2 and FreeBASIC to the original Xbox, with Dragon BASIC created for homebrew on the Game Boy Advance and Nintendo DS.
Calculators
Variants of BASIC are available on graphing and otherwise programmable calculators made by Texas Instruments (TI-BASIC), HP (HP BASIC), Casio (Casio BASIC), and others.
Windows command-line
QBasic, a version of Microsoft QuickBASIC without the linker to make EXE files, is present in the Windows NT and DOS-Windows 95 streams of operating systems and can be obtained for more recent releases like Windows 7, which do not include it. Prior to DOS 5, the Basic interpreter was GW-Basic. QuickBasic is part of a series of three languages issued by Microsoft for the home and office power user and small-scale professional development; QuickC and QuickPascal are the other two. For Windows 95 and 98, which do not have QBasic installed by default, it can be copied from the installation disc, which has a set of directories for old and optional software; other missing commands, like Exe2Bin, are in these same directories.
Other
The various Microsoft, Lotus, and Corel office suites and related products are programmable with Visual Basic in one form or another, including LotusScript, which is very similar to VBA 6. The Host Explorer terminal emulator uses WWB as a macro language; more recently, the program and the suite in which it is contained are programmable in an in-house Basic variant known as Hummingbird Basic. The VBScript variant is used for programming web content, Outlook 97, Internet Explorer, and the Windows Script Host. WSH also has a Visual Basic for Applications (VBA) engine installed as the third of the default engines along with VBScript, JScript, and the numerous proprietary or open source engines which can be installed, such as PerlScript, a couple of Rexx-based engines, Python, Ruby, Tcl, Delphi, XLNT, PHP, and others; meaning that the two versions of Basic can be used along with the other mentioned languages, as well as LotusScript, in a WSF file, through the component object model, and other WSH and VBA constructions. VBScript is one of the languages that can be accessed by the 4DOS, 4NT, and Take Command enhanced shells. SaxBasic and WWB are also very similar to the Visual Basic line of Basic implementations. The pre-Office 97 macro language for Microsoft Word is known as WordBASIC. Excel 4 and 5 use Visual Basic itself as a macro language. Chipmunk Basic, an old-school interpreter similar to BASICs of the 1970s, is available for Linux, Microsoft Windows and macOS.
Legacy
The ubiquity of BASIC interpreters on personal computers was such that textbooks once included simple "Try It In BASIC" exercises that encouraged students to experiment with mathematical and computational concepts on classroom or home computers. Popular computer magazines of the day typically included type-in programs.
Futurist and sci-fi writer David Brin mourned the loss of ubiquitous BASIC in a 2006 Salon article as have others who first used computers during this era. In turn, the article prompted Microsoft to develop and release Small Basic; it also inspired similar projects like Basic-256. Dartmouth held a 50th anniversary celebration for BASIC on 1 May 2014, as did other organisations; at least one organisation of VBA programmers organised a 35th anniversary observance in 1999.
Dartmouth College celebrated the 50th anniversary of the BASIC language with a day of events on April 30, 2014. A short documentary film was produced for the event.
Syntax
Typical BASIC keywords
Data manipulation
LET assigns a value (which may be the result of an expression) to a variable. In most dialects of BASIC, LET is optional, and a line with no other identifiable keyword will assume the keyword to be LET.
DATA holds a list of values which are assigned sequentially using the READ command.
READ reads a value from a DATA statement and assigns it to a variable. An internal pointer keeps track of the last DATA element that was read and advances one position with each READ. Most dialects allow multiple variables as parameters, reading several values in a single operation.
RESTORE resets the internal pointer to the first DATA statement, allowing the program to begin READing from the first value. Many dialects allow an optional line number or ordinal value to allow the pointer to be reset to a selected location.
DIM sets up an array.
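The DATA, READ, RESTORE and DIM keywords above can be combined in a few lines of classic line-numbered BASIC; the following fragment is an illustrative sketch (line numbers and data values are arbitrary) that reads three values into an array and then re-reads the first one:
10 DIM V(3)
20 FOR I = 1 TO 3
30 READ V(I)
40 NEXT I
50 RESTORE
60 READ F
70 PRINT "FIRST VALUE READ AGAIN IS"; F
80 DATA 10, 20, 30
90 END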
Program flow control
IF ... THEN ... {ELSE} used to perform comparisons or make decisions. Early dialects only allowed a line number after the THEN, but later versions allowed any valid statement to follow. ELSE was not widely supported, especially in earlier versions.
FOR ... TO ... {STEP} ... NEXT repeat a section of code a given number of times. A variable that acts as a counter, the "index", is available within the loop.
WHILE ... WEND and REPEAT ... UNTIL repeat a section of code while the specified condition is true. The condition may be evaluated before each iteration of the loop, or after. Both of these commands are found mostly in later dialects.
DO ... LOOP {WHILE} or {UNTIL} repeat a section of code indefinitely or while/until the specified condition is true. The condition may be evaluated before each iteration of the loop, or after. Similar to WHILE, these keywords are mostly found in later dialects.
GOTO jumps to a numbered or labelled line in the program. Most dialects also allowed the two-word form GO TO.
GOSUB ... RETURN jumps to a numbered or labelled line, executes the code it finds there until it reaches a RETURN command, on which it jumps back to the statement following the GOSUB, either after a colon, or on the next line. This is used to implement subroutines.
ON ... GOTO/GOSUB chooses where to jump based on the specified conditions. See Switch statement for other forms.
DEF FN a pair of keywords introduced in the early 1960s to define functions. The original BASIC functions were modelled on FORTRAN single-line functions. BASIC functions were one expression with variable arguments, rather than subroutines, with a syntax on the model of DEF FND(x) = x*x at the beginning of a program. Function names were originally restricted to FN, plus one letter, i.e., FNA, FNB ...
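As an illustrative sketch (line numbers, values, and messages are arbitrary), the following fragment combines IF ... THEN, FOR ... NEXT, GOSUB ... RETURN, ON ... GOTO, and a DEF FN function in an unstructured dialect such as GW-BASIC:
10 DEF FNS(X) = X * X
20 FOR I = 1 TO 3
30 IF FNS(I) > 4 THEN PRINT "THE SQUARE OF"; I; "IS GREATER THAN 4"
40 GOSUB 100
50 NEXT I
60 END
100 REM Subroutine: branch to a different message depending on I
110 ON I GOTO 120, 140, 160
120 PRINT "FIRST PASS"
130 RETURN
140 PRINT "SECOND PASS"
150 RETURN
160 PRINT "THIRD PASS"
170 RETURN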
Input and output
LIST displays the full source code of the current program.
PRINT displays a message on the screen or other output device.
INPUT asks the user to enter the value of a variable. The statement may include a prompt message.
TAB used with PRINT to set the position where the next character will be shown on the screen or printed on paper. AT is an alternative form.
SPC prints out a number of space characters. Similar in concept to TAB but moves by a number of additional spaces from the current column rather than moving to a specified column.
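A minimal sketch (prompt text and column values are arbitrary) showing INPUT, PRINT, TAB and SPC together:
10 INPUT "ENTER A COLUMN NUMBER"; C
20 PRINT TAB(C); "THIS TEXT STARTS AT COLUMN"; C
30 PRINT "A"; SPC(5); "B"
40 END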
Mathematical functions
ABS Absolute value
ATN Arctangent (result in radians)
COS Cosine (argument in radians)
EXP Exponential function
INT Integer part (typically floor function)
LOG Natural logarithm
RND Random number generation
SIN Sine (argument in radians)
SQR Square root
TAN Tangent (argument in radians)
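An illustrative fragment exercising several of these functions (the exact output of RND depends on the dialect and its random-number seed):
10 PRINT "SQR(2) ="; SQR(2)
20 PRINT "INT(-3.7) ="; INT(-3.7)
30 PRINT "LOG(10) ="; LOG(10)
40 PRINT "SIN OF PI/2 IS ROUGHLY"; SIN(3.14159 / 2)
50 PRINT "A RANDOM NUMBER BETWEEN 0 AND 1:"; RND(1)
60 END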
Miscellaneous
REM holds a programmer's comment or REMark; often used to give a title to the program and to help identify the purpose of a given section of code.
USR ("User Serviceable Routine") transfers program control to a machine language subroutine, usually entered as an alphanumeric string or in a list of DATA statements.
CALL alternative form of USR found in some dialects. Does not require an artificial parameter to complete the function-like syntax of USR, and has a clearly defined method of calling different routines in memory.
TRON / TROFF turns on display of each line number as it is run ("TRace ON"). This was useful for debugging or correcting problems in a program. TROFF turns it back off again.
ASM some compilers such as FreeBASIC, PureBasic, and PowerBASIC also support inline assembly language, allowing the programmer to intermix high-level and low-level code, typically prefixed with "ASM" or "!" statements.
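In dialects that accept TRON and TROFF as program statements (GW-BASIC, for example), tracing can be switched on around a suspect section; a minimal sketch:
10 REM Trace only the loop below
20 TRON
30 FOR I = 1 TO 3
40 PRINT I
50 NEXT I
60 TROFF
70 END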
Data types and variables
Minimal versions of BASIC had only integer variables and one- or two-letter variable names, which minimized demands on limited and expensive memory (RAM). More powerful versions had floating-point arithmetic, and variables could be labelled with names six or more characters long. There were some problems and restrictions in early implementations; for example, Applesoft BASIC allowed variable names to be several characters long, but only the first two were significant, thus it was possible to inadvertently write a program with variables "LOSS" and "LOAN", which would be treated as being the same; assigning a value to "LOAN" would silently overwrite the value intended as "LOSS". Keywords could not be used in variable names in many early BASICs; "SCORE" would be interpreted as "SC" OR "E", where OR was a keyword. String variables are usually distinguished in many microcomputer dialects by having $ suffixed to their name as a sigil, and values are often identified as strings by being delimited by "double quotation marks". Arrays in BASIC could contain integers, floating point or string variables.
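A short illustrative fragment (names and values are arbitrary) showing the $ sigil for string variables, a string array declared with DIM, and an optional LET assignment:
10 DIM N$(3)
20 N$(1) = "ALICE"
30 N$(2) = "BOB"
40 N$(3) = "CAROL"
50 LET TOTAL = 3
60 PRINT "THERE ARE"; TOTAL; "NAMES; THE FIRST IS "; N$(1)
70 END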
Some dialects of BASIC supported matrices and matrix operations, which can be used to solve sets of simultaneous linear algebraic equations. These dialects would directly support matrix operations such as assignment, addition, multiplication (of compatible matrix types), and evaluation of a determinant. Many microcomputer BASICs did not support this data type; matrix operations were still possible, but had to be programmed explicitly on array elements.
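On dialects without a matrix data type, such arithmetic had to be written out element by element; the sketch below (sizes and values are arbitrary) adds two 2×2 matrices stored as ordinary arrays:
10 DIM A(2, 2), B(2, 2), C(2, 2)
20 REM Fill A and then B from the DATA statements
30 FOR I = 1 TO 2
40 FOR J = 1 TO 2
50 READ A(I, J)
60 NEXT J
70 NEXT I
80 FOR I = 1 TO 2
90 FOR J = 1 TO 2
100 READ B(I, J)
110 NEXT J
120 NEXT I
130 REM Element-wise addition: C = A + B
140 FOR I = 1 TO 2
150 FOR J = 1 TO 2
160 C(I, J) = A(I, J) + B(I, J)
170 PRINT C(I, J);
180 NEXT J
190 PRINT
200 NEXT I
210 DATA 1, 2, 3, 4
220 DATA 5, 6, 7, 8
230 END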
Examples
Unstructured BASIC
New BASIC programmers on a home computer might start with a simple program, perhaps using the language's PRINT statement to display a message on the screen; a well-known and often-replicated example is Kernighan and Ritchie's "Hello, World!" program:
10 PRINT "Hello, World!"
20 END
An infinite loop could be used to fill the display with the message:
10 PRINT "Hello, World!"
20 GOTO 10
Note that the END statement is optional and has no effect in most dialects of BASIC; it was not always included, as in the looping example above. This same program can be modified to print a fixed number of messages using the common FOR...NEXT statement:
10 LET N=10
20 FOR I=1 TO N
30 PRINT "Hello, World!"
40 NEXT I
Most home computer BASIC versions, such as MSX BASIC and GW-BASIC, supported simple data types, loops, and arrays. The following example is written for GW-BASIC, but will work in most versions of BASIC with minimal changes:
10 INPUT "What is your name: "; U$
20 PRINT "Hello "; U$
30 INPUT "How many stars do you want: "; N
40 S$ = ""
50 FOR I = 1 TO N
60 S$ = S$ + "*"
70 NEXT I
80 PRINT S$
90 INPUT "Do you want more stars? "; A$
100 IF LEN(A$) = 0 THEN GOTO 90
110 A$ = LEFT$(A$, 1)
120 IF A$ = "Y" OR A$ = "y" THEN GOTO 30
130 PRINT "Goodbye "; U$
140 END
The resulting dialog might resemble:
What is your name: Mike
Hello Mike
How many stars do you want: 7
*******
Do you want more stars? yes
How many stars do you want: 3
***
Do you want more stars? no
Goodbye Mike
The original Dartmouth Basic was unusual in having a matrix keyword, MAT. Although not implemented by most later microprocessor derivatives, it is used in this example from the 1968 manual which averages the numbers that are input:
5 LET S = 0
10 MAT INPUT V
20 LET N = NUM
30 IF N = 0 THEN 99
40 FOR I = 1 TO N
45 LET S = S + V(I)
50 NEXT I
60 PRINT S/N
70 GO TO 5
99 END
Structured BASIC
Second-generation BASICs (for example, VAX Basic, SuperBASIC, True BASIC, QuickBASIC, BBC BASIC, Pick BASIC, PowerBASIC, Liberty BASIC, QB64 and (arguably) COMAL) introduced a number of features into the language, primarily related to structured and procedure-oriented programming. Usually, line numbering is omitted from the language and replaced with labels (for GOTO) and procedures to encourage easier and more flexible design. In addition keywords and structures to support repetition, selection and procedures with local variables were introduced.
The following example is in Microsoft QuickBASIC:
REM QuickBASIC example
REM Forward declaration - allows the main code to call a
REM subroutine that is defined later in the source code
DECLARE SUB PrintSomeStars (StarCount!)
REM Main program follows
INPUT "What is your name: ", UserName$
PRINT "Hello "; UserName$
DO
INPUT "How many stars do you want: ", NumStars
CALL PrintSomeStars(NumStars)
DO
INPUT "Do you want more stars? ", Answer$
LOOP UNTIL Answer$ <> ""
Answer$ = LEFT$(Answer$, 1)
LOOP WHILE UCASE$(Answer$) = "Y"
PRINT "Goodbye "; UserName$
END
REM subroutine definition
SUB PrintSomeStars (StarCount)
REM This procedure uses a local variable called Stars$
Stars$ = STRING$(StarCount, "*")
PRINT Stars$
END SUB
Object-oriented BASIC
Third-generation BASIC dialects such as Visual Basic, Xojo, Gambas, StarOffice Basic, BlitzMax and PureBasic introduced features to support object-oriented and event-driven programming paradigms. Most built-in procedures and functions are now represented as methods of standard objects rather than operators. Also, the operating system became increasingly accessible to the BASIC language.
The following example is in Visual Basic .NET:
Public Module StarsProgram
Private Function Ask(prompt As String) As String
Console.Write(prompt)
Return Console.ReadLine()
End Function
Public Sub Main()
Dim userName = Ask("What is your name: ")
Console.WriteLine("Hello {0}", userName)
Dim answer As String
Do
Dim numStars = CInt(Ask("How many stars do you want: "))
Dim stars As New String("*"c, numStars)
Console.WriteLine(stars)
Do
answer = Ask("Do you want more stars? ")
Loop Until answer <> ""
Loop While answer.StartsWith("Y", StringComparison.OrdinalIgnoreCase)
Console.WriteLine("Goodbye {0}", userName)
End Sub
End Module
Standards
ANSI/ISO/IEC Standard for Minimal BASIC:
ANSI X3.60-1978 "For minimal BASIC"
ISO/IEC 6373:1984 "Data Processing—Programming Languages—Minimal BASIC"
ECMA-55 Minimal BASIC (withdrawn, similar to ANSI X3.60-1978)
ANSI/ISO/IEC Standard for Full BASIC:
ANSI X3.113-1987 "Programming Languages Full BASIC"
INCITS/ISO/IEC 10279-1991 (R2005) "Information Technology – Programming Languages – Full BASIC"
ANSI/ISO/IEC Addendum Defining Modules:
ANSI X3.113 Interpretations-1992 "BASIC Technical Information Bulletin # 1 Interpretations of ANSI 03.113-1987"
ISO/IEC 10279:1991/ Amd 1:1994 "Modules and Single Character Input Enhancement"
ECMA-116 BASIC (withdrawn, similar to ANSI X3.113-1987)
Compilers and interpreters
See also
List of BASIC dialects
Notes
References
General references
External links
gotBASIC.com - For all people interested in the continued usage and evolution of the BASIC programming language.
The Basics' page (Since 2001) - Comprehensive listing of dialects.
American inventions
Articles with example BASIC code
Programming languages
Programming languages created in 1964
Programming languages with an ISO standard
|
https://en.wikipedia.org/wiki/Black
|
Black is a color that results from the absence or complete absorption of visible light. It is an achromatic color, without hue, like white and grey. It is often used symbolically or figuratively to represent darkness. Black and white have often been used to describe opposites such as good and evil, the Dark Ages versus Age of Enlightenment, and night versus day. Since the Middle Ages, black has been the symbolic color of solemnity and authority, and for this reason it is still commonly worn by judges and magistrates.
Black was one of the first colors used by artists in Neolithic cave paintings. It was used in ancient Egypt and Greece as the color of the underworld. In the Roman Empire, it became the color of mourning, and over the centuries it was frequently associated with death, evil, witches, and magic. In the 14th century, it was worn by royalty, clergy, judges, and government officials in much of Europe. It became the color worn by English romantic poets, businessmen and statesmen in the 19th century, and a high fashion color in the 20th century. According to surveys in Europe and North America, it is the color most commonly associated with mourning, the end, secrets, magic, force, violence, fear, evil, and elegance.
Black is the most common ink color used for printing books, newspapers and documents, as it provides the highest contrast with white paper and thus is the easiest color to read. Similarly, black text on a white screen is the most common format used on computer screens. As of September 2019, the darkest material is made by MIT engineers from vertically aligned carbon nanotubes.
Etymology
The word black comes from Old English blæc ("black, dark", also, "ink"), from Proto-Germanic *blakkaz ("burned"), from Proto-Indo-European *bhleg- ("to burn, gleam, shine, flash"), from base *bhel- ("to shine"), related to Old Saxon blak ("ink"), Old High German blach ("black"), Old Norse blakkr ("dark"), Dutch blaken ("to burn"), and Swedish bläck ("ink"). More distant cognates include Latin flagrare ("to blaze, glow, burn"), and Ancient Greek phlegein ("to burn, scorch"). The Ancient Greeks sometimes used the same word to name different colors, if they had the same intensity. Kuanos could mean both dark blue and black. The Ancient Romans had two words for black: ater was a flat, dull black, while niger was a brilliant, saturated black. Ater has vanished from the vocabulary, but niger was the source of the country name Nigeria, the English word Negro, and the word for "black" in most modern Romance languages (French: noir; Spanish and Portuguese: negro; Italian: nero; Romanian: negru).
Old High German also had two words for black: swartz for dull black and blach for a luminous black. These are parallelled in Middle English by the terms swart for dull black and blaek for luminous black. Swart still survives as the word swarthy, while blaek became the modern English black. The former is cognate with the words used for black in most modern Germanic languages aside from English (German: schwarz, Dutch: zwart, Swedish: svart, Danish: sort, Icelandic: svartr). In heraldry, the word used for the black color is sable, named for the black fur of the sable, an animal.
Art
Prehistoric
Black was one of the first colors used in art. The Lascaux Cave in France contains drawings of bulls and other animals drawn by paleolithic artists between 18,000 and 17,000 years ago. They began by using charcoal, and later achieved darker pigments by burning bones or grinding a powder of manganese oxide.
Ancient
For the ancient Egyptians, black had positive associations; being the color of fertility and the rich black soil flooded by the Nile. It was the color of Anubis, the god of the underworld, who took the form of a black jackal, and offered protection against evil to the dead. To ancient Greeks, black represented the underworld, separated from the living by the river Acheron, whose water ran black. Those who had committed the worst sins were sent to Tartarus, the deepest and darkest level. In the center was the palace of Hades, the king of the underworld, where he was seated upon a black ebony throne. Black was one of the most important colors used by ancient Greek artists. In the 6th century BC, they began making black-figure pottery and later red figure pottery, using a highly original technique. In black-figure pottery, the artist would paint figures with a glossy clay slip on a red clay pot. When the pot was fired, the figures painted with the slip would turn black, against a red background. Later they reversed the process, painting the spaces between the figures with slip. This created magnificent red figures against a glossy black background.
In the social hierarchy of ancient Rome, purple was the color reserved for the Emperor; red was the color worn by soldiers (red cloaks for the officers, red tunics for the soldiers); white the color worn by the priests, and black was worn by craftsmen and artisans. The black they wore was not deep and rich; the vegetable dyes used to make black were not solid or lasting, so the blacks often faded to gray or brown.
In Latin, the word for black, ater and to darken, atere, were associated with cruelty, brutality and evil. They were the root of the English words "atrocious" and "atrocity". Black was also the Roman color of death and mourning. In the 2nd century BC Roman magistrates began to wear a dark toga, called a toga pulla, to funeral ceremonies. Later, under the Empire, the family of the deceased also wore dark colors for a long period; then, after a banquet to mark the end of mourning, exchanged the black for a white toga. In Roman poetry, death was called the hora nigra, the black hour.
The German and Scandinavian peoples worshipped their own goddess of the night, Nótt, who crossed the sky in a chariot drawn by a black horse. They also feared Hel, the goddess of the kingdom of the dead, whose skin was black on one side and red on the other. They also held sacred the raven. They believed that Odin, the king of the Nordic pantheon, had two black ravens, Huginn and Muninn, who served as his agents, traveling the world for him, watching and listening.
Postclassical
In the early Middle Ages, black was commonly associated with darkness and evil. In Medieval paintings, the devil was usually depicted as having human form, but with wings and black skin or hair.
12th and 13th centuries
In fashion, black did not have the prestige of red, the color of the nobility. It was worn by Benedictine monks as a sign of humility and penitence. In the 12th century a famous theological dispute broke out between the Cistercian monks, who wore white, and the Benedictines, who wore black. A Benedictine abbot, Pierre the Venerable, accused the Cistercians of excessive pride in wearing white instead of black. Saint Bernard of Clairvaux, the founder of the Cistercians responded that black was the color of the devil, hell, "of death and sin", while white represented "purity, innocence and all the virtues".
Black symbolized both power and secrecy in the medieval world. The emblem of the Holy Roman Empire of Germany was a black eagle. The black knight in the poetry of the Middle Ages was an enigmatic figure, hiding his identity, usually wrapped in secrecy.
Black ink, invented in China, was traditionally used in the Middle Ages for writing, for the simple reason that black was the darkest color and therefore provided the greatest contrast with white paper or parchment, making it the easiest color to read. It became even more important in the 15th century, with the invention of printing. A new kind of ink, printer's ink, was created out of soot, turpentine and walnut oil. The new ink made it possible to spread ideas to a mass audience through printed books, and to popularize art through black and white engravings and prints. Because of its contrast and clarity, black ink on white paper continued to be the standard for printing books, newspapers and documents; and for the same reason black text on a white background is the most common format used on computer screens.
14th and 15th centuries
In the early Middle Ages, princes, nobles and the wealthy usually wore bright colors, particularly scarlet cloaks from Italy. Black was rarely part of the wardrobe of a noble family. The one exception was the fur of the sable. This glossy black fur, from an animal of the marten family, was the finest and most expensive fur in Europe. It was imported from Russia and Poland and used to trim the robes and gowns of royalty.
In the 14th century, the status of black began to change. First, high-quality black dyes began to arrive on the market, allowing garments of a deep, rich black. Magistrates and government officials began to wear black robes, as a sign of the importance and seriousness of their positions. A third reason was the passage of sumptuary laws in some parts of Europe which prohibited the wearing of costly clothes and certain colors by anyone except members of the nobility. The famous bright scarlet cloaks from Venice and the peacock blue fabrics from Florence were restricted to the nobility. The wealthy bankers and merchants of northern Italy responded by changing to black robes and gowns, made with the most expensive fabrics.
The change to the more austere but elegant black was quickly picked up by the kings and nobility. It began in northern Italy, where the Duke of Milan and the Count of Savoy and the rulers of Mantua, Ferrara, Rimini and Urbino began to dress in black. It then spread to France, led by Louis I, Duke of Orleans, younger brother of King Charles VI of France. It moved to England at the end of the reign of King Richard II (1377–1399), where all the court began to wear black. In 1419–20, black became the color of the powerful Duke of Burgundy, Philip the Good. It moved to Spain, where it became the color of the Spanish Habsburgs, of Charles V and of his son, Philip II of Spain (1527–1598). European rulers saw it as the color of power, dignity, humility and temperance. By the end of the 16th century, it was the color worn by almost all the monarchs of Europe and their courts.
Modern
16th and 17th centuries
While black was the color worn by the Catholic rulers of Europe, it was also the emblematic color of the Protestant Reformation in Europe and the Puritans in England and America. John Calvin, Philip Melanchthon and other Protestant theologians denounced the richly colored and decorated interiors of Roman Catholic churches. They saw the color red, worn by the Pope and his Cardinals, as the color of luxury, sin, and human folly. In some northern European cities, mobs attacked churches and cathedrals, smashed the stained glass windows and defaced the statues and decoration. In Protestant doctrine, clothing was required to be sober, simple and discreet. Bright colors were banished and replaced by blacks, browns and grays; women and children were recommended to wear white.
In the Protestant Netherlands, Rembrandt used this sober new palette of blacks and browns to create portraits whose faces emerged from the shadows expressing the deepest human emotions. The Catholic painters of the Counter-Reformation, like Rubens, went in the opposite direction; they filled their paintings with bright and rich colors. The new Baroque churches of the Counter-Reformation were usually shining white inside and filled with statues, frescoes, marble, gold and colorful paintings, to appeal to the public. But European Catholics of all classes, like Protestants, eventually adopted a sober wardrobe that was mostly black, brown and gray.
In the second part of the 17th century, Europe and America experienced an epidemic of fear of witchcraft. People widely believed that the devil appeared at midnight in a ceremony called a Black Mass or black sabbath, usually in the form of a black animal, often a goat, a dog, a wolf, a bear, a deer or a rooster, accompanied by their familiar spirits, black cats, serpents and other black creatures. This was the origin of the widespread superstition about black cats and other black animals. In medieval Flanders, in a ceremony called Kattenstoet, black cats were thrown from the belfry of the Cloth Hall of Ypres to ward off witchcraft.
Witch trials were common in both Europe and America during this period. During the notorious Salem witch trials in New England in 1692–93, one of those on trial was accused of being able to turn into a "black thing with a blue cap," and others of having familiars in the form of a black dog, a black cat and a black bird. Nineteen women and men were hanged as witches.
18th and 19th centuries
In the 18th century, during the European Age of Enlightenment, black receded as a fashion color. Paris became the fashion capital, and pastels, blues, greens, yellow and white became the colors of the nobility and upper classes. But after the French Revolution, black again became the dominant color.
Black was the color of the industrial revolution, largely fueled by coal, and later by oil. Thanks to coal smoke, the buildings of the large cities of Europe and America gradually turned black. By 1846 the industrial area of the West Midlands of England was "commonly called 'the Black Country'". Charles Dickens and other writers described the dark streets and smoky skies of London, and they were vividly illustrated in the engravings of French artist Gustave Doré.
A different kind of black was an important part of the romantic movement in literature. Black was the color of melancholy, the dominant theme of romanticism. The novels of the period were filled with castles, ruins, dungeons, storms, and meetings at midnight. The leading poets of the movement were usually portrayed dressed in black, often with a white shirt and open collar, and a scarf carelessly over their shoulder; Percy Bysshe Shelley and Lord Byron helped create the enduring stereotype of the romantic poet.
The invention of inexpensive synthetic black dyes and the industrialization of the textile industry meant that high-quality black clothes were available for the first time to the general population. In the 19th century black gradually became the most popular color of business dress of the upper and middle classes in England, the Continent, and America.
Black dominated literature and fashion in the 19th century, and played a large role in painting. James McNeill Whistler made the color the subject of his most famous painting, Arrangement in Grey and Black No. 1 (1871), better known as Whistler's Mother.
Some 19th-century French painters had a low opinion of black: "Reject black," Paul Gauguin said, "and that mix of black and white they call gray. Nothing is black, nothing is gray." But Édouard Manet used blacks for their strength and dramatic effect. Manet's portrait of painter Berthe Morisot was a study in black which perfectly captured her spirit of independence. The black gave the painting power and immediacy; he even changed her eyes, which were green, to black to strengthen the effect. Henri Matisse quoted the French impressionist Pissarro telling him, "Manet is stronger than us all – he made light with black."
Pierre-Auguste Renoir used luminous blacks, especially in his portraits. When someone told him that black was not a color, Renoir replied: "What makes you think that? Black is the queen of colors. I always detested Prussian blue. I tried to replace black with a mixture of red and blue, I tried using cobalt blue or ultramarine, but I always came back to ivory black."
Vincent van Gogh used black lines to outline many of the objects in his paintings, such as the bed in the famous painting of his bedroom, making them stand apart. His painting of black crows over a cornfield, painted shortly before he died, was particularly agitated and haunting. In the late 19th century, black also became the color of anarchism. (See the section political movements.)
20th and 21st centuries
In the 20th century, black was the color of Italian and German fascism. (See the section political movements.)
In art, black regained some of the territory that it had lost during the 19th century. The Russian painter Kasimir Malevich, a member of the Suprematist movement, created the Black Square in 1915, which is widely considered the first purely abstract painting. He wrote, "The painted work is no longer simply the imitation of reality, but is this very reality ... It is not a demonstration of ability, but the materialization of an idea."
Black was also appreciated by Henri Matisse. "When I didn't know what color to put down, I put down black," he said in 1945. "Black is a force: I used black as ballast to simplify the construction ... Since the impressionists it seems to have made continuous progress, taking a more and more important part in color orchestration, comparable to that of the double bass as a solo instrument."
In the 1950s, black came to be a symbol of individuality and intellectual and social rebellion, the color of those who did not accept established norms and values. In Paris, it was worn by Left-Bank intellectuals and performers such as Juliette Gréco, and by some members of the Beat Movement in New York and San Francisco. Black leather jackets were worn by motorcycle gangs such as the Hells Angels and street gangs on the fringes of society in the United States. Black as a color of rebellion was celebrated in such films as The Wild One, with Marlon Brando. By the end of the 20th century, black was the emblematic color of the punk subculture and punk fashion, and of the goth subculture. Goth fashion, which emerged in England in the 1980s, was inspired by Victorian era mourning dress.
In men's fashion, black gradually ceded its dominance to navy blue, particularly in business suits. Black evening dress and formal dress in general were worn less and less. In 1961, John F. Kennedy was the last American President to be inaugurated wearing formal dress; President Lyndon Johnson and all his successors were inaugurated wearing business suits.
Women's fashion was revolutionized and simplified in 1926 by the French designer Coco Chanel, who published a drawing of a simple black dress in Vogue magazine. She famously said, "A woman needs just three things: a black dress, a black sweater, and, on her arm, a man she loves." French designer Jean Patou also followed suit by creating a black collection in 1929. Other designers contributed to the trend of the little black dress. The Italian designer Gianni Versace said, "Black is the quintessence of simplicity and elegance," and French designer Yves Saint Laurent said, "Black is the liaison which connects art and fashion." One of the most famous black dresses of the century was designed by Hubert de Givenchy and was worn by Audrey Hepburn in the 1961 film Breakfast at Tiffany's.
The American civil rights movement in the 1950s was a struggle for the political equality of African Americans. It developed into the Black Power movement, which lasted from the early 1960s until the late 1980s, and later the Black Lives Matter movement in the 2010s and 2020s. It also popularized the slogan "Black is Beautiful".
Science
Physics
In the visible spectrum, black is the result of the absorption of all light wavelengths. Black can be defined as the visual impression (or color) experienced when no visible light reaches the eye. Pigments or dyes that absorb light rather than reflect it back to the eye "look black". A black pigment can, however, result from a combination of several pigments that collectively absorb all colors. If appropriate proportions of three primary pigments are mixed, the result reflects so little light as to be called "black". This provides two superficially opposite but actually complementary descriptions of black. Black is the absorption of all colors of light, or an exhaustive combination of multiple colors of pigment.
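As an illustration of the second description, the following sketch (in Python, using illustrative rather than measured reflectance values for idealized cyan, magenta and yellow pigments) multiplies the fraction of red, green and blue light each pigment reflects; the mixture reflects almost nothing in any channel, which the eye reads as black:

# Idealized subtractive mixing: each pigment reflects a fraction of red, green and blue light,
# and a mixture reflects roughly the product of those fractions in each channel.
# The reflectance values below are illustrative, not measured.
cyan    = (0.1, 0.9, 0.9)   # absorbs mostly red
magenta = (0.9, 0.1, 0.9)   # absorbs mostly green
yellow  = (0.9, 0.9, 0.1)   # absorbs mostly blue
mix = tuple(c * m * y for c, m, y in zip(cyan, magenta, yellow))
print(mix)   # about (0.08, 0.08, 0.08): almost no reflected light in any channel, i.e. near-black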
In physics, a black body is a perfect absorber of light, but, by a thermodynamic rule, it is also the best emitter. Thus, the best radiative cooling, out of sunlight, is by using black paint, though it is important that it be black (a nearly perfect absorber) in the infrared as well. In elementary science, near-ultraviolet light is called "black light" because, while itself unseen, it causes many minerals and other substances to fluoresce.
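The thermodynamic rule mentioned above is quantified by the Stefan–Boltzmann law, P = εσAT^4. A minimal sketch (in Python, using typical assumed emissivity values for black paint and polished metal, not measurements of specific products) shows how much more a black surface radiates at room temperature:

# Radiated power per the Stefan-Boltzmann law: P = emissivity * sigma * area * T**4.
sigma = 5.67e-8               # Stefan-Boltzmann constant, W m^-2 K^-4
area, temp = 1.0, 300.0       # a 1 m^2 surface at roughly room temperature
for surface, emissivity in [("black paint", 0.95), ("polished metal", 0.05)]:
    power = emissivity * sigma * area * temp ** 4
    print(surface, round(power, 1), "W")   # about 436 W versus about 23 W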
Absorption of light is contrasted with transmission, reflection and diffusion, where the light is only redirected, causing objects to appear transparent, reflective or white respectively. A material is said to be black if most incoming light is absorbed equally in the material. Light (electromagnetic radiation in the visible spectrum) interacts with the atoms and molecules, which causes the energy of the light to be converted into other forms of energy, usually heat. This means that black surfaces can act as thermal collectors, absorbing light and generating heat (see Solar thermal collector).
As of September 2019, the darkest material is made from vertically aligned carbon nanotubes. The material was grown by MIT engineers and was reported to have a 99.995% absorption rate of any incoming light. This surpasses any former darkest materials including Vantablack, which has a peak absorption rate of 99.965% in the visible spectrum.
Chemistry
Pigments
The earliest pigments used by Neolithic man were charcoal, red ocher and yellow ocher. The black lines of cave art were drawn with the tips of burnt torches made of resinous wood. Different charcoal pigments were made by burning different woods and animal products, each of which produced a different tone. The charcoal would be ground and then mixed with animal fat to make the pigment.
Vine black was produced in Roman times by burning the cut branches of grapevines. It could also be produced by burning the remains of the crushed grapes, which were collected and dried in an oven. According to the Roman writer Vitruvius, the deepness and richness of the black produced corresponded to the quality of the wine. The finest wines produced a black with a bluish tinge the color of indigo.
The 15th-century painter Cennino Cennini described how this pigment was made during the Renaissance in his famous handbook for artists: "...there is a black which is made from the tendrils of vines. And these tendrils need to be burned. And when they have been burned, throw some water onto them and put them out and then mull them in the same way as the other black. And this is a lean and black pigment and is one of the perfect pigments that we use."
Cennini also noted that "There is another black which is made from burnt almond shells or peaches and this is a perfect, fine black." Similar fine blacks were made by burning the pits of the peach, cherry or apricot. The powdered charcoal was then mixed with gum arabic or the yellow of an egg to make a paint.
Different civilizations burned different plants to produce their charcoal pigments. The Inuit of Alaska used wood charcoal mixed with the blood of seals to paint masks and wooden objects. The Polynesians burned coconuts to produce their pigment.
Lamp black was used as a pigment for painting and frescoes, as a dye for fabrics, and in some societies for making tattoos. The 15th century Florentine painter Cennino Cennini described how it was made during the Renaissance: "... take a lamp full of linseed oil and fill the lamp with the oil and light the lamp. Then place it, lit, under a thoroughly clean pan and make sure that the flame from the lamp is two or three fingers from the bottom of the pan. The smoke that comes off the flame will hit the bottom of the pan and gather, becoming thick. Wait a bit. take the pan and brush this pigment (that is, this smoke) onto paper or into a pot with something. And it is not necessary to mull or grind it because it is a very fine pigment. Re-fill the lamp with the oil and put it under the pan like this several times and, in this way, make as much of it as is necessary." This same pigment was used by Indian artists to paint the Ajanta Caves, and as dye in ancient Japan.
Ivory black, also known as bone char, was originally produced by burning ivory and mixing the resulting charcoal powder with oil. The color is still made today, but ordinary animal bones are substituted for ivory.
Mars black is a black pigment made of synthetic iron oxides. It is commonly used in water-colors and oil painting. It takes its name from Mars, the god of war and patron of iron.
Dyes
Good-quality black dyes were not known until the middle of the 14th century. The most common early dyes were made from bark, roots or fruits of different trees; usually walnuts, chestnuts, or certain oak trees. The blacks produced were often more gray, brown or bluish. The cloth had to be dyed several times to darken the color. One solution used by dyers was to add some iron filings, rich in iron oxide, to the dye, which gave a deeper black. Another was to first dye the fabric dark blue, and then to dye it black.
A much richer and deeper black dye was eventually found made from the oak apple or "gall-nut". The gall-nut is a small round tumor which grows on oak and other varieties of trees. They range in size from 2–5 cm, and are caused by chemicals injected by the larva of certain kinds of gall wasp in the family Cynipidae. The dye was very expensive; a great quantity of gall-nuts were needed for a very small amount of dye. The gall-nuts which made the best dye came from Poland, eastern Europe, the near east and North Africa. Beginning in about the 14th century, dye from gall-nuts was used for clothes of the kings and princes of Europe.
Another important source of natural black dyes from the 17th century onwards was the logwood tree, or Haematoxylum campechianum, which also produced reddish and bluish dyes. It is a species of flowering tree in the legume family, Fabaceae, that is native to southern Mexico and northern Central America. The modern nation of Belize grew from 17th century English logwood logging camps.
Since the mid-19th century, synthetic black dyes have largely replaced natural dyes. One of the important synthetic blacks is Nigrosin, a mixture of synthetic black dyes (CI 50415, Solvent black 5) made by heating a mixture of nitrobenzene, aniline and aniline hydrochloride in the presence of a copper or iron catalyst. Its main industrial uses are as a colorant for lacquers and varnishes and in marker-pen inks.
Inks
The first known inks were made by the Chinese, and date back to the 23rd century B.C. They used natural plant dyes and minerals such as graphite ground with water and applied with an ink brush. Early Chinese inks similar to the modern inkstick have been found dating to about 256 BC at the end of the Warring States period. They were produced from soot, usually produced by burning pine wood, mixed with animal glue. To make ink from an inkstick, the stick is continuously ground against an inkstone with a small quantity of water to produce a dark liquid which is then applied with an ink brush. Artists and calligraphists could vary the thickness of the resulting ink by reducing or increasing the intensity and time of ink grinding. These inks produced the delicate shading and subtle or dramatic effects of Chinese brush painting.
India ink (or "Indian ink" in British English) is a black ink once widely used for writing and printing and now more commonly used for drawing, especially when inking comic books and comic strips. The technique of making it probably came from China. India ink has been in use in India since at least the 4th century BC, where it was called masi. In India, the black color of the ink came from bone char, tar, pitch and other substances.
The ancient Romans had a black writing ink they called atramentum librarium. Its name came from the Latin word atrare, which meant to make something black. (This was the same root as the English word atrocious.) It was usually made, like India ink, from soot, although one variety, called atramentum elephantinum, was made by burning the ivory of elephants.
Gall-nuts were also used for making fine black writing ink. Iron gall ink (also known as iron gall nut ink or oak gall ink) was a purple-black or brown-black ink made from iron salts and tannic acids from gall nut. It was the standard writing and drawing ink in Europe, from about the 12th century to the 19th century, and remained in use well into the 20th century.
Astronomy
A black hole is a region of spacetime where gravity prevents anything, including light, from escaping. The theory of general relativity predicts that a sufficiently compact mass will deform spacetime to form a black hole. Around a black hole there is a mathematically defined boundary called an event horizon that marks the point of no return. It is called "black" because it absorbs all the light that hits the horizon, reflecting nothing, just like a perfect black body in thermodynamics. Black holes of stellar mass are expected to form when very massive stars collapse at the end of their life cycle. After a black hole has formed it can continue to grow by absorbing mass from its surroundings. By absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may form. There is general consensus that supermassive black holes exist in the centers of most galaxies. Although a black hole itself is black, infalling material forms an accretion disk, one of the brightest types of object in the universe.
Black-body radiation refers to the radiation emitted by an idealized body at a given temperature that absorbs all incoming energy (light), converting it to heat.
Black sky refers to the appearance of space as one emerges from Earth's atmosphere.
Why the night sky and space are black – Olbers' paradox
The fact that outer space is black is sometimes called Olbers' paradox. In theory, because the universe is full of stars, and is believed to be infinitely large, it would be expected that the light of an infinite number of stars would be enough to brilliantly light the whole universe all the time. However, the background color of outer space is black. This contradiction was first noted in 1823 by German astronomer Heinrich Wilhelm Matthias Olbers, who posed the question of why the night sky was black.
The current accepted answer is that, although the universe may be infinitely large, it is not infinitely old. It is thought to be about 13.8 billion years old, so we can only see objects as far away as the distance light can travel in 13.8 billion years. Light from stars farther away has not reached Earth, and cannot contribute to making the sky bright. Furthermore, as the universe is expanding, many stars are moving away from Earth. As they move, the wavelength of their light becomes longer, through the Doppler effect, and shifts toward red, or even becomes invisible. As a result of these two phenomena, there is not enough starlight to make space anything but black.
The daytime sky on Earth is blue because light from the Sun strikes molecules in Earth's atmosphere scattering light in all directions. Blue light is scattered more than other colors, and reaches the eye in greater quantities, making the daytime sky appear blue. This is known as Rayleigh scattering.
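The wavelength dependence is strong: Rayleigh scattering intensity varies as 1/λ^4. A quick calculation (taking roughly 450 nm for blue and 650 nm for red as representative, assumed wavelengths) shows why blue dominates the scattered light:

# Rayleigh scattering intensity scales as 1 / wavelength**4 for particles much smaller than the wavelength.
blue, red = 450e-9, 650e-9     # representative wavelengths in metres, chosen for illustration
ratio = (red / blue) ** 4
print(f"blue light is scattered about {ratio:.1f} times more strongly than red")   # about 4.4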
The nighttime sky on Earth is black because the part of Earth experiencing night is facing away from the Sun, the light of the Sun is blocked by Earth itself, and there is no other bright nighttime source of light in the vicinity. Thus, there is not enough light to undergo Rayleigh scattering and make the sky blue. On the Moon, on the other hand, because there is virtually no atmosphere to scatter the light, the sky is black both day and night. This also holds true for other locations without an atmosphere, such as Mercury.
Biology
Culture
In China, the color black is associated with water, one of the five fundamental elements believed to compose all things; and with winter, cold, and the direction north, usually symbolized by a black tortoise. It is also associated with disorder, including the positive disorder which leads to change and new life. When the first Emperor of China Qin Shi Huang seized power from the Zhou Dynasty, he changed the Imperial color from red to black, saying that black extinguished red. Only when the Han Dynasty appeared in 206 BC was red restored as the imperial color.
In Japan, black is associated with mystery, the night, the unknown, the supernatural, the invisible and death. Combined with white, it can symbolize intuition. In 10th and 11th century Japan, it was believed that wearing black could bring misfortune. It was worn at court by those who wanted to set themselves apart from the established powers or who had renounced material possessions.
In Japan black can also symbolize experience, as opposed to white, which symbolizes naiveté. The black belt in martial arts symbolizes experience, while a white belt is worn by novices. Japanese men traditionally wear a black kimono with some white decoration on their wedding day.
In Indonesia black is associated with depth, the subterranean world, demons, disaster, and the left hand. When black is combined with white, however, it symbolizes harmony and equilibrium.
Political movements
Anarchism
Anarchism is a political philosophy, most popular in the late 19th and early 20th centuries, which holds that governments and capitalism are harmful and undesirable. The symbol of anarchism was usually either a black flag or a black letter A. More recently it has usually been represented with a bisected red and black flag, to emphasise the movement's socialist roots in the First International. Anarchism was most popular in Spain, France, Italy, Ukraine and Argentina. There were also small but influential movements in the United States, Russia and many other countries around the world.
The Black Army was a collection of anarchist military units which fought for a stateless society in Ukraine in the Russian Civil War. While fighting against the reactionary White Army and alongside the Bolshevik Red Army at first, it was later defeated by the Communist forces. It was officially known as the Revolutionary Insurgent Army of Ukraine, and originally founded by the anarchist Nestor Makhno.
Fascism
The Blackshirts (Italian: camicie nere) were Fascist paramilitary groups in Italy during the period immediately following World War I and until the end of World War II. The Blackshirts were officially known as the Voluntary Militia for National Security (Milizia Volontaria per la Sicurezza Nazionale, or MVSN).
Inspired by the black uniforms of the Arditi, Italy's elite storm troops of World War I, the Fascist Blackshirts were organized by Benito Mussolini as the military tool of his political movement. They used violence and intimidation against Mussolini's opponents. The emblem of the Italian fascists was a black flag with fasces, an axe in a bundle of sticks, an ancient Roman symbol of authority. Mussolini came to power in 1922 through his March on Rome with the blackshirts.
Black was also adopted by Adolf Hitler and the Nazis in Germany. Red, white and black were the colors of the flag of the German Empire from 1870 to 1918. In Mein Kampf, Hitler explained that they were "revered colors expressive of our homage to the glorious past." Hitler also wrote that "the new flag ... should prove effective as a large poster" because "in hundreds of thousands of cases a really striking emblem may be the first cause of awakening interest in a movement." The black swastika was meant to symbolize the Aryan race, which, according to the Nazis, "was always anti-Semitic and will always be anti-Semitic." Several designs by a number of different authors were considered, but the one adopted in the end was Hitler's personal design. Black became the color of the uniform of the SS, the Schutzstaffel or "defense corps", the paramilitary wing of the Nazi Party, and was worn by SS officers from 1932 until the end of World War II.
The Nazis used a black triangle to symbolize anti-social elements. The symbol originates from Nazi concentration camps, where every prisoner had to wear one of the Nazi concentration camp badges on their jacket, the color of which categorized them according to "their kind". Many Black Triangle prisoners were either mentally disabled or mentally ill. The homeless were also included, as were alcoholics, the Romani people, the habitually "work-shy", prostitutes, draft dodgers and pacifists. More recently the black triangle has been adopted as a symbol in lesbian culture and by disabled activists.
Black shirts were also worn by the British Union of Fascists before World War II, and members of fascist movements in the Netherlands.
Patriotic resistance
The Lützow Free Corps, composed of volunteer German students and academics fighting against Napoleon in 1813, could not afford to make special uniforms and therefore adopted black, as the only color that could be used to dye their civilian clothing without the original color showing. In 1815 the students began to carry a red, black and gold flag, which they believed (incorrectly) had been the colors of the Holy Roman Empire (the imperial flag had actually been gold and black). In 1848, this banner became the flag of the German confederation. In 1866, Prussia unified Germany under its rule, and imposed the red, white and black of its own flag, which remained the colors of the German flag until the end of the Second World War. In 1949 the Federal Republic of Germany returned to the original flag and colors of the students and professors of 1815, which is the flag of Germany today.
Military
Black has been a traditional color of cavalry and armoured or mechanized troops. German armoured troops (Panzerwaffe) traditionally wore black uniforms, and even in others, a black beret is common. In Finland, black is the symbolic color for both armoured troops and combat engineers, and military units of these specialities have black flags and unit insignia.
The black beret and the color black is also a symbol of special forces in many countries. Soviet and Russian OMON special police and Russian naval infantry wear a black beret. A black beret is also worn by military police in the Canadian, Czech, Croatian, Portuguese, Spanish and Serbian armies.
The silver-on-black skull and crossbones symbol, or Totenkopf, and a black uniform were used by the Hussars and Black Brunswickers, the German Panzerwaffe, the Nazi Schutzstaffel, and the U.S. 400th Missile Squadron (with crossed missiles); the symbol continues in use with the Estonian Kuperjanov Battalion.
Religion
In Christian theology, black was the color of the universe before God created light. In many religious cultures, from Mesoamerica to Oceania to India and Japan, the world was created out of a primordial darkness. In the Bible the light of faith and Christianity is often contrasted with the darkness of ignorance and paganism.
In Christianity, the devil is often called the "prince of darkness". The term was used in John Milton's poem Paradise Lost, published in 1667, referring to Satan, who is viewed as the embodiment of evil. It is an English translation of the Latin phrase princeps tenebrarum, which occurs in the Acts of Pilate, written in the fourth century, in the 11th-century hymn Rhythmus de die mortis by Pietro Damiani, and in a sermon by Bernard of Clairvaux from the 12th century. The phrase also occurs in King Lear by William Shakespeare, Act III, Scene IV, l. 14:
"The prince of darkness is a gentleman."
Priests and pastors of the Roman Catholic, Eastern Orthodox and Protestant churches commonly wear black, as do monks of the Benedictine Order, who consider it the color of humility and penitence.
In Islam, black, along with green, plays an important symbolic role. It is the color of the Black Standard, the banner that is said to have been carried by the soldiers of Muhammad. It is also used as a symbol in Shi'a Islam (heralding the advent of the Mahdi), and the flag of followers of Islamism and Jihadism.
In Hinduism, the goddess Kali, goddess of time and change, is portrayed with black or dark blue skin, wearing a necklace adorned with severed heads and hands. Her name means "The black one". She destroys anger and passion according to Hindu mythology, and her devotees are supposed to abstain from meat and intoxication. Kali does not eat meat, but according to the injunction of the śāstras, those who are unable to give up meat-eating may sacrifice a goat (never a cow), a small animal, before the goddess Kali on the night of amāvāsyā (the new moon), and then eat it.
In Paganism, black represents dignity, force, stability, and protection. The color is often used to banish and release negative energies, or binding. An athame is a ceremonial blade often having a black handle, which is used in some forms of witchcraft.
Sports
The national rugby union team of New Zealand is called the All Blacks, in reference to their black outfits, and the color is also shared by other New Zealand national teams such as the Black Caps (cricket) and the Kiwis (rugby league).
Association football (soccer) referees traditionally wear all-black uniforms; however, other uniform colors are now also worn.
In auto racing, a black flag signals a driver to go into the pits.
In baseball, "the black" refers to the batter's eye, a blacked out area around the center-field bleachers, painted black to give hitters a decent background for pitched balls.
A large number of teams have uniforms designed with black colors even when the team does not normally feature that color. Many feel the color sometimes imparts a psychological advantage to its wearers. Black is used by numerous professional and collegiate sports teams.
Idioms and expressions
In general, people of African origin are called "Black", while people of European origin are called "White".
In the United States, "Black Friday" (the day after Thanksgiving Day, the fourth Thursday in November) is traditionally the busiest shopping day of the year. Many Americans are on holiday because of Thanksgiving, and many retailers open earlier and close later than normal, and offer special prices. The day's name originated in Philadelphia sometime before 1961, and originally was used to describe the heavy and disruptive downtown pedestrian and vehicle traffic which would occur on that day (Martin L. Apfelbaum, "Philadelphia's 'Black Friday'," American Philatelist, vol. 69, no. 4, p. 239, January 1966). Later an alternative explanation began to be offered: that "Black Friday" indicates the point in the year that retailers begin to turn a profit, or are "in the black", because of the large volume of sales on that day.
"In the black" means profitable. Accountants originally used black ink in ledgers to indicate profit, and red ink to indicate a loss.
Black Friday also refers to any particularly disastrous day on financial markets. The first Black Friday, September 24, 1869, was caused by the efforts of two speculators, Jay Gould and James Fisk, to corner the gold market on the New York Gold Exchange.
A blacklist is a list of undesirable persons or entities (to be placed on the list is to be "blacklisted").
Black comedy is a form of comedy dealing with morbid and serious topics. The expression is similar to black humor or black humour.
A black mark against a person relates to something bad they have done.
A black mood is a bad one (cf Winston Churchill's clinical depression, which he called "my black dog").
Black market is used to denote the trade of illegal goods, or alternatively the illegal trade of otherwise legal items at considerably higher prices, e.g. to evade rationing.
Black propaganda is the use of known falsehoods, partial truths, or masquerades in propaganda to confuse an opponent.
Blackmail is the act of threatening to do something that would harm someone, such as revealing sensitive information about them, unless the threatened party fulfills certain demands. Ordinarily, such a threat is illegal.
If the black eight-ball, in billiards, is sunk before all others are out of play, the player loses.
The black sheep of the family is the ne'er-do-well.
To blackball someone is to block their entry into a club or some such institution. In the traditional English gentlemen's club, members vote on the admission of a candidate by secretly placing a white or black ball in a hat. If upon the completion of voting, there was even one black ball amongst the white, the candidate would be denied membership, and he would never know who had "blackballed" him.
What is called black tea in Western culture is known as "crimson tea" in Chinese and culturally influenced languages (紅茶, Mandarin Chinese hóngchá; Japanese kōcha; Korean hongcha).
"The black" is a wildfire suppression term referring to a burned area on a wildfire capable of acting as a safety zone.
Black coffee refers to coffee without milk or cream.
Associations and symbolism
Mourning
In the West, black is commonly associated with mourning and bereavement, and usually worn at funerals and memorial services. In some traditional societies, for example in Greece and Italy, some widows wear black for the rest of their lives. In contrast, across much of Africa and parts of Asia like Vietnam, white is a color of mourning.
In Victorian England, the colors and fabrics of mourning were specified in an unofficial dress code: "non-reflective black paramatta and crape for the first year of deepest mourning, followed by nine months of dullish black silk, heavily trimmed with crape, and then three months when crape was discarded. Paramatta was a fabric of combined silk and wool or cotton; crape was a harsh black silk fabric with a crimped appearance produced by heat. Widows were allowed to change into the colors of half-mourning, such as gray and lavender, black and white, for the final six months."
A "black day" (or week or month) usually refers to tragic date. The Romans marked fasti days with white stones and nefasti days with black. The term is often used to remember massacres. Black months include the Black September in Jordan, when large numbers of Palestinians were killed, and Black July in Sri Lanka, the killing of members of the Tamil population by the Sinhalese government.
In the financial world, the term often refers to a dramatic drop in the stock market. For example, the Wall Street Crash of 1929, the stock market crash on October 29, 1929, which marked the start of the Great Depression, is nicknamed Black Tuesday, and was preceded by Black Thursday, a downturn on October 24 the previous week.
Darkness and evil
In western popular culture, black has long been associated with evil and darkness. It is the traditional color of witchcraft and black magic.
In the Book of Revelation, the last book in the New Testament of the Bible, the Four Horsemen of the Apocalypse are supposed to announce the Apocalypse before the Last Judgment. The horseman representing famine rides a black horse. The vampire of literature and films, such as Count Dracula of the Bram Stoker novel, dressed in black, and could only move at night. The Wicked Witch of the West in the 1939 film The Wizard of Oz became the archetype of witches for generations of children. Whereas witches and sorcerers inspired real fear in the 17th century, in the 21st century children and adults dress as witches for Halloween parties and parades.
Power, authority and solemnity
Black is frequently used as a color of power, law and authority. In many countries judges and magistrates wear black robes. That custom began in Europe in the 13th and 14th centuries. Jurists, magistrates and certain other court officials in France began to wear long black robes during the reign of Philip IV of France (1285–1314), and in England from the time of Edward I (1272–1307). The custom spread to the cities of Italy at about the same time, between 1300 and 1320. The robes of judges resembled those worn by the clergy, and represented the law and authority of the King, while those of the clergy represented the law of God and authority of the church.
Until the 20th century most police uniforms were black; they were then largely replaced by a less menacing blue in France, the U.S. and other countries. In the United States, police cars are frequently black and white. The riot control units of the Basque Autonomous Police in Spain are known as beltzak ("blacks") after their uniform.
Black today is the most common color for limousines and the official cars of government officials.
Black formal attire is still worn at many solemn occasions or ceremonies, from graduations to formal balls. Graduation gowns are copied from the gowns worn by university professors in the Middle Ages, which in turn were copied from the robes worn by judges and priests, who often taught at the early universities. The mortarboard hat worn by graduates is adapted from a square cap called a biretta worn by Medieval professors and clerics.
Functionality
In the 19th and 20th centuries, many machines and devices, large and small, were painted black, to stress their functionality. These included telephones, sewing machines, steamships, railroad locomotives, and automobiles. The Ford Model T, the first mass-produced car, was available only in black from 1914 to 1926. Among means of transportation, only airplanes were almost never painted black.
Black house paint is becoming more popular, with Sherwin-Williams reporting that its color Tricorn Black was the 6th most popular exterior house paint color in Canada and the 12th most popular in the United States in 2018.
Ethnography
The term "black" is often used in the West to describe people whose skin is darker. In the United States, it is particularly used to describe African Americans. The terms for African Americans have changed over the years, as shown by the categories in the United States Census, taken every ten years.
In the first U.S. Census, taken in 1790, just four categories were used: Free White males, Free White females, other free persons, and slaves.
In the 1820 census the new category "colored" was added.
In the 1850 census, slaves were listed by owner, and a B indicated black, while an M indicated "mulatto".
In the 1890 census, the categories for race were white, black, mulatto, quadroon (a person one-quarter black); octoroon (a person one-eighth black), Chinese, Japanese, or American Indian.
In the 1930 census, anyone with any black blood was supposed to be listed as "Negro".
In the 1970 census, the category "Negro or black" was used for the first time.
In the 2000 and 2012 census, the category "Black or African-American" was used, defined as "a person having their origin in any of the racial groups in Africa." In the 2012 Census 12.1 percent of Americans identified themselves as Black or African-American.
Black is also commonly used as a racial description in the United Kingdom, since ethnicity was first measured in the 2001 census. The 2011 British census asked residents to describe themselves, and categories offered included Black, African, Caribbean, or Black British. Other possible categories were African British, African Scottish, Caribbean British and Caribbean Scottish. Of the total UK population in 2001, 1.0 percent identified themselves as Black Caribbean, 0.8 percent as Black African, and 0.2 percent as Black (others).
In Canada, census respondents can identify themselves as Black. In the 2006 census, 2.5 percent of the population identified themselves as black.
In Australia, the term black is not used in the census. In the 2006 census, 2.3 percent of Australians identified themselves as Aboriginal and/or Torres Strait Islanders.
In Brazil, the Brazilian Institute of Geography and Statistics (IBGE) asks people to identify themselves as branco (white), pardo (brown), preto (black), or amarelo (yellow). In 2008 6.8 percent of the population identified themselves as "preto".
Opposite of white
Black and white have often been used to describe opposites; particularly light and darkness and good and evil. In Medieval literature, the white knight usually represented virtue, the black knight something mysterious and sinister. In American westerns, the hero often wore a white hat, the villain a black hat.
In the original game of chess invented in Persia or India, the colors of the two sides were varied; a 12th-century Iranian chess set in the New York Metropolitan Museum of Art has red and green pieces. But when the game was imported into Europe, the colors, corresponding to European culture, usually became black and white.
Studies have shown that something printed in black letters on white has more authority with readers than any other color of printing.
In philosophy and arguments, the issue is often described as black-and-white, meaning that the issue at hand is dichotomized (having two clear, opposing sides with no middle ground).
Conspiracy
Black is commonly associated with secrecy.
The Black Chamber was a term given to an office which secretly opened and read diplomatic mail and broke codes. Queen Elizabeth I had such an office, headed by her Secretary, Sir Francis Walsingham, which successfully broke the Spanish codes and broke up several plots against the Queen. In France a cabinet noir was established inside the French post office by Louis XIII to open diplomatic mail. It was closed during the French Revolution but re-opened under Napoleon I. The Habsburg Empire and Dutch Republic had similar black chambers.
The United States created a secret peacetime Black Chamber, called the Cipher Bureau, in 1919. It was funded by the State Department and Army and disguised as a commercial company in New York. It successfully broke a number of diplomatic codes, including the code of the Japanese government. It was closed down in 1929 after the State Department withdrew funding, when the new Secretary of State, Henry Stimson, stated that "Gentlemen do not read each other's mail." The Cipher Bureau was the ancestor of the U.S. National Security Agency.
A black project is a secret, unacknowledged military project, such as the Enigma decryption effort during World War II, or a secret counter-narcotics or police sting operation.
Black ops are covert operations carried out by a government, government agency or military.
A black budget is a government budget that is allocated for classified or other secret operations of a nation. The black budget is an account of expenses and spending related to military research and covert operations. The black budget is mostly classified for security reasons.
Elegant fashion
Black is the color most commonly associated with elegance in Europe and the United States, followed by silver, gold, and white.
Black first became a fashionable color for men in Europe in the 17th century, in the courts of Italy and Spain. (See history above.) In the 19th century, it was the fashion for men both in business and for evening wear, in the form of a black coat whose tails came down to the knees. In the evening it was the custom of the men to leave the women after dinner to go to a special smoking room to enjoy cigars or cigarettes. This meant that their tailcoats eventually smelled of tobacco. According to legend, in 1865 Edward VII, then the Prince of Wales, had his tailor make a special short smoking jacket. The smoking jacket then evolved into the dinner jacket. Again according to legend, the first Americans to wear the jacket were members of the Tuxedo Club in New York State. Thereafter the jacket became known as a tuxedo in the U.S. The term "smoking" is still used today in Russia and other countries.
The tuxedo was always black until the 1930s, when the Duke of Windsor began to wear a tuxedo that was a very dark midnight blue. He did so because a black tuxedo looked greenish in artificial light, while a dark blue tuxedo looked blacker than black itself.
For women's fashion, the defining moment was the invention of the simple black dress by Coco Chanel in 1926. (See history.) Thereafter, a long black gown was used for formal occasions, while the simple black dress could be used for everything else. The designer Karl Lagerfeld, explaining why black was so popular, said: "Black is the color that goes with everything. If you're wearing black, you're on sure ground." Skirts have gone up and down and fashions have changed, but the black dress has not lost its position as the essential element of a woman's wardrobe. The fashion designer Christian Dior said, "elegance is a combination of distinction, naturalness, care and simplicity," and black exemplified elegance.
The expression "X is the new black" is a reference to the latest trend or fad that is considered a wardrobe basic for the duration of the trend, on the basis that black is always fashionable. The phrase has taken on a life of its own and has become a cliché.
Many performers of both popular and European classical music, including French singers Edith Piaf and Juliette Gréco, and violinist Joshua Bell have traditionally worn black on stage during performances. A black costume was usually chosen as part of their image or stage persona, or because it did not distract from the music, or sometimes for a political reason. Country-western singer Johnny Cash always wore black on stage. In 1971, Cash wrote the song "Man in Black" to explain why he dressed in that color: "We're doing mighty fine I do suppose / In our streak of lightning cars and fancy clothes / But just so we're reminded of the ones who are held back / Up front there ought to be a man in black."
See also
Black Rose (disambiguation)
Lists of colors
Rich black, which is different from using black ink alone, in printing.
Shades of black
References
Notes and citations
Bibliography
|
https://en.wikipedia.org/wiki/BQP
|
In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP.
A decision problem is a member of BQP if there exists a quantum algorithm (an algorithm that runs on a quantum computer) that solves the decision problem with high probability and is guaranteed to run in polynomial time. A run of the algorithm will correctly solve the decision problem with a probability of at least 2/3.
Definition
BQP can be viewed as the languages associated with certain bounded-error uniform families of quantum circuits. A language L is in BQP if and only if there exists a polynomial-time uniform family of quantum circuits {Q_n : n ∈ ℕ}, such that
For all n, Q_n takes n qubits as input and outputs 1 bit
For all x in L, Pr(Q_{|x|}(x) = 1) ≥ 2/3
For all x not in L, Pr(Q_{|x|}(x) = 0) ≥ 2/3
Alternatively, one can define BQP in terms of quantum Turing machines. A language L is in BQP if and only if there exists a polynomial quantum Turing machine that accepts L with an error probability of at most 1/3 for all instances.
Similarly to other "bounded error" probabilistic classes, the choice of 1/3 in the definition is arbitrary. We can run the algorithm a constant number of times and take a majority vote to achieve any desired probability of correctness less than 1, using the Chernoff bound. The complexity class is unchanged by allowing error as high as 1/2 − n^(−c) on the one hand, or requiring error as small as 2^(−n^c) on the other hand, where c is any positive constant, and n is the length of input.
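As a concrete illustration (a sketch, not part of the formal definition), the following Python snippet evaluates the exact probability that a majority vote over k independent runs is wrong when each run errs with probability 1/3; the Chernoff bound guarantees this tail shrinks exponentially in k:

from math import comb

def majority_error(k, p_err=1/3):
    # Probability that more than half of k independent runs are wrong,
    # when each run errs with probability p_err (use odd k to avoid ties).
    return sum(comb(k, i) * p_err ** i * (1 - p_err) ** (k - i)
               for i in range(k // 2 + 1, k + 1))

for k in (1, 11, 51, 101):
    print(k, majority_error(k))   # the error probability shrinks exponentially as k grows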
A complete problem for Promise-BQP
Similar to the notion of NP-completeness and other complete problems, we can define a complete problem as a problem that is in Promise-BQP and such that every problem in Promise-BQP reduces to it in polynomial time.
Here is an intuitive problem that is complete for efficient quantum computation, which stems directly from the definition of Promise-BQP. Note that for technical reasons, completeness proofs focus on the promise problem version of BQP. We show that the problem below is complete for the Promise-BQP complexity class (and not for the total BQP complexity class having a trivial promise, for which no complete problems are known).
APPROX-QCIRCUIT-PROB problem
Given a description of a quantum circuit C acting on n qubits with m gates, where m is a polynomial in n and each gate acts on one or two qubits, and given two numbers α, β ∈ [0, 1] with α > β, distinguish between the following two cases:
measuring the first qubit of the state C|0...0⟩ yields 1 with probability at least α
measuring the first qubit of the state C|0...0⟩ yields 1 with probability at most β
Here, there is a promise on the inputs as the problem does not specify the behavior if an instance is not covered by these two cases.
Claim. Any BQP problem reduces to APPROX-QCIRCUIT-PROB.
Proof.
Suppose we have an algorithm A that solves APPROX-QCIRCUIT-PROB, i.e., given a quantum circuit C acting on n qubits and two numbers α, β, it distinguishes between the above two cases. We can solve any problem in BQP with this oracle by setting α = 2/3 and β = 1/3.
For any language L in BQP, there exists a family of quantum circuits {Q_n} such that for every n and every n-qubit input |x⟩: if x is in L, then measuring the output of Q_n on |x⟩ yields 1 with probability at least 2/3; and if x is not in L, it yields 0 with probability at least 2/3. Fix an input x of n qubits and the corresponding quantum circuit Q_n. We can first construct a circuit C_x such that C_x|0...0⟩ = |x⟩. This can be done easily by hardwiring x and applying a NOT gate to each qubit that should be 1. Then we can combine the two circuits to get C' = Q_n C_x, and now C'|0...0⟩ = Q_n|x⟩. Finally, the result of Q_n is obtained by measuring several qubits and applying some (classical) logic gates to them. We can always defer the measurements and reroute the circuit so that by measuring the first qubit of C', we get the output. This will be our circuit C, and we decide the membership of x in L by running A(C) with α = 2/3 and β = 1/3. By definition of BQP, we will either fall into the first case (acceptance) or the second case (rejection), so L reduces to APPROX-QCIRCUIT-PROB.
APPROX-QCIRCUIT-PROB comes in handy when we try to prove the relationships between some well-known complexity classes and BQP.
Relationship to other complexity classes
BQP is defined for quantum computers; the corresponding complexity class for classical computers (or more formally for probabilistic Turing machines) is BPP. Just like P and BPP, BQP is low for itself, which means BQP^BQP = BQP. Informally, this is true because polynomial time algorithms are closed under composition. If a polynomial time algorithm calls polynomial time algorithms as subroutines, the resulting algorithm is still polynomial time.
BQP contains P and BPP and is contained in AWPP, PP and PSPACE.
In fact, BQP is low for PP, meaning that a PP machine achieves no benefit from being able to solve BQP problems instantly, an indication of the possible difference in power between these similar classes. The known relationships with classical complexity classes are:
P ⊆ BPP ⊆ BQP ⊆ AWPP ⊆ PP ⊆ PSPACE ⊆ EXP
As the problem of P ≟ PSPACE has not yet been solved, the proof of inequality between BQP and the classes mentioned above is expected to be difficult. The relation between BQP and NP is not known. In May 2018, computer scientists Ran Raz of Princeton University and Avishay Tal of Stanford University published a paper which showed that, relative to an oracle, BQP was not contained in PH. It can be proven that there exists an oracle A such that BQP^A ⊄ PH^A. In an extremely informal sense, this can be thought of as giving PH and BQP an identical, but additional, capability and verifying that BQP with the oracle (BQP^A) can do things PH^A cannot. While an oracle separation has been proven, the fact that BQP is not contained in PH has not been proven. An oracle separation does not prove whether or not complexity classes are the same. The oracle separation gives intuition that BQP may not be contained in PH.
It has been suspected for many years that Fourier Sampling is a problem that exists within BQP, but not within the polynomial hierarchy. Recent conjectures have provided evidence that a similar problem, Fourier Checking, also exists in the class BQP without being contained in the polynomial hierarchy. This conjecture is especially notable because it suggests that problems existing in BQP could be classified as harder than NP-Complete problems. Paired with the fact that many practical BQP problems are suspected to exist outside of P (it is suspected and not verified because there is no proof that P ≠ NP), this illustrates the potential power of quantum computing in relation to classical computing.
Adding postselection to BQP results in the complexity class PostBQP which is equal to PP.
We will prove or discuss some of these results below.
BQP and EXP
We begin with an easier containment. To show that BQP ⊆ EXP, it suffices to show that APPROX-QCIRCUIT-PROB is in EXP since APPROX-QCIRCUIT-PROB is BQP-complete.
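The standard argument is brute-force simulation: starting from |0...0⟩, expand each gate into a 2^n × 2^n matrix, multiply it into the state vector, then read off the probability that the first qubit is 1 and compare it with α and β. A minimal sketch of such a simulation (in Python with NumPy; for brevity the gates are assumed to act on adjacent qubits, and qubit 0 is taken as the most significant bit of the basis-state index):

import numpy as np

def expand(gate, first_target, n):
    # Embed a 1- or 2-qubit gate acting on qubits first_target (and first_target + 1)
    # into the full 2^n-dimensional space via Kronecker products with identities.
    k = int(round(np.log2(gate.shape[0])))
    return np.kron(np.kron(np.eye(2 ** first_target), gate),
                   np.eye(2 ** (n - first_target - k)))

def simulate(n, gates):
    # gates: a list of (matrix, first_target) pairs, applied in order to |0...0>.
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    for gate, t in gates:
        state = expand(gate, t, n) @ state
    probs = np.abs(state) ** 2
    # qubit 0 is the most significant bit, so the second row of this reshape collects |1...> states
    return float(probs.reshape(2, -1)[1].sum())

# toy check: H on qubit 0 then CNOT on qubits (0, 1) prepares a Bell state,
# so the first qubit is measured as 1 with probability 1/2
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
print(simulate(2, [(H, 0), (CNOT, 0)]))   # prints approximately 0.5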
Note that this algorithm also requires 2^O(n) space to store the vectors and the matrices. We will show in the following section that we can improve upon the space complexity.
BQP and PSPACE
To prove BQP ⊆ PSPACE, we first introduce a technique called the sum of histories.
Sum of Histories
Sum of histories is a technique introduced by physicist Richard Feynman for path integral formulation. We apply this technique to quantum computing to solve APPROX-QCIRCUIT-PROB.
Consider a quantum circuit C, which consists of m gates, g_1, g_2, ..., g_m, where each g_i comes from a universal gate set and acts on at most two qubits.
To understand what the sum of histories is, we visualize the evolution of a quantum state given a quantum circuit as a tree. The root is the input |0...0⟩, and each node in the tree has 2^n children, each representing a computational basis state of the n qubits. The weight on a tree edge from a node in the j-th level representing a basis state |x⟩ to a node in the (j+1)-th level representing a basis state |y⟩ is ⟨y|g_{j+1}|x⟩, the amplitude of |y⟩ after applying g_{j+1} to |x⟩. The transition amplitude of a root-to-leaf path is the product of all the weights on the edges along the path. To get the probability of the final state being |ψ⟩, we sum up the amplitudes of all root-to-leaf paths that end at a node representing |ψ⟩.
More formally, for the quantum circuit C, its sum over histories tree is a tree of depth m, with one level for each gate in addition to the root, and with branching factor 2^n.
Notice that in the sum over histories algorithm to compute some amplitude ⟨x|C|0...0⟩, only one history is stored at any point in the computation. Hence, the sum over histories algorithm uses O(nm) space to compute ⟨x|C|0...0⟩ for any x, since O(nm) bits are needed to store the current history in addition to some workspace variables.
Therefore, in polynomial space, we may compute the sum of |⟨x|C|0...0⟩|^2 over all x with the first qubit being 1, which is the probability that the first qubit is measured to be 1 by the end of the circuit.
Notice that compared with the simulation given for the proof that BQP ⊆ EXP, our algorithm here takes far less space but far more time instead. In fact it takes on the order of (2^n)^m time to calculate a single amplitude, since the recursion branches over all 2^n basis states at each of the m levels!
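The following is a minimal sketch of the sum-over-histories computation (in Python; the helper names are of this sketch's own choosing, gates are given as small matrices together with the qubits they act on, and qubit 0 is again the most significant bit). The amplitude ⟨y|g_m...g_1|0...0⟩ is computed recursively by summing over the basis state one level earlier, so only the current partial history is ever held in memory:

from itertools import product
import numpy as np

def gate_entry(gate, targets, x_out, x_in):
    # <x_out| g |x_in> for a gate acting only on the listed target qubits:
    # zero unless the two basis states agree on every non-target qubit.
    for q in range(len(x_in)):
        if q not in targets and x_out[q] != x_in[q]:
            return 0.0
    row = int("".join(str(x_out[q]) for q in targets), 2)
    col = int("".join(str(x_in[q]) for q in targets), 2)
    return gate[row][col]

def amplitude(gates, n, y, depth=None):
    # <y| g_m ... g_1 |0...0>, summing over the basis state one level earlier.
    # Space: one partial history per recursion level. Time: about (2^n)^m base cases.
    if depth is None:
        depth = len(gates)
    if depth == 0:
        return 1.0 if all(b == 0 for b in y) else 0.0
    gate, targets = gates[depth - 1]
    return sum(gate_entry(gate, targets, y, x) * amplitude(gates, n, x, depth - 1)
               for x in product((0, 1), repeat=n))

# toy check: H on qubit 0 then CNOT on qubits (0, 1); Pr[first qubit = 1] should be 1/2
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
circuit = [(H, (0,)), (CNOT, (0, 1))]
print(sum(abs(amplitude(circuit, 2, y)) ** 2 for y in [(1, 0), (1, 1)]))   # prints approximately 0.5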
BQP and PP
A similar sum-over-histories argument can be used to show that BQP ⊆ PP.
P and BQP
We know that P ⊆ BQP, since every classical circuit can be simulated by a quantum circuit.
It is conjectured that BQP contains hard problems that lie outside of P, specifically problems in NP that are not in P. The claim is indefinite because we do not know whether P = NP, so we do not know whether those problems are actually outside P. Below are some problems that are in BQP but are suspected to lie outside P:
Integer factorization (see Shor's algorithm; a sketch of its classical part follows this list)
Discrete logarithm
Simulation of quantum systems (see universal quantum simulator)
Approximating the Jones polynomial at certain roots of unity
Harrow-Hassidim-Lloyd (HHL) algorithm
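For the integer factorization entry above, the quantum speed-up in Shor's algorithm comes entirely from finding the multiplicative order r of a randomly chosen a modulo N; the surrounding bookkeeping is classical and elementary. A minimal sketch of that classical part (in Python; the brute-force order() helper below merely stands in for the quantum order-finding subroutine, and the function names are of this sketch's own choosing):

from math import gcd

def order(a, N):
    # Multiplicative order of a modulo N, found here by brute force;
    # in Shor's algorithm this is the step handled by the quantum computer.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def split(N, a):
    # Classical post-processing: turn the order of a into a nontrivial factor of N, if possible.
    g = gcd(a, N)
    if g != 1:
        return g                      # lucky: a already shares a factor with N
    r = order(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                   # unlucky choice of a: try another random base
    return gcd(pow(a, r // 2, N) - 1, N)

print(split(15, 7))   # the order of 7 mod 15 is 4, and this prints the factor 3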
See also
Hidden subgroup problem
Polynomial hierarchy (PH)
Quantum complexity theory
QMA, the quantum equivalent to NP.
QIP, the quantum equivalent to IP.
References
External links
Complexity Zoo link to BQP
|
https://en.wikipedia.org/wiki/Brainfuck
|
Brainfuck is an esoteric programming language created in 1993 by Urban Müller.
Notable for its extreme minimalism, the language consists of only eight simple commands, a data pointer and an instruction pointer. While it is fully Turing complete, it is not intended for practical use, but to challenge and amuse programmers. Brainfuck requires one to break computations down into microscopic steps.
The language's name is a reference to the slang term brainfuck, which refers to things so complicated or unusual that they exceed the limits of one's understanding, as it was not meant or made for designing actual software but to challenge the boundaries of computer programming.
History
Müller designed Brainfuck with the goal of implementing the smallest possible compiler, inspired by the 1024-byte compiler for the FALSE programming language. Müller's original compiler was implemented in machine language and compiled to a binary with a size of 296 bytes. He uploaded the first Brainfuck compiler to Aminet in 1993. The program came with a "Readme" file, which briefly described the language, and challenged the reader "Who can program anything useful with it? :)". Müller also included an interpreter and some examples. A second version of the compiler used only 240 bytes.
P′′
Except for its two I/O commands, Brainfuck is a minor variation of the formal programming language P′′ created by Corrado Böhm in 1964, which is explicitly based on the Turing machine. In fact, using six symbols equivalent to the respective Brainfuck commands +, -, <, >, [, ], Böhm provided an explicit program for each of the basic functions that together serve to compute any computable function. So the first "Brainfuck" programs appear in Böhm's 1964 paper – and they were sufficient to prove Turing completeness.
Language design
The language consists of eight commands. A brainfuck program is a sequence of these commands, possibly interspersed with other characters (which are ignored). The commands are executed sequentially, with some exceptions: an instruction pointer begins at the first command, and each command it points to is executed, after which it normally moves forward to the next command. The program terminates when the instruction pointer moves past the last command.
The brainfuck language uses a simple machine model consisting of the program and instruction pointer, as well as a one-dimensional array of at least 30,000 byte cells initialized to zero; a movable data pointer (initialized to point to the leftmost byte of the array); and two streams of bytes for input and output (most often connected to a keyboard and a monitor respectively, and using the ASCII character encoding).
The eight language commands each consist of a single character:
> Increment the data pointer by one (to point to the next cell to the right).
< Decrement the data pointer by one (to point to the previous cell to the left).
+ Increment the byte at the data pointer by one.
- Decrement the byte at the data pointer by one.
. Output the byte at the data pointer.
, Accept one byte of input, storing its value in the byte at the data pointer.
[ If the byte at the data pointer is zero, jump the instruction pointer forward to the command after the matching ] command.
] If the byte at the data pointer is nonzero, jump the instruction pointer back to the command after the matching [ command.
[ and ] match as parentheses usually do: each [ matches exactly one ] and vice versa, the [ comes first, and there can be no unmatched [ or ] between the two.
As the name suggests, Brainfuck programs tend to be difficult to comprehend. This is partly because any mildly complex task requires a long sequence of commands and partly because the program's text gives no direct indications of the program's state. These, as well as Brainfuck's inefficiency and its limited input/output capabilities, are some of the reasons it is not used for serious programming. Nonetheless, like any Turing complete language, Brainfuck is theoretically capable of computing any computable function or simulating any other computational model, if given access to an unlimited amount of memory. A variety of Brainfuck programs have been written. Although Brainfuck programs, especially complicated ones, are difficult to write, it is quite trivial to write an interpreter for Brainfuck in a more typical language such as C due to its simplicity. There even exist Brainfuck interpreters written in the Brainfuck language itself.
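As a rough illustration of how small such an interpreter can be, the following Python sketch implements the eight commands; the 30,000-cell tape, wrap-around byte cells, and leaving the current cell unchanged on end-of-file are implementation choices made here, since the language leaves them open:
import sys

def brainfuck(code, tape_len=30000):
    code = [c for c in code if c in "><+-.,[]"]      # all other characters are ignored
    jumps, stack = {}, []                            # pre-compute matching bracket positions
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape = [0] * tape_len                            # byte cells, initialised to zero
    dp = ip = 0                                      # data pointer and instruction pointer
    while ip < len(code):
        c = code[ip]
        if c == ">":
            dp += 1
        elif c == "<":
            dp -= 1
        elif c == "+":
            tape[dp] = (tape[dp] + 1) % 256          # cells wrap around at 256 (a common choice)
        elif c == "-":
            tape[dp] = (tape[dp] - 1) % 256
        elif c == ".":
            sys.stdout.write(chr(tape[dp]))
        elif c == ",":
            ch = sys.stdin.read(1)
            if ch:                                   # on end-of-file, leave the cell unchanged
                tape[dp] = ord(ch)
        elif c == "[" and tape[dp] == 0:
            ip = jumps[ip]                           # jump past the matching ]
        elif c == "]" and tape[dp] != 0:
            ip = jumps[ip]                           # jump back to the matching [
        ip += 1
Feeding it the condensed "Hello World!" program quoted later in this article prints the expected text:
brainfuck("++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.")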
Brainfuck is an example of a so-called Turing tarpit: It can be used to write any program, but it is not practical to do so, because Brainfuck provides so little abstraction that the programs get very long or complicated.
Examples
Adding two values
As a first, simple example, the following code snippet will add the current cell's value to the next cell: Each time the loop is executed, the current cell is decremented, the data pointer moves to the right, that next cell is incremented, and the data pointer moves left again. This sequence is repeated until the starting cell is 0.
[->+<]
This can be incorporated into a simple addition program as follows:
++ Cell c0 = 2
> +++++ Cell c1 = 5
[ Start your loops with your cell pointer on the loop counter (c1 in our case)
< + Add 1 to c0
> - Subtract 1 from c1
] End your loops with the cell pointer on the loop counter
At this point our program has added 5 to 2 leaving 7 in c0 and 0 in c1
but we cannot output this value to the terminal since it is not ASCII encoded
To display the ASCII character "7" we must add 48 to the value 7
We use a loop to compute 48 = 6 * 8
++++ ++++ c1 = 8 and this will be our loop counter again
[
< +++ +++ Add 6 to c0
> - Subtract 1 from c1
]
< . Print out c0 which has the value 55 which translates to "7"!
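In more conventional notation, the program above corresponds to roughly the following Python sketch, where cells and p are illustrative names for the tape and the data pointer:
cells = [0] * 30000
p = 0
cells[p] = 2              # ++                c0 = 2
p += 1
cells[p] = 5              # > +++++           c1 = 5
while cells[p] != 0:      # [ < + > - ]       move c1 into c0
    p -= 1
    cells[p] += 1
    p += 1
    cells[p] -= 1
cells[p] = 8              # ++++ ++++         loop counter for 6 * 8 = 48
while cells[p] != 0:      # [ < +++ +++ > - ]
    p -= 1
    cells[p] += 6
    p += 1
    cells[p] -= 1
p -= 1
print(chr(cells[p]))      # < .               c0 is 55, which prints as "7"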
Hello World!
The following program prints "Hello World!" and a newline to the screen:
[ This program prints "Hello World!" and a newline to the screen, its
length is 106 active command characters. [It is not the shortest.]
This loop is an "initial comment loop", a simple way of adding a comment
to a BF program such that you don't have to worry about any command
characters. Any ".", ",", "+", "-", "<" and ">" characters are simply
ignored, the "[" and "]" characters just have to be balanced. This
loop and the commands it contains are ignored because the current cell
defaults to a value of 0; the 0 value causes this loop to be skipped.
]
++++++++ Set Cell #0 to 8
[
>++++ Add 4 to Cell #1; this will always set Cell #1 to 4
[ as the cell will be cleared by the loop
>++ Add 2 to Cell #2
>+++ Add 3 to Cell #3
>+++ Add 3 to Cell #4
>+ Add 1 to Cell #5
<<<<- Decrement the loop counter in Cell #1
] Loop until Cell #1 is zero; number of iterations is 4
>+ Add 1 to Cell #2
>+ Add 1 to Cell #3
>- Subtract 1 from Cell #4
>>+ Add 1 to Cell #6
[<] Move back to the first zero cell you find; this will
be Cell #1 which was cleared by the previous loop
<- Decrement the loop Counter in Cell #0
] Loop until Cell #0 is zero; number of iterations is 8
The result of this is:
Cell no : 0 1 2 3 4 5 6
Contents: 0 0 72 104 88 32 8
Pointer : ^
>>. Cell #2 has value 72 which is 'H'
>---. Subtract 3 from Cell #3 to get 101 which is 'e'
+++++++..+++. Likewise for 'llo' from Cell #3
>>. Cell #5 is 32 for the space
<-. Subtract 1 from Cell #4 for 87 to give a 'W'
<. Cell #3 was set to 'o' from the end of 'Hello'
+++.------.--------. Cell #3 for 'rl' and 'd'
>>+. Add 1 to Cell #5 gives us an exclamation point
>++. And finally a newline from Cell #6
For "readability", this code has been spread across many lines, and blanks and comments have been added. Brainfuck ignores all characters except the eight commands +-<>[],. so no special syntax for comments is needed (as long as the comments do not contain the command characters). The code could just as well have been written as:
++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.
Another example of a code golfed version that prints Hello, World!:
+[-->-[>>+>-----<<]<--<---]>-.>>>+.>>..+++[.>]<<<<.+++.------.<<-.>>>>+.
ROT13
This program enciphers its input with the ROT13 cipher. To do this, it must map characters A-M (ASCII 65–77) to N-Z (78-90), and vice versa. Also it must map a-m (97-109) to n-z (110-122) and vice versa. It must map all other characters to themselves; it reads characters one at a time and outputs their enciphered equivalents until it reads an EOF (here assumed to be represented as either -1 or "no change"), at which point the program terminates.
-,+[ Read first character and start outer character reading loop
-[ Skip forward if character is 0
>>++++[>++++++++<-] Set up divisor (32) for division loop
(MEMORY LAYOUT: dividend copy remainder divisor quotient zero zero)
<+<-[ Set up dividend (x minus 1) and enter division loop
>+>+>-[>>>] Increase copy and remainder / reduce divisor / Normal case: skip forward
<[[>+<-]>>+>] Special case: move remainder back to divisor and increase quotient
<<<<<- Decrement dividend
] End division loop
]>>>[-]+ End skip loop; zero former divisor and reuse space for a flag
>--[-[<->+++[-]]]<[ Zero that flag unless quotient was 2 or 3; zero quotient; check flag
++++++++++++<[ If flag then set up divisor (13) for second division loop
(MEMORY LAYOUT: zero copy dividend divisor remainder quotient zero zero)
>-[>+>>] Reduce divisor; Normal case: increase remainder
>[+[<+>-]>+>>] Special case: increase remainder / move it back to divisor / increase quotient
<<<<<- Decrease dividend
] End division loop
>>[<+>-] Add remainder back to divisor to get a useful 13
>[ Skip forward if quotient was 0
-[ Decrement quotient and skip forward if quotient was 1
-<<[-]>> Zero quotient and divisor if quotient was 2
]<<[<<->>-]>> Zero divisor and subtract 13 from copy if quotient was 1
]<<[<<+>>-] Zero divisor and add 13 to copy if quotient was 0
] End outer skip loop (jump to here if ((character minus 1)/32) was not 2 or 3)
<[-] Clear remainder from first division if second division was skipped
<.[-] Output ROT13ed character from copy and clear it
<-,+ Read next character
] End character reading loop
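For comparison, the character mapping that the program implements can be sketched in a few lines of Python; this mirrors the cipher itself rather than the division-based technique used by the Brainfuck code:
def rot13(text):
    out = []
    for ch in text:
        if "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        elif "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        else:
            out.append(ch)                 # all other characters map to themselves
    return "".join(out)

print(rot13("Hello, World!"))              # prints "Uryyb, Jbeyq!"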
See also
JSFuck – an esoteric JavaScript programming language with a very limited set of characters
Notes
References
Non-English-based programming languages
Esoteric programming languages
Programming languages created in 1993
|
https://en.wikipedia.org/wiki/Bioleaching
|
Bioleaching is the extraction or liberation of metals from their ores through the use of living organisms. Bioleaching is one of several applications within biohydrometallurgy and several methods are used to treat ores or concentrates containing copper, zinc, lead, arsenic, antimony, nickel, molybdenum, gold, silver, and cobalt.
Bioleaching falls into two broad categories. The first is the use of microorganisms to oxidize refractory minerals to release valuable metals such as gold and silver. Most commonly, the minerals that are the target of oxidization are pyrite and arsenopyrite.
The second category is leaching of sulphide minerals to release the associated metal, for example, leaching of pentlandite to release nickel, or the leaching of chalcocite, covellite or chalcopyrite to release copper.
Process
Bioleaching can involve numerous ferrous iron and sulfur oxidizing bacteria, including Acidithiobacillus ferrooxidans (formerly known as Thiobacillus ferrooxidans) and Acidithiobacillus thiooxidans (formerly known as Thiobacillus thiooxidans). As a general principle, in one proposed method of bacterial leaching known as Indirect Leaching, Fe3+ ions are used to oxidize the ore. This step is entirely independent of microbes. The role of the bacteria is further oxidation of the ore, but also the regeneration of the chemical oxidant Fe3+ from Fe2+. For example, bacteria catalyse the breakdown of the mineral pyrite (FeS2) by oxidising the sulfur and metal (in this case ferrous iron, (Fe2+)) using oxygen. This yields soluble products that can be further purified and refined to yield the desired metal.
Pyrite leaching (FeS2):
In the first step, disulfide is spontaneously oxidized to thiosulfate by ferric ion (Fe3+), which in turn is reduced to give ferrous ion (Fe2+):
(1) FeS2 + 6 Fe3+ + 3 H2O ==> 7 Fe2+ + S2O32− + 6 H+ (spontaneous)
The ferrous ion is then oxidized by bacteria using oxygen:
(2) 4 Fe2+ + O2 + 4 H+ ==> 4 Fe3+ + 2 H2O (iron oxidizers)
Thiosulfate is also oxidized by bacteria to give sulfate:
(3) S2O32− + 2 O2 + H2O ==> 2 SO42− + 2 H+ (sulfur oxidizers)
The ferric ion produced in reaction (2) oxidizes more sulfide as in reaction (1), closing the cycle and giving the net reaction:
(4) 2 FeS2 + 7 O2 + 2 H2O ==> 2 Fe2+ + 4 SO42− + 4 H+
The net products of the reaction are soluble ferrous sulfate and sulfuric acid.
The microbial oxidation process occurs at the cell membrane of the bacteria. The electrons pass into the cells and are used in biochemical processes to produce energy for the bacteria while reducing oxygen to water. The critical reaction is the oxidation of sulfide by ferric iron. The main role of the bacterial step is the regeneration of this reactant.
The process for copper is very similar, but the efficiency and kinetics depend on the copper mineralogy. The most efficient minerals are supergene minerals such as chalcocite, Cu2S and covellite, CuS. The main copper mineral chalcopyrite (CuFeS2) is not leached very efficiently, which is why the dominant copper-producing technology remains flotation, followed by smelting and refining. The leaching of CuFeS2 follows the two stages of being dissolved and then further oxidised, with Cu2+ ions being left in solution.
Chalcopyrite leaching:
(1) CuFeS2 + 4 Fe3+ ==> Cu2+ + 5 Fe2+ + 2 S0 (spontaneous)
(2) 4 Fe2+ + O2 + 4 H+ ==> 4 Fe3+ + 2 H2O (iron oxidizers)
(3) 2 S0 + 3 O2 + 2 H2O ==> 2 SO42− + 4 H+ (sulfur oxidizers)
net reaction:
(4) CuFeS2 + 4 O2 ==> Cu2+ + Fe2+ + 2 SO42−
In general, sulfides are first oxidized to elemental sulfur, whereas disulfides are oxidized to give thiosulfate, and the processes above can be applied to other sulfidic ores. Bioleaching of non-sulfidic ores such as pitchblende also uses ferric iron as an oxidant (e.g., UO2 + 2 Fe3+ ==> UO22+ + 2 Fe2+). In this case, the sole purpose of the bacterial step is the regeneration of Fe3+. Sulfidic iron ores can be added to speed up the process and provide a source of iron. Bioleaching of non-sulfidic ores by layering of waste sulfides and elemental sulfur, colonized by Acidithiobacillus spp., has been accomplished, which provides a strategy for accelerated leaching of materials that do not contain sulfide minerals.
Further processing
The dissolved copper (Cu2+) ions are removed from the solution by ligand exchange solvent extraction, which leaves other ions in the solution. The copper is removed by bonding to a ligand, which is a large molecule consisting of a number of smaller groups, each possessing a lone electron pair. The ligand-copper complex is extracted from the solution using an organic solvent such as kerosene:
Cu2+(aq) + 2LH(organic) → CuL2(organic) + 2H+(aq)
The ligand donates electrons to the copper, producing a complex - a central metal atom (copper) bonded to the ligand. Because this complex has no charge, it is no longer attracted to polar water molecules and dissolves in the kerosene, which is then easily separated from the solution. Because the initial reaction is reversible, its direction is determined by pH: adding concentrated acid reverses the equation, and the copper ions go back into an aqueous solution.
Then the copper is passed through an electro-winning process to increase its purity: An electric current is passed through the resulting solution of copper ions. Because copper ions have a 2+ charge, they are attracted to the negative cathodes and collect there.
The copper can also be concentrated and separated by displacing the copper with Fe from scrap iron:
Cu2+(aq) + Fe(s) → Cu(s) + Fe2+(aq)
The electrons lost by the iron are taken up by the copper ions. The copper(II) ion is the oxidising agent (it accepts electrons), and iron is the reducing agent (it loses electrons).
Traces of precious metals such as gold may be left in the original solution. Treating the mixture with sodium cyanide in the presence of free oxygen dissolves the gold. The gold is removed from the solution by adsorbing (taking it up on the surface) to charcoal.
With fungi
Several species of fungi can be used for bioleaching. Fungi can be grown on many different substrates, such as electronic scrap, catalytic converters, and fly ash from municipal waste incineration. Experiments have shown that two fungal strains (Aspergillus niger, Penicillium simplicissimum) were able to mobilize Cu and Sn by 65%, and Al, Ni, Pb, and Zn by more than 95%. Aspergillus niger can produce some organic acids such as citric acid. This form of leaching does not rely on microbial oxidation of metal but rather uses microbial metabolism as source of acids that directly dissolve the metal.
Feasibility
Economic feasibility
Bioleaching is in general simpler and therefore cheaper to operate and maintain than traditional processes, since fewer specialists are needed to operate complex chemical plants. Low metal concentrations are not a problem for the bacteria, which simply ignore the waste that surrounds the metals, attaining extraction yields of over 90% in some cases. These microorganisms actually gain energy by breaking down minerals into their constituent elements; the operator simply collects the metal ions from the solution after the bacteria have finished.
Bioleaching can be used to extract metals from low-concentration ores, such as gold ores, that are too poor for other technologies. It can be used to partially replace the extensive crushing and grinding that translates to prohibitive cost and energy consumption in a conventional process, because the lower cost of bacterial leaching outweighs the time it takes to extract the metal.
High-concentration ores, such as copper ores, are more economical to smelt than to bioleach because of the slow speed of the bacterial leaching process compared to smelting. The slow speed of bioleaching also introduces a significant delay in cash flow for new mines. Nonetheless, at Escondida in Chile, the largest copper mine in the world, the process seems to be favorable.
Running a bioleaching operation can also be very expensive, and many companies, once started, cannot keep up with demand and end up in debt.
In space
In 2020 scientists showed, with an experiment with different gravity environments on the ISS, that microorganisms could be employed to mine useful elements from basaltic rocks via bioleaching in space.
Environmental impact
The process is more environmentally friendly than traditional extraction methods. For the company this can translate into profit, since the necessary limiting of sulfur dioxide emissions during smelting is expensive. Less landscape damage occurs, since the bacteria involved grow naturally, and the mine and surrounding area can be left relatively untouched. As the bacteria breed in the conditions of the mine, they are easily cultivated and recycled.
Toxic chemicals are sometimes produced in the process. Sulfuric acid and H+ ions that have been formed can leak into the ground and surface water, turning it acidic and causing environmental damage. Heavy-metal ions such as iron, zinc, and arsenic leak out during acid mine drainage. When the pH of this solution rises, as a result of dilution by fresh water, these ions precipitate, forming "Yellow Boy" pollution. For these reasons, a bioleaching setup must be carefully planned, since the process can lead to a biosafety failure. Unlike other methods, once started, bioheap leaching cannot be quickly stopped, because leaching would still continue with rainwater and natural bacteria. Projects like Finland's Talvivaara mine proved to be environmentally and economically disastrous.
See also
Phytomining
References
Further reading
T. A. Fowler and F. K. Crundwell – "Leaching of zinc sulfide with Thiobacillus ferrooxidans"
Brandl H. (2001) "Microbial leaching of metals". In: Rehm H. J. (ed.) Biotechnology, Vol. 10. Wiley-VCH, Weinheim, pp. 191–224
Biotechnology
Economic geology
Metallurgical processes
Applied microbiology
|
https://en.wikipedia.org/wiki/Bus
|
A bus (contracted from omnibus, with variants multibus, motorbus, autobus, etc.) is a road vehicle that carries significantly more passengers than an average car or van. It is most commonly used in public transport, but is also in use for charter purposes, or through private ownership. Although the average bus carries between 30 and 100 passengers, some buses have a capacity of up to 300 passengers. The most common type is the single-deck rigid bus, with double-decker and articulated buses carrying larger loads, and midibuses and minibuses carrying smaller loads. Coaches are used for longer-distance services. Many types of buses, such as city transit buses and inter-city coaches, charge a fare. Other types, such as elementary or secondary school buses or shuttle buses within a post-secondary education campus, are free. In many jurisdictions, bus drivers require a special large vehicle licence above and beyond a regular driving licence.
Buses may be used for scheduled bus transport, scheduled coach transport, school transport, private hire, or tourism; promotional buses may be used for political campaigns and others are privately operated for a wide range of purposes, including rock and pop band tour vehicles.
Horse-drawn buses were used from the 1820s, followed by steam buses in the 1830s, and electric trolleybuses in 1882. The first internal combustion engine buses, or motor buses, were used in 1895. Recently, interest has been growing in hybrid electric buses, fuel cell buses, and electric buses, as well as buses powered by compressed natural gas or biodiesel. As of the 2010s, bus manufacturing is increasingly globalised, with the same designs appearing around the world.
Name
The word bus is a shortened form of the Latin adjectival form omnibus ("for all"), the dative plural of omnis ("all"). The theoretical full name is the French voiture omnibus ("vehicle for all"). The name originates from a mass-transport service started in 1823 by a French corn-mill owner named Stanislas Baudry in Richebourg, a suburb of Nantes. A by-product of his mill was hot water, and thus next to it he established a spa business. In order to encourage customers he started a horse-drawn transport service from the city centre of Nantes to his establishment. The first vehicles stopped in front of the shop of a hatter named Omnés, which displayed a large sign inscribed "Omnes Omnibus", a pun on his Latin-sounding surname: omnes is the masculine and feminine nominative, vocative and accusative plural of the Latin adjective omnis ("all"), combined with omnibus, the dative plural form meaning "for all", thus giving his shop the name "Omnés for all", or "everything for everyone".
His transport scheme was a huge success, although not as he had intended as most of his passengers did not visit his spa. He turned the transport service into his principal lucrative business venture and closed the mill and spa. Nantes citizens soon gave the nickname "omnibus" to the vehicle. Having invented the successful concept Baudry moved to Paris and launched the first omnibus service there in April 1828. A similar service was introduced in Manchester in 1824 and in London in 1829.
History
Steam buses
Regular intercity bus services by steam-powered buses were pioneered in England in the 1830s by Walter Hancock and by associates of Sir Goldsworthy Gurney, among others, running reliable services over road conditions which were too hazardous for horse-drawn transportation.
The first mechanically propelled omnibus appeared on the streets of London on 22 April 1833. Steam carriages were much less likely to overturn than horse-drawn carriages, travelled faster, were much cheaper to run, and caused much less damage to the road surface due to their wide tyres.
However, the heavy road tolls imposed by the turnpike trusts discouraged steam road vehicles and left the way clear for the horse bus companies, and from 1861 onwards harsh legislation virtually eliminated mechanically propelled vehicles from the roads of Great Britain for 30 years; the Locomotive Act 1861 imposed restrictive speed limits on "road locomotives", both in towns and cities and in the country.
Trolleybuses
In parallel to the development of the bus was the invention of the electric trolleybus, typically fed through trolley poles by overhead wires. The Siemens brothers, William in England and Ernst Werner in Germany, collaborated on the development of the trolleybus concept. Sir William first proposed the idea in an article to the Journal of the Society of Arts in 1881 as an "...arrangement by which an ordinary omnibus...would have a suspender thrown at intervals from one side of the street to the other, and two wires hanging from these suspenders; allowing contact rollers to run on these two wires, the current could be conveyed to the tram-car, and back again to the dynamo machine at the station, without the necessity of running upon rails at all."
The first such vehicle, the Electromote, was made by his brother Ernst Werner von Siemens and presented to the public in 1882 in Halensee, Germany. Although this experimental vehicle fulfilled all the technical criteria of a typical trolleybus, it was dismantled in the same year after the demonstration.
Max Schiemann opened a passenger-carrying trolleybus in 1901 near Dresden, in Germany. Although this system operated only until 1904, Schiemann had developed what is now the standard trolleybus current collection system. In the early days, a few other methods of current collection were used. Leeds and Bradford became the first cities to put trolleybuses into service in Great Britain on 20 June 1911.
Motor buses
In Siegerland, Germany, two passenger bus lines ran briefly, but unprofitably, in 1895 using a six-passenger motor carriage developed from the 1893 Benz Viktoria. Another commercial bus line using the same model Benz omnibuses ran for a short time in 1898 in the rural area around Llandudno, Wales.
Germany's Daimler Motors Corporation also produced one of the earliest motor-bus models in 1898, selling a double-decker bus to the Motor Traction Company, which was first used on the streets of London on 23 April 1898. The vehicle accommodated up to 20 passengers, in an enclosed area below and on an open-air platform above. With the success and popularity of this bus, DMG expanded production, selling more buses to companies in London and, in 1899, to Stockholm and Speyer. Daimler Motors Corporation also entered into a partnership with the British company Milnes and developed a new double-decker in 1902 that became the market standard.
The first mass-produced bus model was the B-type double-decker bus, designed by Frank Searle and operated by the London General Omnibus Company—it entered service in 1910, and almost 3,000 had been built by the end of the decade. Hundreds of them saw military service on the Western Front during the First World War.
The Yellow Coach Manufacturing Company, which rapidly became a major manufacturer of buses in the US, was founded in Chicago in 1923 by John D. Hertz. General Motors purchased a majority stake in 1925 and changed its name to the Yellow Truck and Coach Manufacturing Company. GM purchased the balance of the shares in 1943 to form the GM Truck and Coach Division.
Models expanded in the 20th century, leading to the widespread introduction of the contemporary recognizable form of full-sized buses from the 1950s. The AEC Routemaster, developed in the 1950s, was a pioneering design and remains an icon of London to this day. The innovative design used lightweight aluminium and techniques developed in aircraft production during World War II. As well as a novel weight-saving integral design, it also introduced for the first time on a bus independent front suspension, power steering, a fully automatic gearbox, and power-hydraulic braking.
Types
Formats include single-decker bus, double-decker bus (both usually with a rigid chassis) and articulated bus (or 'bendy-bus') the prevalence of which varies from country to country. High-capacity bi-articulated buses are also manufactured, and passenger-carrying trailers—either towed behind a rigid bus (a bus trailer) or hauled as a trailer by a truck (a trailer bus). Smaller midibuses have a lower capacity and open-top buses are typically used for leisure purposes. In many new fleets, particularly in local transit systems, a shift to low-floor buses is occurring, primarily for easier accessibility. Coaches are designed for longer-distance travel and are typically fitted with individual high-backed reclining seats, seat belts, toilets, and audio-visual entertainment systems, and can operate at higher speeds with more capacity for luggage. Coaches may be single- or double-deckers, articulated, and often include a separate luggage compartment under the passenger floor. Guided buses are fitted with technology to allow them to run in designated guideways, allowing the controlled alignment at bus stops and less space taken up by guided lanes than conventional roads or bus lanes.
Bus manufacturing may be by a single company (an integral manufacturer), or by one manufacturer's building a bus body over a chassis produced by another manufacturer.
Design
Accessibility
Transit buses used to be mainly high-floor vehicles. However, they are now increasingly of low-floor design, optionally with 'kneeling' air suspension, and have ramps to provide access for wheelchair users and people with baby carriages, sometimes as electrically or hydraulically extended under-floor constructs for level access. Prior to the more general use of such technology, wheelchair users could only use specialist para-transit mobility buses.
Accessible vehicles also have wider entrances and interior gangways and space for wheelchairs. Interior fittings and destination displays may also be designed to be usable by the visually impaired. Coaches generally use wheelchair lifts instead of low-floor designs. In some countries, vehicles are required to have these features by disability discrimination laws.
Configuration
Buses were initially configured with an engine in the front and an entrance at the rear. With the transition to one-man operation, many manufacturers moved to mid- or rear-engined designs, with a single door at the front or multiple doors. The move to the low-floor design has all but eliminated the mid-engined design, although some coaches still have mid-mounted engines. Front-engined buses still persist for niche markets such as American school buses, some minibuses, and buses in less developed countries, which may be derived from truck chassis, rather than purpose-built bus designs. Most buses have two axles, while articulated buses have three.
Guidance
Guided buses are fitted with technology to allow them to run in designated guideways, allowing the controlled alignment at bus stops and less space taken up by guided lanes than conventional roads or bus lanes. Guidance can be mechanical, optical, or electromagnetic. Extensions of the guided technology include the Guided Light Transit and Translohr systems, although these are more often termed 'rubber-tyred trams' as they have limited or no mobility away from their guideways.
Liveries
Transit buses are normally painted to identify the operator or a route, function, or to demarcate low-cost or premium service buses. Liveries may be painted onto the vehicle, applied using adhesive vinyl technologies, or using decals. Vehicles often also carry bus advertising on part or all of their visible surfaces (as a mobile billboard). Campaign buses may be decorated with key campaign messages; these can be to promote an event or initiative.
Propulsion
The most common power source since the 1920s has been the diesel engine. Early buses, known as trolleybuses, were powered by electricity supplied from overhead lines. Nowadays, electric buses often carry their own battery, which is sometimes recharged on stops/stations to keep the size of the battery small/lightweight. Currently, interest exists in hybrid electric buses, fuel cell buses, electric buses, and ones powered by compressed natural gas or biodiesel. Gyrobuses, which are powered by the momentum stored by a flywheel, were tried in the 1940s.
Dimensions
United Kingdom and European Union:
Maximum Length: Single rear axle . Twin rear axle .
Maximum Width:
United States, Canada and Mexico:
Maximum Length: None
Maximum Width:
Manufacture
Early bus manufacturing grew out of carriage coach building, and later out of automobile or truck manufacturers. Early buses were merely a bus body fitted to a truck chassis. This body+chassis approach has continued with modern specialist manufacturers, although there also exist integral designs such as the Leyland National where the two are practically inseparable. Specialist builders also exist and concentrate on building buses for special uses or modifying standard buses into specialised products.
Integral designs have the advantages that they have been well-tested for strength and stability, and also are off-the-shelf. However, two incentives cause use of the chassis+body model. First, it allows the buyer and manufacturer both to shop for the best deal for their needs, rather than having to settle on one fixed design—the buyer can choose the body and the chassis separately. Second, over the lifetime of a vehicle (in constant service and heavy traffic), it will likely get minor damage now and again, and being able easily to replace a body panel or window etc. can vastly increase its service life and save the cost and inconvenience of removing it from service.
As with the rest of the automotive industry, into the 20th century, bus manufacturing increasingly became globalized, with manufacturers producing buses far from their intended market to exploit labour and material cost advantages. A typical city bus costs almost US$450,000.
Uses
Public transport
Transit buses, used on public transport bus services, have utilitarian fittings designed for efficient movement of large numbers of people, and often have multiple doors. Coaches are used for longer-distance routes. High-capacity bus rapid transit services may use the bi-articulated bus or tram-style buses such as the Wright StreetCar and the Irisbus Civis.
Buses and coach services often operate to a predetermined published public transport timetable defining the route and the timing, but smaller vehicles may be used on more flexible demand responsive transport services.
Tourism
Buses play a major part in the tourism industry. Tour buses around the world allow tourists to view local attractions or scenery. These are often open-top buses, but can also be regular buses or coaches.
In local sightseeing, City Sightseeing is the largest operator of local tour buses, operating on a franchised basis all over the world. Specialist tour buses are also often owned and operated by safari parks and other theme parks or resorts. Longer-distance tours are also carried out by bus, either on a turn up and go basis or through a tour operator, and usually allow disembarkation from the bus to allow touring of sites of interest on foot. These may be day trips or longer excursions incorporating hotel stays. Tour buses often carry a tour guide, although the driver or a recorded audio commentary may also perform this function. The tour operator may be a subsidiary of a company that operates buses and coaches for other uses or an independent company that charters buses or coaches. Commuter transport operators may also use their coaches to conduct tours within the target city between the morning and evening commuter transport journey.
Buses and coaches are also a common component of the wider package holiday industry, providing private airport transfers (in addition to general airport buses) and organised tours and day trips for holidaymakers on the package.
Tour buses can also be hired as chartered buses by groups for sightseeing at popular holiday destinations. These private tour buses may offer specific stops, such as all the historical sights, or allow the customers to choose their own itineraries. Tour buses come with professional and informed staff and insurance, and maintain state governed safety standards. Some provide other facilities like entertainment units, luxurious reclining seats, large scenic windows, and even lavatories.
Public long-distance coach networks are also often used as a low-cost method of travel by students or young people travelling the world. Some companies such as Topdeck Travel were set up specifically to use buses to drive the hippie trail or travel to places such as North Africa.
In many tourist or travel destinations, a bus is part of the tourist attraction, such as the North American tourist trolleys, London's AEC Routemaster heritage routes, or the customised buses of Malta, Asia, and the Americas. Another example of tourist stops is the homes of celebrities, such as tours based near Hollywood. There are several such services between 6000 and 7000 Hollywood Boulevard in Los Angeles.
Student transport
In some countries, particularly the US and Canada, buses used to transport schoolchildren have evolved into a specific design with specified mandatory features. American states have also adopted laws regarding motorist conduct around school buses, including large fines and possibly prison for passing a stopped school bus in the process of loading or offloading children passengers. These school buses may have school bus yellow livery and crossing guards. Other countries may mandate the use of seat belts. As a minimum, many countries require a bus carrying students to display a sign, and may also adopt yellow liveries. Student transport often uses older buses cascaded from service use, retrofitted with more seats or seatbelts. Student transport may be operated by local authorities or private contractors. Schools may also own and operate their own buses for other transport needs, such as class field trips, or transport to associated sports, music, or other school events.
Private charter
Due to the costs involved in owning, operating, and driving buses and coaches, much bus and coach use comes from the private hire of vehicles from charter bus companies, either for a day or two or on a longer contract basis, where the charter company provides the vehicles and qualified drivers.
Charter bus operators may be completely independent businesses, or charter hire may be a subsidiary business of a public transport operator that might maintain a separate fleet or use surplus buses, coaches, and dual-purpose coach-seated buses. Many private taxicab companies also operate larger minibus vehicles to cater for group fares.
Companies, private groups, and social clubs may hire buses or coaches as a cost-effective method of transporting a group to an event or site, such as a group meeting, racing event, or organised recreational activity such as a summer camp. Schools often hire charter bus services on a regular basis for transportation of children to and from their homes. Chartered buses are also used by education institutes for transport to conventions, exhibitions, and field trips. Entertainment or event companies may also hire temporary shuttle buses for transport at events such as festivals or conferences.
Party buses are used by companies in a similar manner to limousine hire, for luxury private transport to social events or as a touring experience. Sleeper buses are used by bands or other organisations that tour between entertainment venues and require mobile rest and recreation facilities. Some couples hire preserved buses for their wedding transport, instead of the traditional car. Buses are often hired for parades or processions.
Victory parades are often held for triumphant sports teams, who often tour their home town or city in an open-top bus. Sports teams may also contract out their transport to a team bus, for travel to away games, to a competition or to a final event. These buses are often specially decorated in a livery matching the team colours. Private companies often contract out private shuttle bus services, for transport of their customers or patrons, such as hotels, amusement parks, university campuses, or private airport transfer services. This shuttle usage can be as transport between locations, or to and from parking lots. High specification luxury coaches are often chartered by companies for executive or VIP transport. Charter buses may also be used in tourism and for promotion (See Tourism and Promotion sections).
Private ownership
Many organisations, including the police, not for profit, social or charitable groups with a regular need for group transport may find it practical or cost-effective to own and operate a bus for their own needs. These are often minibuses for practical, tax and driver licensing reasons, although they can also be full-size buses. Cadet or scout groups or other youth organizations may also own buses. Companies such as railroads, construction contractors, and agricultural firms may own buses to transport employees to and from remote job sites. Specific charities may exist to fund and operate bus transport, usually using specially modified mobility buses or otherwise accessible buses (See Accessibility section). Some use their contributions to buy vehicles and provide volunteer drivers.
Airport operators make use of special airside airport buses for crew and passenger transport in the secure airside parts of an airport. Some public authorities, police forces, and military forces make use of armoured buses where there is a special need to provide increased passenger protection. The United States Secret Service acquired two in 2010 for transporting dignitaries needing special protection. Police departments make use of police buses for a variety of reasons, such as prisoner transport, officer transport, temporary detention facilities, and as command and control vehicles. Some fire departments also use a converted bus as a command post while those in cold climates might retain a bus as a heated shelter at fire scenes. Many are drawn from retired school or service buses.
Promotion
Buses are often used for advertising, political campaigning, public information campaigns, public relations, or promotional purposes. These may take the form of temporary charter hire of service buses, or the temporary or permanent conversion and operation of buses, usually of second-hand buses. Extreme examples include converting the bus with displays and decorations or awnings and fittings. Interiors may be fitted out for exhibition or information purposes with special equipment or audio visual devices.
Bus advertising takes many forms, often as interior and exterior adverts and all-over advertising liveries. The practice often extends into the exclusive private hire and use of a bus to promote a brand or product, appearing at large public events, or touring busy streets. The bus is sometimes staffed by promotions personnel, giving out free gifts. Campaign buses are often specially decorated for a political campaign or other social awareness information campaign, designed to bring a specific message to different areas, or used to transport campaign personnel to local areas/meetings. Exhibition buses are often sent to public events such as fairs and festivals for purposes such as recruitment campaigns, for example by private companies or the armed forces. Complex urban planning proposals may be organised into a mobile exhibition bus for the purposes of public consultation.
Goods transport
In some sparsely populated areas, it is common to use brucks, buses with a cargo area to transport both passengers and cargo at the same time. They are especially common in the Nordic countries.
Around the world
Historically, the types and features of buses have developed according to local needs. Buses were fitted with technology appropriate to the local climate or passenger needs, such as air conditioning in Asia, or cycle mounts on North American buses. The bus types in use around the world where there was little mass production were often sourced secondhand from other countries, such as the Malta bus, and buses in use in Africa. Other countries such as Cuba required novel solutions to import restrictions, with the creation of the "camellos" (camel bus), a specially manufactured trailer bus.
After the Second World War, manufacturers in Europe and the Far East, such as Mercedes-Benz and Mitsubishi Fuso, expanded into other continents, influencing the use of buses previously served by local types. Use of buses around the world has also been influenced by colonial associations or political alliances between countries. Several of the Commonwealth nations followed the British lead and sourced buses from British manufacturers, leading to a prevalence of double-decker buses. Several Eastern Bloc countries adopted trolleybus systems, and their manufacturers, such as Trolza, exported trolleybuses to other friendly states. In the 1930s, Italy designed the world's only triple-decker bus, for the busy route between Rome and Tivoli, that could carry eighty-eight passengers. It was unique not only in being a triple-decker but also in having a separate smoking compartment on the third level.
The buses to be found in countries around the world often reflect the quality of the local road network, with high-floor resilient truck-based designs prevalent in several less developed countries where buses are subject to tough operating conditions. Population density also has a major impact, where dense urbanisation such as in Japan and the far east has led to the adoption of high capacity long multi-axle buses, often double-deckers while South America and China are implementing large numbers of articulated buses for bus rapid transit schemes.
Bus expositions
Euro Bus Expo is a trade show held biennially at the UK's National Exhibition Centre in Birmingham. As the official show of the Confederation of Passenger Transport, the UK's trade association for the bus, coach and light rail industry, the three-day event gives visitors from Europe and beyond the chance to see the latest vehicles and product and service innovations across the industry.
Busworld Kortrijk in Kortrijk, Belgium, is the leading bus trade fair in Europe. It is also held biennially.
Use of retired buses
Most public or private buses and coaches, once they have reached the end of their service with one or more operators, are sent to the wrecking yard for breaking up for scrap and spare parts. Some buses which are not economical to keep running as service buses are often converted for use other than revenue-earning transport. Much like old cars and trucks, buses often pass through a dealership where they can be bought privately or at auction.
Bus operators often find it economical to convert retired buses to use as permanent training buses for driver training, rather than taking a regular service bus out of use. Some large operators have also converted retired buses into tow bus vehicles, to act as tow trucks. With the outsourcing of maintenance staff and facilities, the increase in company health and safety regulations, and the increasing curb weights of buses, many operators now contract their towing needs to a professional vehicle recovery company.
Some buses that have reached the end of their service that are still in good condition are sent for export to other countries.
Some retired buses have been converted to static or mobile cafés, often using historic buses as a tourist attraction. There are also catering buses: buses converted into a mobile canteen and break room. These are commonly seen at external filming locations to feed the cast and crew, and at other large events to feed staff. Another use is as an emergency vehicle, such as high-capacity ambulance bus or mobile command centre.
Some organisations adapt and operate playbuses or learning buses to provide a playground or learning environments to children who might not have access to proper play areas. An ex-London AEC Routemaster bus has been converted to a mobile theatre and catwalk fashion show.
Some buses meet a destructive end by being entered in banger races or at demolition derbies. A larger number of old retired buses have also been converted into mobile holiday homes and campers.
Bus preservation
Rather than being scrapped or converted for other uses, sometimes retired buses are saved for preservation. This can be done by individuals, volunteer preservation groups or charitable trusts, museums, or sometimes by the operators themselves as part of a heritage fleet. These buses often need to be restored to their original condition and will have their livery and other details such as internal notices and rollsigns restored to be authentic to a specific time in the bus's history. Some buses that undergo preservation are rescued from a state of great disrepair, but others enter preservation with very little wrong with them. As with other historic vehicles, many preserved buses either in a working or static state form part of the collections of transport museums. Additionally, some buses are preserved so they can appear alongside other period vehicles in television and film. Working buses will often be exhibited at rallies and events, and they are also used as charter buses. While many preserved buses are quite old or even vintage, in some cases relatively new examples of a bus type can enter restoration while in-service examples are still in use by other operators. This often happens when a change in design or operating practice, such as the switch to one person operation or low floor technology, renders some buses redundant while still relatively new.
Modification as railway vehicles
See also
Coach (bus)
Bicycle carrier (bus mounted bike racks)
Bus spotting
Bus station
Cutaway bus
Dollar van
Horsebus
Intercity bus
Intercity bus driver
List of fictional buses
Multi-axle bus
Public light bus
Trackless train
Transit bus
References
Bibliography
External links
American Bus Association
French inventions
|
https://en.wikipedia.org/wiki/Bronze
|
Bronze is an alloy consisting primarily of copper, commonly with about 12–12.5% tin and often with the addition of other metals (including aluminium, manganese, nickel, or zinc) and sometimes non-metals, such as phosphorus, or metalloids such as arsenic or silicon. These additions produce a range of alloys that may be harder than copper alone, or have other useful properties, such as strength, ductility, or machinability.
The archaeological period in which bronze was the hardest metal in widespread use is known as the Bronze Age. The beginning of the Bronze Age in western Eurasia and India is conventionally dated to the mid-4th millennium BCE (~3500 BCE), and to the early 2nd millennium BCE in China; elsewhere it gradually spread across regions. The Bronze Age was followed by the Iron Age starting about 1300 BCE and reaching most of Eurasia by about 500 BCE, although bronze continued to be much more widely used than it is in modern times.
Because historical artworks were often made of brasses (copper and zinc) and bronzes with different compositions, modern museum and scholarly descriptions of older artworks increasingly use the generalized term "copper alloy" instead.
Etymology
The word bronze (1730–1740) is borrowed from Middle French bronze (1511), itself borrowed from Italian bronzo (13th century, transcribed in Medieval Latin as bronzium), from either:
a back-formation from Byzantine Greek brontesion (11th century), perhaps from Brentesion (Brindisi), a city reputed for its bronze; or originally:
in its earliest form, from an Old Persian word for brass, from which also came Georgian, Turkish, and Armenian terms with the same meaning.
History
The discovery of bronze enabled people to create metal objects that were harder and more durable than previously possible. Bronze tools, weapons, armor, and building materials such as decorative tiles were harder and more durable than their stone and copper ("Chalcolithic") predecessors. Initially, bronze was made out of copper and arsenic, forming arsenic bronze, or from naturally or artificially mixed ores of copper and arsenic.
The earliest artifacts so far known come from the Iranian plateau, in the 5th millennium BCE, and are smelted from native arsenical copper and copper-arsenides, such as algodonite and domeykite. The earliest known tin-copper-alloy artifact, from a Vinča culture site in Pločnik (Serbia), is believed to have been smelted from a natural tin-copper ore, stannite. Other early examples date to the late 4th millennium BCE in Egypt, Susa (Iran) and some ancient sites in China, Luristan (Iran), Tepe Sialk (Iran), Mundigak (Afghanistan), and Mesopotamia (Iraq).
Tin bronze was superior to arsenic bronze in that the alloying process could be more easily controlled, and the resulting alloy was stronger and easier to cast. Also, unlike those of arsenic, metallic tin and fumes from tin refining are not toxic.
Tin became the major non-copper ingredient of bronze in the late 3rd millennium BCE.
Ores of copper and the far rarer tin are not often found together (exceptions include Cornwall in the United Kingdom, one ancient site in Thailand and one in Iran), so serious bronze work has always involved trade. Tin sources and trade in ancient times had a major influence on the development of cultures. In Europe, a major source of tin was the British deposits of ore in Cornwall, which were traded as far as Phoenicia in the eastern Mediterranean.
In many parts of the world, large hoards of bronze artifacts are found, suggesting that bronze also represented a store of value and an indicator of social status. In Europe, large hoards of bronze tools, typically socketed axes (illustrated above), are found, which mostly show no signs of wear. With Chinese ritual bronzes, which are documented in the inscriptions they carry and from other sources, the case is clear. These were made in enormous quantities for elite burials, and also used by the living for ritual offerings.
Transition to iron
Though bronze is generally harder than wrought iron, with Vickers hardness of 60–258 vs. 30–80, the Bronze Age gave way to the Iron Age after a serious disruption of the tin trade: the population migrations of around 1200–1100 BCE reduced the shipping of tin around the Mediterranean and from Britain, limiting supplies and raising prices. As the art of working in iron improved, iron became cheaper and improved in quality. As cultures advanced from hand-wrought iron to machine-forged iron (typically made with trip hammers powered by water), blacksmiths learned how to make steel. Steel is stronger and harder than bronze and holds a sharper edge longer.
Bronze was still used during the Iron Age, and has continued in use for many purposes to the modern day.
Composition
There are many different bronze alloys, but typically modern bronze is 88% copper and 12% tin. Alpha bronze consists of the alpha solid solution of tin in copper. Alpha bronze alloys of 4–5% tin are used to make coins, springs, turbines and blades. Historical "bronzes" are highly variable in composition, as most metalworkers probably used whatever scrap was on hand; the metal of the 12th-century English Gloucester Candlestick is bronze containing a mixture of copper, zinc, tin, lead, nickel, iron, antimony, arsenic and an unusually large amount of silver – between 22.5% in the base and 5.76% in the pan below the candle. The proportions of this mixture suggest that the candlestick was made from a hoard of old coins. The 13th-century Benin Bronzes are in fact brass, and the 12th-century Romanesque Baptismal font at St Bartholomew's Church, Liège is described as both bronze and brass.
In the Bronze Age, two forms of bronze were commonly used: "classic bronze", about 10% tin, was used in casting; and "mild bronze", about 6% tin, was hammered from ingots to make sheets. Bladed weapons were mostly cast from classic bronze, while helmets and armor were hammered from mild bronze.
Commercial bronze (90% copper and 10% zinc) and architectural bronze (57% copper, 3% lead, 40% zinc) are more properly regarded as brass alloys because they contain zinc as the main alloying ingredient. They are commonly used in architectural applications.
Plastic bronze contains a significant quantity of lead, which makes for improved plasticity; it was possibly used by the ancient Greeks in their ship construction.
Silicon bronze has a composition of Si: 2.80–3.80%, Mn: 0.50–1.30%, Fe: 0.80% max., Zn: 1.50% max., Pb: 0.05% max., Cu: balance.
Other bronze alloys include aluminium bronze, phosphor bronze, manganese bronze, bell metal, arsenical bronze, speculum metal, bismuth bronze, and cymbal alloys.
Properties
Copper-based alloys have lower melting points than steel or iron and are more readily produced from their constituent metals. They are generally about 10 percent denser than steel, although alloys using aluminum or silicon may be slightly less dense. Bronze is a better conductor of heat and electricity than most steels. The cost of copper-base alloys is generally higher than that of steels but lower than that of nickel-base alloys.
Bronzes are typically ductile alloys, considerably less brittle than cast iron. Copper and its alloys have a huge variety of uses that reflect their versatile physical, mechanical, and chemical properties. Some common examples are the high electrical conductivity of pure copper, low-friction properties of bearing bronze (bronze that has a high lead content— 6–8%), resonant qualities of bell bronze (20% tin, 80% copper), and resistance to corrosion by seawater of several bronze alloys.
The melting point of bronze varies depending on the ratio of the alloy components. Bronze is usually nonmagnetic, but certain alloys containing iron or nickel may have magnetic properties.
Typically bronze oxidizes only superficially; once a copper oxide (eventually becoming copper carbonate) layer is formed, the underlying metal is protected from further corrosion. This can be seen on statues from the Hellenistic period. If copper chlorides are formed, a corrosion-mode called "bronze disease" will eventually completely destroy it.
Uses
Bronze, or bronze-like alloys and mixtures, were used for coins over a longer period. Bronze was especially suitable for use in boat and ship fittings prior to the wide employment of stainless steel owing to its combination of toughness and resistance to salt water corrosion. Bronze is still commonly used in ship propellers and submerged bearings.
In the 20th century, silicon was introduced as the primary alloying element, creating an alloy with wide application in industry and the major form used in contemporary statuary. Sculptors may prefer silicon bronze because of the ready availability of silicon bronze brazing rod, which allows color-matched repair of defects in castings. Aluminum is also used for the structural metal aluminum bronze.
Bronze parts are tough and typically used for bearings, clips, electrical connectors and springs.
Bronze also has low friction against dissimilar metals, making it important for cannons prior to modern tolerancing, where iron cannonballs would otherwise stick in the barrel. It is still widely used today for springs, bearings, bushings, automobile transmission pilot bearings, and similar fittings, and is particularly common in the bearings of small electric motors. Phosphor bronze is particularly suited to precision-grade bearings and springs. It is also used in guitar and piano strings.
Unlike steel, bronze struck against a hard surface will not generate sparks, so it (along with beryllium copper) is used to make hammers, mallets, wrenches and other durable tools to be used in explosive atmospheres or in the presence of flammable vapors. Bronze is used to make bronze wool for woodworking applications where steel wool would discolor oak.
Phosphor bronze is used for ships' propellers, musical instruments, and electrical contacts. Bearings are often made of bronze for its friction properties. It can be impregnated with oil to make the proprietary Oilite and similar material for bearings. Aluminum bronze is hard and wear-resistant, and is used for bearings and machine tool ways.
Sculptures
Bronze is widely used for casting bronze sculptures. Common bronze alloys have the unusual and desirable property of expanding slightly just before they set, thus filling the finest details of a mould. Then, as the bronze cools, it shrinks a little, making it easier to separate from the mould.
The Assyrian king Sennacherib (704–681 BCE) claims to have been the first to cast monumental bronze statues (of up to 30 tonnes) using two-part moulds instead of the lost-wax method.
Bronze statues were regarded as the highest form of sculpture in Ancient Greek art, though survivals are few, as bronze was a valuable material in short supply in the Late Antique and medieval periods. Many of the most famous Greek bronze sculptures are known through Roman copies in marble, which were more likely to survive.
In India, bronze sculptures from the Kushana (Chausa hoard) and Gupta periods (Brahma from Mirpur-Khas, Akota Hoard, Sultanganj Buddha) and later periods (Hansi Hoard) have been found. Indian Hindu artisans from the period of the Chola empire in Tamil Nadu used bronze to create intricate statues via the lost-wax casting method with ornate detailing depicting the deities of Hinduism. The art form survives to this day, with many silpis, craftsmen, working in the areas of Swamimalai and Chennai.
In antiquity other cultures also produced works of high art using bronze. For example: in Africa, the bronze heads of the Kingdom of Benin; in Europe, Grecian bronzes typically of figures from Greek mythology; in east Asia, Chinese ritual bronzes of the Shang and Zhou dynasty—more often ceremonial vessels but including some figurine examples.
Bronze continues into modern times as one of the materials of choice for monumental statuary.
Mirrors
Before it became possible to produce glass with acceptably flat surfaces, bronze was a standard material for mirrors. Bronze was used for this purpose in many parts of the world, probably based on independent discoveries.
Bronze mirrors survive from the Egyptian Middle Kingdom (2040–1750 BCE), and China from at least . In Europe, the Etruscans were making bronze mirrors in the sixth century BCE, and Greek and Roman mirrors followed the same pattern. Although other materials such as speculum metal had come into use, and Western glass mirrors had largely taken over, bronze mirrors were still being made in Japan and elsewhere in the eighteenth century, and are still made on a small scale in Kerala, India.
Musical instruments
Bronze is the preferred metal for bells in the form of a high tin bronze alloy known as bell metal, which is typically about 23% tin.
Nearly all professional cymbals are made from bronze, which gives a desirable balance of durability and timbre. Several types of bronze are used, commonly B20 bronze, which is roughly 20% tin, 80% copper, with traces of silver, or the tougher B8 bronze made from 8% tin and 92% copper. As the tin content in a bell or cymbal rises, the timbre drops.
Bronze is also used for the windings of steel and nylon strings of various stringed instruments such as the double bass, piano, harpsichord, and guitar. Bronze strings are commonly reserved on pianoforte for the lower pitch tones, as they possess a superior sustain quality to that of high-tensile steel.
Bronzes of various metallurgical properties are widely used in struck idiophones around the world, notably bells, singing bowls, gongs, cymbals, and other idiophones from Asia. Examples include Tibetan singing bowls, temple bells of many sizes and shapes, Javanese gamelan, and other bronze musical instruments. The earliest bronze archeological finds in Indonesia date from 1–2 BCE, including flat plates probably suspended and struck by a wooden or bone mallet. Ancient bronze drums from Thailand and Vietnam date back 2,000 years. Bronze bells from Thailand and Cambodia date back to 3600 BCE.
Some companies are now making saxophones from phosphor bronze (3.5 to 10% tin and up to 1% phosphorus content). Bell bronze/B20 is used to make the tone rings of many professional model banjos. The tone ring is a heavy folded or arched metal ring attached to a thick wood rim, over which a skin, or most often, a plastic membrane (or head) is stretched – it is the bell bronze that gives the banjo a crisp powerful lower register and clear bell-like treble register.
Biblical references
There are over 125 references in the Bible to bronze ('nehoshet'), which appears to be the Hebrew word used for copper and any of its alloys. However, the Old Testament-era Hebrews are not thought to have had the capability to manufacture zinc (needed to make brass), so it is likely that 'nehoshet' refers to copper and its alloys with tin, now called bronze. In the King James Version, there is no use of the word 'bronze' and 'nehoshet' was translated as 'brass'. Modern translations use 'bronze'. Bronze (nehoshet) was used widely in the Tabernacle for items such as the bronze altar (Exodus Ch.27), bronze laver (Exodus Ch.30), utensils, and mirror (Exodus Ch.38). It was mentioned in the account of Moses holding up a bronze snake on a pole in Numbers Ch.21. In First Kings, it is mentioned that Hiram was very skilled in working with bronze, and he made many furnishings for Solomon's Temple including pillars, capitals, stands, wheels, bowls, and plates, some of which were highly decorative (see I Kings 7:13-47). Bronze was also widely used for battle armor and helmets, as in the battle of David and Goliath in I Samuel 17:5-6;38 (also see II Chron. 12:10).
Coins and medals
Bronze has also been used in coins; most "copper" coins are actually bronze, with about 4 percent tin and 1 percent zinc.
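To make those proportions concrete, the following minimal worked sketch in Python assumes a hypothetical 5.0 g coin and treats copper as the balance of the alloy; the coin mass is illustrative only.

```python
# Approximate composition of a "copper" coin as described above: ~4% tin, ~1% zinc, balance copper.
coin_mass_g = 5.0                                   # hypothetical coin mass
fractions = {"Cu": 0.95, "Sn": 0.04, "Zn": 0.01}

for metal, fraction in fractions.items():
    print(f"{metal}: {coin_mass_g * fraction:.2f} g")
# Cu: 4.75 g, Sn: 0.20 g, Zn: 0.05 g
```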
As with coins, bronze has been used in the manufacture of various types of medals for centuries, and "bronze medals" are known in contemporary times for being awarded for third place in sporting competitions and other events. The term is now often used for third place even when no actual bronze medal is awarded. The usage in part arose from the trio of gold, silver and bronze to represent the first three Ages of Man in Greek mythology: the Golden Age, when men lived among the gods; the Silver Age, when youth lasted a hundred years; and the Bronze Age, the era of heroes. It was first adopted for a sports event at the 1904 Summer Olympics. At the 1896 event, silver was awarded to winners and bronze to runners-up, while in 1900 other prizes were given rather than medals.
Bronze is the normal material for the related form of the plaquette, normally a rectangular work of art with a scene in relief, for a collectors' market.
See also
References
External links
Bronze bells (archived 16 December 2006)
"Lost Wax, Found Bronze": lost-wax casting explained (archived 23 May 2009)
Viking Bronze – Ancient and Early Medieval bronze casting (archived 16 April 2016)
|
https://en.wikipedia.org/wiki/Botany
|
Botany, also called plant science (or plant sciences), plant biology or phytology, is the science of plant life and a branch of biology. A botanist, plant scientist or phytologist is a scientist who specialises in this field. The term "botany" comes from the Ancient Greek word βοτάνη (botanē) meaning "pasture", "herbs", "grass", or "fodder"; βοτάνη is in turn derived from βόσκειν (boskein), "to feed" or "to graze". Traditionally, botany has also included the study of fungi and algae by mycologists and phycologists respectively, with the study of these three groups of organisms remaining within the sphere of interest of the International Botanical Congress. Nowadays, botanists (in the strict sense) study approximately 410,000 species of land plants of which some 391,000 species are vascular plants (including approximately 369,000 species of flowering plants), and approximately 20,000 are bryophytes.
Botany originated in prehistory as herbalism with the efforts of early humans to identify – and later cultivate – plants that were edible, poisonous, and possibly medicinal, making it one of the first endeavours of human investigation. Medieval physic gardens, often attached to monasteries, contained plants possibly having medicinal benefit. They were forerunners of the first botanical gardens attached to universities, founded from the 1540s onwards. One of the earliest was the Padua botanical garden. These gardens facilitated the academic study of plants. Efforts to catalogue and describe their collections were the beginnings of plant taxonomy, and led in 1753 to the binomial system of nomenclature of Carl Linnaeus that remains in use to this day for the naming of all biological species.
In the 19th and 20th centuries, new techniques were developed for the study of plants, including methods of optical microscopy and live cell imaging, electron microscopy, analysis of chromosome number, plant chemistry and the structure and function of enzymes and other proteins. In the last two decades of the 20th century, botanists exploited the techniques of molecular genetic analysis, including genomics and proteomics and DNA sequences to classify plants more accurately.
Modern botany is a broad, multidisciplinary subject with contributions and insights from most other areas of science and technology. Research topics include the study of plant structure, growth and differentiation, reproduction, biochemistry and primary metabolism, chemical products, development, diseases, evolutionary relationships, systematics, and plant taxonomy. Dominant themes in 21st century plant science are molecular genetics and epigenetics, which study the mechanisms and control of gene expression during differentiation of plant cells and tissues. Botanical research has diverse applications in providing staple foods, materials such as timber, oil, rubber, fibre and drugs, in modern horticulture, agriculture and forestry, plant propagation, breeding and genetic modification, in the synthesis of chemicals and raw materials for construction and energy production, in environmental management, and the maintenance of biodiversity.
History
Early botany
Botany originated as herbalism, the study and use of plants for their possible medicinal properties. The early recorded history of botany includes many ancient writings and plant classifications. Examples of early botanical works have been found in ancient texts from India dating back to before 1100 BCE, Ancient Egypt, in archaic Ancient Iranic Avestan writings, and in works from China purportedly from before 221 BCE.
Modern botany traces its roots back to Ancient Greece, specifically to Theophrastus (c. 371–287 BCE), a student of Aristotle who invented and described many of its principles and is widely regarded in the scientific community as the "Father of Botany". His major works, Enquiry into Plants and On the Causes of Plants, constitute the most important contributions to botanical science until the Middle Ages, almost seventeen centuries later.
Another work from Ancient Greece that made an early impact on botany is De materia medica, a five-volume encyclopedia about herbal medicine written in the middle of the first century by Greek physician and pharmacologist Pedanius Dioscorides. De materia medica was widely read for more than 1,500 years. Important contributions from the medieval Muslim world include Ibn Wahshiyya's Nabatean Agriculture, Abū Ḥanīfa Dīnawarī's (828–896) Book of Plants, and Ibn Bassal's The Classification of Soils. In the early 13th century, Abu al-Abbas al-Nabati and Ibn al-Baitar (d. 1248) wrote on botany in a systematic and scientific manner.
In the mid-16th century, botanical gardens were founded in a number of Italian universities. The Padua botanical garden in 1545 is usually considered to be the first which is still in its original location. These gardens continued the practical value of earlier "physic gardens", often associated with monasteries, in which plants were cultivated for suspected medicinal uses. They supported the growth of botany as an academic subject. Lectures were given about the plants grown in the gardens. Botanical gardens came much later to northern Europe; the first in England was the University of Oxford Botanic Garden in 1621.
German physician Leonhart Fuchs (1501–1566) was one of "the three German fathers of botany", along with theologian Otto Brunfels (1489–1534) and physician Hieronymus Bock (1498–1554) (also called Hieronymus Tragus). Fuchs and Brunfels broke away from the tradition of copying earlier works to make original observations of their own. Bock created his own system of plant classification.
Physician Valerius Cordus (1515–1544) authored a botanically and pharmacologically important herbal Historia Plantarum in 1544 and a pharmacopoeia of lasting importance, the Dispensatorium in 1546. Naturalist Conrad von Gesner (1516–1565) and herbalist John Gerard (1545–) published herbals covering the supposed medicinal uses of plants. Naturalist Ulisse Aldrovandi (1522–1605) was considered the father of natural history, which included the study of plants. In 1665, using an early microscope, the polymath Robert Hooke discovered cells, a term he coined, in cork, and a short time later in living plant tissue.
Early modern botany
During the 18th century, systems of plant identification were developed comparable to dichotomous keys, where unidentified plants are placed into taxonomic groups (e.g. family, genus and species) by making a series of choices between pairs of characters. The choice and sequence of the characters may be artificial in keys designed purely for identification (diagnostic keys) or more closely related to the natural or phyletic order of the taxa in synoptic keys. By the 18th century, new plants for study were arriving in Europe in increasing numbers from newly discovered countries and the European colonies worldwide. In 1753, Carl Linnaeus published his Species Plantarum, a hierarchical classification of plant species that remains the reference point for modern botanical nomenclature. This established a standardised binomial or two-part naming scheme where the first name represented the genus and the second identified the species within the genus. For the purposes of identification, Linnaeus's Systema Sexuale classified plants into 24 groups according to the number of their male sexual organs. The 24th group, Cryptogamia, included all plants with concealed reproductive parts, mosses, liverworts, ferns, algae and fungi.
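The logic of such a diagnostic key can be sketched as a short series of paired choices. The following Python example is a deliberately simplified, hypothetical key built from the broad plant groups discussed elsewhere in this article; it is not a real identification key.

```python
# A toy dichotomous key: each step is a choice between two character states.
def identify(has_vascular_tissue, produces_seeds, seeds_enclosed_in_ovary):
    if not has_vascular_tissue:
        return "bryophyte (moss, liverwort or hornwort)"
    if not produces_seeds:
        return "pteridophyte (fern, clubmoss or relative)"
    if not seeds_enclosed_in_ovary:
        return "gymnosperm"
    return "angiosperm"

print(identify(True, True, True))    # -> angiosperm
print(identify(True, False, False))  # -> pteridophyte (fern, clubmoss or relative)
```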
This clinical categorization of plants was soon followed by the creation of the categories of race and sexuality; the classification of plants necessitated classification of all other living things, including humans. As a result, taxonomy and botany played an influential role in the development of scientific racism. One example of this progression is in the works of Carl Linnaeus, the previously mentioned 18th century botanist. As Linnaeus moved on from classifying plants to classifying all organisms, he published Systema Naturae, a major classificatory piece that he would continue to edit and grow over time. In its 10th edition he describes four "varieties" of man based on the four known continents – Europaeus albus, Americanus rubescens, Asiaticus fuscus, and Africanus niger – and attributes a particular skin color, medical temperament, body posture, physical traits, behavior, manner of clothing, and form of government to each variety of people. In these descriptions he labels Asian people as stern, haughty, and greedy; black people as sly, sluggish, and neglectful; and white people as light, wise, and inventors. Linnaeus is only one of many botanists who influenced scientific racism through the categorization of organisms.
Increasing knowledge of plant anatomy, morphology and life cycles led to the realisation that there were more natural affinities between plants than the artificial sexual system of Linnaeus. Adanson (1763), de Jussieu (1789), and Candolle (1819) all proposed various alternative natural systems of classification that grouped plants using a wider range of shared characters and were widely followed. The Candollean system reflected his ideas of the progression of morphological complexity and the later Bentham & Hooker system, which was influential until the mid-19th century, was influenced by Candolle's approach. Darwin's publication of the Origin of Species in 1859 and his concept of common descent required modifications to the Candollean system to reflect evolutionary relationships as distinct from mere morphological similarity.
Botany was greatly stimulated by the appearance of the first "modern" textbook, Matthias Schleiden's Grundzüge der wissenschaftlichen Botanik, published in English in 1849 as Principles of Scientific Botany. Schleiden was a microscopist and an early plant anatomist who co-founded the cell theory with Theodor Schwann and Rudolf Virchow and was among the first to grasp the significance of the cell nucleus that had been described by Robert Brown in 1831. In 1855, Adolf Fick formulated Fick's laws that enabled the calculation of the rates of molecular diffusion in biological systems.
The system in which early modern botany was practiced was very extensive. Modern botany emerged following the surge in exploration of other continents by European colonizers. Plant collectors would travel to different countries in search of new specimens for botanists to classify. Plants usable for cultivation would then be hybridized. The history of botany has been connected to imbalanced power structures in the past. Slave labor was widespread not only in plantations but also in the running of botanical gardens; for example, on St. Vincent Island, plantation slavery was vital for the economic success of the sugar colonies and for the maintenance of the breadfruit cultivation project in the St. Vincent botanical gardens.
Late modern botany
Building upon the gene-chromosome theory of heredity that originated with Gregor Mendel (1822–1884), August Weismann (1834–1914) proved that inheritance only takes place through gametes. No other cells can pass on inherited characters. The work of Katherine Esau (1898–1997) on plant anatomy is still a major foundation of modern botany. Her books Plant Anatomy and Anatomy of Seed Plants have been key plant structural biology texts for more than half a century.
The discipline of plant ecology was pioneered in the late 19th century by botanists such as Eugenius Warming, who produced the hypothesis that plants form communities, and his mentor and successor Christen C. Raunkiær whose system for describing plant life forms is still in use today. The concept that the composition of plant communities such as temperate broadleaf forest changes by a process of ecological succession was developed by Henry Chandler Cowles, Arthur Tansley and Frederic Clements. Clements is credited with the idea of climax vegetation as the most complex vegetation that an environment can support and Tansley introduced the concept of ecosystems to biology. Building on the extensive earlier work of Alphonse de Candolle, Nikolai Vavilov (1887–1943) produced accounts of the biogeography, centres of origin, and evolutionary history of economic plants.
Particularly since the mid-1960s there have been advances in understanding of the physics of plant physiological processes such as transpiration (the transport of water within plant tissues), the temperature dependence of rates of water evaporation from the leaf surface and the molecular diffusion of water vapour and carbon dioxide through stomatal apertures. These developments, coupled with new methods for measuring the size of stomatal apertures, and the rate of photosynthesis have enabled precise description of the rates of gas exchange between plants and the atmosphere. Innovations in statistical analysis by Ronald Fisher, Frank Yates and others at Rothamsted Experimental Station facilitated rational experimental design and data analysis in botanical research. The discovery and identification of the auxin plant hormones by Kenneth V. Thimann in 1948 enabled regulation of plant growth by externally applied chemicals. Frederick Campion Steward pioneered techniques of micropropagation and plant tissue culture controlled by plant hormones. The synthetic auxin 2,4-dichlorophenoxyacetic acid or 2,4-D was one of the first commercial synthetic herbicides.
20th century developments in plant biochemistry have been driven by modern techniques of organic chemical analysis, such as spectroscopy, chromatography and electrophoresis. With the rise of the related molecular-scale biological approaches of molecular biology, genomics, proteomics and metabolomics, the relationship between the plant genome and most aspects of the biochemistry, physiology, morphology and behaviour of plants can be subjected to detailed experimental analysis. The concept originally stated by Gottlieb Haberlandt in 1902 that all plant cells are totipotent and can be grown in vitro ultimately enabled the use of genetic engineering experimentally to knock out a gene or genes responsible for a specific trait, or to add genes such as GFP that report when a gene of interest is being expressed. These technologies enable the biotechnological use of whole plants or plant cell cultures grown in bioreactors to synthesise pesticides, antibiotics or other pharmaceuticals, as well as the practical application of genetically modified crops designed for traits such as improved yield.
Modern morphology recognises a continuum between the major morphological categories of root, stem (caulome), leaf (phyllome) and trichome. Furthermore, it emphasises structural dynamics. Modern systematics aims to reflect and discover phylogenetic relationships between plants. Modern Molecular phylogenetics largely ignores morphological characters, relying on DNA sequences as data. Molecular analysis of DNA sequences from most families of flowering plants enabled the Angiosperm Phylogeny Group to publish in 1998 a phylogeny of flowering plants, answering many of the questions about relationships among angiosperm families and species. The theoretical possibility of a practical method for identification of plant species and commercial varieties by DNA barcoding is the subject of active current research.
Scope and importance
The study of plants is vital because they underpin almost all animal life on Earth by generating a large proportion of the oxygen and food that, through aerobic respiration, provide humans and other organisms with the chemical energy they need to exist. Plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. As a by-product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. In addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. Plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil.
Historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. Botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. At each of these levels, a botanist may be concerned with the classification (taxonomy), phylogeny and evolution, structure (anatomy and morphology), or function (physiology) of plant life.
The strictest definition of "plant" includes only the "land plants" or embryophytes, which include seed plants (gymnosperms, including the pines, and flowering plants) and the free-sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. Embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. They have life cycles with alternating haploid and diploid phases. The sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. Other groups of organisms that were previously studied by botanists include bacteria (now studied in bacteriology), fungi (mycology) – including lichen-forming fungi (lichenology), non-chlorophyte algae (phycology), and viruses (virology). However, attention is still given to these groups by botanists, and fungi (including lichens) and photosynthetic protists are usually covered in introductory botany courses.
Palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. Cyanobacteria, the first oxygen-releasing photosynthetic organisms on Earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. The new photosynthetic plants (along with their algal relatives) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the ancient oxygen-free, reducing, atmosphere to one in which free oxygen has been abundant for more than 2 billion years.
Among the important botanical questions of the 21st century are the role of plants as primary producers in the global cycling of life's basic ingredients: energy, carbon, oxygen, nitrogen and water, and ways that our plant stewardship can help address the global environmental issues of resource management, conservation, human food security, biologically invasive organisms, carbon sequestration, climate change, and sustainability.
Human nutrition
Virtually all staple foods come either directly from primary production by plants, or indirectly from animals that eat them. Plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. This is what ecologists call the first trophic level. The modern forms of the major staple foods, such as hemp, teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics.
Botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity's ability to feed the world and provide food security for future generations. Botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. Ethnobotany is the study of the relationships between plants and people. When applied to the investigation of historical plant–people relationships, ethnobotany may be referred to as archaeobotany or palaeoethnobotany. Some of the earliest plant–people relationships arose among the indigenous peoples of Canada, in distinguishing edible plants from inedible ones. These relationships with plants were recorded by ethnobotanists.
Plant biochemistry
Plant biochemistry is the study of the chemical processes used by plants. Some of these processes are used in their primary metabolism like the photosynthetic Calvin cycle and crassulacean acid metabolism. Others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds.
Plants make various photosynthetic pigments, including xanthophylls, chlorophyll a and chlorophyll b, some of which can be separated and visualised by paper chromatography.
Plants and various other groups of photosynthetic eukaryotes collectively known as "algae" have unique organelles known as chloroplasts. Chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. Chloroplasts and cyanobacteria contain the blue-green pigment chlorophyll a. Chlorophyll a (as well as its plant and green algal-specific cousin chlorophyll b) absorbs light in the blue-violet and orange/red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. The energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy-rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen (O2) as a by-product.
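As a summary of the process just described, the net reaction of oxygenic photosynthesis can be written as the familiar overall equation, with glucose standing in for the carbohydrates ultimately produced:

\[ 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \]

In the plant itself the immediate product is the three-carbon sugar glyceraldehyde 3-phosphate, as described below.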
The light energy captured by chlorophyll a is initially in the form of electrons (and later a proton gradient) that is used to make molecules of ATP and NADPH which temporarily store and transport energy. Their energy is used in the light-independent reactions of the Calvin cycle by the enzyme rubisco to produce molecules of the 3-carbon sugar glyceraldehyde 3-phosphate (G3P). Glyceraldehyde 3-phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. Some of the glucose is converted to starch which is stored in the chloroplast. Starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose, is used for the same purpose in the sunflower family Asteraceae. Some of the glucose is converted to sucrose (common table sugar) for export to the rest of the plant.
Unlike in animals (which lack chloroplasts), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids and most amino acids. The fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes and making the polymer cutin, which is found in the plant cuticle that protects land plants from drying out.
Plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed.
Vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. Lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. Sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. It is widely regarded as a marker for the start of land plant evolution during the Ordovician period.
The concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the Ordovician and Silurian periods. Many monocots like maize and the pineapple and some dicots like the Asteraceae have since independently evolved pathways like crassulacean acid metabolism and the C4 carbon fixation pathway for photosynthesis, which avoid the losses resulting from photorespiration in the more common C3 carbon fixation pathway. These biochemical strategies are unique to land plants.
Medicine and materials
Phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. Some of these compounds are toxins such as the alkaloid coniine from hemlock. Others, such as the essential oils peppermint oil and lemon oil, are useful for their aroma, as flavourings and spices (e.g., capsaicin), and in medicine as pharmaceuticals as in opium from opium poppies. Many medicinal and recreational drugs, such as tetrahydrocannabinol (active ingredient in cannabis), caffeine, morphine and nicotine come directly from plants. Others are simple derivatives of botanical natural products. For example, the pain killer aspirin is the acetyl ester of salicylic acid, originally isolated from the bark of willow trees, and a wide range of opiate painkillers like heroin are obtained by chemical modification of morphine obtained from the opium poppy. Popular stimulants come from plants, such as caffeine from coffee, tea and chocolate, and nicotine from tobacco. Most alcoholic beverages come from fermentation of carbohydrate-rich plant products such as barley (beer), rice (sake) and grapes (wine). Native Americans have used various plants as ways of treating illness or disease for thousands of years. This knowledge of plants has been recorded by ethnobotanists and has in turn been used by pharmaceutical companies in drug discovery.
Plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine, yellow weld and blue woad used together to produce Lincoln green, indoxyl, source of the blue dye indigo traditionally used to dye denim and the artist's pigments gamboge and rose madder.
Sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. Charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal-smelting fuel, as a filter material and adsorbent and as an artist's material and is one of the three ingredients of gunpowder. Cellulose, the world's most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. Products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. Sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. Sweetgrass was used by Native Americans to ward off biting insects such as mosquitoes. These insect-repelling properties of sweetgrass were later traced by the American Chemical Society to the molecules phytol and coumarin.
Plant ecology
Plant ecology is the science of the functional relationships between plants and their habitats – the environments where they complete their life cycles. Plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. Some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. This information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. The goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change.
Plants depend on certain edaphic (soil) and climatic factors in their environment but can modify these factors too. For example, they can change their environment's albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. Plants compete with other organisms in their ecosystem for resources. They interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. Regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest.
Herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. Other organisms form mutually beneficial relationships with plants. For example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds.
Plants, climate and environmental change
Plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. For example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. Palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. Estimates of atmospheric carbon dioxide concentrations since the Palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. Ozone depletion can expose plants to higher levels of ultraviolet radiation-B (UV-B), resulting in lower growth rates. Moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction.
Genetics
Inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. Gregor Mendel discovered the genetic laws of inheritance by studying inherited traits such as shape in Pisum sativum (peas). What Mendel learned from studying plants has had far-reaching benefits outside of botany. Similarly, "jumping genes" were discovered by Barbara McClintock while she was studying maize. Nevertheless, there are some distinctive genetic differences between plants and other organisms.
Species boundaries in plants may be weaker than in animals, and cross species hybrids are often possible. A familiar example is peppermint, Mentha × piperita, a sterile hybrid between Mentha aquatica and spearmint, Mentha spicata. The many cultivated varieties of wheat are the result of multiple inter- and intra-specific crosses between wild species and their hybrids. Angiosperms with monoecious flowers often have self-incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. This is one of several methods used by plants to promote outcrossing. In many land plants the male and female gametes are produced by separate individuals. These species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes.
Unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. The formation of stem tubers in potato is one example. Particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. This is one of several types of apomixis that occur in plants. Apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent.
Most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. This can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid (endopolyploidy), or during gamete formation. An allopolyploid plant may result from a hybridisation event between two different species. Both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross-breed successfully with the parent population because there is a mismatch in chromosome numbers. These plants, which are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. Some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. Durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. The commercial banana is an example of a sterile, seedless triploid hybrid. Common dandelion is a triploid that produces viable seeds by apomixis.
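To make the ploidy terminology above concrete, the following minimal Python sketch multiplies a base chromosome number by a ploidy level; the base numbers used (x = 7 for wheat, x = 11 for banana) are commonly cited values included here for illustration rather than drawn from this article.

```python
def chromosome_count(base_number, ploidy):
    """Somatic chromosome number = base chromosome number (x) times ploidy level."""
    return base_number * ploidy

print(chromosome_count(7, 4))   # durum wheat, tetraploid     -> 28
print(chromosome_count(7, 6))   # bread wheat, hexaploid      -> 42
print(chromosome_count(11, 3))  # commercial banana, triploid -> 33
```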
As in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non-Mendelian. Chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants.
Molecular genetics
A considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the Thale cress, Arabidopsis thaliana, a weedy species in the mustard family (Brassicaceae). The genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of DNA, forming one of the smallest genomes among flowering plants. Arabidopsis was the first plant to have its genome sequenced, in 2000. The sequencing of some other relatively small genomes, of rice (Oryza sativa) and Brachypodium distachyon, has made them important model species for understanding the genetics, cellular and molecular biology of cereals, grasses and monocots generally.
Model plants such as Arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. Ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. Corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in plants. The single celled green alga Chlamydomonas reinhardtii, while not an embryophyte itself, contains a green-pigmented chloroplast related to that of land plants, making it useful for study. A red alga Cyanidioschyzon merolae has also been used to study some basic chloroplast functions. Spinach, peas, soybeans and a moss Physcomitrella patens are commonly used to study plant cell biology.
Agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus-inducing Ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. Schell and Van Montagu (1977) hypothesised that the Ti plasmid could be a natural vector for introducing the Nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. Today, genetic modification of the Ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops.
Epigenetics
Epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying DNA sequence but cause the organism's genes to behave (or "express themselves") differently. One example of epigenetic change is the marking of the genes by DNA methylation which determines whether they will be expressed or not. Gene expression can also be controlled by repressor proteins that attach to silencer regions of the DNA and prevent that region of the DNA code from being expressed. Epigenetic marks may be added or removed from the DNA during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. Epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell's life. Some epigenetic changes have been shown to be heritable, while others are reset in the germ cells.
Epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. A single fertilised egg cell, the zygote, gives rise to the many different plant cell types including parenchyma, xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. The process results from the epigenetic activation of some genes and inhibition of others.
Unlike animals, many plant cells, particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. Exceptions include highly lignified cells, the sclerenchyma and xylem which are dead at maturity, and the phloem sieve tubes which lack nuclei. While plants use many of the same epigenetic mechanisms as animals, such as chromatin remodelling, an alternative hypothesis is that plants set their gene expression patterns using positional information from the environment and surrounding cells to determine their developmental fate.
Epigenetic changes can lead to paramutations, which do not follow the Mendelian heritage rules. These epigenetic marks are carried from one generation to the next, with one allele inducing a change on the other.
Plant evolution
The chloroplasts of plants have a number of biochemical, structural and genetic similarities to cyanobacteria (commonly but incorrectly known as "blue-green algae") and are thought to be derived from an ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident.
The algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. There are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast structure and nutrient reserves. The algal division Charophyta, sister to the green algal division Chlorophyta, is considered to contain the ancestor of true plants. The Charophyte class Charophyceae and the land plant sub-kingdom Embryophyta together form the monophyletic group or clade Streptophytina.
Nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. They include mosses, liverworts and hornworts. Pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free-living gametophytes evolved during the Silurian period and diversified into several lineages during the late Silurian and early Devonian. Representatives of the lycopods have survived to the present day. By the end of the Devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved "megaspory" – their spores were of two distinct sizes, larger megaspores and smaller microspores. Their reduced gametophytes developed from megaspores retained within the spore-producing organs (megasporangia) of the sporophyte, a condition known as endospory. Seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers (integuments). The young sporophyte develops within the seed, which on germination splits to release it. The earliest known seed plants date from the latest Devonian Famennian stage. Following the evolution of the seed habit, seed plants diversified, giving rise to a number of now-extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. Gymnosperms produce "naked seeds" not fully enclosed in an ovary; modern representatives include conifers, cycads, Ginkgo, and Gnetales. Angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. Ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms.
Plant physiology
Plant physiology encompasses all the internal chemical and physical activities of plants associated with life. Chemicals obtained from the air, soil and water form the basis of all plant metabolism. The energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. Photoautotrophs, including all green plants, algae and cyanobacteria gather energy directly from sunlight by photosynthesis. Heterotrophs including all animals, all fungi, all completely parasitic plants, and non-photosynthetic bacteria take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. Respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis.
Molecules are moved within plants by transport processes that operate at a variety of spatial scales. Subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. Minerals and water are transported from roots to other parts of the plant in the transpiration stream. Diffusion, osmosis, and active transport and mass flow are all different ways transport can occur. Examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. In vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. Most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. Sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem and plant hormones are transported by a variety of processes.
Plant hormones
Plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. Tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of Mimosa pudica, the insect traps of Venus flytrap and bladderworts, and the pollinia of orchids.
The hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. Darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded "It is hardly an exaggeration to say that the tip of the radicle ... acts like the brain of one of the lower animals ... directing the several movements". About the same time, the role of auxins (from the Greek auxein, to grow) in control of plant growth was first outlined by the Dutch scientist Frits Went. The first known auxin, indole-3-acetic acid (IAA), which promotes cell growth, was only isolated from plants about 50 years later. This compound mediates the tropic responses of shoots and roots towards light and gravity. The finding in 1939 that plant callus could be maintained in culture containing IAA, followed by the observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of growth hormones, were key steps in the development of plant biotechnology and genetic modification.
Cytokinins are a class of plant hormones named for their control of cell division (especially cytokinesis). The natural cytokinin zeatin was discovered in corn, Zea mays, and is a derivative of the purine adenine. Zeatin is produced in roots and transported to shoots in the xylem where it promotes cell division, bud development, and the greening of chloroplasts. The gibberellins, such as gibberellic acid, are diterpenes synthesised from acetyl CoA via the mevalonate pathway. They are involved in the promotion of germination and dormancy-breaking in seeds, in regulation of plant height by controlling stem elongation and the control of flowering. Abscisic acid (ABA) occurs in all land plants except liverworts, and is synthesised from carotenoids in the chloroplasts and other plastids. It inhibits cell division, promotes seed maturation and dormancy, and promotes stomatal closure. It was so named because it was originally thought to control abscission. Ethylene is a gaseous hormone that is produced in all higher plant tissues from methionine. It is now known to be the hormone that stimulates or regulates fruit ripening and abscission, and it, or the synthetic growth regulator ethephon which is rapidly metabolised to produce ethylene, are used on an industrial scale to promote ripening of cotton, pineapples and other climacteric crops.
Another class of phytohormones is the jasmonates, first isolated from the oil of Jasminum grandiflorum, which regulate wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack.
In addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. This can result in adaptive changes in a process known as photomorphogenesis. Phytochromes are the photoreceptors in a plant that are sensitive to light.
Plant anatomy and morphology
Plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form.
All plants are multicellular eukaryotes, their DNA stored in nuclei. The characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. Other plastids contain storage products such as starch (amyloplasts) or lipids (elaioplasts). Uniquely, streptophyte cells and those of the green algal order Trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division.
The bodies of vascular plants including clubmosses, ferns and seed plants (gymnosperms and angiosperms) generally have aerial and subterranean subsystems. The shoots consist of stems bearing green photosynthesising leaves and reproductive structures. The underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. Non-vascular plants, the liverworts, hornworts and mosses do not produce ground-penetrating vascular roots and most of the plant participates in photosynthesis. The sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts.
The root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. Cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. Stolons and tubers are examples of shoots that can grow roots. Roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. In the event that one of the systems is lost, the other can often regrow it. In fact it is possible to grow an entire plant from a single leaf, as is the case with plants in Streptocarpus sect. Saintpaulia, or even a single cell – which can dedifferentiate into a callus (a mass of unspecialised cells) that can grow into a new plant.
In vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. Roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots.
Stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. Leaves gather sunlight and carry out photosynthesis. Large, flat, flexible, green leaves are called foliage leaves. Gymnosperms, such as conifers, cycads, Ginkgo, and gnetophytes are seed-producing plants with open seeds. Angiosperms are seed-producing plants that produce flowers and have enclosed seeds. Woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues: wood (secondary xylem) and bark (secondary phloem and cork). All gymnosperms and many angiosperms are woody plants. Some plants reproduce sexually, some asexually, and some via both means.
Although references to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. Furthermore, structures can be seen as processes, that is, combinations of processes.
Systematic botany
Systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. It involves, or is related to, biological classification, scientific taxonomy and phylogenetics. Biological classification is the method by which botanists group organisms into categories such as genera or species. Biological classification is a form of scientific taxonomy. Modern taxonomy is rooted in the work of Carl Linnaeus, who grouped species according to shared physical characteristics. These groupings have since been revised to align better with the Darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. While scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses DNA sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. The dominant classification system is called Linnaean taxonomy. It includes ranks and binomial nomenclature. The nomenclature of botanical organisms is codified in the International Code of Nomenclature for algae, fungi, and plants (ICN) and administered by the International Botanical Congress.
Kingdom Plantae belongs to Domain Eukaryota and is broken down recursively until each species is separately classified. The order is: Kingdom; Phylum (or Division); Class; Order; Family; Genus (plural genera); Species. The scientific name of a plant represents its genus and its species within the genus, resulting in a single worldwide name for each organism. For example, the tiger lily is Lilium columbianum. Lilium is the genus, and columbianum the specific epithet. The combination is the name of the species. When writing the scientific name of an organism, it is proper to capitalise the first letter in the genus and put all of the specific epithet in lowercase. Additionally, the entire term is ordinarily italicised (or underlined when italics are not available).
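The formatting rules just described are mechanical enough to automate. The following minimal sketch applies them to a genus and specific epithet; the function name and the use of asterisks to stand in for italics are assumptions made purely for illustration and are not part of any nomenclatural code or library.

```python
def format_binomial(genus: str, epithet: str) -> str:
    """Format a species name per the binomial convention described above:
    capitalised genus, lower-case specific epithet, whole name italicised
    (italics rendered here with asterisks for demonstration)."""
    name = f"{genus.strip().capitalize()} {epithet.strip().lower()}"
    return f"*{name}*"

# The example from the text: the tiger lily
print(format_binomial("lilium", "COLUMBIANUM"))  # *Lilium columbianum*
```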
The evolutionary relationships and heredity of a group of organisms is called its phylogeny. Phylogenetic studies attempt to discover phylogenies. The basic approach is to use similarities based on shared inheritance to determine relationships. As an example, species of Pereskia are trees or bushes with prominent leaves. They do not obviously resemble a typical leafless cactus such as an Echinocactus. However, both Pereskia and Echinocactus have spines produced from areoles (highly specialised pad-like structures) suggesting that the two genera are indeed related.
Judging relationships based on shared characters requires care, since plants may resemble one another through convergent evolution in which characters have arisen independently. Some euphorbias have leafless, rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure of their flowers make it clear that the two groups are not closely related. The cladistic method takes a systematic approach to characters, distinguishing between those that carry no information about shared evolutionary history – such as those evolved separately in different groups (homoplasies) or those left over from ancestors (plesiomorphies) – and derived characters, which have been passed down from innovations in a shared ancestor (apomorphies). Only derived characters, such as the spine-producing areoles of cacti, provide evidence for descent from a common ancestor. The results of cladistic analyses are expressed as cladograms: tree-like diagrams showing the pattern of evolutionary branching and descent.
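As a rough illustration of the cladistic reasoning above, the sketch below scans a toy presence/absence character matrix and reports characters present in every member of a candidate group but absent from an outgroup, the pattern expected of shared derived characters. The taxa, character names and function are invented for this example; real cladistic analyses optimise over whole trees and many more characters.

```python
# Toy presence/absence matrix (1 = character present, 0 = absent).
CHARACTERS = ["areoles_with_spines", "leafless_globular_body", "prominent_leaves"]
MATRIX = {
    "Pereskia":     [1, 0, 1],
    "Echinocactus": [1, 1, 0],
    "Euphorbia":    [0, 1, 0],  # outgroup: resembles cacti only by convergence
}

def candidate_synapomorphies(ingroup, outgroup):
    """Return characters present in every ingroup taxon but absent in the
    outgroup - the pattern expected of shared derived characters."""
    return [
        char
        for i, char in enumerate(CHARACTERS)
        if all(MATRIX[taxon][i] == 1 for taxon in ingroup) and MATRIX[outgroup][i] == 0
    ]

# Areoles support grouping the two cacti; the globular body does not.
print(candidate_synapomorphies(["Pereskia", "Echinocactus"], "Euphorbia"))
# ['areoles_with_spines']
```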
From the 1990s onwards, the predominant approach to constructing phylogenies for living plants has been molecular phylogenetics, which uses molecular characters, particularly DNA sequences, rather than morphological characters like the presence or absence of spines and areoles. The difference is that the genetic code itself is used to decide evolutionary relationships, instead of being used indirectly via the characters it gives rise to. Clive Stace describes this as having "direct access to the genetic basis of evolution." As a simple example, prior to the use of genetic evidence, fungi were thought either to be plants or to be more closely related to plants than animals. Genetic evidence suggests that the true evolutionary relationship of multicelled organisms is as shown in the cladogram below – fungi are more closely related to animals than to plants.
In 1998, the Angiosperm Phylogeny Group published a phylogeny for flowering plants based on an analysis of DNA sequences from most families of flowering plants. As a result of this work, many questions, such as which families represent the earliest branches of angiosperms, have now been answered. Investigating how plant species are related to each other allows botanists to better understand the process of evolution in plants. Despite the study of model plants and increasing use of DNA evidence, there is ongoing work and discussion among taxonomists about how best to classify plants into various taxa. Technological developments such as computers and electron microscopes have greatly increased the level of detail studied and speed at which data can be analysed.
Symbols
A few symbols are in current use in botany. A number of others are obsolete; for example, Linnaeus used the planetary symbols ♂ (Mars) for biennial plants, ♃ (Jupiter) for herbaceous perennials and ♄ (Saturn) for woody perennials, based on the planets' orbital periods of 2, 12 and 30 years; and Willdenow used ♄ (Saturn) for neuter in addition to ☿ (Mercury) for hermaphroditic. The following symbols are still used:
♀ female
♂ male
⚥ hermaphrodite/bisexual
⚲ vegetative (asexual) reproduction
◊ sex unknown
☉ annual
⚇ biennial
♾ perennial
☠ poisonous
🛈 further information
× crossbred hybrid
+ grafted hybrid
See also
Branches of botany
Evolution of plants
Glossary of botanical terms
Glossary of plant morphology
List of botany journals
List of botanists
List of botanical gardens
List of botanists by author abbreviation
List of domesticated plants
List of flowers
List of systems of plant taxonomy
Outline of botany
Timeline of British botany
Notes
References
Citations
Sources
Supporting Information
External links
Articles containing video clips
|
https://en.wikipedia.org/wiki/Bacteriophage
|
A bacteriophage, also known informally as a phage, is a virus that infects and replicates within bacteria and archaea. The term was derived from "bacteria" and the Greek φαγεῖν (phagein), meaning "to devour". Bacteriophages are composed of proteins that encapsulate a DNA or RNA genome, and may have structures that are either simple or elaborate. Their genomes may encode as few as four genes (e.g. MS2) or as many as hundreds of genes. Phages replicate within the bacterium following the injection of their genome into its cytoplasm.
Bacteriophages are among the most common and diverse entities in the biosphere. Bacteriophages are ubiquitous viruses, found wherever bacteria exist. It is estimated there are more than 10³¹ bacteriophages on the planet, more than every other organism on Earth, including bacteria, combined. Viruses are the most abundant biological entity in the water column of the world's oceans, and the second largest component of biomass after prokaryotes, where up to 9×10⁸ virions per millilitre have been found in microbial mats at the surface, and up to 70% of marine bacteria may be infected by phages.
Phages have been used since the late 20th century as an alternative to antibiotics in the former Soviet Union and Central Europe, as well as in France. They are seen as a possible therapy against multi-drug-resistant strains of many bacteria (see phage therapy).
Phages are known to interact with the immune system both indirectly via bacterial expression of phage-encoded proteins and directly by influencing innate immunity and bacterial clearance. Phage–host interactions are becoming increasingly important areas of research.
Classification
Bacteriophages occur abundantly in the biosphere, with different genomes and lifestyles. Phages are classified by the International Committee on Taxonomy of Viruses (ICTV) according to morphology and nucleic acid.
It has been suggested that members of Picobirnaviridae infect bacteria, but not mammals.
There are also many unassigned genera of the class Leviviricetes: Chimpavirus, Hohglivirus, Mahrahvirus, Meihzavirus, Nicedsevirus, Sculuvirus, Skrubnovirus, Tetipavirus and Winunavirus containing linear ssRNA genomes and the unassigned genus Lilyvirus of the order Caudovirales containing a linear dsDNA genome.
History
In 1896, Ernest Hanbury Hankin reported that something in the waters of the Ganges and Yamuna rivers in India had a marked antibacterial action against cholera and it could pass through a very fine porcelain filter. In 1915, British bacteriologist Frederick Twort, superintendent of the Brown Institution of London, discovered a small agent that infected and killed bacteria. He believed the agent must be one of the following:
a stage in the life cycle of the bacteria
an enzyme produced by the bacteria themselves, or
a virus that grew on and destroyed the bacteria
Twort's research was interrupted by the onset of World War I, as well as a shortage of funding and the discoveries of antibiotics.
Independently, French-Canadian microbiologist Félix d'Hérelle, working at the Pasteur Institute in Paris, announced on 3 September 1917 that he had discovered "an invisible, antagonistic microbe of the dysentery bacillus". For d'Hérelle, there was no question as to the nature of his discovery: "In a flash I had understood: what caused my clear spots was in fact an invisible microbe... a virus parasitic on bacteria." D'Hérelle called the virus a bacteriophage, a bacteria-eater (from the Greek φαγεῖν, meaning "to devour"). He also recorded a dramatic account of a man suffering from dysentery who was restored to good health by the bacteriophages. It was d'Hérelle who conducted much research into bacteriophages and introduced the concept of phage therapy. In 1919, in Paris, France, d'Hérelle conducted the first clinical application of a bacteriophage, with the first reported use in the United States being in 1922.
Nobel prizes awarded for phage research
In 1969, Max Delbrück, Alfred Hershey, and Salvador Luria were awarded the Nobel Prize in Physiology or Medicine for their discoveries of the replication of viruses and their genetic structure. Specifically the work of Hershey, as contributor to the Hershey–Chase experiment in 1952, provided convincing evidence that DNA, not protein, was the genetic material of life. Delbrück and Luria carried out the Luria–Delbrück experiment which demonstrated statistically that mutations in bacteria occur randomly and thus follow Darwinian rather than Lamarckian principles.
Uses
Phage therapy
Phages were discovered to be antibacterial agents and were used in the former Soviet Republic of Georgia (pioneered there by Giorgi Eliava with help from the co-discoverer of bacteriophages, Félix d'Hérelle) during the 1920s and 1930s for treating bacterial infections. They had widespread use, including treatment of soldiers in the Red Army. However, they were abandoned for general use in the West for several reasons:
Antibiotics were discovered and marketed widely. They were easier to make, store, and prescribe.
Medical trials of phages were carried out, but a basic lack of understanding of phages raised questions about the validity of these trials.
Publication of research in the Soviet Union was mainly in the Russian or Georgian languages and for many years was not followed internationally.
The use of phages has continued since the end of the Cold War in Russia, Georgia, and elsewhere in Central and Eastern Europe. The first regulated, randomized, double-blind clinical trial was reported in the Journal of Wound Care in June 2009, which evaluated the safety and efficacy of a bacteriophage cocktail to treat infected venous ulcers of the leg in human patients. The FDA approved the study as a Phase I clinical trial. The study's results demonstrated the safety of therapeutic application of bacteriophages, but did not show efficacy. The authors explained that the use of certain chemicals that are part of standard wound care (e.g. lactoferrin or silver) may have interfered with bacteriophage viability. Shortly after that, another controlled clinical trial in Western Europe (treatment of ear infections caused by Pseudomonas aeruginosa) was reported in the journal Clinical Otolaryngology in August 2009. The study concluded that bacteriophage preparations were safe and effective for treatment of chronic ear infections in humans. Additionally, there have been numerous animal and other experimental clinical trials evaluating the efficacy of bacteriophages for various diseases, such as infected burns and wounds, and cystic fibrosis-associated lung infections, among others. On the other hand, phages of Inoviridae have been shown to complicate biofilms involved in pneumonia and cystic fibrosis and to shelter the bacteria from drugs meant to eradicate disease, thus promoting persistent infection.
Meanwhile, bacteriophage researchers have been developing engineered viruses to overcome antibiotic resistance, including engineering the phage genes responsible for coding enzymes that degrade the biofilm matrix, phage structural proteins, and the enzymes responsible for lysis of the bacterial cell wall. Results have also shown that small, short-tailed T4 phages can be helpful in detecting E. coli in the human body.
Therapeutic efficacy of a phage cocktail was evaluated in a mouse model of nasal infection with multidrug-resistant (MDR) A. baumannii. Mice treated with the phage cocktail showed a 2.3-fold higher survival rate than untreated mice at seven days post-infection. In 2017, a patient with a pancreas compromised by MDR A. baumannii was put on several antibiotics; despite this, the patient's health continued to deteriorate over a four-month period. Without effective antibiotics, the patient was subjected to phage therapy using a cocktail containing nine different phages that had been demonstrated to be effective against MDR A. baumannii. Once on this therapy, the patient's downward clinical trajectory reversed and the patient returned to health.
D'Herelle "quickly learned that bacteriophages are found wherever bacteria thrive: in sewers, in rivers that catch waste runoff from pipes, and in the stools of convalescent patients." This includes rivers traditionally thought to have healing powers, including India's Ganges River.
Other
Food industry – Phages have increasingly been used to make food products safer and to forestall spoilage bacteria. Since 2006, the United States Food and Drug Administration (FDA) and United States Department of Agriculture (USDA) have approved several bacteriophage products. LMP-102 (Intralytix) was approved for treating ready-to-eat (RTE) poultry and meat products. In that same year, the FDA granted generally recognized as safe (GRAS) status to LISTEX (developed and produced by Micreos), which uses bacteriophages on cheese to kill Listeria monocytogenes bacteria. In July 2007, the same bacteriophages were approved for use on all food products. In 2011 the USDA confirmed that LISTEX is a clean-label processing aid. Research in the field of food safety is continuing to see if lytic phages are a viable option to control other food-borne pathogens in various food products.
Diagnostics – In 2011, the FDA cleared the first bacteriophage-based product for in vitro diagnostic use. The KeyPath MRSA/MSSA Blood Culture Test uses a cocktail of bacteriophage to detect Staphylococcus aureus in positive blood cultures and determine methicillin resistance or susceptibility. The test returns results in about five hours, compared to two to three days for standard microbial identification and susceptibility test methods. It was the first accelerated antibiotic-susceptibility test approved by the FDA.
Counteracting bioweapons and toxins – Government agencies in the West have for several years been looking to Georgia and the former Soviet Union for help with exploiting phages for counteracting bioweapons and toxins, such as anthrax and botulism. Developments are continuing among research groups in the U.S. Other uses include spray application in horticulture for protecting plants and vegetable produce from decay and the spread of bacterial disease. Other applications for bacteriophages are as biocides for environmental surfaces, e.g., in hospitals, and as preventative treatments for catheters and medical devices before use in clinical settings. The technology for phages to be applied to dry surfaces, e.g., uniforms, curtains, or even sutures for surgery now exists. Clinical trials reported in Clinical Otolaryngology show success in veterinary treatment of pet dogs with otitis.
The SEPTIC bacterium sensing and identification method uses the ion emission and its dynamics during phage infection and offers high specificity and speed for detection.
Phage display is a different use of phages involving a library of phages with a variable peptide linked to a surface protein. Each phage genome encodes the variant of the protein displayed on its surface (hence the name), providing a link between the peptide variant and its encoding gene. Variant phages from the library may be selected through their binding affinity to an immobilized molecule (e.g., botulism toxin) to neutralize it. The bound, selected phages can be multiplied by reinfecting a susceptible bacterial strain, allowing the peptides encoded in them to be retrieved for further study; a toy sketch of this selection-and-amplification loop is given after this list.
Antimicrobial drug discovery – Phage proteins often have antimicrobial activity and may serve as leads for peptidomimetics, i.e. drugs that mimic peptides. Phage-ligand technology makes use of phage proteins for various applications, such as binding of bacteria and bacterial components (e.g. endotoxin) and lysis of bacteria.
Basic research – Bacteriophages are important model organisms for studying principles of evolution and ecology.
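The phage display selection loop described above (bind, wash, elute, reinfect, repeat) can be caricatured in a few lines of code. The sketch below is purely illustrative: the random seven-residue peptides, the toy motif-matching "affinity" score and the fixed thresholds are all invented assumptions, not a model of real binding chemistry.

```python
import random

random.seed(0)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Toy library: each "phage" is identified by the 7-residue peptide it displays.
library = ["".join(random.choice(AMINO_ACIDS) for _ in range(7)) for _ in range(10_000)]

def binding_score(peptide: str) -> int:
    """Invented stand-in for binding affinity: residues matching a toy motif."""
    return sum(a == b for a, b in zip(peptide, "WHWHWHW"))

def panning_round(pool, threshold):
    """Keep binders above the threshold (bind/wash/elute), then 'amplify'
    the survivors back to the original pool size by reinfection."""
    binders = [p for p in pool if binding_score(p) >= threshold]
    if not binders:          # nothing bound at this stringency; keep the pool
        return pool
    return [random.choice(binders) for _ in range(len(pool))]

pool = library
for rnd, threshold in enumerate([2, 3, 4], start=1):
    pool = panning_round(pool, threshold)
    best = max(pool, key=binding_score)
    print(f"round {rnd}: best peptide {best} (score {binding_score(best)}/7)")
```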
Detriments
Dairy industry
Bacteriophages present in the environment can cause cheese to fail to ferment. To avoid this, mixed-strain starter cultures and culture rotation regimes can be used. Culture microbes – especially Lactococcus lactis and Streptococcus thermophilus – have been studied for genetic analysis and modification to improve phage resistance, with particular focus on plasmid and recombinant chromosomal modifications.
Some research has focused on the potential of bacteriophages as antimicrobials against foodborne pathogens and biofilm formation within the dairy industry. As the spread of antibiotic resistance is a main concern within the dairy industry, phages can serve as a promising alternative.
Replication
The life cycle of bacteriophages tends to be either a lytic cycle or a lysogenic cycle. In addition, some phages display pseudolysogenic behaviors.
With lytic phages such as the T4 phage, bacterial cells are broken open (lysed) and destroyed after immediate replication of the virion. As soon as the cell is destroyed, the phage progeny can find new hosts to infect. Lytic phages are more suitable for phage therapy. Some lytic phages undergo a phenomenon known as lysis inhibition, where completed phage progeny will not immediately lyse out of the cell if extracellular phage concentrations are high. This mechanism is not identical to that of the temperate phage going dormant and usually is temporary.
In contrast, the lysogenic cycle does not result in immediate lysing of the host cell. Those phages able to undergo lysogeny are known as temperate phages. Their viral genome will integrate with host DNA and replicate along with it, relatively harmlessly, or may even become established as a plasmid. The virus remains dormant until host conditions deteriorate, perhaps due to depletion of nutrients, then, the endogenous phages (known as prophages) become active. At this point they initiate the reproductive cycle, resulting in lysis of the host cell. As the lysogenic cycle allows the host cell to continue to survive and reproduce, the virus is replicated in all offspring of the cell. An example of a bacteriophage known to follow the lysogenic cycle and the lytic cycle is the phage lambda of E. coli.
Sometimes prophages may provide benefits to the host bacterium while they are dormant by adding new functions to the bacterial genome, in a phenomenon called lysogenic conversion. Examples are the conversion of harmless strains of Corynebacterium diphtheriae or Vibrio cholerae by bacteriophages to highly virulent ones that cause diphtheria or cholera, respectively. Strategies to combat certain bacterial infections by targeting these toxin-encoding prophages have been proposed.
Attachment and penetration
Bacterial cells are protected by a cell wall of polysaccharides, which are important virulence factors protecting bacterial cells against both immune host defenses and antibiotics.
Host growth conditions also influence the ability of the phage to attach and invade them. As phage virions do not move independently, they must rely on random encounters with the correct receptors when in solution, such as blood, lymphatic circulation, irrigation, soil water, etc.
Myovirus bacteriophages use a hypodermic syringe-like motion to inject their genetic material into the cell. After contacting the appropriate receptor, the tail fibers flex to bring the base plate closer to the surface of the cell. This is known as reversible binding. Once attached completely, irreversible binding is initiated and the tail contracts, possibly with the help of ATP present in the tail, injecting genetic material through the bacterial membrane. The injection is accomplished through a sort of bending motion in the shaft by going to the side, contracting closer to the cell and pushing back up. Podoviruses lack an elongated tail sheath like that of a myovirus, so instead, they use their small, tooth-like tail fibers enzymatically to degrade a portion of the cell membrane before inserting their genetic material.
Synthesis of proteins and nucleic acid
Within minutes, bacterial ribosomes start translating viral mRNA into protein. For RNA-based phages, RNA replicase is synthesized early in the process. Proteins modify the bacterial RNA polymerase so it preferentially transcribes viral mRNA. The host's normal synthesis of proteins and nucleic acids is disrupted, and it is forced to manufacture viral products instead. These products go on to become part of new virions within the cell, helper proteins that contribute to the assemblage of new virions, or proteins involved in cell lysis. In 1972, Walter Fiers (University of Ghent, Belgium) was the first to establish the complete nucleotide sequence of a gene and in 1976, of the viral genome of bacteriophage MS2. Some dsDNA bacteriophages encode ribosomal proteins, which are thought to modulate protein translation during phage infection.
Virion assembly
In the case of the T4 phage, the construction of new virus particles involves the assistance of helper proteins that act catalytically during phage morphogenesis. The base plates are assembled first, with the tails being built upon them afterward. The head capsids, constructed separately, will spontaneously assemble with the tails. During assembly of the phage T4 virion, the morphogenetic proteins encoded by the phage genes interact with each other in a characteristic sequence. Maintaining an appropriate balance in the amounts of each of these proteins produced during viral infection appears to be critical for normal phage T4 morphogenesis. The DNA is packed efficiently within the heads. The whole process takes about 15 minutes.
Release of virions
Phages may be released via cell lysis, by extrusion, or, in a few cases, by budding. Lysis, by tailed phages, is achieved by an enzyme called endolysin, which attacks and breaks down the cell wall peptidoglycan. An altogether different phage type, the filamentous phage, makes the host cell continually secrete new virus particles. Released virions are described as free, and, unless defective, are capable of infecting a new bacterium. Budding is associated with certain Mycoplasma phages. In contrast to virion release, phages displaying a lysogenic cycle do not kill the host and instead become long-term residents as prophages.
Communication
Research in 2017 revealed that the bacteriophage Φ3T makes a short viral protein that signals other bacteriophages to lie dormant instead of killing the host bacterium. Arbitrium is the name given to this protein by the researchers who discovered it.
Genome structure
Given the millions of different phages in the environment, phage genomes come in a variety of forms and sizes. RNA phages such as MS2 have the smallest genomes, with only a few kilobases. However, some DNA phages such as T4 may have large genomes with hundreds of genes; the size and shape of the capsid varies along with the size of the genome. The largest bacteriophage genomes reach a size of 735 kb.
Bacteriophage genomes can be highly mosaic, i.e. the genomes of many phage species appear to be composed of numerous individual modules. These modules may be found in other phage species in different arrangements. Mycobacteriophages, bacteriophages with mycobacterial hosts, have provided excellent examples of this mosaicism. In these mycobacteriophages, genetic assortment may be the result of repeated instances of site-specific recombination and illegitimate recombination (the result of phage genome acquisition of bacterial host genetic sequences). Evolutionary mechanisms shaping the genomes of bacterial viruses vary between different families and depend upon the type of the nucleic acid, characteristics of the virion structure, as well as the mode of the viral life cycle.
Some marine roseobacter phages contain deoxyuridine (dU) instead of deoxythymidine (dT) in their genomic DNA. There is some evidence that this unusual component is a mechanism to evade bacterial defense mechanisms such as restriction endonucleases and CRISPR/Cas systems which evolved to recognize and cleave sequences within invading phages, thereby inactivating them. Other phages have long been known to use unusual nucleotides. In 1963, Takahashi and Marmur identified a Bacillus phage that has dU substituting dT in its genome, and in 1977, Kirnos et al. identified a cyanophage containing 2-aminoadenine (Z) instead of adenine (A).
Systems biology
The field of systems biology investigates the complex networks of interactions within an organism, usually using computational tools and modeling. For example, a phage genome that enters into a bacterial host cell may express hundreds of phage proteins which will affect the expression of numerous host genes or the host's metabolism. All of these complex interactions can be described and simulated in computer models.
For instance, infection of Pseudomonas aeruginosa by the temperate phage PaP3 changed the expression of 38% (2160/5633) of its host's genes. Many of these effects are probably indirect, hence the challenge becomes to identify the direct interactions among bacteria and phage.
Several attempts have been made to map protein–protein interactions between phages and their hosts. For instance, bacteriophage lambda was found to form dozens of interactions with proteins of its host, E. coli. Again, the significance of many of these interactions remains unclear, but these studies suggest that there are most likely several key interactions and many indirect interactions whose role remains uncharacterized.
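As a concrete, minimal example of the kind of computer model mentioned above, the sketch below integrates a classic two-variable phage–bacteria population model: logistic bacterial growth, mass-action infection, and burst-size release of free phage. All parameter values and the 48-hour time span are arbitrary illustrative choices, not measurements for any particular phage–host pair, and the model omits realistic details such as a latent (infected but not yet lysed) class.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only (arbitrary units, roughly per mL and per hour)
r, K = 0.7, 1e7         # bacterial growth rate and carrying capacity
k = 1e-8                # adsorption (infection) rate constant
burst, decay = 50, 0.1  # phage burst size and free-phage decay rate

def model(t, y):
    B, P = y                              # susceptible bacteria, free phage
    infection = k * B * P                 # mass-action encounters
    dB = r * B * (1 - B / K) - infection  # logistic growth minus losses to infection
    dP = burst * infection - decay * P    # burst-size release minus phage decay
    return [dB, dP]

sol = solve_ivp(model, (0, 48), [1e5, 1e3], dense_output=True)
for t in np.linspace(0, 48, 7):
    B, P = sol.sol(t)
    print(f"t = {t:4.0f} h   bacteria = {B:9.2e}   phage = {P:9.2e}")
```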
Host resistance
Bacteriophages are a major threat to bacteria, and prokaryotes have evolved numerous mechanisms to block infection or to block the replication of bacteriophages within host cells. The CRISPR system is one such mechanism, as are retrons and the anti-toxin systems they encode. The Thoeris defense system is known to deploy a unique strategy for bacterial antiphage resistance via NAD+ degradation.
Bacteriophage–host symbiosis
Temperate phages are bacteriophages that integrate their genetic material into the host as extrachromosomal episomes or as a prophage during a lysogenic cycle. Some temperate phages can confer fitness advantages to their host in numerous ways, including giving antibiotic resistance through the transfer or introduction of antibiotic resistance genes (ARGs), protecting hosts from phagocytosis, protecting hosts from secondary infection through superinfection exclusion, enhancing host pathogenicity, or enhancing bacterial metabolism or growth. Bacteriophage–host symbiosis may benefit bacteria by providing selective advantages while passively replicating the phage genome.
In the environment
Metagenomics has allowed the in-water detection of bacteriophages that was not possible previously.
Also, bacteriophages have been used in hydrological tracing and modelling in river systems, especially where surface water and groundwater interactions occur. The use of phages is preferred to the more conventional dye marker because they are significantly less absorbed when passing through ground waters and they are readily detected at very low concentrations. Non-polluted water may contain approximately 2×10⁸ bacteriophages per ml.
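Concentrations like the one quoted above are conventionally estimated with a plaque assay: a diluted sample is plated on a bacterial lawn, visible plaques are counted, and the count is scaled back up by the dilution and the plated volume. The helper below is a generic sketch of that standard calculation; the function name and the example numbers are invented for illustration.

```python
def plaque_titer(plaque_count: int, dilution: float, volume_plated_ml: float) -> float:
    """Estimate phage concentration in plaque-forming units (PFU) per mL:
    PFU/mL = plaques / (dilution x volume plated)."""
    return plaque_count / (dilution * volume_plated_ml)

# Hypothetical count: 150 plaques from 0.1 mL of a 1e-5 dilution of a water sample
print(f"{plaque_titer(150, 1e-5, 0.1):.1e} PFU/mL")  # 1.5e+08 PFU/mL
```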
Bacteriophages are thought to contribute extensively to horizontal gene transfer in natural environments, principally via transduction, but also via transformation. Metagenomics-based studies also have revealed that viromes from a variety of environments harbor antibiotic-resistance genes, including those that could confer multidrug resistance.
In humans
Although phages do not infect humans, there are countless phage particles in the human body, given our extensive microbiome. Our phage population has been called the human phageome, including the "healthy gut phageome" (HGP) and the "diseased human phageome" (DHP). The active phageome of a healthy human (i.e., actively replicating as opposed to nonreplicating, integrated prophage) has been estimated to comprise dozens to thousands of different viruses.
There is evidence that bacteriophages and bacteria interact in the human gut microbiome both antagonistically and beneficially.
Preliminary studies have indicated that common bacteriophages are found in 62% of healthy individuals on average, while their prevalence was reduced by 42% and 54% on average in patients with ulcerative colitis (UC) and Crohn's disease (CD), respectively. Abundance of phages may also decline in the elderly.
The most common phages in the human intestine, found worldwide, are crAssphages. CrAssphages are transmitted from mother to child soon after birth, and there is some evidence suggesting that they may be transmitted locally. Each person develops their own unique crAssphage clusters. CrAss-like phages also may be present in primates besides humans.
Commonly studied bacteriophage
Among the countless phages, only a few have been studied in detail, including some historically important phages that were discovered in the early days of microbial genetics. These, especially the T-phages, helped scientists to discover important principles of gene structure and function.
186 phage
λ phage
Φ6 phage
Φ29 phage
ΦX174
Bacteriophage φCb5
G4 phage
M13 phage
MS2 phage (23–28 nm in size)
N4 phage
P1 phage
P2 phage
P4 phage
R17 phage
T2 phage
T4 phage (169 kbp genome, 200 nm long)
T7 phage
T12 phage
See also
Bacterivore
CrAssphage
CRISPR
DNA viruses
Macrophage
Phage ecology
Phage monographs (a comprehensive listing of phage and phage-associated monographs, 1921–present)
Phagemid
Polyphage
RNA viruses
Transduction
Viriome
Virophage, viruses that infect other viruses
References
Bibliography
External links
Biology
|
https://en.wikipedia.org/wiki/Bactericide
|
A bactericide or bacteriocide, sometimes abbreviated Bcidal, is a substance which kills bacteria. Bactericides are disinfectants, antiseptics, or antibiotics.
However, material surfaces can also have bactericidal properties based solely on their physical surface structure, as for example biomaterials like insect wings.
Disinfectants
The most used disinfectants are those applying
active chlorine (i.e., hypochlorites, chloramines, dichloroisocyanurate and trichloroisocyanurate, wet chlorine, chlorine dioxide, etc.),
active oxygen (peroxides, such as peracetic acid, potassium persulfate, sodium perborate, sodium percarbonate, and urea perhydrate),
iodine (povidone-iodine, Lugol's solution, iodine tincture, iodinated nonionic surfactants),
concentrated alcohols (mainly ethanol, 1-propanol (also called n-propanol) and 2-propanol (also called isopropanol), and mixtures thereof; further, 2-phenoxyethanol and 1- and 2-phenoxypropanols are used),
phenolic substances (such as phenol (also called "carbolic acid"), cresols such as thymol, halogenated (chlorinated, brominated) phenols, such as hexachlorophene, triclosan, trichlorophenol, tribromophenol, pentachlorophenol, salts and isomers thereof),
cationic surfactants, such as some quaternary ammonium cations (such as benzalkonium chloride, cetyl trimethylammonium bromide or chloride, didecyldimethylammonium chloride, cetylpyridinium chloride, benzethonium chloride) and others, non-quaternary compounds, such as chlorhexidine, glucoprotamine, octenidine dihydrochloride etc.),
strong oxidizers, such as ozone and permanganate solutions;
heavy metals and their salts, such as colloidal silver, silver nitrate, mercury chloride, phenylmercury salts, copper sulfate, copper oxide-chloride etc. Heavy metals and their salts are the most toxic and environmentally hazardous bactericides, and therefore their use is strongly discouraged or prohibited,
strong acids (phosphoric, nitric, sulfuric, amidosulfuric, toluenesulfonic acids), pH < 1, and
alkalis (sodium, potassium and calcium hydroxides), pH > 13, which kill bacteria particularly effectively at elevated temperatures (above 60 °C).
Antiseptics
As antiseptics (i.e., germicidal agents that can be used on the human or animal body, skin, mucous membranes, wounds and the like), only a few of the above-mentioned disinfectants can be used, under proper conditions (mainly concentration, pH, temperature and toxicity toward humans and animals). Among them, some important ones are
properly diluted chlorine preparations (e.g. Dakin's solution, 0.5% sodium or potassium hypochlorite solution, pH-adjusted to pH 7 – 8, or 0.5 – 1% solution of sodium benzenesulfochloramide (chloramine B)), some
iodine preparations, such as iodopovidone in various galenics (ointment, solutions, wound plasters), in the past also Lugol's solution,
peroxides such as urea perhydrate solutions and pH-buffered 0.1 – 0.25% peracetic acid solutions,
alcohols with or without antiseptic additives, used mainly for skin antisepsis,
weak organic acids such as sorbic acid, benzoic acid, lactic acid and salicylic acid
some phenolic compounds, such as hexachlorophene, triclosan and Dibromol, and
cationic surfactants, such as 0.05 – 0.5% benzalkonium, 0.5 – 4% chlorhexidine, 0.1 – 2% octenidine solutions.
Others are generally not applicable as safe antiseptics, either because of their corrosive or toxic nature.
Antibiotics
Bactericidal antibiotics kill bacteria; bacteriostatic antibiotics slow their growth or reproduction.
Bactericidal antibiotics that inhibit cell wall synthesis: the beta-lactam antibiotics (penicillin derivatives (penams), cephalosporins (cephems), monobactams, and carbapenems) and vancomycin.
Also bactericidal are daptomycin, fluoroquinolones, metronidazole, nitrofurantoin, co-trimoxazole, telithromycin.
Aminoglycosidic antibiotics are usually considered bactericidal, although they may be bacteriostatic with some organisms.
As of 2004, the distinction between bactericidal and bacteriostatic agents appeared to be clear according to the basic/clinical definition, but this only applies under strict laboratory conditions and it is important to distinguish microbiological and clinical definitions. The distinction is more arbitrary when agents are categorized in clinical situations. The supposed superiority of bactericidal agents over bacteriostatic agents is of little relevance when treating the vast majority of infections with gram-positive bacteria, particularly in patients with uncomplicated infections and noncompromised immune systems. Bacteriostatic agents have been effectively used for treatments that are considered to require bactericidal activity. Furthermore, some broad classes of antibacterial agents considered bacteriostatic can exhibit bactericidal activity against some bacteria on the basis of in vitro determination of MBC/MIC values. At high concentrations, bacteriostatic agents are often bactericidal against some susceptible organisms. The ultimate guide to treatment of any infection must be clinical outcome.
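The in vitro distinction mentioned above is often operationalised through the ratio of the minimum bactericidal concentration (MBC) to the minimum inhibitory concentration (MIC); one commonly cited convention regards an agent as bactericidal against a given organism when the MBC is within about four-fold of the MIC. The sketch below encodes that rule of thumb for illustration only; the threshold, function name and example values are assumptions, not a clinical standard drawn from this article.

```python
def classify_by_mbc_mic(mbc: float, mic: float, fold_cutoff: float = 4.0) -> str:
    """Classify an agent against one organism using the commonly cited
    rule of thumb that an MBC within ~4x of the MIC suggests bactericidal action."""
    return "bactericidal" if mbc / mic <= fold_cutoff else "bacteriostatic"

# Hypothetical laboratory values in mg/L
print(classify_by_mbc_mic(mbc=2.0, mic=1.0))   # bactericidal
print(classify_by_mbc_mic(mbc=32.0, mic=1.0))  # bacteriostatic
```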
Surfaces
Material surfaces can exhibit bactericidal properties because of their crystallographic surface structure.
In the mid-2000s it was shown that metallic nanoparticles can kill bacteria. The effect of silver nanoparticles, for example, depends on their size, with diameters of about 1–10 nm interacting most effectively with bacteria.
In 2013, cicada wings were found to have a selective anti-gram-negative bactericidal effect based on their physical surface structure. Mechanical deformation imposed by the more or less rigid nanopillars found on the wing kills bacteria that settle on the surface within minutes, an effect hence called mechano-bactericidal.
In 2020 researchers combined cationic polymer adsorption and femtosecond laser surface structuring to generate a bactericidal effect against both gram-positive Staphylococcus aureus and gram-negative Escherichia coli bacteria on borosilicate glass surfaces, providing a practical platform for the study of the bacteria-surface interaction.
See also
List of antibiotics
Microbicide
Virucide
References
|
https://en.wikipedia.org/wiki/Bohrium
|
Bohrium is a synthetic chemical element with the symbol Bh and atomic number 107. It is named after Danish physicist Niels Bohr. As a synthetic element, it can be created in particle accelerators but is not found in nature. All known isotopes of bohrium are highly radioactive; the most stable known isotope is 270Bh with a half-life of approximately 2.4 minutes, though the unconfirmed 278Bh may have a longer half-life of about 11.5 minutes.
In the periodic table, it is a d-block transactinide element. It is a member of the 7th period and belongs to the group 7 elements as the fifth member of the 6d series of transition metals. Chemistry experiments have confirmed that bohrium behaves as the heavier homologue to rhenium in group 7. The chemical properties of bohrium are characterized only partly, but they compare well with the chemistry of the other group 7 elements.
Introduction
History
Discovery
Two groups claimed discovery of the element. Evidence of bohrium was first reported in 1976 by a Soviet research team led by Yuri Oganessian, in which targets of bismuth-209 and lead-208 were bombarded with accelerated nuclei of chromium-54 and manganese-55 respectively. Two activities, one with a half-life of one to two milliseconds, and the other with an approximately five-second half-life, were seen. Since the ratio of the intensities of these two activities was constant throughout the experiment, it was proposed that the first was from the isotope bohrium-261 and that the second was from its daughter dubnium-257. Later, the dubnium isotope was corrected to dubnium-258, which indeed has a five-second half-life (dubnium-257 has a one-second half-life); however, the half-life observed for its parent is much shorter than the half-lives later observed in the definitive discovery of bohrium at Darmstadt in 1981. The IUPAC/IUPAP Transfermium Working Group (TWG) concluded that while dubnium-258 was probably seen in this experiment, the evidence for the production of its parent bohrium-262 was not convincing enough.
In 1981, a German research team led by Peter Armbruster and Gottfried Münzenberg at the GSI Helmholtz Centre for Heavy Ion Research (GSI Helmholtzzentrum für Schwerionenforschung) in Darmstadt bombarded a target of bismuth-209 with accelerated nuclei of chromium-54 to produce 5 atoms of the isotope bohrium-262:
209Bi + 54Cr → 262Bh + 1n
This discovery was further substantiated by their detailed measurements of the alpha decay chain of the produced bohrium atoms to previously known isotopes of fermium and californium. The IUPAC/IUPAP Transfermium Working Group (TWG) recognised the GSI collaboration as official discoverers in their 1992 report.
Proposed names
In September 1992, the German group suggested the name nielsbohrium with symbol Ns to honor the Danish physicist Niels Bohr. The Soviet scientists at the Joint Institute for Nuclear Research in Dubna, Russia had suggested this name be given to element 105 (which was finally called dubnium) and the German team wished to recognise both Bohr and the fact that the Dubna team had been the first to propose the cold fusion reaction, and simultaneously help to solve the controversial problem of the naming of element 105. The Dubna team agreed with the German group's naming proposal for element 107.
There was an element naming controversy as to what the elements from 104 to 106 were to be called; the IUPAC adopted unnilseptium (symbol Uns) as a temporary, systematic element name for this element. In 1994 a committee of IUPAC recommended that element 107 be named bohrium, not nielsbohrium, since there was no precedent for using a scientist's complete name in the naming of an element. This was opposed by the discoverers as there was some concern that the name might be confused with boron and in particular the distinguishing of the names of their respective oxyanions, bohrate and borate. The matter was handed to the Danish branch of IUPAC which, despite this, voted in favour of the name bohrium, and thus the name bohrium for element 107 was recognized internationally in 1997; the names of the respective oxyanions of boron and bohrium remain unchanged despite their homophony.
Isotopes
Bohrium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Twelve different isotopes of bohrium have been reported with atomic masses 260–262, 264–267, 270–272, 274, and 278, one of which, bohrium-262, has a known metastable state. All of these but the unconfirmed 278Bh decay only through alpha decay, although some unknown bohrium isotopes are predicted to undergo spontaneous fission.
The lighter isotopes usually have shorter half-lives; half-lives of under 100 ms for 260Bh, 261Bh, 262Bh, and 262mBh were observed. 264Bh, 265Bh, 266Bh, and 271Bh are more stable at around 1 s, and 267Bh and 272Bh have half-lives of about 10 s. The heaviest isotopes are the most stable, with 270Bh and 274Bh having measured half-lives of about 2.4 min and 40 s respectively, and the even heavier unconfirmed isotope 278Bh appearing to have an even longer half-life of about 11.5 minutes.
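For any of the half-lives quoted above, the surviving fraction of a sample after a time t follows the usual exponential decay law; a worked example for the most stable confirmed isotope, 270Bh, is sketched below using the 2.4-minute half-life.

```latex
% Worked example: surviving fraction of ^{270}Bh after t = 10 min,
% using the half-life T_{1/2} = 2.4 min quoted above.
\frac{N(t)}{N_0} = \left(\frac{1}{2}\right)^{t/T_{1/2}},
\qquad
\frac{N(10\,\mathrm{min})}{N_0} = \left(\frac{1}{2}\right)^{10/2.4} \approx 0.056
```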
The most proton-rich isotopes with masses 260, 261, and 262 were directly produced by cold fusion, those with masses 262 and 264 were reported in the decay chains of meitnerium and roentgenium, while the neutron-rich isotopes with masses 265, 266, and 267 were created in irradiations of actinide targets. The five most neutron-rich ones with masses 270, 271, 272, 274, and 278 (unconfirmed) appear in the decay chains of 282Nh, 287Mc, 288Mc, 294Ts, and 290Fl respectively. The half-lives of bohrium isotopes range from about ten milliseconds for 262mBh up to about 2.4 minutes for 270Bh and about 40 seconds for 274Bh, extending to about 11.5 minutes for the unconfirmed 278Bh, one of the longest-lived known superheavy nuclides.
Predicted properties
Very few properties of bohrium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that bohrium (and its parents) decays very quickly. A few singular chemistry-related properties have been measured, but properties of bohrium metal remain unknown and only predictions are available.
Chemical
Bohrium is the fifth member of the 6d series of transition metals and the heaviest member of group 7 in the periodic table, below manganese, technetium and rhenium. All the members of the group readily portray their group oxidation state of +7 and the state becomes more stable as the group is descended. Thus bohrium is expected to form a stable +7 state. Technetium also shows a stable +4 state whilst rhenium exhibits stable +4 and +3 states. Bohrium may therefore show these lower states as well. The higher +7 oxidation state is more likely to exist in oxyanions, such as perbohrate, BhO4−, analogous to the lighter permanganate, pertechnetate, and perrhenate. Nevertheless, bohrium(VII) is likely to be unstable in aqueous solution, and would probably be easily reduced to the more stable bohrium(IV).
Technetium and rhenium are known to form volatile heptoxides M2O7 (M = Tc, Re), so bohrium should also form the volatile oxide Bh2O7. The oxide should dissolve in water to form perbohric acid, HBhO4.
Rhenium and technetium form a range of oxyhalides from the halogenation of the oxide. The chlorination of the oxide forms the oxychlorides MO3Cl, so BhO3Cl should be formed in this reaction. Fluorination results in MO3F and MO2F3 for the heavier elements in addition to the rhenium compounds ReOF5 and ReF7. Therefore, oxyfluoride formation for bohrium may help to indicate eka-rhenium properties. Since the oxychlorides are asymmetrical, and they should have increasingly large dipole moments going down the group, they should become less volatile in the order TcO3Cl > ReO3Cl > BhO3Cl: this was experimentally confirmed in 2000 by measuring the enthalpies of adsorption of these three compounds. The values for TcO3Cl and ReO3Cl are −51 kJ/mol and −61 kJ/mol respectively; the experimental value for BhO3Cl is −77.8 kJ/mol, very close to the theoretically expected value of −78.5 kJ/mol.
Physical and atomic
Bohrium is expected to be a solid under normal conditions and assume a hexagonal close-packed crystal structure (c/a = 1.62), similar to its lighter congener rhenium. Early predictions by Fricke estimated its density at 37.1 g/cm3, but newer calculations predict a somewhat lower value of 26–27 g/cm3.
The atomic radius of bohrium is expected to be around 128 pm. Due to the relativistic stabilization of the 7s orbital and destabilization of the 6d orbital, the Bh+ ion is predicted to have an electron configuration of [Rn] 5f14 6d4 7s2, giving up a 6d electron instead of a 7s electron, which is the opposite of the behavior of its lighter homologues manganese and technetium. Rhenium, on the other hand, follows its heavier congener bohrium in giving up a 5d electron before a 6s electron, as relativistic effects have become significant by the sixth period, where they cause among other things the yellow color of gold and the low melting point of mercury. The Bh2+ ion is expected to have an electron configuration of [Rn] 5f14 6d3 7s2; in contrast, the Re2+ ion is expected to have a [Xe] 4f14 5d5 configuration, this time analogous to manganese and technetium. The ionic radius of hexacoordinate heptavalent bohrium is expected to be 58 pm (heptavalent manganese, technetium, and rhenium having values of 46, 57, and 53 pm respectively). Pentavalent bohrium should have a larger ionic radius of 83 pm.
Experimental chemistry
In 1995, the first report on attempted isolation of the element was unsuccessful, prompting new theoretical studies into how best to investigate bohrium (using its lighter homologues technetium and rhenium for comparison) and how to remove unwanted contaminating elements such as the trivalent actinides, the group 5 elements, and polonium.
In 2000, it was confirmed that although relativistic effects are important, bohrium behaves like a typical group 7 element. A team at the Paul Scherrer Institute (PSI) carried out a chemistry experiment using six atoms of 267Bh produced in the reaction between 249Bk and 22Ne ions. The resulting atoms were thermalised and reacted with an HCl/O2 mixture to form a volatile oxychloride. The reaction also produced isotopes of its lighter homologues, technetium (as 108Tc) and rhenium (as 169Re). The isothermal adsorption curves were measured and gave strong evidence for the formation of a volatile oxychloride with properties similar to that of rhenium oxychloride. This placed bohrium as a typical member of group 7. The adsorption enthalpies of the oxychlorides of technetium, rhenium, and bohrium were measured in this experiment, agreeing very well with the theoretical predictions and implying a sequence of decreasing oxychloride volatility down group 7 of TcO3Cl > ReO3Cl > BhO3Cl.
2 Bh + 3 O2 + 2 HCl → 2 BhO3Cl + H2
The longer-lived heavy isotopes of bohrium, produced as the daughters of heavier elements, offer advantages for future radiochemical experiments. Although the heavy isotope 274Bh requires a rare and highly radioactive berkelium target for its production, the isotopes 272Bh, 271Bh, and 270Bh can be readily produced as daughters of more easily produced moscovium and nihonium isotopes.
Notes
References
Bibliography
External links
Bohrium at The Periodic Table of Videos (University of Nottingham)
Chemical elements
Transition metals
Synthetic elements
Chemical elements with hexagonal close-packed structure
|
https://en.wikipedia.org/wiki/Bipedalism
|
Bipedalism is a form of terrestrial locomotion where a tetrapod moves by means of its two rear (or lower) limbs or legs. An animal or machine that usually moves in a bipedal manner is known as a biped, meaning 'two feet' (from Latin bis 'double' and pes 'foot'). Types of bipedal movement include walking or running (a bipedal gait) and hopping.
Several groups of modern species are habitual bipeds whose normal method of locomotion is two-legged. In the Triassic period some groups of archosaurs (a group that includes crocodiles and dinosaurs) developed bipedalism; among the dinosaurs, all the early forms and many later groups were habitual or exclusive bipeds; the birds are members of a clade of exclusively bipedal dinosaurs, the theropods. Within mammals, habitual bipedalism has evolved multiple times, with the macropods, kangaroo rats and mice, springhare, hopping mice, pangolins and hominin apes (australopithecines, including humans) as well as various other extinct groups evolving the trait independently.
A larger number of modern species intermittently or briefly use a bipedal gait. Several lizard species move bipedally when running, usually to escape from threats. Many primate and bear species will adopt a bipedal gait in order to reach food or explore their environment, though there are a few cases where they walk on their hind limbs only. Several arboreal primate species, such as gibbons and indriids, exclusively walk on two legs during the brief periods they spend on the ground. Many animals rear up on their hind legs while fighting or copulating. Some animals commonly stand on their hind legs to reach food, keep watch, threaten a competitor or predator, or pose in courtship, but do not move bipedally.
Etymology
The word is derived from the Latin words bi(s) 'two' and ped- 'foot', as contrasted with quadruped 'four feet'.
Advantages
Limited and exclusive bipedalism can offer a species several advantages. Bipedalism raises the head; this allows a greater field of vision with improved detection of distant dangers or resources, access to deeper water for wading animals and allows the animals to reach higher food sources with their mouths. While upright, non-locomotory limbs become free for other uses, including manipulation (in primates and rodents), flight (in birds), digging (in the giant pangolin), combat (in bears, great apes and the large monitor lizard) or camouflage.
The maximum bipedal speed appears slower than the maximum speed of quadrupedal movement with a flexible backbone – both the ostrich and the red kangaroo can reach speeds of about 70 km/h (43 mph), while the cheetah can exceed 100 km/h (62 mph). Even though bipedalism is slower at first, over long distances, it has allowed humans to outrun most other animals according to the endurance running hypothesis. Bipedality in kangaroo rats has been hypothesized to improve locomotor performance, which could aid in escaping from predators.
Facultative and obligate bipedalism
Zoologists often label behaviors, including bipedalism, as "facultative" (i.e. optional) or "obligate" (the animal has no reasonable alternative). Even this distinction is not completely clear-cut — for example, humans other than infants normally walk and run in biped fashion, but almost all can crawl on hands and knees when necessary. There are even reports of humans who normally walk on all fours with their feet but not their knees on the ground, but these cases are a result of conditions such as Uner Tan syndrome — very rare genetic neurological disorders rather than normal behavior. Even if one ignores exceptions caused by some kind of injury or illness, there are many unclear cases, including the fact that "normal" humans can crawl on hands and knees. This article therefore avoids the terms "facultative" and "obligate", and focuses on the range of styles of locomotion normally used by various groups of animals. Normal humans may be considered "obligate" bipeds because the alternatives are very uncomfortable and usually only resorted to when walking is impossible.
Movement
There are a number of states of movement commonly associated with bipedalism.
Standing. Staying still on both legs. In most bipeds this is an active process, requiring constant adjustment of balance.
Walking. One foot in front of another, with at least one foot on the ground at any time.
Running. One foot in front of another, with periods where both feet are off the ground.
Jumping/hopping. Moving by a series of jumps with both feet moving together.
Bipedal animals
The great majority of living terrestrial vertebrates are quadrupeds, with bipedalism exhibited by only a handful of living groups. Humans, gibbons and large birds walk by raising one foot at a time. On the other hand, most macropods, smaller birds, lemurs and bipedal rodents move by hopping on both legs simultaneously. Tree kangaroos are able to walk or hop, most commonly alternating feet when moving arboreally and hopping on both feet simultaneously when on the ground.
Extant reptiles
Many species of lizards become bipedal during high-speed, sprint locomotion, including the world's fastest lizard, the spiny-tailed iguana (genus Ctenosaura).
Early reptiles and lizards
The first known biped is the bolosaurid Eudibamus whose fossils date from 290 million years ago. Its long hind-legs, short forelegs, and distinctive joints all suggest bipedalism. The species became extinct in the early Permian.
Archosaurs (includes crocodilians and dinosaurs)
Birds
All birds are bipeds, as is the case for all theropod dinosaurs. However, hoatzin chicks have claws on their wings which they use for climbing.
Other archosaurs
Bipedalism evolved more than once in archosaurs, the group that includes both dinosaurs and crocodilians. All dinosaurs are thought to be descended from a fully bipedal ancestor, perhaps similar to Eoraptor.
Dinosaurs diverged from their archosaur ancestors approximately 230 million years ago during the Middle to Late Triassic period, roughly 20 million years after the Permian-Triassic extinction event wiped out an estimated 95 percent of all life on Earth. Radiometric dating of fossils from the early dinosaur genus Eoraptor establishes its presence in the fossil record at this time. Paleontologists suspect Eoraptor resembles the common ancestor of all dinosaurs; if this is true, its traits suggest that the first dinosaurs were small, bipedal predators. The discovery of primitive, dinosaur-like ornithodirans such as Marasuchus and Lagerpeton in Argentinian Middle Triassic strata supports this view; analysis of recovered fossils suggests that these animals were indeed small, bipedal predators.
Bipedal movement also re-evolved in a number of other dinosaur lineages such as the iguanodonts. Some extinct members of Pseudosuchia, a sister group to the avemetatarsalians (the group including dinosaurs and relatives), also evolved bipedal forms – a poposauroid from the Triassic, Effigia okeeffeae, is thought to have been bipedal. Pterosaurs were previously thought to have been bipedal, but recent trackways have all shown quadrupedal locomotion.
Mammals
A number of mammal groups, living and extinct, have independently evolved bipedalism as their main form of locomotion - for example humans, giant pangolins, the extinct giant ground sloths, numerous species of jumping rodents and macropods. Humans, whose bipedalism has been extensively studied, are covered in the next section. Macropods are believed to have evolved bipedal hopping only once in their evolution, at some time no later than 45 million years ago.
Bipedal movement is less common among mammals, most of which are quadrupedal. All primates possess some bipedal ability, though most species primarily use quadrupedal locomotion on land. Primates aside, the macropods (kangaroos, wallabies and their relatives), kangaroo rats and mice, hopping mice and springhare move bipedally by hopping. Very few non-primate mammals commonly move bipedally with an alternating leg gait. Exceptions are the ground pangolin and in some circumstances the tree kangaroo. One black bear, Pedals, became famous locally and on the internet for having a frequent bipedal gait, although this is attributed to injuries on the bear's front paws. A two-legged fox was filmed in a Derbyshire garden in 2023, most likely having been born that way.
Primates
Most bipedal animals move with their backs close to horizontal, using a long tail to balance the weight of their bodies. The primate version of bipedalism is unusual because the back is close to upright (completely upright in humans), and the tail may be absent entirely. Many primates can stand upright on their hind legs without any support.
Chimpanzees, bonobos, gorillas, gibbons and baboons exhibit forms of bipedalism. On the ground sifakas move like all indrids with bipedal sideways hopping movements of the hind legs, holding their forelimbs up for balance. Geladas, although usually quadrupedal, will sometimes move between adjacent feeding patches with a squatting, shuffling bipedal form of locomotion. However, they can only do so for brief periods, as their bodies are not adapted for constant bipedal locomotion.
Humans are the only primates who are normally bipedal, due to an extra curve in the spine which stabilizes the upright position, as well as shorter arms relative to the legs than is the case for the nonhuman great apes. The evolution of human bipedalism began in primates about four million years ago, or as early as seven million years ago with Sahelanthropus, or about 12 million years ago with Danuvius guggenmosi. One hypothesis for human bipedalism is that it evolved as a result of differentially successful survival from carrying food to share with group members, although there are alternative hypotheses.
Injured individuals
Injured chimpanzees and bonobos have been capable of sustained bipedalism.
Three captive primates, the macaque Natasha and the chimpanzees Oliver and Poko, were found to move bipedally. Natasha switched to exclusive bipedalism after an illness, while Poko was discovered in captivity in a tall, narrow cage. Oliver reverted to knuckle-walking after developing arthritis. Non-human primates often use bipedal locomotion when carrying food, or while moving through shallow water.
Limited bipedalism
Limited bipedalism in mammals
Other mammals engage in limited, non-locomotory bipedalism. A number of other animals, such as rats, raccoons, and beavers, will squat on their hindlegs to manipulate some objects but revert to four limbs when moving (the beaver will move bipedally if transporting wood for its dam, as will the raccoon when holding food). Bears will fight in a bipedal stance to use their forelegs as weapons. A number of mammals will adopt a bipedal stance in specific situations such as feeding or fighting. Ground squirrels and meerkats will stand on hind legs to survey their surroundings, but will not walk bipedally. Dogs (e.g. Faith) can stand or move on two legs if trained, or if a birth defect or injury precludes quadrupedalism. The gerenuk antelope stands on its hind legs while eating from trees, as did the extinct giant ground sloth and chalicotheres. The spotted skunk will rear up and walk on its front legs when threatened, facing the attacker so that its anal glands, capable of spraying an offensive oil, face its attacker.
Limited bipedalism in non-mammals (and non-birds)
Bipedalism is unknown among the amphibians. Among the non-archosaur reptiles bipedalism is rare, but it is found in the "reared-up" running of lizards such as agamids and monitor lizards. Many reptile species will also temporarily adopt bipedalism while fighting. One genus of basilisk lizard can run bipedally across the surface of water for some distance. Among arthropods, cockroaches are known to move bipedally at high speeds. Bipedalism is rarely found outside terrestrial animals, though at least two types of octopus walk bipedally on the sea floor using two of their arms, allowing the remaining arms to be used to camouflage the octopus as a mat of algae or a floating coconut.
Evolution of human bipedalism
There are at least twelve distinct hypotheses as to how and why bipedalism evolved in humans, and also some debate as to when. Bipedalism evolved well before the large human brain or the development of stone tools. Bipedal specializations are found in Australopithecus fossils from 4.2 to 3.9 million years ago, and recent studies have suggested that obligate bipedal hominid species were present as early as 7 million years ago. Nonetheless, the evolution of bipedalism was accompanied by significant changes in the spine, including the forward movement in position of the foramen magnum, where the spinal cord leaves the cranium. Sexual dimorphism (physical differences between males and females) in the lumbar spine resembling that of modern humans has also been observed in earlier hominins such as Australopithecus africanus. This dimorphism has been seen as an evolutionary adaptation of females to bear lumbar load better during pregnancy, an adaptation that non-bipedal primates would not need to make. Adopting bipedalism would have reduced the need for shoulder stability, which allowed the shoulder and other limbs to become more independent of each other and adapt for specific suspensory behaviors. In addition to the change in shoulder stability, changing locomotion would have increased the demand for shoulder mobility, which would have propelled the evolution of bipedalism forward. The different hypotheses are not necessarily mutually exclusive, and a number of selective forces may have acted together to lead to human bipedalism. It is important to distinguish between adaptations for bipedalism and adaptations for running, which came later still.
The form and function of modern-day humans' upper bodies appear to have evolved from living in a more forested setting. In such an environment, the ability to travel arboreally would have been advantageous. It has also been proposed that, like some modern-day apes, early hominins had undergone a knuckle-walking stage prior to adapting the back limbs for bipedality while retaining forearms capable of grasping. Proposed causes for the evolution of human bipedalism include freeing the hands for carrying and using tools, sexual dimorphism in provisioning, changes in climate and environment (from jungle to savanna) that favored a more elevated eye-position, and reduction of the amount of skin exposed to the tropical sun. It is possible that bipedalism provided a variety of benefits to the hominin species, and scientists have suggested multiple reasons for the evolution of human bipedalism. There is also not only the question of why the earliest hominins were partially bipedal but also why hominins became more bipedal over time. For example, the postural feeding hypothesis describes how the earliest hominins became bipedal for the benefit of reaching food in trees, while the savanna-based theory describes how the late hominins that started to settle on the ground became increasingly bipedal.
Multiple factors
Napier (1963) argued that it is unlikely that a single factor drove the evolution of bipedalism. He stated "It seems unlikely that any single factor was responsible for such a dramatic change in behaviour. In addition to the advantages of accruing from ability to carry objects – food or otherwise – the improvement of the visual range and the freeing of the hands for purposes of defence and offence may equally have played their part as catalysts." Sigmon (1971) demonstrated that chimpanzees exhibit bipedalism in different contexts and argued that no single behavioral factor explains its origin; rather, the bipedal behaviors shown in these varied contexts can be seen as a preadaptation for human bipedalism. Day (1986) emphasized three major pressures that drove the evolution of bipedalism: food acquisition, predator avoidance, and reproductive success. Ko (2015) stated that there are two main questions regarding bipedalism: 1. Why were the earliest hominins partially bipedal? and 2. Why did hominins become more bipedal over time? He argued that these questions can be answered with a combination of prominent theories such as the savanna-based, postural feeding, and provisioning hypotheses.
Savannah-based theory
According to the Savanna-based theory, hominines came down from the trees and adapted to life on the savanna by walking erect on two feet. The theory suggests that early hominids were forced to adapt to bipedal locomotion on the open savanna after they left the trees. One of the proposed mechanisms was the knuckle-walking hypothesis, which states that human ancestors used quadrupedal locomotion on the savanna, as evidenced by morphological characteristics found in Australopithecus anamensis and Australopithecus afarensis forelimbs; on this view it is more parsimonious to assume that knuckle-walking evolved once, as a synapomorphy shared with Pan and Gorilla, and was subsequently lost in Australopithecus, than that it developed independently in the Pan and Gorilla lineages. The evolution of an orthograde posture would have been very helpful on a savanna, as it would allow the ability to look over tall grasses in order to watch out for predators, or to terrestrially hunt and sneak up on prey. It was also suggested in P. E. Wheeler's "The evolution of bipedality and loss of functional body hair in hominids" that a possible advantage of bipedalism in the savanna was reducing the amount of surface area of the body exposed to the sun, helping to regulate body temperature. In fact, Elizabeth Vrba's turnover pulse hypothesis supports the savanna-based theory by explaining the shrinking of forested areas due to global warming and cooling, which forced animals out into the open grasslands and caused the need for hominids to acquire bipedality.
Others state that hominines had already achieved the bipedal adaptation that was later used on the savanna. The fossil evidence reveals that early bipedal hominins were still adapted to climbing trees at the time they were also walking upright. It is possible that bipedalism evolved in the trees and was later carried over to the savanna. Humans and orangutans are unique among apes in showing a bipedal reactive adaptation when climbing on thin branches, in which they increase hip and knee extension in relation to the diameter of the branch; this can increase an arboreal feeding range and can be attributed to a convergent evolution of bipedalism in arboreal environments. Hominine fossils found in dry grassland environments led anthropologists to believe hominines lived, slept, walked upright, and died only in those environments because no hominine fossils were found in forested areas. However, fossilization is a rare occurrence—the conditions must be just right in order for an organism that dies to become fossilized, and finding such a fossil is rarer still. The fact that no hominine fossils were found in forests does not ultimately lead to the conclusion that no hominines ever died there. The convenience of the savanna-based theory caused this point to be overlooked for over a hundred years.
Some of the fossils found actually showed that there was still an adaptation to arboreal life. For example, Lucy, the famous Australopithecus afarensis found in Hadar in Ethiopia, which may have been forested at the time of Lucy's death, had curved fingers that would still give her the ability to grasp tree branches, but she walked bipedally. "Little Foot", a nearly complete specimen of Australopithecus africanus, has a divergent big toe as well as the ankle strength to walk upright. "Little Foot" could grasp things using his feet like an ape, perhaps tree branches, and he was bipedal. Ancient pollen found in the soil in the locations in which these fossils were found suggests that the area used to be much wetter and covered in thick vegetation, and has only recently become the arid desert it is now.
Traveling efficiency hypothesis
An alternative explanation is that the mixture of savanna and scattered forests increased terrestrial travel by proto-humans between clusters of trees, and bipedalism offered greater efficiency for long-distance travel between these clusters than quadrupedalism. In an experiment monitoring chimpanzee metabolic rate via oxygen consumption, it was found that the quadrupedal and bipedal energy costs were very similar, implying that this transition in early ape-like ancestors would not have been very difficult or energetically costly. This increased travel efficiency is likely to have been selected for, as it assisted foraging across widely dispersed resources.
Postural feeding hypothesis
The postural feeding hypothesis has recently been supported by Dr. Kevin Hunt, a professor at Indiana University. This hypothesis asserts that chimpanzees are only bipedal when they eat: while on the ground, they reach up for fruit hanging from small trees, and while in trees, bipedalism is used to reach up and grab an overhead branch. These bipedal movements may have evolved into regular habits because they were so convenient in obtaining food. Hunt's hypothesis also states that these movements coevolved with chimpanzee arm-hanging, as this movement was very effective and efficient in harvesting food. When analyzing fossil anatomy, Australopithecus afarensis has hand and shoulder features very similar to those of the chimpanzee, which indicates hanging arms. Also, the Australopithecus hip and hind limb very clearly indicate bipedalism, but these fossils also indicate very inefficient locomotive movement when compared to humans. For this reason, Hunt argues that bipedalism evolved more as a terrestrial feeding posture than as a walking posture.
A related study by Professor Susannah Thorpe of the University of Birmingham examined the most arboreal great ape, the orangutan, which holds onto supporting branches in order to navigate branches that would otherwise be too flexible or unstable. In more than 75 percent of observations, the orangutans used their forelimbs to stabilize themselves while navigating thinner branches. Increased fragmentation of the forests where A. afarensis as well as other ancestors of modern humans and other apes resided could have contributed to this increase of bipedalism in order to navigate the diminishing forests. The findings could also shed light on discrepancies observed in the anatomy of A. afarensis, such as the ankle joint, which allowed it to "wobble", and its long, highly flexible forelimbs. If bipedalism started from upright navigation in trees, it could explain both the increased flexibility in the ankle and the long forelimbs which grab hold of branches.
Provisioning model
One theory on the origin of bipedalism is the behavioral model presented by C. Owen Lovejoy, known as "male provisioning". Lovejoy theorizes that the evolution of bipedalism was linked to monogamy. In the face of long inter-birth intervals and low reproductive rates typical of the apes, early hominids engaged in pair-bonding that enabled greater parental effort directed towards rearing offspring. Lovejoy proposes that male provisioning of food would improve the offspring survivorship and increase the pair's reproductive rate. Thus the male would leave his mate and offspring to search for food and return carrying the food in his arms walking on his legs. This model is supported by the reduction ("feminization") of the male canine teeth in early hominids such as Sahelanthropus tchadensis and Ardipithecus ramidus, which along with low body size dimorphism in Ardipithecus and Australopithecus, suggests a reduction in inter-male antagonistic behavior in early hominids. In addition, this model is supported by a number of modern human traits associated with concealed ovulation (permanently enlarged breasts, lack of sexual swelling) and low sperm competition (moderate sized testes, low sperm mid-piece volume) that argues against recent adaptation to a polygynous reproductive system.
However, this model has been debated, as others have argued that early bipedal hominids were instead polygynous. Among most monogamous primates, males and females are about the same size; that is, sexual dimorphism is minimal. Other studies, however, have suggested that Australopithecus afarensis males were nearly twice the weight of females. Lovejoy's model posits that the larger range a provisioning male would have to cover (to avoid competing with the female for resources she could attain herself) would select for increased male body size to limit predation risk. Furthermore, as the species became more bipedal, specialized feet would prevent the infant from conveniently clinging to the mother, hampering the mother's freedom and thus making her and her offspring more dependent on resources collected by others. Modern monogamous primates such as gibbons tend to be also territorial, but fossil evidence indicates that Australopithecus afarensis lived in large groups. However, while both gibbons and hominids have reduced canine sexual dimorphism, female gibbons enlarge ('masculinize') their canines so they can actively share in the defense of their home territory. Instead, the reduction of the male hominid canine is consistent with reduced inter-male aggression in a pair-bonded though group-living primate.
Early bipedalism in homininae model
Recent studies of the 4.4-million-year-old Ardipithecus ramidus suggest bipedalism. It is thus possible that bipedalism evolved very early in homininae and was reduced in chimpanzees and gorillas when they became more specialized. Other recent studies of the foot structure of Ardipithecus ramidus suggest that the species was closely related to African-ape ancestors. This possibly provides a species close to the true connection between fully bipedal hominins and quadrupedal apes. According to Richard Dawkins in his book "The Ancestor's Tale", chimps and bonobos are descended from a gracile Australopithecus-like species while gorillas are descended from Paranthropus. These apes may have once been bipedal, but then lost this ability when they were forced back into an arboreal habitat, presumably by those australopithecines from whom hominins eventually evolved. Early hominines such as Ardipithecus ramidus may have possessed an arboreal type of bipedalism that later independently evolved towards knuckle-walking in chimpanzees and gorillas and towards efficient walking and running in modern humans. It is also proposed that one cause of Neanderthal extinction was less efficient running.
Warning display (aposematic) model
Joseph Jordania from the University of Melbourne recently (2011) suggested that bipedalism was one of the central elements of the general defense strategy of early hominids, based on aposematism, or warning display and intimidation of potential predators and competitors with exaggerated visual and audio signals. According to this model, hominids were trying to stay as visible and as loud as possible all the time. Several morphological and behavioral developments were employed to achieve this goal: upright bipedal posture, longer legs, long tightly coiled hair on the top of the head, body painting, threatening synchronous body movements, loud voice and extremely loud rhythmic singing/stomping/drumming on external subjects. Slow locomotion and strong body odor (both characteristic for hominids and humans) are other features often employed by aposematic species to advertise their non-profitability for potential predators.
Other behavioural models
There are a variety of ideas which promote a specific change in behaviour as the key driver for the evolution of hominid bipedalism. For example, Wescott (1967) and later Jablonski & Chaplin (1993) suggest that bipedal threat displays could have been the transitional behaviour which led to some groups of apes beginning to adopt bipedal postures more often. Others (e.g. Dart 1925) have offered the idea that the need for more vigilance against predators could have provided the initial motivation. Dawkins (e.g. 2004) has argued that it could have begun as a kind of fashion that just caught on and then escalated through sexual selection. And it has even been suggested (e.g. Tanner 1981:165) that male phallic display could have been the initial incentive, as well as increased sexual signaling in upright female posture.
Thermoregulatory model
The thermoregulatory model explaining the origin of bipedalism is one of the simplest theories so far advanced, but it is a viable explanation. Dr. Peter Wheeler, a professor of evolutionary biology, proposes that bipedalism raises the amount of body surface area higher above the ground, which results in a reduction in heat gain and helps heat dissipation. When a hominid is higher above the ground, the organism accesses more favorable wind speeds and temperatures. During hot seasons, greater wind flow results in a higher heat loss, which makes the organism more comfortable. Also, Wheeler explains that a vertical posture minimizes the direct exposure to the sun, whereas quadrupedalism exposes more of the body to direct exposure. Analysis and interpretations of Ardipithecus reveal that this hypothesis needs modification to consider that the forest and woodland environmental preadaptation of early-stage hominid bipedalism preceded further refinement of bipedalism by the pressure of natural selection. This then allowed for the more efficient exploitation of the ecological niche presented by hotter conditions, rather than hotter conditions being, hypothetically, bipedalism's initial stimulus. A feedback mechanism from the advantages of bipedality in hot and open habitats would then in turn cause the forest preadaptation to solidify as a permanent state.
Carrying models
Charles Darwin wrote that "Man could not have attained his present dominant position in the world without the use of his hands, which are so admirably adapted to act in obedience to his will". Darwin (1871:52) and many models on bipedal origins are based on this line of thought. Gordon Hewes (1961) suggested that the carrying of meat "over considerable distances" (Hewes 1961:689) was the key factor. Isaac (1978) and Sinclair et al. (1986) offered modifications of this idea, as indeed did Lovejoy (1981) with his "provisioning model" described above. Others, such as Nancy Tanner (1981), have suggested that infant carrying was key, while others again have suggested stone tools and weapons drove the change. This stone-tools theory is very unlikely: though ancient humans were known to hunt, stone tools do not appear in the record until long after the origin of bipedalism, chronologically precluding them from being a driving force of evolution. (Wooden tools and spears fossilize poorly, so it is difficult to judge their potential usage.)
Wading models
The observation that large primates, including especially the great apes, that predominantly move quadrupedally on dry land, tend to switch to bipedal locomotion in waist deep water, has led to the idea that the origin of human bipedalism may have been influenced by waterside environments. This idea, labelled "the wading hypothesis", was originally suggested by the Oxford marine biologist Alister Hardy who said: "It seems to me likely that Man learnt to stand erect first in water and then, as his balance improved, he found he became better equipped for standing up on the shore when he came out, and indeed also for running." It was then promoted by Elaine Morgan, as part of the aquatic ape hypothesis, who cited bipedalism among a cluster of other human traits unique among primates, including voluntary control of breathing, hairlessness and subcutaneous fat. The "aquatic ape hypothesis", as originally formulated, has not been accepted or considered a serious theory within the anthropological scholarly community. Others, however, have sought to promote wading as a factor in the origin of human bipedalism without referring to further ("aquatic ape" related) factors. Since 2000 Carsten Niemitz has published a series of papers and a book on a variant of the wading hypothesis, which he calls the "amphibian generalist theory".
Other theories have been proposed that suggest wading and the exploitation of aquatic food sources (providing essential nutrients for human brain evolution or critical fallback foods) may have exerted evolutionary pressures on human ancestors, promoting adaptations which later assisted full-time bipedalism. It has also been suggested that consistent water-based food sources fostered an early hominid dependency on them and facilitated dispersal along seas and rivers.
Consequences
Prehistoric fossil records show that early hominins developed bipedalism first, with an increase in brain size following later. The consequences of these two changes resulted in painful and difficult labor, because the advantage of a narrow pelvis for bipedalism is countered by the need for larger heads to pass through the constricted birth canal. This phenomenon is commonly known as the obstetrical dilemma.
Non-human primates habitually deliver their young on their own, but the same cannot be said for modern-day humans. Isolated birth appears to be rare and actively avoided cross-culturally, even if birthing methods differ between cultures. This is because the narrowing of the hips and the change in the pelvic angle caused a discrepancy in the ratio of the size of the head to the birth canal. The result is that birth is more difficult for hominins in general, let alone when attempted alone.
Physiology
Bipedal movement occurs in a number of ways and requires many mechanical and neurological adaptations. Some of these are described below.
Biomechanics
Standing
Energy-efficient means of standing bipedally involve constant adjustment of balance, and of course these must avoid overcorrection. The difficulties associated with simple standing in upright humans are highlighted by the greatly increased risk of falling present in the elderly, even with minimal reductions in control system effectiveness.
Shoulder stability
Shoulder stability would decrease with the evolution of bipedalism. Shoulder mobility would increase because the need for a stable shoulder is only present in arboreal habitats. Shoulder mobility would support suspensory locomotion behaviors which are present in human bipedalism. The forelimbs are freed from weight-bearing requirements, which makes the shoulder a place of evidence for the evolution of bipedalism.
Walking
Unlike non-human apes capable of bipedality, such as Pan and Gorilla, hominins have the ability to move bipedally without using a bent-hip-bent-knee (BHBK) gait, which requires the engagement of both the hip and the knee joints. This human ability to walk is made possible by the spinal curvature humans have that non-human apes do not. Rather, walking is characterized by an "inverted pendulum" movement in which the center of gravity vaults over a stiff leg with each step. Force plates can be used to quantify the whole-body kinetic and potential energy, with walking displaying an out-of-phase relationship indicating exchange between the two. This model applies to all walking organisms regardless of the number of legs, and thus bipedal locomotion does not differ in terms of whole-body kinetics.
In humans, walking is composed of several separate processes:
Vaulting over a stiff stance leg
Passive ballistic movement of the swing leg
A short 'push' from the ankle prior to toe-off, propelling the swing leg
Rotation of the hips about the axis of the spine, to increase stride length
Rotation of the hips about the horizontal axis to improve balance during stance
Running
Early hominins underwent post-cranial changes in order to better adapt to bipedality, especially running. One of these changes was the evolution of hindlimbs that are longer in proportion to the forelimbs. As previously mentioned, longer hindlimbs assist in thermoregulation by reducing the total surface area exposed to direct sunlight while simultaneously allowing for more space for cooling winds. Additionally, having longer limbs is more energy-efficient, since longer limbs mean that overall muscle strain is lessened. Better energy efficiency, in turn, means higher endurance, particularly when running long distances.
Running is characterized by a spring-mass movement. Kinetic and potential energy are in phase, and the energy is stored and released from a spring-like limb during foot contact, achieved by the plantar arch and the Achilles tendon in the foot and leg, respectively. Again, the whole-body kinetics are similar to animals with more limbs.
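To make the in-phase versus out-of-phase distinction concrete, the sketch below tracks centre-of-mass kinetic and potential energy under two highly simplified models: an inverted-pendulum vault for walking and a spring-mass bounce for running. It is a minimal illustration only; the mass, leg length, sweep angle and spring stiffness are invented for the example and do not come from the studies described in this article.

```python
import numpy as np

# Toy centre-of-mass energetics for the two gait models described above.
m, g, L = 70.0, 9.81, 0.9                    # assumed body mass (kg), gravity, leg length (m)

# Walking: inverted pendulum vaulting over a stiff stance leg.
theta = np.linspace(-0.4, 0.4, 101)          # stance-leg angle from vertical (rad)
pe_walk = m * g * L * np.cos(theta)          # centre of mass is highest at midstance
ke_walk = (pe_walk.max() + 30.0) - pe_walk   # total mechanical energy held constant

# Running: spring-mass bounce; the leg spring compresses and recoils during contact.
k, x_max = 30000.0, 0.08                     # assumed leg stiffness (N/m), peak compression (m)
t = np.linspace(0.0, 1.0, 101)               # normalized contact time
x = x_max * np.sin(np.pi * t)                # compression, peaking at midstance
elastic = 0.5 * k * x**2                     # energy stored in the spring-like limb
pe_run = m * g * (L - x)                     # centre of mass is lowest at midstance
ke_run = 200.0 + m * g * x - elastic         # KE + PE + elastic energy stays constant

# Walking should give a strongly negative KE-PE correlation (energies traded back
# and forth); running a positive one (both dip together while the spring is loaded).
print("walking KE-PE correlation:", round(np.corrcoef(ke_walk, pe_walk)[0, 1], 2))
print("running KE-PE correlation:", round(np.corrcoef(ke_run, pe_run)[0, 1], 2))
```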
Musculature
Bipedalism requires strong leg muscles, particularly in the thighs. In domesticated poultry, contrast the well-muscled legs with the small and bony wings. Likewise in humans, the quadriceps and hamstring muscles of the thigh are both so crucial to bipedal activities that each alone is much larger than the well-developed biceps of the arms. In addition to the leg muscles, the increased size of the gluteus maximus in humans is an important adaptation, as it provides support and stability to the trunk and lessens the amount of stress on the joints when running.
Respiration
Quadrupeds have more restricted breathing while moving than do bipedal humans. "Quadrupedal species normally synchronize the locomotor and respiratory cycles at a constant ratio of 1:1 (strides per breath) in both the trot and gallop. Human runners differ from quadrupeds in that while running they employ several phase-locked patterns (4:1, 3:1, 2:1, 1:1, 5:2, and 3:2), although a 2:1 coupling ratio appears to be favored. Even though the evolution of bipedal gait has reduced the mechanical constraints on respiration in man, thereby permitting greater flexibility in breathing pattern, it has seemingly not eliminated the need for the synchronization of respiration and body motion during sustained running."
Bipedality also allows better breath control, which has been associated with brain growth. The modern brain uses approximately 20% of the energy gained through breathing and eating, whereas species such as chimpanzees use up twice as much energy as humans for the same amount of movement. This excess energy, supporting brain growth, also supports the development of verbal communication, because better breath control means that the muscles associated with breathing can be manipulated to create sounds. The onset of bipedality, leading to more efficient breathing, may therefore be related to the origin of verbal language.
Bipedal robots
For nearly the whole of the 20th century, bipedal robots were very difficult to construct and robot locomotion involved only wheels, treads, or multiple legs. Recent cheap and compact computing power has made two-legged robots more feasible. Some notable biped robots are ASIMO, HUBO, MABEL and QRIO. Recently, spurred by the success of creating a fully passive, un-powered bipedal walking robot, those working on such machines have begun using principles gleaned from the study of human and animal locomotion, which often relies on passive mechanisms to minimize power consumption.
See also
Allometry
Orthograde posture
Quadrupedalism
Notes
References
Further reading
Darwin, C., "The Descent of Man and Selection in Relation to Sex", Murray (London), (1871).
Dart, R. A., "Australopithecus africanus: The Man-Ape of South Africa", Nature, 115, 195–199, (1925).
Dawkins, R., "The Ancestor's Tale", Weidenfeld and Nicolson (London), (2004).
DeSilva, J., "First Steps: How Upright Walking Made Us Human", HarperCollins (New York), (2021).
Hewes, G. W., "Food Transport and the Origin of Hominid Bipedalism" American Anthropologist, 63, 687–710, (1961).
Hunt, K. D., "The Evolution of Human Bipedality" Journal of Human Evolution, 26, 183–202, (1994).
Isaac, G. I., "The Archeological Evidence for the Activities of Early African Hominids" In:Early Hominids of Africa (Jolly, C.J. (Ed.)), Duckworth (London), 219–254, (1978).
Tanner, N. M., "On Becoming Human", Cambridge University Press (Cambridge), (1981)
Wheeler, P. E., "The Evolution of Bipedality and Loss of Functional Body Hair in Hominoids", Journal of Human Evolution, 13, 91–98, (1984).
External links
The Origin of Bipedalism
Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016)
|
https://en.wikipedia.org/wiki/Bioinformatics
|
Bioinformatics is an interdisciplinary field of science that develops methods and software tools for understanding biological data, especially when the data sets are large and complex. Bioinformatics uses biology, chemistry, physics, computer science, computer programming, information engineering, mathematics and statistics to analyze and interpret biological data. The subsequent process of analyzing and interpreting data is referred to as computational biology.
Computational, statistical, and computer programming techniques have been used for computer simulation analyses of biological queries. These include reusable, specialized analysis "pipelines", particularly in the field of genomics, such as those for the identification of genes and single nucleotide polymorphisms (SNPs). These pipelines are used to better understand the genetic basis of disease, unique adaptations, desirable properties (esp. in agricultural species), or differences between populations. Bioinformatics also includes proteomics, which tries to understand the organizational principles within nucleic acid and protein sequences.
Image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observed mutations. Bioinformatics includes text mining of biological literature and the development of biological and gene ontologies to organize and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and interpreting genetic and genomic data and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are an important part of systems biology. In structural biology, it aids in the simulation and modeling of DNA, RNA, proteins as well as biomolecular interactions.
History
The first definition of the term bioinformatics was coined by Paulien Hogeweg and Ben Hesper in 1970, to refer to the study of information processes in biotic systems. This definition placed bioinformatics as a field parallel to biochemistry (the study of chemical processes in biological systems).
Bioinformatics and computational biology involved the analysis of biological data, particularly DNA, RNA, and protein sequences. The field of bioinformatics experienced explosive growth starting in the mid-1990s, driven largely by the Human Genome Project and by rapid advances in DNA sequencing technology.
Analyzing biological data to produce meaningful information involves writing and running software programs that use algorithms from graph theory, artificial intelligence, soft computing, data mining, image processing, and computer simulation. The algorithms in turn depend on theoretical foundations such as discrete mathematics, control theory, system theory, information theory, and statistics.
Sequences
There has been a tremendous advance in speed and cost reduction since the completion of the Human Genome Project, with some labs able to sequence over 100,000 billion bases each year, and a full genome can be sequenced for $1,000 or less.
Computers became essential in molecular biology when protein sequences became available after Frederick Sanger determined the sequence of insulin in the early 1950s. Comparing multiple sequences manually turned out to be impractical. Margaret Oakley Dayhoff, a pioneer in the field, compiled one of the first protein sequence databases, initially published as books, and pioneered methods of sequence alignment and molecular evolution. Another early contributor to bioinformatics was Elvin A. Kabat, who pioneered biological sequence analysis in 1970 with his comprehensive volumes of antibody sequences, released online with Tai Te Wu between 1980 and 1991.
In the 1970s, new techniques for sequencing DNA were applied to bacteriophage MS2 and øX174, and the extended nucleotide sequences were then parsed with informational and statistical algorithms. These studies illustrated that well known features, such as the coding segments and the triplet code, are revealed in straightforward statistical analyses and were the proof of the concept that bioinformatics would be insightful.
Goals
In order to study how normal cellular activities are altered in different disease states, raw biological data must be combined to form a comprehensive picture of these activities. Therefore, the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data. This also includes nucleotide and amino acid sequences, protein domains, and protein structures.
Important sub-disciplines within bioinformatics and computational biology include:
Development and implementation of computer programs to efficiently access, manage, and use various types of information.
Development of new mathematical algorithms and statistical measures to assess relationships among members of large data sets. For example, there are methods to locate a gene within a sequence, to predict protein structure and/or function, and to cluster protein sequences into families of related sequences.
The primary goal of bioinformatics is to increase the understanding of biological processes. What sets it apart from other approaches is its focus on developing and applying computationally intensive techniques to achieve this goal. Examples include: pattern recognition, data mining, machine learning algorithms, and visualization. Major research efforts in the field include sequence alignment, gene finding, genome assembly, drug design, drug discovery, protein structure alignment, protein structure prediction, prediction of gene expression and protein–protein interactions, genome-wide association studies, the modeling of evolution and cell division/mitosis.
Bioinformatics entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data.
Over the past few decades, rapid developments in genomic and other molecular research technologies and developments in information technologies have combined to produce a tremendous amount of information related to molecular biology. Bioinformatics is the name given to these mathematical and computing approaches used to glean understanding of biological processes.
Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures.
Sequence analysis
Since the bacteriophage ΦX174 was sequenced in 1977, the DNA sequences of thousands of organisms have been decoded and stored in databases. This sequence information is analyzed to determine genes that encode proteins, RNA genes, regulatory sequences, structural motifs, and repetitive sequences. A comparison of genes within a species or between different species can show similarities between protein functions, or relations between species (the use of molecular systematics to construct phylogenetic trees). With the growing amount of data, it long ago became impractical to analyze DNA sequences manually. Computer programs such as BLAST are used routinely to search sequences—as of 2008, from more than 260,000 organisms, containing over 190 billion nucleotides.
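To give a flavour of the pairwise comparisons that such programs automate, the sketch below computes a Needleman-Wunsch global alignment score for two short, invented DNA fragments using a simple match/mismatch/gap scheme. It is a teaching toy, not a stand-in for BLAST, which relies on heuristics and database-wide statistics to stay fast at scale.

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score using a rolling dynamic-programming row."""
    prev = [j * gap for j in range(len(b) + 1)]        # row for the empty prefix of a
    for i in range(1, len(a) + 1):
        curr = [i * gap] + [0] * len(b)                # aligning a[:i] against empty b
        for j in range(1, len(b) + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = max(diag, prev[j] + gap, curr[j - 1] + gap)
        prev = curr
    return prev[-1]

if __name__ == "__main__":
    # Hypothetical fragments, invented for the example.
    seq1 = "GATTACAGATTACA"
    seq2 = "GATTTACAGATCA"
    print("alignment score:", nw_score(seq1, seq2))
```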
DNA sequencing
Before sequences can be analyzed, they are obtained from a data storage bank, such as GenBank. DNA sequencing is still a non-trivial problem as the raw data may be noisy or affected by weak signals. Algorithms have been developed for base calling for the various experimental approaches to DNA sequencing.
Sequence assembly
Most DNA sequencing techniques produce short fragments of sequence that need to be assembled to obtain complete gene or genome sequences. The shotgun sequencing technique (used by The Institute for Genomic Research (TIGR) to sequence the first bacterial genome, Haemophilus influenzae) generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by a genome assembly program, can be used to reconstruct the complete genome. Shotgun sequencing yields sequence data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. For a genome as large as the human genome, it may take many days of CPU time on large-memory, multiprocessor computers to assemble the fragments, and the resulting assembly usually contains numerous gaps that must be filled in later. Shotgun sequencing is the method of choice for virtually all genomes sequenced (rather than chain-termination or chemical degradation methods), and genome assembly algorithms are a critical area of bioinformatics research.
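The sketch below illustrates the core idea of overlap-based assembly in miniature: repeatedly merge the pair of reads with the longest suffix-prefix overlap until no useful overlap remains. The reads are invented and error-free; real assemblers work with millions of error-prone fragments and use far more sophisticated graph-based algorithms.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that matches a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads, min_len=3):
    """Greedily merge the best-overlapping pair of reads until none overlap enough."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    n = overlap(a, b, min_len)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:                      # no overlap long enough; stop merging
            break
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

if __name__ == "__main__":
    # Error-free toy reads covering the invented sequence "ATGCGTACGTTAGC".
    reads = ["ATGCGTAC", "CGTACGTT", "ACGTTAGC"]
    print(greedy_assemble(reads))
```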
Genome annotation
In genomics, annotation refers to the process of marking the stop and start regions of genes and other biological features in a sequenced DNA sequence. Many genomes are too large to be annotated by hand. As the rate of sequencing exceeds the rate of genome annotation, genome annotation has become the new bottleneck in bioinformatics.
Genome annotation can be classified into three levels: the nucleotide, protein, and process levels.
Gene finding is a chief aspect of nucleotide-level annotation. For complex genomes, a combination of ab initio gene prediction and sequence comparison with expressed sequence databases and other organisms can be successful. Nucleotide-level annotation also allows the integration of genome sequence with other genetic and physical maps of the genome.
The principal aim of protein-level annotation is to assign function to the protein products of the genome. Databases of protein sequences and functional domains and motifs are used for this type of annotation. About half of the predicted proteins in a new genome sequence tend to have no obvious function.
Understanding the function of genes and their products in the context of cellular and organismal physiology is the goal of process-level annotation. An obstacle of process-level annotation has been the inconsistency of terms used by different model systems. The Gene Ontology Consortium is helping to solve this problem.
The first description of a comprehensive annotation system was published in 1995 by The Institute for Genomic Research, which performed the first complete sequencing and analysis of the genome of a free-living (non-symbiotic) organism, the bacterium Haemophilus influenzae. The system identifies the genes encoding all proteins, transfer RNAs and ribosomal RNAs in order to make initial functional assignments. The GeneMark program, trained to find protein-coding genes in Haemophilus influenzae, is constantly changing and improving.
Following the goals that the Human Genome Project left to achieve after its closure in 2003, the ENCODE project was developed by the National Human Genome Research Institute. This project is a collaborative data collection of the functional elements of the human genome that uses next-generation DNA-sequencing technologies and genomic tiling arrays, technologies able to automatically generate large amounts of data at a dramatically reduced per-base cost but with the same accuracy (base call error) and fidelity (assembly error).
Gene function prediction
While genome annotation is primarily based on sequence similarity (and thus homology), other properties of sequences can be used to predict the function of genes. In fact, most gene function prediction methods focus on protein sequences as they are more informative and more feature-rich. For instance, the distribution of hydrophobic amino acids predicts transmembrane segments in proteins. However, protein function prediction can also use external information such as gene (or protein) expression data, protein structure, or protein-protein interactions.
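One classic sequence-only predictor along these lines is the Kyte-Doolittle hydropathy plot: each residue is scored for hydrophobicity and the scores are averaged over a sliding window, with sustained high-scoring stretches flagged as candidate membrane-spanning segments. The sketch below, run on an invented peptide, shows only this window calculation and is not a validated prediction tool.

```python
# Kyte-Doolittle hydropathy values for the 20 standard amino acids.
KD = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def hydropathy_profile(seq, window=19):
    """Mean Kyte-Doolittle score for each full window along the sequence."""
    scores = [KD[aa] for aa in seq]
    return [sum(scores[i:i + window]) / window
            for i in range(len(seq) - window + 1)]

def candidate_tm_segments(seq, window=19, threshold=1.6):
    """Window start positions whose mean hydropathy exceeds the threshold."""
    return [i for i, s in enumerate(hydropathy_profile(seq, window))
            if s > threshold]

if __name__ == "__main__":
    # Invented peptide: a hydrophilic stretch followed by a hydrophobic one.
    toy = "MKKDDSENNRQ" + "LLVVALLILFAVGLAILAILV" + "RKDDENQS"
    print(candidate_tm_segments(toy))
```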
Computational evolutionary biology
Evolutionary biology is the study of the origin and descent of species, as well as their change over time. Informatics has assisted evolutionary biologists by enabling researchers to:
trace the evolution of a large number of organisms by measuring changes in their DNA, rather than through physical taxonomy or physiological observations alone,
compare entire genomes, which permits the study of more complex evolutionary events, such as gene duplication, horizontal gene transfer, and the prediction of factors important in bacterial speciation,
build complex computational population genetics models to predict the outcome of the system over time
track and share information on an increasingly large number of species and organisms
Future work endeavours to reconstruct the now more complex tree of life.
Comparative genomics
The core of comparative genome analysis is the establishment of the correspondence between genes (orthology analysis) or other genomic features in different organisms. Intergenomic maps are made to trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion and insertion. Entire genomes are involved in processes of hybridization, polyploidization and endosymbiosis that lead to rapid speciation. The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of algorithmic, statistical and mathematical techniques, ranging from exact, heuristics, fixed parameter and approximation algorithms for problems based on parsimony models to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based on probabilistic models.
Many of these studies are based on the detection of sequence homology to assign sequences to protein families.
Pan genomics
Pan genomics is a concept introduced in 2005 by Tettelin and Medini. Pan genome is the complete gene repertoire of a particular monophyletic taxonomic group. Although initially applied to closely related strains of a species, it can be applied to a larger context like genus, phylum, etc. It is divided in two parts: the Core genome, a set of genes common to all the genomes under study (often housekeeping genes vital for survival), and the Dispensable/Flexible genome: a set of genes not present in all but one or some genomes under study. A bioinformatics tool BPGA can be used to characterize the Pan Genome of bacterial species.
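In outline, splitting a pan genome into core and dispensable components is a set operation over each genome's gene-family repertoire, as in the sketch below; the strains and gene families are invented, and tools such as BPGA layer homology clustering, statistics and plots on top of this basic idea.

```python
from functools import reduce

# Hypothetical gene-family presence sets for three bacterial strains.
genomes = {
    "strain_A": {"dnaA", "gyrB", "recA", "lacZ", "blaTEM"},
    "strain_B": {"dnaA", "gyrB", "recA", "lacZ"},
    "strain_C": {"dnaA", "gyrB", "recA", "phoA"},
}

core = reduce(set.intersection, genomes.values())   # families shared by every strain
pan = reduce(set.union, genomes.values())           # the full gene repertoire
dispensable = pan - core                            # present in some strains, not all

print("core genome:", sorted(core))
print("dispensable genome:", sorted(dispensable))
print("pan-genome size:", len(pan))
```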
Genetics of disease
As of 2013, the existence of efficient high-throughput next-generation sequencing technology allows for the identification of the causes of many different human disorders. Simple Mendelian inheritance has been observed for over 3,000 disorders that have been identified at the Online Mendelian Inheritance in Man database, but complex diseases are more difficult. Association studies have found many individual genetic regions that individually are weakly associated with complex diseases (such as infertility, breast cancer and Alzheimer's disease), rather than a single cause. There are currently many challenges to using genes for diagnosis and treatment, such as uncertainty about which genes are important and about how stable the choices provided by an algorithm are.
Genome-wide association studies have successfully identified thousands of common genetic variants for complex diseases and traits; however, these common variants only explain a small fraction of heritability. Rare variants may account for some of the missing heritability. Large-scale whole genome sequencing studies have rapidly sequenced millions of whole genomes, and such studies have identified hundreds of millions of rare variants. Functional annotations predict the effect or function of a genetic variant and help to prioritize rare functional variants, and incorporating these annotations can effectively boost the power of genetic association of rare variants analysis of whole genome sequencing studies. Some tools have been developed to provide all-in-one rare variant association analysis for whole-genome sequencing data, including integration of genotype data and their functional annotations, association analysis, result summary and visualization. Meta-analysis of whole genome sequencing studies provides an attractive solution to the problem of collecting large sample sizes for discovering rare variants associated with complex phenotypes.
Analysis of mutations in cancer
In cancer, the genomes of affected cells are rearranged in complex or unpredictable ways. In addition to single-nucleotide polymorphism arrays identifying point mutations that cause cancer, oligonucleotide microarrays can be used to identify chromosomal gains and losses (called comparative genomic hybridization). These detection methods generate terabytes of data per experiment. The data is often found to contain considerable variability, or noise, and thus Hidden Markov model and change-point analysis methods are being developed to infer real copy number changes.
Two important principles can be used to identify cancer by mutations in the exome. First, cancer is a disease of accumulated somatic mutations in genes. Second, cancer contains driver mutations which need to be distinguished from passengers.
Further improvements in bioinformatics could allow for classifying types of cancer by analysis of cancer driven mutations in the genome. Furthermore, tracking of patients while the disease progresses may be possible in the future with the sequence of cancer samples. Another type of data that requires novel informatics development is the analysis of lesions found to be recurrent among many tumors.
Gene and protein expression
Analysis of gene expression
The expression of many genes can be determined by measuring mRNA levels with multiple techniques including microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis of gene expression (SAGE) tag sequencing, massively parallel signature sequencing (MPSS), RNA-Seq, also known as "Whole Transcriptome Shotgun Sequencing" (WTSS), or various applications of multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to bias in the biological measurement, and a major research area in computational biology involves developing statistical tools to separate signal from noise in high-throughput gene expression studies. Such studies are often used to determine the genes implicated in a disorder: one might compare microarray data from cancerous epithelial cells to data from non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in a particular population of cancer cells.
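A minimal version of such an analysis is a per-gene test between the two groups followed by a multiple-testing correction, sketched below on invented expression values; real studies use dedicated frameworks with normalization and variance modelling, but the statistical skeleton is the same.

```python
import numpy as np
from scipy import stats

# Invented log-expression values: rows are genes, columns are samples.
rng = np.random.default_rng(0)
tumour = rng.normal(5.0, 1.0, size=(100, 6))
normal = rng.normal(5.0, 1.0, size=(100, 6))
tumour[:5] += 2.0            # pretend the first five genes are truly up-regulated

# Per-gene two-sample t-test between the two groups.
pvals = np.array([stats.ttest_ind(tumour[g], normal[g]).pvalue
                  for g in range(tumour.shape[0])])

def bh_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg procedure; returns a boolean mask of rejected genes."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = (np.arange(1, m + 1) / m) * alpha
    passed = np.nonzero(pvals[order] <= thresholds)[0]
    reject = np.zeros(m, dtype=bool)
    if passed.size:
        reject[order[:passed.max() + 1]] = True
    return reject

print("genes called differentially expressed:", np.nonzero(bh_reject(pvals))[0])
```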
Analysis of protein expression
Protein microarrays and high throughput (HT) mass spectrometry (MS) can provide a snapshot of the proteins present in a biological sample. The former approach faces similar problems as with microarrays targeted at mRNA, the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, and the complicated statistical analysis of samples when multiple incomplete peptides from each protein are detected. Cellular protein localization in a tissue context can be achieved through affinity proteomics displayed as spatial data based on immunohistochemistry and tissue microarrays.
Analysis of regulation
Gene regulation is a complex process where a signal, such as an extracellular signal such as a hormone, eventually leads to an increase or decrease in the activity of one or more proteins. Bioinformatics techniques have been applied to explore various steps in this process.
For example, gene expression can be regulated by nearby elements in the genome. Promoter analysis involves the identification and study of sequence motifs in the DNA surrounding the protein-coding region of a gene. These motifs influence the extent to which that region is transcribed into mRNA. Enhancer elements far away from the promoter can also regulate gene expression, through three-dimensional looping interactions. These interactions can be determined by bioinformatic analysis of chromosome conformation capture experiments.
Expression data can be used to infer gene regulation: one might compare microarray data from a wide variety of states of an organism to form hypotheses about the genes involved in each state. In a single-cell organism, one might compare stages of the cell cycle, along with various stress conditions (heat shock, starvation, etc.). Clustering algorithms can be then applied to expression data to determine which genes are co-expressed. For example, the upstream regions (promoters) of co-expressed genes can be searched for over-represented regulatory elements. Examples of clustering algorithms applied in gene clustering are k-means clustering, self-organizing maps (SOMs), hierarchical clustering, and consensus clustering methods.
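As a concrete illustration of this clustering step, the sketch below groups genes by their (invented) expression profiles across five conditions using k-means from scikit-learn; the choice of three clusters is an arbitrary assumption for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented expression matrix: rows are genes, columns are conditions
# (e.g. cell-cycle stages or stress treatments).
rng = np.random.default_rng(1)
profile_a = np.array([1.0, 2.0, 4.0, 2.0, 1.0])   # peaks mid time-course
profile_b = np.array([4.0, 2.0, 1.0, 2.0, 4.0])   # dips mid time-course
profile_c = np.array([2.0, 2.0, 2.0, 2.0, 2.0])   # flat
genes = np.vstack([p + rng.normal(0, 0.3, size=(20, 5))
                   for p in (profile_a, profile_b, profile_c)])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(genes)

# Genes sharing a label are treated as co-expressed; their upstream regions
# could next be scanned for over-represented regulatory motifs.
for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} genes")
```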
Analysis of cellular organization
Several approaches have been developed to analyze the location of organelles, genes, proteins, and other components within cells. A gene ontology category, cellular component, has been devised to capture subcellular localization in many biological databases.
Microscopy and image analysis
Microscopic pictures allow for the location of organelles as well as molecules, which may be the source of abnormalities in diseases.
Protein localization
Finding the location of proteins allows us to predict what they do. This is called protein function prediction. For instance, if a protein is found in the nucleus it may be involved in gene regulation or splicing. By contrast, if a protein is found in mitochondria, it may be involved in respiration or other metabolic processes. There are well developed protein subcellular localization prediction resources available, including protein subcellular location databases, and prediction tools.
Nuclear organization of chromatin
Data from high-throughput chromosome conformation capture experiments, such as Hi-C (experiment) and ChIA-PET, can provide information on the three-dimensional structure and nuclear organization of chromatin. Bioinformatic challenges in this field include partitioning the genome into domains, such as Topologically Associating Domains (TADs), that are organised together in three-dimensional space.
Structural bioinformatics
Finding the structure of proteins is an important application of bioinformatics. The Critical Assessment of Protein Structure Prediction (CASP) is an open competition in which research groups from around the world submit predicted models for proteins whose experimentally determined structures are withheld, so that the predictions can be evaluated.
Amino acid sequence
The linear amino acid sequence of a protein is called the primary structure. The primary structure can be easily determined from the sequence of codons on the DNA gene that codes for it. In most proteins, the primary structure uniquely determines the 3-dimensional structure of a protein in its native environment. An exception is the misfolded protein involved in bovine spongiform encephalopathy. This structure is linked to the function of the protein. Additional structural information includes the secondary, tertiary and quaternary structure. A viable general solution to the prediction of the function of a protein remains an open problem. Most efforts have so far been directed towards heuristics that work most of the time.
Homology
In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of gene A, whose function is known, is homologous to the sequence of gene B, whose function is unknown, one could infer that B may share A's function. In structural bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins. Homology modeling is used to predict the structure of an unknown protein from existing homologous proteins.
One example of this is hemoglobin in humans and the hemoglobin in legumes (leghemoglobin), which are distant relatives from the same protein superfamily. Both serve the same purpose of transporting oxygen in the organism. Although both of these proteins have completely different amino acid sequences, their protein structures are virtually identical, which reflects their near identical purposes and shared ancestor.
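A toy version of homology-based annotation transfer can be written in a few lines: compute a crude percent identity between an annotated and an unannotated sequence and transfer the annotation above a chosen threshold. The sequences, the annotation label, and the 40% cut-off below are illustrative assumptions; real pipelines rely on proper alignment tools such as BLAST and more careful criteria.

```python
# Sketch: transfer a functional annotation from a characterized protein
# to an uncharacterized one when simple ungapped identity exceeds a
# chosen threshold. Sequences, label and threshold are illustrative.
def percent_identity(seq_a: str, seq_b: str) -> float:
    length = min(len(seq_a), len(seq_b))
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b)
    return 100.0 * matches / length

gene_a = ("MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSF", "oxygen transport")  # known
gene_b = "MVLSGEDKSNIKAAWGKIGGHGAEYGAEALERMFASF"                        # unknown

identity = percent_identity(gene_a[0], gene_b)
if identity > 40.0:
    print(f"{identity:.1f}% identity: tentatively annotate gene B as '{gene_a[1]}'")
else:
    print(f"{identity:.1f}% identity: no confident transfer")
```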
Other techniques for predicting protein structure include protein threading and de novo (from scratch) physics-based modeling.
Another aspect of structural bioinformatics is the use of protein structures for virtual screening models such as quantitative structure–activity relationship (QSAR) models and proteochemometric (PCM) models. Furthermore, a protein's crystal structure can be used in simulations of, for example, ligand-binding and in silico mutagenesis studies.
AlphaFold, a deep learning-based program developed by Google's DeepMind and released in 2021, greatly outperforms all other prediction methods; its predicted structures for hundreds of millions of proteins have been released in the AlphaFold Protein Structure Database.
Network and systems biology
Network analysis seeks to understand the relationships within biological networks such as metabolic or protein–protein interaction networks. Although biological networks can be constructed from a single type of molecule or entity (such as genes), network biology often attempts to integrate many different data types, such as proteins, small molecules, gene expression data, and others, which are all connected physically, functionally, or both.
Systems biology involves the use of computer simulations of cellular subsystems (such as the networks of metabolites and enzymes that comprise metabolism, signal transduction pathways and gene regulatory networks) to both analyze and visualize the complex connections of these cellular processes. Artificial life or virtual evolution attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms.
Molecular interaction networks
Tens of thousands of three-dimensional protein structures have been determined by X-ray crystallography and protein nuclear magnetic resonance spectroscopy (protein NMR) and a central question in structural bioinformatics is whether it is practical to predict possible protein–protein interactions only based on these 3D shapes, without performing protein–protein interaction experiments. A variety of methods have been developed to tackle the protein–protein docking problem, though it seems that there is still much work to be done in this field.
Other interactions encountered in the field include protein–ligand (including drug) and protein–peptide interactions. Molecular dynamics simulation of the movement of atoms about rotatable bonds is the fundamental principle behind computational algorithms, termed docking algorithms, for studying molecular interactions.
Biodiversity informatics
Biodiversity informatics deals with the collection and analysis of biodiversity data, such as taxonomic databases, or microbiome data. Examples of such analyses include phylogenetics, niche modelling, species richness mapping, DNA barcoding, or species identification tools. A growing area is also macro-ecology, i.e. the study of how biodiversity is connected to ecology and human impact, such as climate change.
Others
Literature analysis
The enormous volume of published literature makes it virtually impossible for individuals to read every paper, resulting in disjointed sub-fields of research. Literature analysis aims to employ computational and statistical linguistics to mine this growing library of text resources. For example:
Abbreviation recognition – identify the long-form and abbreviation of biological terms
Named-entity recognition – recognizing biological terms such as gene names
Protein–protein interaction – identify which proteins interact with which proteins from text
The area of research draws from statistics and computational linguistics.
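A toy, pattern-based pass over abstract text gives a flavour of two of these tasks; the regular expressions and the example sentence below are illustrative only, and production systems use trained named-entity recognizers rather than hand-written patterns.

```python
# Sketch: a toy pattern-based pass over abstract text illustrating
# abbreviation recognition (long form followed by a parenthesised short
# form) and crude gene-name spotting. Real systems use trained
# named-entity recognizers, not regular expressions alone.
import re

text = ("Brain-derived neurotrophic factor (BDNF) interacts with NTRK2. "
        "Overexpression of TP53 was also observed.")

# long form (SHORTFORM) pairs
abbreviations = re.findall(r"([A-Z][a-zA-Z\- ]+?)\s*\(([A-Z]{2,})\)", text)

# crude gene-symbol pattern: two or more capitals, optional trailing digits
gene_like = re.findall(r"\b[A-Z]{2,}\d*\b", text)

print("abbreviations:", abbreviations)
print("gene-like tokens:", sorted(set(gene_like)))
```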
High-throughput image analysis
Computational technologies are used to automate the processing, quantification and analysis of large amounts of high-information-content biomedical imagery. Modern image analysis systems can improve an observer's accuracy, objectivity, or speed. Image analysis is important for both diagnostics and research. Some examples are:
high-throughput and high-fidelity quantification and sub-cellular localization (high-content screening, cytohistopathology, Bioimage informatics)
morphometrics
clinical image analysis and visualization
determining the real-time air-flow patterns in breathing lungs of living animals
quantifying occlusion size in real-time imagery from the development of and recovery during arterial injury
making behavioral observations from extended video recordings of laboratory animals
infrared measurements for metabolic activity determination
inferring clone overlaps in DNA mapping, e.g. the Sulston score
High-throughput single cell data analysis
Computational techniques are used to analyse high-throughput, low-measurement single cell data, such as that obtained from flow cytometry. These methods typically involve finding populations of cells that are relevant to a particular disease state or experimental condition.
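A minimal sketch of this idea, assuming simulated two-channel measurements in place of real flow cytometry data, fits a Gaussian mixture model with scikit-learn to separate two cell populations; the channel values and population parameters are invented for illustration.

```python
# Sketch: identify cell populations in two-channel flow cytometry-like
# data with a Gaussian mixture model. The measurements are simulated;
# real analyses work on compensated, transformed FCS data and usually
# compare populations across disease states or conditions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
pop1 = rng.normal([2.0, 6.0], 0.4, size=(500, 2))   # e.g. marker-high cells
pop2 = rng.normal([6.0, 2.0], 0.4, size=(500, 2))   # e.g. marker-low cells
cells = np.vstack([pop1, pop2])

gmm = GaussianMixture(n_components=2, random_state=0).fit(cells)
labels = gmm.predict(cells)
for k in range(2):
    frac = (labels == k).mean()
    print(f"population {k}: {frac:.1%} of cells, mean = {gmm.means_[k].round(2)}")
```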
Ontologies and data integration
Biological ontologies are directed acyclic graphs of controlled vocabularies. They create categories for biological concepts and descriptions so they can be easily analyzed with computers. When categorised in this way, it is possible to gain added value from holistic and integrated analysis.
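The sketch below illustrates the directed-acyclic-graph idea on a tiny, hand-made fragment of Gene Ontology-style "is_a" relations; the graph, the gene annotation, and the traversal are illustrative, not an interface to any real ontology file.

```python
# Sketch: a biological ontology as a directed acyclic graph. Term names
# echo real GO concepts, but the graph itself is a hand-made fragment.
# Annotating a gene with a term implies all of that term's ancestors.
PARENTS = {
    "nucleus": ["intracellular organelle"],
    "intracellular organelle": ["cellular component"],
    "mitochondrion": ["intracellular organelle"],
    "cellular component": [],
}

def ancestors(term: str) -> set[str]:
    found = set()
    stack = [term]
    while stack:
        for parent in PARENTS.get(stack.pop(), []):
            if parent not in found:
                found.add(parent)
                stack.append(parent)
    return found

# A gene annotated to "nucleus" is implicitly annotated to every ancestor,
# which is what makes holistic, integrated queries possible.
print(ancestors("nucleus"))   # {'intracellular organelle', 'cellular component'}
```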
The OBO Foundry was an effort to standardise certain ontologies. One of the most widely used is the Gene Ontology, which describes gene function. There are also ontologies which describe phenotypes.
Databases
Databases are essential for bioinformatics research and applications. Databases exist for many different information types, including DNA and protein sequences, molecular structures, phenotypes and biodiversity. Databases can contain both empirical data (obtained directly from experiments) and predicted data (obtained from analysis of existing data). They may be specific to a particular organism, pathway or molecule of interest. Alternatively, they can incorporate data compiled from multiple other databases. Databases can have different formats, access mechanisms, and be public or private.
Some of the most commonly used databases are listed below, followed by a short example of programmatic access:
Used in biological sequence analysis: Genbank, UniProt
Used in structure analysis: Protein Data Bank (PDB)
Used in finding Protein Families and Motif Finding: InterPro, Pfam
Used for Next Generation Sequencing: Sequence Read Archive
Used in Network Analysis: Metabolic Pathway Databases (KEGG, BioCyc), Interaction Analysis Databases, Functional Networks
Used in design of synthetic genetic circuits: GenoCAD
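As a short example of programmatic access to one of the sequence databases above, the following sketch uses Biopython's Entrez and SeqIO modules to download a GenBank record; it assumes Biopython is installed, a network connection is available, and the accession shown is merely an example identifier.

```python
# Sketch: programmatic retrieval of a sequence record from GenBank using
# Biopython's Entrez/SeqIO modules (assumes Biopython is installed and a
# network connection; NCBI asks for a contact e-mail). The accession is
# an example identifier — substitute any record of interest.
from Bio import Entrez, SeqIO

Entrez.email = "your.name@example.org"          # required courtesy to NCBI
handle = Entrez.efetch(db="nucleotide", id="NM_000518",
                       rettype="fasta", retmode="text")
record = SeqIO.read(handle, "fasta")
handle.close()

print(record.id, len(record.seq), "bp")
print(record.description)
```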
Software and tools
Software tools for bioinformatics include simple command-line tools, more complex graphical programs, and standalone web-services. They are made by bioinformatics companies or by public institutions.
Open-source bioinformatics software
Many free and open-source software tools have existed and continued to grow since the 1980s. The combination of a continued need for new algorithms for the analysis of emerging types of biological readouts, the potential for innovative in silico experiments, and freely available open code bases has created opportunities for research groups to contribute to both bioinformatics and the range of open-source software available, regardless of funding. The open source tools often act as incubators of ideas, or community-supported plug-ins in commercial applications. They may also provide de facto standards and shared object models for assisting with the challenge of bioinformation integration.
Open-source bioinformatics software includes Bioconductor, BioPerl, Biopython, BioJava, BioJS, BioRuby, Bioclipse, EMBOSS, .NET Bio, Orange with its bioinformatics add-on, Apache Taverna, UGENE and GenoCAD.
The non-profit Open Bioinformatics Foundation and the annual Bioinformatics Open Source Conference promote open-source bioinformatics software.
Web services in bioinformatics
SOAP- and REST-based interfaces have been developed to allow client computers to use algorithms, data and computing resources from servers in other parts of the world. The main advantage is that end users do not have to deal with software and database maintenance overheads.
Basic bioinformatics services are classified by the EBI into three categories: SSS (Sequence Search Services), MSA (Multiple Sequence Alignment), and BSA (Biological Sequence Analysis). The availability of these service-oriented bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions, which range from a collection of standalone tools with a common data format under a single web-based interface to integrative, distributed and extensible bioinformatics workflow management systems.
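The sketch below shows the general shape of a REST call from a client script; the endpoint URL, route, and parameter names are placeholders rather than a documented API, so a real integration would follow the provider's own service documentation.

```python
# Sketch: calling a REST-style bioinformatics service with the requests
# library. The endpoint URL and parameters below are placeholders, not a
# real documented API — consult the provider's documentation (e.g. the
# EBI web services pages) for actual routes and parameter names.
import requests

BASE = "https://bioinformatics.example.org/api"        # hypothetical service
response = requests.get(
    f"{BASE}/sequence-search",
    params={"query": "MVLSPADKTNVKAAW", "database": "uniprot"},
    timeout=30,
)
response.raise_for_status()
hits = response.json()                                  # assumed JSON payload
for hit in hits.get("results", [])[:5]:
    print(hit.get("accession"), hit.get("score"))
```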
Bioinformatics workflow management systems
A bioinformatics workflow management system is a specialized form of workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in a bioinformatics application. Such systems are designed to:
provide an easy-to-use environment for individual application scientists themselves to create their own workflows,
provide interactive tools for the scientists enabling them to execute their workflows and view their results in real-time,
simplify the process of sharing and reusing workflows between the scientists, and
enable scientists to track the provenance of the workflow execution results and the workflow creation steps.
Platforms providing this service include Galaxy, Kepler, Taverna, UGENE, Anduril, and HIVE; a minimal sketch of the common pattern follows.
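Stripped of scheduling, data management and sharing, the pattern these platforms implement is a chain of named steps plus a provenance record, as in the following illustrative Python sketch (the steps and data are invented).

```python
# Sketch: the core idea behind a workflow manager — declared steps,
# chained execution, and a provenance log — reduced to a few lines of
# Python. Real systems (Galaxy, Taverna, Kepler, ...) add scheduling,
# data management, sharing and rich provenance on top of this pattern.
from datetime import datetime, timezone

def run_workflow(data, steps):
    provenance = []
    for name, func in steps:
        data = func(data)
        provenance.append({
            "step": name,
            "when": datetime.now(timezone.utc).isoformat(),
            "output_summary": repr(data)[:60],
        })
    return data, provenance

steps = [
    ("uppercase reads", lambda seqs: [s.upper() for s in seqs]),
    ("drop short reads", lambda seqs: [s for s in seqs if len(s) >= 5]),
    ("count reads", lambda seqs: len(seqs)),
]

result, log = run_workflow(["acgtacgt", "acg", "ttgacaggt"], steps)
print("result:", result)
for entry in log:
    print(entry["step"], "->", entry["output_summary"])
```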
BioCompute and BioCompute Objects
In 2014, the US Food and Drug Administration sponsored a conference held at the National Institutes of Health Bethesda Campus to discuss reproducibility in bioinformatics. Over the next three years, a consortium of stakeholders met regularly to discuss what would become the BioCompute paradigm. These stakeholders included representatives from government, industry, and academic entities. Session leaders represented numerous branches of the FDA and NIH Institutes and Centers, non-profit entities including the Human Variome Project and the European Federation for Medical Informatics, and research institutions including Stanford, the New York Genome Center, and the George Washington University.
It was decided that the BioCompute paradigm would be in the form of digital 'lab notebooks' which allow for the reproducibility, replication, review, and reuse, of bioinformatics protocols. This was proposed to enable greater continuity within a research group over the course of normal personnel flux while furthering the exchange of ideas between groups. The US FDA funded this work so that information on pipelines would be more transparent and accessible to their regulatory staff.
In 2016, the group reconvened at the NIH in Bethesda and discussed the potential for a BioCompute Object, an instance of the BioCompute paradigm. This work was released as both a "standard trial use" document and a preprint paper uploaded to bioRxiv. The BioCompute Object allows the JSON-formatted record to be shared among employees, collaborators, and regulators.
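A rough impression of such a record is given by the following sketch; the field names loosely echo the published BioCompute Object domains, but the structure is an illustrative stub, not a schema-valid object.

```python
# Sketch: a JSON-serializable record in the spirit of a BioCompute Object
# — a structured, shareable description of a pipeline run. Field names
# loosely echo the published BCO domains (provenance, description,
# execution, i/o); this is an illustrative stub, not a validated object.
import json

bco_like = {
    "object_id": "urn:example:bco:0001",
    "provenance_domain": {
        "name": "Variant calling pipeline, run 42",
        "contributors": [{"name": "A. Researcher", "contribution": ["createdBy"]}],
        "created": "2024-01-01T00:00:00Z",
    },
    "description_domain": {
        "pipeline_steps": [
            {"step_number": 1, "name": "read alignment", "version": "example 0.7"},
            {"step_number": 2, "name": "variant calling", "version": "example 1.0"},
        ],
    },
    "execution_domain": {"environment": "linux-x86_64", "script": ["run.sh"]},
    "io_domain": {"input": ["reads.fastq"], "output": ["variants.vcf"]},
}

print(json.dumps(bco_like, indent=2)[:300], "...")
```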
Education platforms
Bioinformatics is not only taught as an in-person master's degree at many universities. The computational nature of bioinformatics lends it to computer-aided and online learning. Software platforms designed to teach bioinformatics concepts and methods include Rosalind and online courses offered through the Swiss Institute of Bioinformatics Training Portal. The Canadian Bioinformatics Workshops provides videos and slides from training workshops on their website under a Creative Commons license. The 4273π (4273pi) project also offers open source educational materials for free. The course runs on low-cost Raspberry Pi computers and has been used to teach adults and school pupils. 4273π is actively developed by a consortium of academics and research staff who have run research-level bioinformatics using Raspberry Pi computers and the 4273π operating system.
MOOC platforms also provide online certifications in bioinformatics and related disciplines, including Coursera's Bioinformatics Specialization (UC San Diego) and Genomic Data Science Specialization (Johns Hopkins) as well as EdX's Data Analysis for Life Sciences XSeries (Harvard).
Conferences
There are several large conferences that are concerned with bioinformatics. Some of the most notable examples are Intelligent Systems for Molecular Biology (ISMB), European Conference on Computational Biology (ECCB), and Research in Computational Molecular Biology (RECOMB).
See also
References
Further reading
Sehgal et al.: Structural, phylogenetic and docking studies of D-amino acid oxidase activator (DAOA), a candidate schizophrenia gene. Theoretical Biology and Medical Modelling 2013, 10:3.
Achuthsankar S Nair, Computational Biology & Bioinformatics – A Gentle Overview, Communications of Computer Society of India, January 2007
Aluru, Srinivas, ed. Handbook of Computational Molecular Biology. Chapman & Hall/Crc, 2006. (Chapman & Hall/Crc Computer and Information Science Series)
Baldi, P and Brunak, S, Bioinformatics: The Machine Learning Approach, 2nd edition. MIT Press, 2001.
Barnes, M.R. and Gray, I.C., eds., Bioinformatics for Geneticists, first edition. Wiley, 2003.
Baxevanis, A.D. and Ouellette, B.F.F., eds., Bioinformatics: A Practical Guide to the Analysis of Genes and Proteins, third edition. Wiley, 2005.
Baxevanis, A.D., Petsko, G.A., Stein, L.D., and Stormo, G.D., eds., Current Protocols in Bioinformatics. Wiley, 2007.
Cristianini, N. and Hahn, M., Introduction to Computational Genomics, Cambridge University Press, 2006.
Durbin, R., S. Eddy, A. Krogh and G. Mitchison, Biological sequence analysis. Cambridge University Press, 1998.
Keedwell, E., Intelligent Bioinformatics: The Application of Artificial Intelligence Techniques to Bioinformatics Problems. Wiley, 2005.
Kohane, et al. Microarrays for an Integrative Genomics. The MIT Press, 2002.
Lund, O. et al. Immunological Bioinformatics. The MIT Press, 2005.
Pachter, Lior and Sturmfels, Bernd. "Algebraic Statistics for Computational Biology" Cambridge University Press, 2005.
Pevzner, Pavel A. Computational Molecular Biology: An Algorithmic Approach The MIT Press, 2000.
Soinov, L., Bioinformatics and Pattern Recognition Come Together, Journal of Pattern Recognition Research (JPRR), Vol. 1 (1), 2006, pp. 37–41
Stevens, Hallam, Life Out of Sequence: A Data-Driven History of Bioinformatics, Chicago: The University of Chicago Press, 2013.
Tisdall, James. "Beginning Perl for Bioinformatics" O'Reilly, 2001.
Catalyzing Inquiry at the Interface of Computing and Biology (2005) CSTB report
Calculating the Secrets of Life: Contributions of the Mathematical Sciences and computing to Molecular Biology (1995)
Foundations of Computational and Systems Biology MIT Course
Computational Biology: Genomes, Networks, Evolution Free MIT Course
External links
Bioinformatics Resource Portal (SIB)
https://en.wikipedia.org/wiki/Barter
In trade, barter (derived from baretor) is a system of exchange in which participants in a transaction directly exchange goods or services for other goods or services without using a medium of exchange, such as money. Economists usually distinguish barter from gift economies in many ways; barter, for example, features immediate reciprocal exchange, not one delayed in time. Barter usually takes place on a bilateral basis, but may be multilateral (if it is mediated through a trade exchange). In most developed countries, barter usually exists parallel to monetary systems only to a very limited extent. Market actors use barter as a replacement for money as the method of exchange in times of monetary crisis, such as when currency becomes unstable (such as hyperinflation or a deflationary spiral) or simply unavailable for conducting commerce.
No ethnographic studies have shown that any present or past society has used barter without any other medium of exchange or measurement, and anthropologists have found no evidence that money emerged from barter. They instead found that gift-giving (credit extended on a personal basis with an inter-personal balance maintained over the long term) was the most usual means of exchange of goods and services. Nevertheless, economists since the time of Adam Smith (1723–1790) have often inaccurately imagined pre-modern societies, using the supposed inefficiency of barter to explain the emergence of money, of "the" economy, and hence of the discipline of economics itself.
Economic theory
Adam Smith on the origin of money
Adam Smith sought to demonstrate that markets (and economies) pre-existed the state. He argued that money was not the creation of governments. Markets emerged, in his view, out of the division of labour, by which individuals began to specialize in specific crafts and hence had to depend on others for subsistence goods. These goods were first exchanged by barter. Specialization depended on trade but was hindered by the "double coincidence of wants" which barter requires, i.e., for the exchange to occur, each participant must want what the other has. To complete this hypothetical history, craftsmen would stockpile one particular good, be it salt or metal, that they thought no one would refuse. This is the origin of money according to Smith. Money, as a universally desired medium of exchange, allows each half of the transaction to be separated.
Barter is characterized in Adam Smith's "The Wealth of Nations" by a disparaging vocabulary: "haggling, swapping, dickering". It has also been characterized as negative reciprocity, or "selfish profiteering".
Anthropologists have argued, in contrast, "that when something resembling barter does occur in stateless societies it is almost always between strangers." Barter occurred between strangers, not fellow villagers, and hence cannot be used to naturalistically explain the origin of money without the state. Since most people engaged in trade knew each other, exchange was fostered through the extension of credit. Marcel Mauss, author of 'The Gift', argued that the first economic contracts were to not act in one's economic self-interest, and that before money, exchange was fostered through the processes of reciprocity and redistribution, not barter. Everyday exchange relations in such societies are characterized by generalized reciprocity, or a non-calculative familial "communism" where each takes according to their needs, and gives as they have.
Features of bartering
Often the following features are associated with barter transactions:
There is a demand focus for things of a different kind.
Most often, parties trade goods and services for goods or services that differ from what they are willing to forego.
The parties of the barter transaction are both equal and free.
Neither party has advantages over the other, and both are free to leave the trade at any point in time.
The transaction happens simultaneously.
The goods are normally traded at the same point in time. Nonetheless, delayed barter in goods may occur, though rarely. Where services are traded, however, the two parts of the trade may be separated in time.
The transaction is transformative.
A barter transaction "moves objects between the regimes of value", meaning that a good or service that is being traded may take up a new meaning or value under its recipient than that of its original owner.
There is no criterion of value.
There is no real way to value each side of the trade. There is bargaining taking place, not to do with the value of each party's good or service, but because each player in the transaction wants what is offered by the other.
Advantages
Since direct barter does not require payment in money, it can be utilized when money is in short supply, when there is little information about the credit worthiness of trade partners, or when there is a lack of trust between those trading.
Barter is an option to those who cannot afford to store their small supply of wealth in money, especially in hyperinflation situations where money devalues quickly.
Limitations
The limitations of barter are often explained in terms of its inefficiencies in facilitating exchange in comparison to money.
It is said that barter is 'inefficient' because:
There needs to be a 'double coincidence of wants'
For barter to occur between two parties, both parties need to have what the other wants.
There is no common measure of value/ No Standard Unit of Account
In a monetary economy, money plays the role of a measure of value of all goods, so their values can be assessed against each other; this role may be absent in a barter economy.
Indivisibility of certain goods
If a person wants to buy a certain amount of another's goods, but only has for payment one indivisible unit of another good which is worth more than what the person wants to obtain, a barter transaction cannot occur.
Lack of standards for deferred payments
This is related to the absence of a common measure of value, although if the debt is denominated in units of the good that will eventually be used in payment, it is not a problem.
Difficulty in storing wealth
If a society relies exclusively on perishable goods, storing wealth for the future may be impractical. However, some barter economies rely on durable goods like sheep or cattle for this purpose.
History
Silent trade
Other anthropologists have questioned whether barter is typically between "total" strangers, a form of barter known as "silent trade". Silent trade, also called silent barter, dumb barter ("dumb" here used in its old meaning of "mute"), or depot trade, is a method by which traders who cannot speak each other's language can trade without talking. However, Benjamin Orlove has shown that while barter occurs through "silent trade" (between strangers), it occurs in commercial markets as well. "Because barter is a difficult way of conducting trade, it will occur only where there are strong institutional constraints on the use of money or where the barter symbolically denotes a special social relationship and is used in well-defined conditions. To sum up, multipurpose money in markets is like lubrication for machines - necessary for the most efficient function, but not necessary for the existence of the market itself."
In his analysis of barter between coastal and inland villages in the Trobriand Islands, Keith Hart highlighted the difference between highly ceremonial gift exchange between community leaders, and the barter that occurs between individual households. The haggling that takes place between strangers is possible because of the larger temporary political order established by the gift exchanges of leaders. From this he concludes that barter is "an atomized interaction predicated upon the presence of society" (i.e. that social order established by gift exchange), and not typical between complete strangers.
Times of monetary crisis
As Orlove noted, barter may occur in commercial economies, usually during periods of monetary crisis. During such a crisis, currency may be in short supply, or highly devalued through hyperinflation. In such cases, money ceases to be the universal medium of exchange or standard of value. Money may be in such short supply that it becomes an item of barter itself rather than the means of exchange. Barter may also occur when people cannot afford to keep money (as when hyperinflation quickly devalues it).
An example of this would be during the Crisis in Bolivarian Venezuela, when Venezuelans resorted to bartering as a result of hyperinflation. The increasingly low value of banknotes, and their lack of circulation in suburban areas, meant that many Venezuelans, especially those living outside of larger cities, took to trading their own goods for even the most basic of transactions.
Additionally, in the wake of the 2008 financial crisis, barter exchanges reported a double-digit increase in membership, owing to the scarcity of fiat money and declining confidence in the monetary system.
Exchanges
Economic historian Karl Polanyi has argued that where barter is widespread, and cash supplies limited, barter is aided by the use of credit, brokerage, and money as a unit of account (i.e. used to price items). All of these strategies are found in ancient economies including Ptolemaic Egypt. They are also the basis for more recent barter exchange systems.
While one-to-one bartering is practised between individuals and businesses on an informal basis, organized barter exchanges have developed to conduct third party bartering which helps overcome some of the limitations of barter. A barter exchange operates as a broker and bank in which each participating member has an account that is debited when purchases are made, and credited when sales are made.
Modern barter and trade has evolved considerably to become an effective method of increasing sales, conserving cash, moving inventory, and making use of excess production capacity for businesses around the world. Businesses in a barter exchange earn trade credits (instead of cash) that are deposited into their account. They then have the ability to purchase goods and services from other members utilizing their trade credits – they are not obligated to purchase from those to whom they sold, and vice versa. The exchange plays an important role because it provides the record-keeping, brokering expertise and monthly statements to each member. Commercial exchanges make money by charging a commission on each transaction either all on the buy side, all on the sell side, or a combination of both. Transaction fees typically run between 8 and 15%. A successful example is International Monetary Systems, which was founded in 1985 and is one of the first exchanges in North America opened after the TEFRA Act of 1982.
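The bookkeeping behind such an exchange can be sketched in a few lines; the member names and the 10% commission below are illustrative assumptions within the 8–15% range mentioned above.

```python
# Sketch: the bookkeeping model described above — each member holds a
# trade-credit account that is credited on sales and debited on
# purchases, with the exchange taking a cash commission. Member names
# and the 10% commission rate are illustrative assumptions.
accounts = {"printer_co": 0.0, "cafe": 0.0, "law_firm": 0.0}
cash_commission_rate = 0.10
exchange_cash_revenue = 0.0

def record_trade(seller, buyer, trade_credits):
    """Move trade credits from buyer to seller and book the commission."""
    global exchange_cash_revenue
    accounts[seller] += trade_credits
    accounts[buyer] -= trade_credits
    exchange_cash_revenue += trade_credits * cash_commission_rate

record_trade("printer_co", "cafe", 500.0)      # cafe buys flyers
record_trade("cafe", "law_firm", 200.0)        # law firm buys catering

print(accounts)                   # balances in trade credits
print(exchange_cash_revenue)      # commission collected in cash
```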
Organized Barter (Retail Barter)
Since the 1930s, organized barter has been a common type of barter in which companies join a barter organization (barter company) that serves as a hub to exchange goods and services without money as a medium of exchange. Similar to a brokerage house, the barter company facilitates the exchange of goods and services between member companies, allowing members to acquire goods and services by providing their own as payment. Member companies are required to sign a barter agreement with the barter company as a condition of their membership. In turn, the barter company provides each member with the current levels of supply and demand for each good and service which can be purchased or sold in the system. These transactions are mediated by barter authorities of the member companies. The barter member companies can then acquire their desired goods or services from another member company within a predetermined time. Failure to deliver the good or service within the fixed time period results in the debt being settled in cash. Each member company pays an annual membership fee and the purchase and sales commissions outlined in the contract. Organized barter increases liquidity for member companies as it mitigates the requirement of cash to settle transactions, enabling sales and purchases to be made with excess capacity or surplus inventory. Additionally, organized barter can confer competitive advantage within industries and sectors: since the quantity of transactions depends on the supply-demand balance of the goods and services within the barter organization, member companies tend to face minimal competition within their own operating sector.
Corporate Barter
Producers, wholesalers and distributors tend to engage in corporate barter as a method of exchanging goods and services with companies they are in business with. These bilateral barter transactions are targeted towards companies aiming to convert stagnant inventories into receivable goods or services, to increase market share without cash investments, and to protect liquidity. However, issues arise as to the imbalance of supply and demand of desired goods and services and the inability to efficiently match the value of goods and services exchanged in these transactions.
Labour notes
The Owenite socialists in Britain and the United States in the 1830s were the first to attempt to organize barter exchanges. Owenism developed a "theory of equitable exchange" as a critique of the exploitative wage relationship between capitalist and labourer, by which all profit accrued to the capitalist. To counteract the uneven playing field between employers and employed, they proposed "schemes of labour notes based on labour time, thus institutionalizing Owen's demand that human labour, not money, be made the standard of value." This alternate currency eliminated price variability between markets, as well as the role of merchants who bought low and sold high. The system arose in a period where paper currency was an innovation. Paper currency was an IOU circulated by a bank (a promise to pay, not a payment in itself). Both merchants and an unstable paper currency created difficulties for direct producers.
An alternate currency, denominated in labour time, would prevent profit taking by middlemen; all goods exchanged would be priced only in terms of the amount of labour that went into them, as expressed in the maxim 'Cost the limit of price'. It became the basis of exchanges in London, and in America, where the idea was implemented at the New Harmony communal settlement by Josiah Warren in 1826, and in his Cincinnati 'Time store' in 1827. Warren's ideas were adopted by other Owenites and currency reformers, even though the labour exchanges were relatively short lived.
In England, about 30 to 40 cooperative societies sent their surplus goods to an "exchange bazaar" for direct barter in London, which later adopted a similar labour note. The British Association for Promoting Cooperative Knowledge established an "equitable labour exchange" in 1830. This was expanded as the National Equitable Labour Exchange in 1832 on Grays Inn Road in London. These efforts became the basis of the British cooperative movement of the 1840s. In 1848, the socialist and first self-designated anarchist Pierre-Joseph Proudhon postulated a system of time chits.
Michael Linton originated the term "local exchange trading system" (LETS) in 1983 and for a time ran the Comox Valley LETSystems in Courtenay, British Columbia. LETS networks use interest-free local credit so direct swaps do not need to be made. For instance, a member may earn credit by doing childcare for one person and spend it later on carpentry with another person in the same network. In LETS, unlike other local currencies, no scrip is issued, but rather transactions are recorded in a central location open to all members. As credit is issued by the network members, for the benefit of the members themselves, LETS are considered mutual credit systems.
Local currencies
The first exchange system was the Swiss WIR Bank. It was founded in 1934 as a result of currency shortages after the stock market crash of 1929. "WIR" is both an abbreviation of Wirtschaftsring (economic circle) and the word for "we" in German, reminding participants that the economic circle is also a community.
In Australia and New Zealand, the largest barter exchange is Bartercard, founded in 1991, with offices in the United Kingdom, United States, Cyprus, UAE, Thailand, and most recently, South Africa. Contrary to what its name suggests, it uses an electronic local currency, the trade dollar. Since its inception, Bartercard has amassed a trading value of over US$10 billion, and increased its customer network to 35,000 cardholders.
Bartering in business
In business, barter has the benefit that the trading partners get to know each other, that it discourages inefficient rent-seeking investments, and that trade sanctions can be imposed on dishonest partners.
According to the International Reciprocal Trade Association, the industry trade body, more than 450,000 businesses transacted $10 billion globally in 2008 – and officials expect trade volume to grow by 15% in 2009.
It is estimated that over 450,000 businesses in the United States were involved in barter exchange activities in 2010. There are approximately 400 commercial and corporate barter companies serving all parts of the world. There are many opportunities for entrepreneurs to start a barter exchange. Several major cities in the U.S. and Canada do not currently have a local barter exchange. There are two industry groups in the United States, the National Association of Trade Exchanges (NATE) and the International Reciprocal Trade Association (IRTA). Both offer training and promote high ethical standards among their members. Moreover, each has created its own currency through which its member barter companies can trade. NATE's currency is known as the BANC and IRTA's currency is called Universal Currency (UC).
In Canada, barter continues to thrive. The largest b2b barter exchange is International Monetary Systems (IMS Barter), founded in 1985. P2P bartering has seen a renaissance in major Canadian cities through Bunz - built as a network of Facebook groups that went on to become a stand-alone bartering based app in January 2016. Within the first year, Bunz accumulated over 75,000 users in over 200 cities worldwide.
Corporate barter focuses on larger transactions, which is different from a traditional, retail oriented barter exchange. Corporate barter exchanges typically use media and advertising as leverage for their larger transactions. It entails the use of a currency unit called a "trade-credit". The trade-credit must not only be known and guaranteed but also be valued in an amount the media and advertising could have been purchased for had the "client" bought it themselves (contract to eliminate ambiguity and risk).
Soviet bilateral trade is occasionally called "barter trade", because although the purchases were denominated in U.S. dollars, the transactions were credited to an international clearing account, avoiding the use of hard cash.
Tax implications
In the United States, Karl Hess used bartering to make it harder for the IRS to seize his wages and as a form of tax resistance. Hess explained how he turned to barter in an op-ed for The New York Times in 1975. However the IRS now requires barter exchanges to be reported as per the Tax Equity and Fiscal Responsibility Act of 1982. Barter exchanges are considered taxable revenue by the IRS and must be reported on a 1099-B form. According to the IRS, "The fair market value of goods and services exchanged must be included in the income of both parties."
Other countries, though, do not have the reporting requirement that the U.S. does concerning proceeds from barter transactions, but taxation is handled the same way as a cash transaction. If one barters for a profit, one pays the appropriate tax; if one generates a loss in the transaction, it is treated as a loss. Bartering for business is also taxed accordingly as business income or business expense. Many barter exchanges require that one register as a business.
In countries like Australia and New Zealand, barter transactions require the appropriate tax invoices declaring the value of the transaction and its reciprocal GST component. All records of barter transactions must also be kept for a minimum of five years after the transaction is made.
Recent developments
In Spain (particularly the Catalonia region) there is a growing number of exchange markets. These barter markets or swap meets work without money. Participants bring things they do not need and exchange them for the unwanted goods of another participant. Swapping among three parties often helps satisfy tastes when trying to get around the rule that money is not allowed.
Other examples are El Cambalache in San Cristobal de las Casas, Chiapas, Mexico and post-Soviet societies.
The recent blockchain technologies are making it possible to implement decentralized and autonomous barter exchanges that can be used by crowds on a massive scale. BarterMachine is an Ethereum smart contract based system that allows direct exchange of multiple types and quantities of tokens with others. It also provides a solution miner that allows users to compute direct bartering solutions in their browsers. Bartering solutions can be submitted to BarterMachine which will perform collective transfer of tokens among the blockchain addresses that belong to the users. If there are excess tokens left after the requirements of the users are satisfied, the leftover tokens will be given as reward to the solution miner.
See also
Collaborative consumption
Complementary currencies
Gift economy
International trade
List of international trade topics
Local exchange trading system
Natural economy
Private currency
Property caretaker
Quid pro quo
Simple living
Trading cards
Time banking
References
External links
Business terms
Cashless society
Economic systems
Pricing
Simple living
Tax avoidance
Trade
https://en.wikipedia.org/wiki/Beatmatching
Beatmatching or pitch cue is a disc jockey technique of pitch shifting or time stretching an upcoming track to match its tempo to that of the currently playing track, and to adjust them such that the beats (and, usually, the bars) are synchronized—e.g. the kicks and snares in two house records hit at the same time when both records are played simultaneously. Beatmatching is a component of beatmixing which employs beatmatching combined with equalization, attention to phrasing and track selection in an attempt to make a single mix that flows together and has a good structure.
The technique was developed to keep people from leaving the dancefloor at the end of a song. These days it is considered basic among disc jockeys (DJs) in electronic dance music genres, and it is standard practice in clubs to keep a constant beat through the night, even if DJs change in the middle.
Technique
The beatmatching technique consists of the following steps:
While a record is playing, start a second record playing, but only monitored through headphones, not being fed to the main PA system. Use gain (or trim) control on the mixer to match the levels of the two records.
Restart and slip-cue the new record at the right time, on beat with the record currently playing.
If the beat on the new record hits before the beat on the current record, then the new record is too fast; reduce the pitch and manually slow the speed of the new record to bring the beats back in sync.
If the beat on the new record hits after the beat on the current record, then the new record is too slow; increase the pitch and manually increase the speed of the new record to bring the beats back in sync.
Continue this process until the two records are in sync with each other. It can be difficult to sync the two records perfectly, so manual adjustment of the records is necessary to maintain the beat synchronization.
Gradually fade in parts of the new track while fading out the old track. While in the mix, ensure that the tracks are still synchronized, adjusting the records if needed.
The fade can be repeated several times, for example, from the first track, fade to the second track, then back to first, then to second again.
One of the key things to consider when beatmatching is the tempo of both songs, and the musical theory behind the songs. Attempting to beatmatch songs with completely different beats per minute (BPM) will result in one of the songs sounding too fast or too slow.
When beatmatching, a popular technique is to vary the equalization of both tracks. For example, when the kicks are occurring on the same beat, a more seamless transition can occur if the lower frequencies are taken out of one of the songs and the lower frequencies of the other song are boosted. Doing so creates a smoother transition.
Pitch and tempo
The pitch and tempo of a track are normally linked together: spin a disc 5% faster and both pitch and tempo will be 5% higher. However, some modern DJ software can change pitch and tempo independently using time-stretching and pitch-shifting, allowing harmonic mixing. There is also a feature in modern DJ software which may be called "master tempo" or "key adjust" which changes the tempo while keeping the original pitch.
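The underlying arithmetic is simple, as the following sketch shows; the 124 and 126 BPM values are illustrative, and the semitone figure applies only when pitch and tempo remain linked.

```python
# Sketch: the arithmetic behind matching tempos with a pitch fader. A
# track at 126 BPM must be slowed by about 1.6% to sit on a 124 BPM
# track; on a turntable without master tempo, the pitch drops by the
# same percentage (roughly 0.28 semitones). Values are illustrative.
import math

playing_bpm = 124.0
incoming_bpm = 126.0

adjustment = (playing_bpm / incoming_bpm - 1.0) * 100.0   # percent on the fader
semitones = 12 * math.log2(playing_bpm / incoming_bpm)    # pitch change if linked

print(f"set pitch fader to {adjustment:+.2f}%")
print(f"pitch shift if tempo and pitch are linked: {semitones:+.2f} semitones")
```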
History
Francis Grasso was one of the first people to beatmatch in the late 1960s, being taught the technique by Bob Lewis.
These days beat-matching is considered central to DJing, and features making it possible are a requirement for DJ-oriented players. In 1978, the Technics SL-1200MK2 turntable was released, whose comfortable and precise sliding pitch control and high torque direct drive motor made beat-matching easier and it became the standard among DJs. With the advent of the compact disc, DJ-oriented compact disc players with pitch control and other features enabling beat-matching (and sometimes scratching), dubbed CDJs, were introduced by various companies. More recently, software with similar capabilities has been developed to allow manipulation of digital audio files stored on computers using turntables with special vinyl records (e.g. Final Scratch, M-Audio Torq, Serato Scratch Live) or computer interface (e.g. Traktor DJ Studio, Mixxx, VirtualDJ). Other software including algorithmic beat-matching is Ableton Live, which allows for realtime music manipulation and deconstruction. Freeware software such as Rapid Evolution can detect the beats per minute and determine the percent BPM difference between songs.
Most modern DJ hardware and software now offer a "sync" feature which automatically adjusts the tempo between tracks being mixed so the DJ no longer needs to beatmatch manually.
See also
Clubdjpro
DJ mix
Harmonic mixing
Mashup
Segue
References
Audio mixing
Disco
DJing
American inventions
https://en.wikipedia.org/wiki/Backplane
A backplane (or "backplane system") is a group of electrical connectors in parallel with each other, so that each pin of each connector is linked to the same relative pin of all the other connectors, forming a computer bus. It is used to connect several printed circuit boards together to make up a complete computer system. Backplanes commonly use a printed circuit board, but wire-wrapped backplanes have also been used in minicomputers and high-reliability applications.
A backplane is generally differentiated from a motherboard by the lack of on-board processing and storage elements. A backplane uses plug-in cards for storage and processing.
Usage
Early microcomputer systems like the Altair 8800 used a backplane for the processor and expansion cards.
Backplanes are normally used in preference to cables because of their greater reliability. In a cabled system, the cables need to be flexed every time a card is added to or removed from the system; this flexing eventually causes mechanical failures. A backplane does not suffer from this problem, so its service life is limited only by the longevity of its connectors. For example, DIN 41612 connectors (used in the VMEbus system) have three durability grades built to withstand (respectively) 50, 400 and 500 insertions and removals, or "mating cycles". Serial backplane technology transmits information using low-voltage differential signaling.
In addition, there are bus expansion cables which will extend a computer bus to an external backplane, usually located in an enclosure, to provide more or different slots than the host computer provides. These cable sets have a transmitter board located in the computer, an expansion board in the remote backplane, and a cable between the two.
Active versus passive backplanes
Backplanes have grown in complexity from the simple Industry Standard Architecture (ISA) (used in the original IBM PC) or S-100 style where all the connectors were connected to a common bus. Due to limitations inherent in the Peripheral Component Interconnect (PCI) specification for driving slots, backplanes are now offered as passive and active.
True passive backplanes offer no active bus driving circuitry. Any desired arbitration logic is placed on the daughter cards. Active backplanes include chips which buffer the various signals to the slots.
The distinction between the two isn't always clear, but may become an important issue if a whole system is expected to have no single point of failure (SPOF). A common assumption is that a passive backplane, even if it is single, is not a SPOF, while active backplanes are more complicated and thus carry a non-zero risk of malfunction. In practice, however, maintenance can cause disruption with either type: while swapping boards there is always a possibility of damaging the pins or connectors on the backplane, which can cause a full system outage because all boards mounted on the backplane must be removed in order to repair it. Newer architectures therefore use high-speed redundant connectivity to interconnect system boards point to point, with no single point of failure anywhere in the system.
Backplanes versus motherboards
When a backplane is used with a plug-in single-board computer (SBC) or system host board (SHB), the combination provides the same functionality as a motherboard, providing processing power, memory, I/O and slots for plug-in cards. While there are a few motherboards that offer more than 8 slots, that is the traditional limit. In addition, as technology progresses, the availability and number of a particular slot type may be limited in terms of what is currently offered by motherboard manufacturers.
However, backplane architecture is somewhat unrelated to the SBC technology plugged into it. There are some limitations to what can be constructed, in that the SBC chipset and processor have to support the slot types. In addition, a virtually unlimited number of slots can be provided, with 20 (including the SBC slot) being a practical though not an absolute limit. Thus, a PICMG backplane can provide any number and mix of ISA, PCI, PCI-X, and PCI-e slots, limited only by the ability of the SBC to interface with and drive those slots. For example, an SBC with the latest i7 processor could interface with a backplane providing up to 19 ISA slots to drive legacy I/O cards.
Midplane
Some backplanes are constructed with slots for connecting to devices on both sides, and are referred to as midplanes. This ability to plug cards into either side of a midplane is often useful in larger systems made up primarily of modules attached to the midplane.
Midplanes are often used in computers, mostly in blade servers, where server blades reside on one side and the peripheral (power, networking, and other I/O) and service modules reside on the other. Midplanes are also popular in networking and telecommunications equipment where one side of the chassis accepts system processing cards and the other side of the chassis accepts network interface cards.
Orthogonal midplanes connect vertical cards on one side to horizontal boards on the other side.
One common orthogonal midplane connects many vertical telephone line cards on one side, each one connected to copper telephone wires, to a horizontal communications card on the other side.
A "virtual midplane" is an imaginary plane between vertical cards on one side that directly connect to horizontal boards on the other side; the card-slot aligners of the card cage and self-aligning connectors on the cards hold the cards in position.
Some people use the term "midplane" to describe a board that sits between and connects a hard drive hot-swap backplane and redundant power supplies.
Backplanes in storage
Servers commonly have a backplane to attach hot-swappable hard disk drives and solid-state drives; backplane pins pass directly into hard drive sockets without cables. They may have a single connector for one disk array controller, or multiple connectors that can be connected to one or more controllers in an arbitrary way. Backplanes are commonly found in disk enclosures, disk arrays, and servers.
Backplanes for SAS and SATA HDDs most commonly use the SGPIO protocol as means of communication between the host adapter and the backplane. Alternatively SCSI Enclosure Services can be used. With Parallel SCSI subsystems, SAF-TE is used.
Platforms
PICMG
A single-board computer meeting the PICMG 1.3 specification and compatible with a PICMG 1.3 backplane is referred to as a System Host Board.
In the Intel Single-Board Computer world, PICMG provides standards for the backplane interface:
PICMG 1.0, 1.1 and 1.2 provide ISA and PCI support, with 1.2 adding PCIX support.
PICMG 1.3 provides PCI-Express support.
See also
Motherboard
Switched fabric
Daughterboard
M-Module
SS-50 Bus
STD Bus
STEbus
Eurocard (printed circuit board)
VXI
References
Further reading
Computer buses
https://en.wikipedia.org/wiki/Boomerang
A boomerang () is a thrown tool typically constructed with aerofoil sections and designed to spin about an axis perpendicular to the direction of its flight. A returning boomerang is designed to return to the thrower, while a non-returning boomerang is designed as a weapon to be thrown straight and is traditionally used by some Aboriginal Australians for hunting.
Historically, boomerangs have been used for hunting, sport, and entertainment and are made in various shapes and sizes to suit different purposes. Although considered an Australian icon, ancient boomerangs have also been discovered elsewhere in Africa, the Americas, and Eurasia.
Description
A boomerang is a throwing stick with aerodynamic properties, traditionally made of wood, but also of bone, horn, tusks and even iron. Modern boomerangs used for sport may be made from plywood or plastics such as ABS, polypropylene, phenolic paper, or carbon fibre-reinforced plastics.
Boomerangs come in many shapes and sizes depending on their geographic or tribal origins and intended function, including the traditional Australian type, the cross-stick, the pinwheel, the tumble-stick, the Boomabird, and other less common types.
Boomerangs return to the thrower, distinguishing them from throwing sticks.
Returning boomerangs fly, and are examples of the earliest heavier-than-air human-made flight. A returning boomerang has two or more aerofoil section wings arranged so that when spinning they create unbalanced aerodynamic forces that curve its path into an ellipse, returning to its point of origin when thrown correctly. Their typical L-shape makes them the most recognisable form of boomerang. Although used primarily for leisure or recreation, returning boomerangs are also used to decoy birds of prey, thrown above the long grass to frighten game birds into flight and into waiting nets. Non-traditional, modern, competition boomerangs come in many shapes, sizes and materials.
Throwing sticks, valari, or kylies, are primarily used as weapons. They lack the aerofoil sections, are generally heavier and designed to travel as straight and forcefully as possible to the target to bring down game. The Tamil valari variant, of ancient origin and mentioned in the Tamil Sangam literature "Purananuru", was one of these. The usual form of the Valari is two limbs set at an angle; one thin and tapering, the other rounded as a handle. Valaris come in many shapes and sizes. They are usually made of cast iron cast from moulds. Some may have wooden limbs tipped with iron or with lethally sharpened edges or with special double-edged and razor-sharp daggers known as kattari.
Etymology
The origin of the term is uncertain. One source asserts that the term entered the language in 1827, adapted from an extinct Aboriginal language of New South Wales, Australia, but mentions a variant, wo-mur-rang, which it dates to 1798. The first recorded encounter with a boomerang by Europeans was at Farm Cove (Port Jackson), in December 1804, when a weapon was witnessed during a tribal skirmish:
David Collins listed "Wo-mur-rāng" as one of eight Aboriginal "Names of clubs" in 1798, but was probably referring to the woomera, which is actually a spear-thrower. An anonymous 1790 manuscript on Aboriginal languages of New South Wales reported "Boo-mer-rit" as "the Scimiter".
In 1822, it was described in detail and recorded as a "bou-mar-rang" in the language of the Turuwal people (a sub-group of the Darug) of the Georges River near Port Jackson. The Turawal used other words for their hunting sticks but used "boomerang" to refer to a returning throw-stick.
History
Boomerangs were, historically, used as hunting weapons, percussive musical instruments, battle clubs, fire-starters, decoys for hunting waterfowl, and as recreational play toys. The smallest boomerang may be less than from tip to tip, and the largest over in length. Tribal boomerangs may be inscribed or painted with designs meaningful to their makers. Most boomerangs seen today are of the tourist or competition sort, and are almost invariably of the returning type.
Depictions of boomerangs being thrown at animals, such as kangaroos, appear in some of the oldest rock art in the world, the Indigenous Australian rock art of the Kimberley region, which is potentially up to 50,000 years old. Stencils and paintings of boomerangs also appear in the rock art of West Papua, including on Bird's Head Peninsula and Kaimana, likely dating to the Last Glacial Maximum, when lower sea levels led to cultural continuity between Papua and Arnhem Land in Northern Australia. The oldest surviving Australian Aboriginal boomerangs come from a cache found in a peat bog in the Wyrie Swamp of South Australia and date to 10,000 BC.
Although traditionally thought of as Australian, boomerangs have been found also in ancient Europe, Egypt, and North America. There is evidence of the use of non-returning boomerangs by the Native Americans of California and Arizona, and inhabitants of South India for killing birds and rabbits. Some boomerangs were not thrown at all, but were used in hand to hand combat by Indigenous Australians. Ancient Egyptian examples, however, have been recovered, and experiments have shown that they functioned as returning boomerangs. Hunting sticks discovered in Europe seem to have formed part of the Stone Age arsenal of weapons. One boomerang that was discovered in Obłazowa Cave in the Carpathian Mountains in Poland was made of mammoth's tusk and is believed, based on AMS dating of objects found with it, to be about 30,000 years old. In the Netherlands, boomerangs have been found in Vlaardingen and Velsen from the first century BC. King Tutankhamun, the famous pharaoh of ancient Egypt, who died over 3,300 years ago, owned a collection of boomerangs of both the straight flying (hunting) and returning variety.
No one knows for sure how the returning boomerang was invented, but some modern boomerang makers speculate that it developed from the flattened throwing stick, still used by Aboriginal Australians and other indigenous peoples around the world, including the Navajo in North America. A hunting boomerang is delicately balanced and much harder to make than a returning one. The curving flight characteristic of returning boomerangs was probably first noticed by early hunters trying to "tune" their throwing sticks to fly straight.
It is thought by some that the shape and elliptical flight path of the returning boomerang makes it useful for hunting birds and small animals, or that noise generated by the movement of the boomerang through the air, or, by a skilled thrower, lightly clipping leaves of a tree whose branches house birds, would help scare the birds towards the thrower. It is further supposed by some that this was used to frighten flocks or groups of birds into nets that were usually strung up between trees or thrown by hidden hunters. In southeastern Australia, it is claimed that boomerangs were made to hover over a flock of ducks; mistaking it for a hawk, the ducks would dive away, toward hunters armed with nets or clubs.
Traditionally, most boomerangs used by Aboriginal groups in Australia were non-returning. These weapons, sometimes called "throwsticks" or "kylies", were used for hunting a variety of prey, from kangaroos to parrots; at a range of about , a non-returning boomerang could inflict mortal injury to a large animal. A throwstick thrown nearly horizontally may fly in a nearly straight path and could fell a kangaroo on impact to the legs or knees, while the long-necked emu could be killed by a blow to the neck. Hooked non-returning boomerangs, known as "beaked kylies", used in northern Central Australia, have been claimed to kill multiple birds when thrown into a dense flock. Throwsticks are used as multi-purpose tools by today's Aboriginal peoples, and besides throwing could be wielded as clubs, used for digging, used to start friction fires, and are sonorous when two are struck together.
Recent evidence also suggests that boomerangs were used as war weapons.
Modern use
Today, boomerangs are mostly used for recreation. There are different types of throwing contests: accuracy of return; Aussie round; trick catch; maximum time aloft; fast catch; and endurance (see below). The modern sport boomerang (often referred to as a 'boom' or 'rang') is made of Finnish birch plywood, hardwood, plastic or composite materials and comes in many different shapes and colours. Most sport boomerangs typically weigh less than , with MTA boomerangs (boomerangs used for the maximum-time-aloft event) often under .
Boomerangs have also been suggested as an alternative to clay pigeons in shotgun sports, where the flight of the boomerang better mimics the flight of a bird offering a more challenging target.
The modern boomerang is often designed with the aid of computers, using precision airfoils. The number of "wings" is often more than two, as three or four wings provide more lift than two. Among the latest inventions is a round-shaped boomerang, which looks different but uses the same returning principle as traditional boomerangs, allowing for a safer catch.
In 1992, German astronaut Ulf Merbold performed an experiment aboard Spacelab that established that boomerangs function in zero gravity as they do on Earth. French Astronaut Jean-François Clervoy aboard Mir repeated this in 1997. In 2008, Japanese astronaut Takao Doi again repeated the experiment on board the International Space Station.
Beginning in the later part of the twentieth century, there has been a bloom in the independent creation of unusually designed art boomerangs. These often have little or no resemblance to the traditional historical ones, and at first sight some of these objects may not look like boomerangs at all. The use of modern thin plywoods and synthetic plastics has greatly contributed to their success. Designs are very diverse and can range from animal-inspired forms, humorous themes, and complex calligraphic and symbolic shapes to the purely abstract. Painted surfaces are similarly richly diverse. Some boomerangs made primarily as art objects do not have the aerodynamic properties required to return.
Aerodynamics
A returning boomerang is a rotating wing. It consists of two or more arms, or wings, connected at an angle; each wing is shaped as an airfoil section. Although it is not a requirement that a boomerang be in its traditional shape, it is usually flat.
Boomerangs can be made for right- or left-handed throwers. The difference between right and left is subtle: the planform is the same, but the leading edges of the aerofoil sections are reversed. A right-handed boomerang makes a counter-clockwise, circular flight to the left, while a left-handed boomerang flies clockwise to the right. Most sport boomerangs weigh between , have a wingspan, and a range.
A falling boomerang starts spinning, and most then fall in a spiral. When the boomerang is thrown with high spin, a boomerang flies in a curved rather than a straight line. When thrown correctly, a boomerang returns to its starting point. As the wing rotates and the boomerang moves through the air, the airflow over the wings creates lift on both "wings". However, during one-half of each blade's rotation, it sees a higher airspeed, because the rotation tip speed and the forward speed add, and when it is in the other half of the rotation, the tip speed subtracts from the forward speed. Thus if thrown nearly upright, each blade generates more lift at the top than the bottom. While it might be expected that this would cause the boomerang to tilt around the axis of travel, because the boomerang has significant angular momentum, the gyroscopic precession causes the plane of rotation to tilt about an axis that is 90 degrees to the direction of flight, causing it to turn. When thrown in the horizontal plane, as with a Frisbee, instead of in the vertical, the same gyroscopic precession will cause the boomerang to fly violently, straight up into the air and then crash.
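The turning behaviour described above can be illustrated with a rough order-of-magnitude sketch. The following Python snippet (all numerical values are assumptions chosen only for illustration, not measured data) treats the boomerang as a spinning disc, estimates the gyroscopic precession rate produced by the lift difference between the advancing and retreating blades, and converts that rate into an approximate turn radius; it is a simplification, not a full aerodynamic model.

    import math

    # Illustrative, assumed values for a generic sport boomerang.
    mass = 0.1                 # kg
    radius = 0.15              # m, effective blade radius
    spin = 10 * 2 * math.pi    # rad/s, about 10 revolutions per second
    forward_speed = 20.0       # m/s
    lift_difference = 1.0      # N, extra lift on the advancing blade
    moment_arm = 0.10          # m, where that extra lift effectively acts

    # Approximate the spinning boomerang as a flat disc: I = (1/2) m r^2.
    inertia = 0.5 * mass * radius ** 2
    angular_momentum = inertia * spin

    # Gyroscopic precession: the rolling torque from the lift asymmetry
    # tilts the spin axis at rate Omega = torque / angular_momentum,
    # 90 degrees ahead of where the torque is applied.
    torque = lift_difference * moment_arm
    precession_rate = torque / angular_momentum     # rad/s

    # If the boomerang keeps turning at this rate while moving forward,
    # its flight path curves with roughly this radius.
    turn_radius = forward_speed / precession_rate
    print(f"precession rate ~ {precession_rate:.2f} rad/s")
    print(f"turn radius     ~ {turn_radius:.0f} m")

With these assumed numbers the sketch gives a turn radius of roughly 14 m, which is the right order of magnitude for the circular flight described above.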
Fast Catch boomerangs usually have three or more symmetrical wings (seen from above), whereas a Long Distance boomerang is most often shaped similar to a question mark. Maximum Time Aloft boomerangs mostly have one wing considerably longer than the other. This feature, along with carefully executed bends and twists in the wings help to set up an "auto-rotation" effect to maximise the boomerang's hover time in descending from the highest point in its flight.
Some boomerangs have turbulators — bumps or pits on the top surface that act to increase the lift as boundary layer transition activators (to keep attached turbulent flow instead of laminar separation).
Throwing technique
Boomerangs are generally thrown in unobstructed, open spaces at least twice as large as the range of the boomerang. The flight direction to the left or right depends upon the design of the boomerang itself, not the thrower. A right-handed or left-handed boomerang can be thrown with either hand, but throwing a boomerang with the non-matching hand requires a throwing motion that many throwers find awkward. The following technique applies to a right-handed boomerang; the directions are mirrored for a left-handed boomerang. Different boomerang designs have different flight characteristics and are suitable for different conditions. The accuracy of the throw depends on understanding the weight and aerodynamics of that particular boomerang, and the strength, consistency and direction of the wind; from this, the thrower chooses the angle of tilt, the angle against the wind, the elevation of the trajectory, the degree of spin and the strength of the throw. A great deal of trial and error is required to perfect the throw over time.
A properly thrown boomerang will travel out parallel to the ground, sometimes climbing gently, perform a graceful, anti-clockwise, circular or tear-drop shaped arc, flatten out and return in a hovering motion, coming in from the left or spiralling in from behind. Ideally, the hover will allow a practiced catcher to clamp their hands shut horizontally on the boomerang from above and below, sandwiching the centre between their hands.
The grip used depends on size and shape; smaller boomerangs are held between finger and thumb at one end, while larger, heavier or wider boomerangs need one or two fingers wrapped over the top edge in order to induce a spin. The aerofoil-shaped section must face the inside of the thrower, and the flatter side outwards. It is usually inclined outwards, from a nearly vertical position to 20° or 30°; the stronger the wind, the closer to vertical. The elbow of the boomerang can point forwards or backwards, or it can be gripped for throwing; it just needs to start spinning on the required inclination, in the desired direction, with the right force.
The boomerang is aimed to the right of the oncoming wind; the exact angle depends on the strength of the wind and the boomerang itself. Left-handed boomerangs are thrown to the left of the wind and fly a clockwise flight path. The trajectory is either parallel to the ground or slightly upwards. The boomerang can return without the aid of any wind, but even very slight winds must be taken into account, however calm they might seem. Little or no wind is preferable for an accurate throw; light winds up to are manageable with skill. If the wind is strong enough to fly a kite, it may be too strong unless a skilled thrower is using a boomerang designed for stability in stronger winds. Gusty days are a great challenge, and the thrower must be keenly aware of the ebb and flow of the wind strength, finding appropriate lulls in the gusts to launch the boomerang.
Competitions and records
A world record achievement was made on 3 June 2007 by Tim Lendrum in Aussie Round. Lendrum scored 96 out of 100, giving him a national record as well as an equal world record throwing an "AYR" made by expert boomerang maker Adam Carroll.
In international competition, a world cup is held every second year. Teams from Germany and the United States have dominated international competition. The individual World Champion title was won in 2000, 2002, 2004, 2012, and 2016 by Swiss thrower Manuel Schütz. In 1992, 1998, 2006, and 2008, Fridolin Frost from Germany won the title.
The team competitions of 2012 and 2014 were won by Boomergang (an international team). World champions were Germany in 2012 and Japan in 2014 for the first time. Boomergang was formed by individuals from several countries, including the Colombian Alejandro Palacio. In 2016 USA became team world champion.
Competition disciplines
Modern boomerang tournaments usually involve some or all of the events listed below. In all disciplines the boomerang must travel at least from the thrower. Throwing takes place individually. The thrower stands at the centre of concentric rings marked on an open field.
Events include:
Aussie Round: considered by many to be the ultimate test of boomeranging skills. The boomerang should ideally cross the circle and come right back to the centre. Each thrower has five attempts. Points are awarded for distance, accuracy and the catch.
Accuracy: points are awarded according to how close the boomerang lands to the centre of the rings. The thrower must not touch the boomerang after it has been thrown. Each thrower has five attempts. In major competitions there are two accuracy disciplines: Accuracy 100 and Accuracy 50.
Endurance: points are awarded for the number of catches achieved in 5 minutes.
Fast Catch: the time taken to throw and catch the boomerang five times. The winner has the fastest timed catches.
Trick Catch/Doubling: points are awarded for trick catches behind the back, between the feet, and so on. In Doubling, the thrower has to throw two boomerangs at the same time and catch them in sequence in a special way.
Consecutive Catch: points are awarded for the number of catches achieved before the boomerang is dropped. The event is not timed.
MTA 100 (Maximal Time Aloft): points are awarded for the length of time spent by the boomerang in the air. The field is normally a circle measuring 100 m. An alternative to this discipline, without the 100 m restriction, is called MTA unlimited.
Long Distance: the boomerang is thrown from the middle point of a baseline. The furthest distance travelled by the boomerang away from the baseline is measured. On returning, the boomerang must cross the baseline again but does not have to be caught. A special section is dedicated to LD below.
Juggling: as with Consecutive Catch, only with two boomerangs. At any given time one boomerang must be in the air.
World records
Guinness World Record – Smallest Returning Boomerang
Non-discipline record: Smallest Returning Boomerang: Sadir Kattan of Australia in 1997 with long and wide. This tiny boomerang flew the required , before returning to the accuracy circles on 22 March 1997 at the Australian National Championships.
Guinness World Record – Longest Throw of Any Object by a Human
A boomerang was used to set a Guinness World Record with a throw of by David Schummy on 15 March 2005 at Murarrie Recreation Ground, Australia. This broke the record set by Erin Hemmings who threw an Aerobie on 14 July 2003 at Fort Funston, San Francisco.
Long-distance versions
Long-distance boomerang throwers aim to have the boomerang go the furthest possible distance while returning close to the throwing point. In competition the boomerang must intersect an imaginary surface defined as an infinite vertical projection of a line centred on the thrower. Outside of competitions, the definition is not so strict, and throwers may be happy simply not to walk too far to recover the boomerang.
General properties
Long-distance boomerangs are optimised to have minimal drag while still having enough lift to fly and return. For this reason, they have a very narrow throwing window, which discourages many beginners from continuing with this discipline. For the same reason, the quality of manufactured long-distance boomerangs is often difficult to determine.
Today's long-distance boomerangs almost all have an S or question-mark shape and a beveled edge on both sides (the bevel on the bottom side is sometimes called an undercut). This is to minimise drag and lower the lift. Lift must be low because the boomerang is thrown with an almost total layover (flat). Long-distance boomerangs are most frequently made of composite materials, mainly fibreglass epoxy composites.
Flight path
The projection of the flight path of a long-distance boomerang on the ground resembles a water drop. For older types of long-distance boomerangs (all types of so-called big hooks), the first and last thirds of the flight path are very low, while the middle third is a fast climb followed by a fast descent. Nowadays, boomerangs are made so that their whole flight path is almost planar, with a constant climb during the first half of the trajectory and a rather constant descent during the second half.
From a theoretical point of view, long-distance boomerangs are also interesting for the following reason: to achieve different behaviour during different flight phases, the ratio of the rotation frequency to the forward velocity follows a U-shaped function, i.e., its derivative crosses 0. In practice, this means that the boomerang has a very low forward velocity at its furthest point. The kinetic energy of the forward motion is then stored as potential energy. This is not true for other types of boomerangs, where the loss of kinetic energy is irreversible (MTA boomerangs also store kinetic energy as potential energy during the first half of the flight, but the potential energy is then lost directly to drag).
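The exchange between forward kinetic energy and potential energy mentioned above can be made concrete with a one-line energy balance. The short sketch below uses assumed launch and far-point speeds (placeholders, not measurements) and ignores drag and rotational energy, so it gives only an upper bound on the height a long-distance boomerang could gain from its lost forward speed.

    g = 9.81           # m/s^2, gravitational acceleration
    v_launch = 30.0    # m/s, assumed launch speed
    v_far = 3.0        # m/s, assumed residual forward speed at the furthest point

    # Lost kinetic energy per unit mass converted entirely to potential energy.
    height_gain = (v_launch ** 2 - v_far ** 2) / (2 * g)
    print(f"maximum height gained ~ {height_gain:.0f} m")   # about 45 m with these numbers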
Related terms
In Noongar language, kylie is a flat curved piece of wood similar in appearance to a boomerang that is thrown when hunting for birds and animals. "Kylie" is one of the Aboriginal words for the hunting stick used in warfare and for hunting animals. Instead of following curved flight paths, kylies fly in straight lines from the throwers. They are typically much larger than boomerangs, and can travel very long distances; due to their size and hook shapes, they can cripple or kill an animal or human opponent. The word is perhaps an English corruption of a word meaning "boomerang" taken from one of the Western Desert languages, for example, the Warlpiri word "karli".
Cultural references
Trademarks of Australian companies using the boomerang as a symbol, emblem or logo proliferate, usually removed from their Aboriginal context and symbolising "returning", or serving to distinguish an Australian brand. Early examples included Bain's White Ant Exterminator (1896); Webendorfer Bros. explosives (1898); E. A. Adams Foods (1920); and the (still current) Boomerang Cigarette Papers Pty. Ltd.
The use of "Aboriginalia", including the boomerang, as symbols of Australia dates from the late 1940s and early 1950s, when it was in widespread use by a largely European arts, crafts and design community. By the 1960s, the Australian tourism industry had extended it to the very branding of Australia, marketed particularly to overseas and domestic tourists as souvenirs and gifts, and thus to Aboriginal culture. At the very time when Aboriginal people and culture were subject to policies that removed them from their traditional lands and sought to assimilate them (physiologically and culturally) into mainstream white Australian culture, causing the Stolen Generations, Aboriginalia found an ironically "nostalgic" entry point into Australian popular culture at important social locations: holiday resorts and Australian domestic interiors. In the 21st century, souvenir objects depicting Aboriginal peoples, symbolism and motifs including the boomerang, made from the 1940s to the 1970s, regarded as kitsch and sold largely to tourists in the first instance, became highly sought after by both Aboriginal and non-Aboriginal collectors and captured the imagination of Aboriginal artists and cultural commentators.
See also
List of premodern combat weapons
List of martial arts weapons
Australian Aboriginal artefacts
Batarang
Bat'leth
Captain Boomerang
Chakram
CAC Boomerang, a World War II fighter-plane
Flying wing, tailless boomerang-shaped aircraft
Frisbee
Googie, boomerang-shaped architecture
Shuriken
Throwing stick
Valari
Melee weapon
References
Further reading
Boomerang (Encyclopedia.com)
Nishiyama, Yutaka, Why do boomerangs come back?, Int. J. of Pure and Appl. Math. 78(3), 335–347, 2012.
Valde-Nowak et al. (1987). "Upper Palaeolithic boomerang made of a mammoth tusk in south Poland". Nature 329: 436–438 (1 October 1987); doi:10.1038/329436a0.
External links
International Federation of Boomerang Associations
Boomerang aerodynamics: an online dissertation
Explanation of the origin of the word 'Boomerang'
How to Throw a Boomerang
1790s neologisms
Australian Aboriginal bushcraft
Individual sports
Recreational weapons
Sports equipment
Throwing clubs
Australian inventions
Sports originating in Australia
Physical activity and dexterity toys
Australian English
Hunting equipment
National symbols of Australia
Primitive weapons
Weapons of Australia
|
https://en.wikipedia.org/wiki/Bodybuilding
|
Bodybuilding is the practice of progressive resistance exercise to build, control, and develop one's muscles via hypertrophy. An individual who engages in this activity is referred to as a bodybuilder. It is primarily undertaken for aesthetic purposes over functional ones, distinguishing it from similar activities such as powerlifting, which focuses solely on increasing the physical load one can exert.
In professional bodybuilding, competitors appear onstage in line-ups and perform specified poses (and later individual posing routines) for a panel of judges who rank them based on conditioning, muscularity, posing, size, stage presentation, and symmetry. Bodybuilders prepare for competitions by exercising and eliminating non-essential body fat. This is enhanced at the final stage by a combination of carbohydrate loading and dehydration to achieve maximum muscle definition and vascularity. Some bodybuilders also tan and shave their bodies prior to competition.
Bodybuilding requires significant time and effort to reach the desired results. A novice bodybuilder may be able to gain of muscle per year if they lift weights for seven hours per week, but muscle gains begin to slow down after the first two years to about per year. After five years, gains can decrease to as little as per year. Some bodybuilders use anabolic steroids and other performance-enhancing drugs to build muscle and recover from injuries faster; however, using performance-enhancing drugs can have serious health risks, and most competitions prohibit their use. Despite some calls for drug testing to be implemented, the National Physique Committee (considered the leading amateur bodybuilding federation) does not require testing.
The winner of the annual IFBB Mr. Olympia contest is recognized as the world's top male professional bodybuilder. Since 1950, the NABBA Universe Championships have been considered the top amateur bodybuilding contests, with notable winners including Reg Park, Lee Priest, Steve Reeves, and Arnold Schwarzenegger.
History
Early history
Stone-lifting competitions were practiced in ancient Egypt, Greece, and Tamilakam. Western weightlifting developed in Europe from 1880 to 1953, with strongmen displaying feats of strength for the public and challenging each other. The focus was not on their physique, and they possessed relatively large bellies and fatty limbs compared to bodybuilders of today.
Eugen Sandow
Bodybuilding developed in the late 19th century, promoted in England by the German Eugen Sandow, now considered the "Father of Modern Bodybuilding". He allowed audiences to enjoy viewing his physique in "muscle display performances". Although audiences were thrilled to see a well-developed physique, the men simply displayed their bodies as part of strength demonstrations or wrestling matches. Sandow had a stage show built around these displays through his manager, Florenz Ziegfeld. The Oscar-winning 1936 musical film The Great Ziegfeld depicts the beginning of modern bodybuilding, when Sandow began to display his body for carnivals.
Sandow was so successful at flexing and posing his physique that he later created several businesses around his fame, and was among the first to market products branded with his name. He was credited with inventing and selling the first exercise equipment for the masses: machined dumbbells, spring pulleys, and tension bands. Even his image was sold by the thousands in "cabinet cards" and other prints.
First large-scale bodybuilding competition
Sandow organized the first bodybuilding contest on September 14, 1901, called the "Great Competition". It was held at the Royal Albert Hall in London. Judged by Sandow, Sir Charles Lawes, and Sir Arthur Conan Doyle, the contest was a great success, and many bodybuilding enthusiasts were turned away due to the overwhelming number of audience members. The trophy presented to the winner was a gold statue of Sandow sculpted by Frederick Pomeroy. The winner was William L. Murray of Nottingham. The silver Sandow trophy was presented to second-place winner D. Cooper. The bronze Sandow trophy, now the most famous of all, was presented to third-place winner A.C. Smythe. In 1950, this same bronze trophy was presented to Steve Reeves for winning the inaugural NABBA Mr. Universe contest. It would not resurface again until 1977, when the winner of the IFBB Mr. Olympia contest, Frank Zane, was presented with a replica of the bronze trophy. Since then, Mr. Olympia winners have been consistently awarded a replica of the bronze Sandow.
The first large-scale bodybuilding competition in America took place from December 28, 1903 to January 2, 1904, at Madison Square Garden in New York City. The competition was promoted by Bernarr Macfadden, the father of physical culture and publisher of original bodybuilding magazines such as Health & Strength. The winner was Al Treloar, who was declared "The Most Perfectly Developed Man in the World". Treloar won a thousand dollar cash prize, a substantial sum at that time. Two weeks later, Thomas Edison made a film of Treloar's posing routine. Edison had also made two films of Sandow a few years before. Those were the first three motion pictures featuring a bodybuilder. In the early 20th century, Macfadden and Charles Atlas continued to promote bodybuilding across the world.
Notable early bodybuilders
Many other important bodybuilders in the early history of bodybuilding prior to 1930 include: Earle Liederman (writer of some of bodybuilding's earliest books), Zishe Breitbart, Georg Hackenschmidt, Emy Nkemena, George F. Jowett, Finn Hateral (a pioneer in the art of posing), Frank Saldo, Monte Saldo, William Bankier, Launceston Elliot, Sig Klein, Sgt. Alfred Moss, Joe Nordquist, Lionel Strongfort ("Strongfortism"), Gustav Frištenský, Ralph Parcaut (a champion wrestler who also authored an early book on "physical culture"), and Alan P. Mead (who became a muscle champion despite the fact that he lost a leg in World War I). Actor Francis X. Bushman, who was a disciple of Sandow, started his career as a bodybuilder and sculptor's model before beginning his famous silent movie career.
1950s–1960s
Bodybuilding became more popular in the 1950s and 1960s with the emergence of strength and gymnastics champions, and the simultaneous popularization of bodybuilding magazines, training principles, nutrition for bulking up and cutting down, the use of protein and other food supplements, and the opportunity to enter physique contests. The number of bodybuilding organizations grew, and most notably the International Federation of Bodybuilders (IFBB) was founded in 1946 by Canadian brothers Joe and Ben Weider. Other bodybuilding organizations included the Amateur Athletic Union (AAU), National Amateur Bodybuilding Association (NABBA), and the World Bodybuilding Guild (WBBG). Consequently, the contests grew both in number and in size. Besides the many "Mr. XXX" (insert town, city, state, or region) championships, the most prestigious titles were Mr. America, Mr. World, Mr. Universe, Mr. Galaxy, and ultimately Mr. Olympia, which was started in 1965 by the IFBB and is now considered the most important bodybuilding competition in the world.
During the 1950s, the most successful and most famous competing bodybuilders were Bill Pearl, Reg Park, Leroy Colbert, and Clarence Ross. Certain bodybuilders rose to fame thanks to the relatively new medium of television, as well as cinema. The most notable were Jack LaLanne, Steve Reeves, Reg Park, and Mickey Hargitay. While there were well-known gyms throughout the country during the 1950s (such as Vince's Gym in North Hollywood, California and Vic Tanny's chain gyms), there were still segments of the United States that had no "hardcore" bodybuilding gyms until the advent of Gold's Gym in the mid-1960s. Finally, the famed Muscle Beach in Santa Monica continued its popularity as the place to be for witnessing acrobatic acts, feats of strength, and the like. The movement grew more in the 1960s with increased TV and movie exposure, as bodybuilders were typecast in popular shows and movies.
1970s–1990s
New organizations
In the 1970s, bodybuilding had major publicity thanks to the appearance of Arnold Schwarzenegger, Franco Columbu, Lou Ferrigno, Mike Mentzer and others in the 1977 docudrama Pumping Iron. By this time, the IFBB dominated the competitive bodybuilding landscape and the Amateur Athletic Union (AAU) took a back seat. The National Physique Committee (NPC) was formed in 1981 by Jim Manion, who had just stepped down as chairman of the AAU Physique Committee. The NPC has gone on to become the most successful bodybuilding organization in the United States and is the amateur division of the IFBB. The late 1980s and early 1990s saw the decline of AAU-sponsored bodybuilding contests. In 1999, the AAU voted to discontinue its bodybuilding events.
Anabolic/androgenic steroid use
This period also saw the rise of anabolic steroids in bodybuilding and many other sports. More significant use began with Arnold Schwarzenegger, Sergio Oliva, and Lou Ferrigno in the late 1960s and early 1970s, and continued through the 1980s with Lee Haney, the 1990s with Dorian Yates, Ronnie Coleman, and Markus Rühl, and up to the present day. Bodybuilders such as Greg Kovacs attained mass and size never seen previously but were not successful at the pro level. Others were renowned for their spectacular development of a particular body part, like Tom Platz or Paul Demayo for their leg muscles. At the time of shooting Pumping Iron, Schwarzenegger, while never admitting to steroid use until long after his retirement, said, "You have to do anything you can to get the advantage in competition". He would later say that he did not regret using steroids.
To combat anabolic steroid use and in the hopes of becoming a member of the IOC, the IFBB introduced doping tests for both steroids and other banned substances. Although doping tests occurred, the majority of professional bodybuilders still used anabolic steroids for competition. During the 1970s, the use of anabolic steroids was openly discussed, partly due to the fact they were legal. In the Anabolic Steroid Control Act of 1990, U.S. Congress placed anabolic steroids into Schedule III of the Controlled Substances Act (CSA). In Canada, steroids are listed under Schedule IV of the Controlled Drugs and Substances Act, enacted by the federal Parliament in 1996.
World Bodybuilding Federation
In 1990, professional wrestling promoter Vince McMahon attempted to form his own bodybuilding organization known as the World Bodybuilding Federation (WBF). It operated as a sister to the World Wrestling Federation (WWF, now WWE), which provided cross-promotion via its performers and personalities. Tom Platz served as the WBF's director of talent development, and announced the new organization during an ambush of that year's Mr. Olympia (which, unbeknownst to organizers, McMahon and Platz had attended as representatives of an accompanying magazine, Bodybuilding Lifestyles). It touted efforts to bring bigger prize money and more "dramatic" events to the sport of bodybuilding—which resulted in its championships being held as pay-per-view events with WWF-inspired sports entertainment features and showmanship. The organization signed high-valued contracts with a number of IFBB regulars.
The WBF's inaugural championship in June 1991 (won by Gary Strydom) received mixed reviews. The WBF would be indirectly impacted by a steroid scandal involving the WWF, prompting the organization to impose a drug testing policy prior to the 1992 championship. The drug testing policy hampered the quality of the 1992 championship, while attempts to increase interest by hiring WCW wrestler Lex Luger as a figurehead (hosting a WBF television program on USA Network, and planning to make a guest pose during the 1992 championship before being injured in a motorcycle accident) and attempting to sign Lou Ferrigno (who left the organization shortly after the drug testing policy was announced) did not come to fruition. The second pay-per-view received a minuscule audience, and the WBF dissolved only one month later, in July 1992.
2000s
In 2003, Joe Weider sold Weider Publications to American Media, Inc. (AMI). The position of president of the IFBB was filled by Rafael Santonja following the death of Ben Weider in October 2008. In 2004, contest promoter Wayne DeMilia broke ranks with the IFBB and AMI took over the promotion of the Mr. Olympia contest: in 2017 AMI took the contest outright.
In the early 21st century, patterns of consumption and recreation similar to those of the United States became more widespread in Europe and especially in Eastern Europe following the collapse of the Soviet Union. This resulted in the emergence of whole new populations of bodybuilders from former Eastern Bloc states.
Olympic sport discussion
In the early 2000s, the IFBB was attempting to make bodybuilding an Olympic sport. It obtained full IOC membership in 2000 and was attempting to get approved as a demonstration event at the Olympics, which would hopefully lead to it being added as a full contest. This did not happen and Olympic recognition for bodybuilding remains controversial since many argue that bodybuilding is not a sport.
Social media
The advent of social media had a profound influence on fitness and bodybuilding. It is common to see platforms such as Instagram, TikTok, and YouTube flooded with fitness-related content, changing how the average person views and interacts with fitness culture. Gym clothing brands like YoungLA and Rawgear leveraged this platform to create their brands. By recruiting fitness ambassadors—real people who embody their brand values—these companies personalize their marketing strategy and create a more relatable image. These ambassadors, often in the form of fitness influencers or personal trainers, promote the brand by sharing their workout routines, dietary plans, and gym clothing. YouTube in particular has seen a surge in fitness content, ranging from gym vlogs to detailed discussions on workout attire. This not only provides consumers with an abundance of free resources to aid their fitness journey, but also creates a more informed consumer base.
Another growing trend tied to gym-related social media is the phenomenon of gym-shaming. A video that went viral, posted by content creator Jessica Fernandez on Twitch, showed her lifting weights in a gym while a man in the background stared at her, sparking a widespread debate about narcissism and an increasingly toxic gym culture in the age of social media. The video led to criticism of an emerging trend in which gyms, once known as places for focused workouts, are now being treated as filming locations for aspiring or established influencers, with bystanders being unintentionally placed under the public eye in the process. Bodybuilder Joey Swoll, who voiced his concerns over this culture, addressed the controversy by stating that while harassment in gyms needs to be addressed, the man in Fernandez's video was not guilty of it. Although social media is giving more attention to the world of bodybuilding, there are still some areas that are controversial.
Areas
Professional bodybuilding
In the modern bodybuilding industry, the term "professional" generally means a bodybuilder who has won qualifying competitions as an amateur and has earned a "pro card" from their respective organization. Professionals earn the right to compete in competitions that include monetary prizes. A pro card also prohibits the athlete from competing in federations other than the one from which they have received the pro card. Depending on the level of success, these bodybuilders may receive monetary compensation from sponsors, much like athletes in other sports.
Natural bodybuilding
Due to the growing concerns of the high cost, health consequences, and illegal nature of some steroids, many organizations have formed in response and have deemed themselves "natural" bodybuilding competitions. In addition to the concerns noted, many promoters of bodybuilding have sought to shed the "freakish" perception that the general public has of bodybuilding and have successfully introduced a more mainstream audience to the sport of bodybuilding by including competitors whose physiques appear much more attainable and realistic.
In natural contests, the testing protocol ranges among organizations from lie detectors to urinalysis. Penalties also range from organization to organization, from suspensions to strict bans from competition. Natural organizations also maintain their own lists of banned substances, so competitors should refer to each organization's website for more information about which substances are banned from competition. There are many natural bodybuilding organizations; some of the larger ones include MuscleMania, Ultimate Fitness Events (UFE), INBF/WNBF, and INBA/PNBA. These organizations either have an American or worldwide presence and are not limited to the country in which they are headquartered.
Men's physique
In response to those who found open bodybuilding "too big", "ugly", and unhealthy, a new category was started in 2013. The first Men's Physique Olympia winner was Mark Wingson, who was followed by Jeremy Buendia for four consecutive years. As with open bodybuilding, the federations in which men's physique competitors can compete include natural divisions as well as normal ones. The main difference between the two is that men's physique competitors pose in board shorts rather than a traditional posing suit, and open bodybuilders are much larger and more muscular than men's physique competitors. Open bodybuilders have an extensive posing routine, while the physique category is primarily judged by the front and back poses. Many men's physique competitors are not above 200 lbs and have a somewhat more attainable and aesthetic physique in comparison to open bodybuilders. Although this category started off slowly, it has grown tremendously, and currently men's physique seems to be a more popular class than open bodybuilding.
Classic physique
This is the middle ground between Men's Physique and Bodybuilding. The competitors in this category are not nearly as big as bodybuilders but not as small as men's physique competitors. They pose and perform in men's boxer briefs to show off the legs, unlike Men's Physique, which hides the legs in board shorts. Classic physique started in 2016. Danny Hester was the first classic physique Mr. Olympia and, as of 2022, Chris Bumstead is the four-time reigning Mr. Olympia.
Female bodybuilding
The women's movement of the 1960s, combined with Title IX and the all-around fitness revolution, gave birth to new alternative perspectives of feminine beauty that included an athletic physique of toned muscle. This athletic physique was found in various popular media outlets such as fashion magazines. Female bodybuilders changed the limits of traditional femininity, as their bodies showed that muscles are not only for men.
The first U.S. Women's National Physique Championship, promoted by Henry McGhee and held in 1978 in Canton, Ohio, is generally regarded as the first true female bodybuilding contest—that is, the first contest where the entrants were judged solely on muscularity. In 1980, the first Ms. Olympia (initially known as the "Miss" Olympia), the most prestigious contest for professionals, was held. The first winner was Rachel McLish, who had also won the NPC's USA Championship earlier in the year. The contest was a major turning point for female bodybuilding. McLish inspired many future competitors to start training and competing.
In 1985, the documentary Pumping Iron II: The Women was released. It documented the preparation of several women for the 1983 Caesars Palace World Cup Championship. Competitors prominently featured in the film were Kris Alexander, Lori Bowen, Lydia Cheng, Carla Dunlap, Bev Francis, and McLish. At the time, Francis was actually a powerlifter, though she soon made a successful transition to bodybuilding, becoming one of the leading competitors of the late 1980s and early 1990s.
The related areas of fitness and figure competition increased in popularity, surpassing that of female bodybuilding, and provided an alternative for women who choose not to develop the level of muscularity necessary for bodybuilding. McLish would closely resemble what is thought of today as a fitness and figure competitor, instead of what is now considered a female bodybuilder. Fitness competitions also adopted gymnastic elements.
E. Wilma Conner competed in the 2011 NPC Armbrust Pro Gym Warrior Classic Championships in Loveland, Colorado, at the age of 75 years and 349 days.
Competition
In competitive bodybuilding, bodybuilders aspire to present an "aesthetically pleasing" body on stage. In prejudging, competitors do a series of mandatory poses: the front lat spread, rear lat spread, front double biceps, back double biceps, side chest, side triceps, Most Muscular (men only), abdominals and thighs. Each competitor also performs a personal choreographed routine to display their physique. A posedown is usually held at the end of a posing round, while judges are finishing their scoring. Bodybuilders usually spend a lot of time practising their posing in front of mirrors or under the guidance of their coach.
In contrast to strongman or powerlifting competitions, where physical strength is paramount, or to Olympic weightlifting, where the main point is equally split between strength and technique, bodybuilding competitions typically emphasize condition, size, and symmetry. Different organizations emphasize particular aspects of competition, and sometimes have different categories in which to compete.
Preparations
Bulking and cutting
The general strategy adopted by most present-day competitive bodybuilders is to make muscle gains for most of the year (known as the "off-season") and, approximately 12–14 weeks from competition, lose a maximum of body fat (referred to as "cutting") while preserving as much muscular mass as possible. The bulking phase entails remaining in a net positive energy balance (calorie surplus). The amount of a surplus in which a person remains is based on the person's goals, as a bigger surplus and longer bulking phase will create more fat tissue. The surplus of calories relative to one's energy balance will ensure that muscles remain in a state of anabolism.
The cutting phase entails remaining in a net negative energy balance (calorie deficit). The main goal of cutting is to oxidize fat while preserving as much muscle as possible. The larger the calorie deficit, the faster one will lose weight. However, a large calorie deficit will also create the risk of losing muscle tissue.
The bulking and cutting strategy is effective because there is a well-established link between muscle hypertrophy and being in a state of positive energy balance. A sustained period of caloric surplus will allow the athlete to gain more fat-free mass than they could otherwise gain under eucaloric conditions. Some gain in fat mass is expected, which athletes seek to oxidize in a cutting period while maintaining as much lean mass as possible.
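As a purely hypothetical illustration of the surplus/deficit arithmetic, the sketch below derives daily calorie targets for a bulking and a cutting phase from an assumed maintenance intake; the maintenance figure, the 10% surplus and the 20% deficit are placeholders chosen to show the calculation, not recommendations.

    # Assumed values for illustration only.
    maintenance_kcal = 2800    # assumed daily maintenance intake (TDEE)
    bulk_surplus = 0.10        # assumed 10% surplus for the off-season
    cut_deficit = 0.20         # assumed 20% deficit for contest preparation

    bulk_target = maintenance_kcal * (1 + bulk_surplus)
    cut_target = maintenance_kcal * (1 - cut_deficit)
    print(f"bulking target: {bulk_target:.0f} kcal/day")   # 3080 with these numbers
    print(f"cutting target: {cut_target:.0f} kcal/day")    # 2240 with these numbers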
Clean bulking
The attempt to increase muscle mass in one's body without any gain in fat is called clean bulking. Competitive bodybuilders focus their efforts to achieve a peak appearance during a brief "competition season". Clean bulking takes longer and is a more refined approach to achieving the body fat and muscle mass percentage a person is looking for. A common tactic for keeping fat low and muscle mass high is to have higher calorie and lower calorie days to maintain a balance between gain and loss. Many clean bulk diets start off with a moderate amount of carbs, moderate amount of protein, and a low amount of fats. To maintain a clean bulk, it is important to reach calorie goals every day. Macronutrient goals (carbs, fats, and proteins) will be different for each person, but it is ideal to get as close as possible.
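A minimal sketch of how such macronutrient goals can be turned into gram targets is shown below; the calorie figure and the 40/30/30 split are arbitrary assumptions used only to demonstrate the arithmetic (roughly 4 kcal per gram for carbohydrate and protein, 9 kcal per gram for fat).

    # Convert an assumed calorie target and macro split into daily gram targets.
    calorie_target = 3000                                      # assumed intake during a clean bulk
    split = {"carbs": 0.40, "protein": 0.30, "fat": 0.30}      # assumed percentage split
    kcal_per_gram = {"carbs": 4, "protein": 4, "fat": 9}

    for macro, share in split.items():
        grams = calorie_target * share / kcal_per_gram[macro]
        print(f"{macro:7s}: {grams:.0f} g/day")
    # With these assumptions: carbs 300 g, protein 225 g, fat 100 g.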
Dirty bulking
"Dirty bulking" is the process of eating at a massive caloric surplus without trying to figure out the exact amount of ingested macronutrients. Weightlifters who attempt to gain mass quickly with no aesthetic concerns often choose to do this.
Muscle growth
Bodybuilders use three main strategies to maximize muscle hypertrophy:
Strength training through weights or elastic/hydraulic resistance.
Specialized nutrition, incorporating extra protein and supplements when necessary.
Adequate rest, including sleep and recuperation between workouts.
Weight training
Intensive weight training causes micro-tears to the muscles being trained; this is generally known as microtrauma. These micro-tears in the muscle contribute to the soreness felt after exercise, called delayed onset muscle soreness (DOMS). It is the repair of these micro-traumas that results in muscle growth. Normally, this soreness becomes most apparent a day or two after a workout. However, as muscles become adapted to the exercises, soreness tends to decrease.
Weight training aims to build muscle by prompting two different types of hypertrophy: sarcoplasmic and myofibrillar. Sarcoplasmic hypertrophy leads to larger muscles and so is favored by bodybuilders more than myofibrillar hypertrophy, which builds athletic strength. Sarcoplasmic hypertrophy is triggered by increasing repetitions, whereas myofibrillar hypertrophy is triggered by lifting heavier weight. In either case, there is an increase in both size and strength of the muscles (compared to what happens if that same individual does not lift weights at all), although the emphasis is different.
Nutrition
The high levels of muscle growth and repair achieved by bodybuilders require a specialized diet. Generally speaking, bodybuilders require more calories than the average person of the same weight to provide the protein and energy requirements needed to support their training and increase muscle mass. In preparation of a contest, a sub-maintenance level of food energy is combined with cardiovascular exercise to lose body fat. Proteins, carbohydrates and fats are the three major macronutrients that the human body needs in order to build muscle. The ratios of calories from carbohydrates, proteins, and fats vary depending on the goals of the bodybuilder.
Carbohydrates
Carbohydrates play an important role for bodybuilders. They give the body energy to deal with the rigors of training and recovery. Carbohydrates also promote secretion of insulin, a hormone enabling cells to get the glucose they need. Insulin also carries amino acids into cells and promotes protein synthesis. Insulin has steroid-like effects in terms of muscle gains. It is impossible to promote protein synthesis without the existence of insulin, which means that without ingesting carbohydrates or protein—which also induces the release of insulin—it is impossible to add muscle mass. Bodybuilders seek out low-glycemic polysaccharides and other slowly digesting carbohydrates, which release energy in a more stable fashion than high-glycemic sugars and starches. This is important as high-glycemic carbohydrates cause a sharp insulin response, which places the body in a state where it is likely to store additional food energy as fat. However, bodybuilders frequently do ingest some quickly digesting sugars (often in form of pure dextrose or maltodextrin) just before, during, and/or just after a workout. This may help to replenish glycogen stored within the muscle, and to stimulate muscle protein synthesis.
Protein
The motor proteins actin and myosin generate the forces exerted by contracting muscles. Cortisol decreases amino acid uptake by muscle and inhibits protein synthesis. Current recommendations suggest that bodybuilders should consume 25–30% of their total calorie intake as protein to further their goal of maintaining and improving their body composition. This is a widely debated topic, with many arguing that 1 gram of protein per pound of body weight per day is ideal, some suggesting that less is sufficient, and others recommending 1.5, 2, or more grams. It is believed that protein needs to be consumed frequently throughout the day, especially during/after a workout, and before sleep. There is also some debate concerning the best type of protein to take. Chicken, turkey, beef, pork, fish, eggs and dairy foods are high in protein, as are some nuts, seeds, beans, and lentils. Casein or whey are often used to supplement the diet with additional protein. Whey is the type of protein contained in many popular brands of protein supplements and is preferred by many bodybuilders because of its high biological value (BV) and quick absorption rate. Whey protein also has a bigger effect than casein on insulin levels, triggering about double the amount of insulin release. That effect is somewhat overcome by combining casein and whey.
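The two rules of thumb mentioned above (25–30% of total calories from protein versus 1 gram per pound of body weight) can be compared directly for a hypothetical lifter; the body weight and calorie intake below are assumptions chosen only to show the calculation.

    # Compare the two common protein guidelines for a hypothetical lifter.
    body_weight_lb = 180          # assumed body weight
    calorie_intake = 3000         # assumed daily calorie intake
    kcal_per_gram_protein = 4

    pct_low = 0.25 * calorie_intake / kcal_per_gram_protein    # 187.5 g/day with these numbers
    pct_high = 0.30 * calorie_intake / kcal_per_gram_protein   # 225 g/day
    per_pound = 1.0 * body_weight_lb                           # 180 g/day

    print(f"25-30% of calories: {pct_low:.0f}-{pct_high:.0f} g/day")
    print(f"1 g per lb:         {per_pound:.0f} g/day")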
Bodybuilders were previously thought to require protein with a higher BV than that of soy, which was additionally avoided due to its alleged estrogenic (female hormone) properties, though more recent studies have shown that soy actually contains phytoestrogens which compete with estrogens in the male body and can block estrogenic actions. Soy, flax, and other plant-based foods that contain phytoestrogens are also beneficial because they can inhibit some pituitary functions while stimulating the liver's P450 system (which eliminates hormones, drugs, and waste from the body) to more actively process and excrete excess estrogen.
Meals
Some bodybuilders often split their food intake into 5 to 7 meals of equal nutritional content and eat at regular intervals (e.g., every 2 to 3 hours). This approach serves two purposes: to limit overindulging in the cutting phase, and to allow for the consumption of large volumes of food during the bulking phase. Eating more frequently does not increase basal metabolic rate when compared to 3 meals a day. While food does have a metabolic cost to digest, absorb, and store, called the thermic effect of food, it depends on the quantity and type of food, not how the food is spread across the meals of the day. Well-controlled studies using whole-body calorimetry and doubly labeled water have demonstrated that there is no metabolic advantage to eating more frequently.
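A trivial sketch of how a daily target might be split into equal meals at regular intervals is shown below; the daily totals, the six-meal count, and the 3-hour spacing are assumptions used only to illustrate the approach described above.

    # Split assumed daily totals into six equal meals, three hours apart.
    daily_kcal = 3000         # assumed daily calorie intake
    daily_protein_g = 200     # assumed daily protein intake
    meals = 6                 # assumed number of meals
    first_meal_hour = 7       # assumed time of the first meal (07:00)
    interval_hours = 3        # assumed spacing between meals

    for i in range(meals):
        hour = first_meal_hour + i * interval_hours
        print(f"{hour:02d}:00  ~{daily_kcal / meals:.0f} kcal, "
              f"~{daily_protein_g / meals:.0f} g protein")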
Dietary supplements
The important role of nutrition in building muscle and losing fat means bodybuilders may consume a wide variety of dietary supplements. Various products are used in an attempt to augment muscle size, increase the rate of fat loss, improve joint health, increase natural testosterone production, enhance training performance and prevent potential nutrient deficiencies.
Performance-enhancing substances
Some bodybuilders use drugs such as anabolic steroids and precursor substances such as prohormones to increase muscle hypertrophy. Anabolic steroids cause hypertrophy of both types (I and II) of muscle fibers, likely caused by an increased synthesis of muscle proteins. They also provoke undesired side effects including hepatotoxicity, gynecomastia, acne, the early onset of male pattern baldness and a decline in the body's own testosterone production, which can cause testicular atrophy. Other performance-enhancing substances used by competitive bodybuilders include human growth hormone (HGH). HGH is also used by female bodybuilders to obtain bigger muscles "while maintaining a 'female appearance'".
Muscle growth is more difficult to achieve in older adults than younger adults because of biological aging, which leads to many metabolic changes detrimental to muscle growth; for instance, by diminishing growth hormone and testosterone levels. Some recent clinical studies have shown that low-dose HGH treatment for adults with HGH deficiency changes the body composition by increasing muscle mass, decreasing fat mass, increasing bone density and muscle strength, improves cardiovascular parameters, and affects the quality of life without significant side effects.
In rodents, knockdown of metallothionein gene expression results in activation of the Akt pathway and increases in myotube size, in type IIb fiber hypertrophy, and ultimately in muscle strength.
Injecting oil into muscles
Some bodybuilders inject oils or other compounds into their muscles (sometimes known as "synthol") in order to enhance their size or appearance. This practice can have serious health consequences.
Rest
Although muscle stimulation occurs when lifting weights, muscle growth occurs afterward during rest periods. Some bodybuilders add a massage at the end of each workout to their routine as a method of recovering.
Overtraining
Overtraining occurs when a bodybuilder has trained to the point where their workload exceeds their recovery capacity. There are many reasons why overtraining occurs, including lack of adequate nutrition, lack of recovery time between workouts, insufficient sleep, and training at a high intensity for too long (a lack of splitting apart workouts). Training at a high intensity too frequently also stimulates the central nervous system (CNS) and can result in a hyperadrenergic state that interferes with sleep patterns. To avoid overtraining, intense frequent training must be met with at least an equal amount of purposeful recovery. Timely provision of carbohydrates, proteins, and various micronutrients such as vitamins, minerals, phytochemicals, even nutritional supplements are critical. A mental disorder, informally called bigorexia (by analogy with anorexia), may account for overtraining in some individuals. Sufferers feel as if they are never big enough or muscular enough, which forces them to overtrain in order to try to reach their goal physique.
An article by Muscle & Fitness magazine, "Overtrain for Big Gains", claimed that overtraining for a brief period can be beneficial. Overtraining can be used advantageously, as when a bodybuilder is purposely overtrained for a brief period of time to super compensate during a regeneration phase. These are known as "shock micro-cycles" and were a key training technique used by Soviet athletes.
See also
References
External links
Body modification
Athletic sports
Individual sports
Weight training
Body shape
Articles containing video clips
Physical exercise
|
https://en.wikipedia.org/wiki/Bioterrorism
|
Bioterrorism is terrorism involving the intentional release or dissemination of biological agents. These agents include bacteria, viruses, insects, fungi, and/or toxins, and may be in a naturally occurring or a human-modified form, in much the same way as in biological warfare. Further, modern agribusiness is vulnerable to anti-agricultural attacks by terrorists, and such attacks can seriously damage the economy as well as consumer confidence. The latter destructive activity is called agrobioterrorism and is a subtype of agro-terrorism.
Definition
Bioterrorism is the deliberate release of viruses, bacteria, toxins, or other harmful agents to cause illness or death in people, animals, or plants. These agents are typically found in nature, but could be mutated or altered to increase their ability to cause disease, make them resistant to current medicines, or to increase their ability to be spread into the environment. Biological agents can be spread through the air, water, or in food. Biological agents are attractive to terrorists because they are extremely difficult to detect and do not cause illness for several hours to several days. Some bioterrorism agents, like the smallpox virus, can be spread from person to person and some, like anthrax, cannot. Bioterrorism may be favored because biological agents are relatively easy and inexpensive to obtain, can be easily disseminated, and can cause widespread fear and panic beyond the actual physical damage. Military leaders, however, have learned that, as a military asset, bioterrorism has some important limitations; it is difficult to use a bioweapon in a way that only affects the enemy and not friendly forces. A biological weapon is useful to terrorists mainly as a method of creating mass panic and disruption to a state or a country. However, technologists such as Bill Joy have warned of the potential power which genetic engineering might place in the hands of future bio-terrorists.
The use of agents that do not cause harm to humans, but disrupt the economy, have also been discussed. One such pathogen is the foot-and-mouth disease (FMD) virus, which is capable of causing widespread economic damage and public concern (as witnessed in the 2001 and 2007 FMD outbreaks in the UK), while having almost no capacity to infect humans.
History
By the time World War I began, attempts to use anthrax were directed at animal populations. This generally proved to be ineffective.
Shortly after the start of World War I, Germany launched a biological sabotage campaign in the United States, Russia, Romania, and France. At that time, Anton Dilger lived in Germany, but in 1915 he was sent to the United States carrying cultures of glanders, a virulent disease of horses and mules. Dilger set up a laboratory in his home in Chevy Chase, Maryland. He used stevedores working the docks in Baltimore to infect horses with glanders while they were waiting to be shipped to Britain. Dilger was under suspicion as being a German agent, but was never arrested. Dilger eventually fled to Madrid, Spain, where he died during the Influenza Pandemic of 1918. In 1916, the Russians arrested a German agent with similar intentions. Germany and its allies infected French cavalry horses and many of Russia's mules and horses on the Eastern Front. These actions hindered artillery and troop movements, as well as supply convoys.
In 1972, police in Chicago arrested two college students, Allen Schwander and Stephen Pera, who had planned to poison the city's water supply with typhoid and other bacteria. Schwander had founded a terrorist group, "R.I.S.E.", while Pera collected and grew cultures from the hospital where he worked. The two men fled to Cuba after being released on bail. Schwander died of natural causes in 1974, while Pera returned to the U.S. in 1975 and was put on probation.
In 1980, the World Health Organization (WHO) announced the eradication of smallpox, a highly contagious and incurable disease. Although the disease has been eliminated in the wild, frozen stocks of smallpox virus are still maintained by the governments of the United States and Russia. Disastrous consequences are feared if rogue politicians or terrorists were to get hold of the smallpox strains. Since vaccination programs are now terminated, the world population is more susceptible to smallpox than ever before.
In Oregon in 1984, followers of the Bhagwan Shree Rajneesh attempted to control a local election by incapacitating the local population. They infected salad bars in 11 restaurants, produce in grocery stores, doorknobs, and other public surfaces with Salmonella typhimurium bacteria in the city of The Dalles, Oregon. The attack infected 751 people with severe food poisoning. There were no fatalities. This incident was the first known bioterrorist attack in the United States in the 20th century. It was also the single largest bioterrorism attack on U.S. soil.
In June 1993, the religious group Aum Shinrikyo released anthrax in Tokyo. Eyewitnesses reported a foul odor. The attack was a failure and did not infect a single person, because the group had used a vaccine strain of the bacterium. The spores recovered from the site of the attack were identical to an anthrax vaccine strain given to animals at the time; such vaccine strains lack the genes that cause a symptomatic response.
In September and October 2001, several cases of anthrax broke out in the United States, apparently deliberately caused. Letters laced with infectious anthrax were concurrently delivered to news media offices and the U.S. Congress, alongside an ambiguously related case in Chile. The letters killed five people.
Scenarios
There are several plausible scenarios in which terrorists might employ biological agents. In 2000, tests conducted by various US agencies showed that indoor attacks in densely populated spaces are much more serious than outdoor attacks. Such enclosed spaces include large buildings, trains, indoor arenas, theaters, malls, and tunnels. Countermeasures against these scenarios include building architecture and the engineering of ventilation systems. In 1993, sewage spilled into a river was drawn into the water system and affected 400,000 people in Milwaukee, Wisconsin; the disease-causing organism was Cryptosporidium parvum. This man-made disaster can serve as a template for a terrorist scenario. Nevertheless, terrorist scenarios are considered more likely near the points of delivery than at the water sources before treatment, with the release of biological agents more plausible for a single building or a neighborhood. Countermeasures against this scenario include further limiting access to water supply systems, tunnels, and infrastructure. Agricultural crop-duster aircraft might also be misused as delivery devices for biological agents; countermeasures here include background checks of employees of crop-dusting companies and surveillance procedures.
In the most common hoax scenario, no biological agent is employed at all: for instance, an envelope containing harmless powder arrives with a note that says, “You've just been exposed to anthrax.” Such hoaxes have been shown to have a large psychological impact on the population.
Anti-agriculture attacks are considered to require relatively little expertise and technology. Biological agents that attack livestock, fish, vegetation, and crops are mostly not contagious to humans and are therefore easier for attackers to handle. Even a few cases of infection can disrupt a country's agricultural production and exports for months, as evidenced by FMD outbreaks.
Types of agents
Under current United States law, bio-agents which have been declared by the U.S. Department of Health and Human Services or the U.S. Department of Agriculture to have the "potential to pose a severe threat to public health and safety" are officially defined as "select agents." The CDC categorizes these agents (A, B or C) and administers the Select Agent Program, which regulates the laboratories which may possess, use, or transfer select agents within the United States. As with US attempts to categorize harmful recreational drugs, designer viruses are not yet categorized, and avian H5N1 has been shown to be capable of high mortality and human-to-human transmission in a laboratory setting.
Category A
These high-priority agents pose a risk to national security, can be easily transmitted and disseminated, result in high mortality, have potential major public health impact, may cause public panic, or require special action for public health preparedness.
SARS and COVID-19, though not as lethal as some other diseases, were concerning to scientists and policymakers because of their potential for social and economic disruption. After the global containment of SARS, U.S. President George W. Bush stated "...A global influenza pandemic that infects millions and lasts from one to three years could be far worse."
Tularemia or "rabbit fever": Tularemia has a very low fatality rate if treated, but can severely incapacitate. The disease is caused by the Francisella tularensis bacterium, and can be contracted through contact with fur, inhalation, ingestion of contaminated water or insect bites. Francisella tularensis is very infectious. A small number of organisms (10–50 or so) can cause disease. If F. tularensis were used as a weapon, the bacteria would likely be made airborne for exposure by inhalation. People who inhale an infectious aerosol would generally experience severe respiratory illness, including life-threatening pneumonia and systemic infection, if they are not treated. The bacteria that cause tularemia occur widely in nature and could be isolated and grown in quantity in a laboratory, although manufacturing an effective aerosol weapon would require considerable sophistication.
Anthrax: Anthrax is a non-contagious disease caused by the spore-forming bacterium Bacillus anthracis. Because the bacterium forms small, hardy spores, it can readily enter the body through broken skin, ingestion, or inhalation, and symptoms can appear abruptly, in some cases within 24 hours of exposure. Mortality from dispersal of the pathogen in densely populated areas is said to range from less than one percent for cutaneous exposure to ninety percent or higher for untreated inhalational infections. When discovered early, anthrax can be cured by administering antibiotics (such as ciprofloxacin). Its first modern use in biological warfare was by Scandinavian "freedom fighters", supplied by the German General Staff, who employed anthrax with unknown results against the Imperial Russian Army in Finland in 1916. In 1993, Aum Shinrikyo used anthrax in an unsuccessful attack in Tokyo that caused no fatalities. Anthrax was used in a series of attacks, attributed to a microbiologist at the US Army Medical Research Institute of Infectious Diseases (USAMRIID), on the offices of several United States Senators in late 2001. The anthrax was in powder form and was delivered by mail. The attacks resulted in seven cases of cutaneous anthrax and eleven cases of inhalation anthrax, five of them fatal; prophylactic treatment supplied to over 30,000 potentially exposed individuals is estimated to have prevented a further 10 to 26 cases. Anthrax is one of the few biological agents for which federal employees have been vaccinated. In the US an anthrax vaccine, Anthrax Vaccine Adsorbed (AVA), exists and requires five injections for stable use; other anthrax vaccines also exist. The strain used in the 2001 anthrax attacks was identical to the strain used by USAMRIID.
Smallpox: Smallpox is a highly contagious virus. It is transmitted easily through the atmosphere and has a high mortality rate (20–40%). Smallpox was eradicated in the wild in the 1970s thanks to a worldwide vaccination program, but virus samples are still held in Russian and American laboratories, and some believe that after the collapse of the Soviet Union cultures of smallpox may have become available in other countries. Although people born before 1970 were vaccinated against smallpox under the WHO program, the protection is limited because the vaccine provides a high level of immunity for only 3 to 5 years; revaccination protects for longer. As a biological weapon smallpox is dangerous because it spreads readily both from infected persons and from their lesions and scabs, and because the infrequency of vaccination since the disease's eradication would leave most people unprotected in the event of an outbreak. Smallpox occurs only in humans, and has no external hosts or vectors.
Botulinum toxin: The neurotoxin Botulinum is the deadliest toxin known to man, and is produced by the bacterium Clostridium botulinum. Botulism causes death by respiratory failure and paralysis. Furthermore, the toxin is readily available worldwide due to its cosmetic applications in injections.
Bubonic plague: Plague is a disease caused by the Yersinia pestis bacterium. Rodents are the normal host of plague, and the disease is transmitted to humans by flea bites and occasionally by aerosol in the form of pneumonic plague. The disease has a history of use in biological warfare dating back many centuries, and is considered a threat due to its ease of culture and ability to remain in circulation among local rodents for a long period of time. The weaponized threat comes mainly in the form of pneumonic plague (infection by inhalation). It was the disease that caused the Black Death in Medieval Europe.
Viral hemorrhagic fevers: This includes hemorrhagic fevers caused by members of the family Filoviridae (Marburg virus and Ebola virus), and by the family Arenaviridae (for example Lassa virus and Machupo virus). Ebola virus disease, in particular, has caused high fatality rates ranging from 25 to 90% with a 50% average. No cure currently exists, although vaccines are in development. The Soviet Union investigated the use of filoviruses for biological warfare, and the Aum Shinrikyo group unsuccessfully attempted to obtain cultures of Ebola virus. Death from Ebola virus disease is commonly due to multiple organ failure and hypovolemic shock. Marburg virus was first discovered in Marburg, Germany. No treatments currently exist aside from supportive care. The arenaviruses have a somewhat reduced case-fatality rate compared to disease caused by filoviruses, but are more widely distributed, chiefly in central Africa and South America.
Category B
Category B agents are moderately easy to disseminate and have low mortality rates.
Brucellosis (Brucella species)
Epsilon toxin of Clostridium perfringens
Food safety threats (for example, Salmonella species, E coli O157:H7, Shigella, Staphylococcus aureus)
Glanders (Burkholderia mallei)
Melioidosis (Burkholderia pseudomallei)
Psittacosis (Chlamydia psittaci)
Q fever (Coxiella burnetii)
Ricin toxin from Ricinus communis (castor beans)
Abrin toxin from Abrus precatorius (Rosary peas)
Staphylococcal enterotoxin B
Typhus (Rickettsia prowazekii)
Viral encephalitis (alphaviruses, for example Venezuelan equine encephalitis, eastern equine encephalitis, western equine encephalitis)
Water supply threats (for example, Vibrio cholerae, Cryptosporidium parvum)
Category C
Category C agents are emerging pathogens that might be engineered for mass dissemination because of their availability, ease of production and dissemination, high mortality rate, or ability to cause a major health impact.
Nipah virus
Hantavirus
Planning and response
Planning may involve the development of biological identification systems. Until recently in the United States, most biological defense strategies have been geared to protecting soldiers on the battlefield rather than ordinary people in cities. Financial cutbacks have limited the tracking of disease outbreaks. Some outbreaks, such as food poisoning due to E. coli or Salmonella, could be of either natural or deliberate origin.
Preparedness
Export controls on biological agents are not applied uniformly, providing terrorists a route for acquisition. Laboratories are working on advanced detection systems to provide early warning, identify contaminated areas and populations at risk, and to facilitate prompt treatment. Methods for predicting the use of biological agents in urban areas as well as assessing the area for the hazards associated with a biological attack are being established in major cities. In addition, forensic technologies are working on identifying biological agents, their geographical origins and/or their initial source. Efforts include decontamination technologies to restore facilities without causing additional environmental concerns.
Early detection and rapid response to bioterrorism depend on close cooperation between public health authorities and law enforcement; however, such cooperation is lacking. National detection assets and vaccine stockpiles are not useful if local and state officials do not have access to them.
Aspects of protection against bioterrorism in the United States include:
Detection and resilience strategies in combating bioterrorism. This occurs primarily through the efforts of the Office of Health Affairs (OHA), a part of the Department of Homeland Security (DHS), whose role is to prepare for an emergency situation that impacts the health of the American populace. Detection has two primary technological components. The first is OHA's BioWatch program, in which collection devices are deployed in thirty high-risk areas throughout the country to detect the presence of aerosolized biological agents before symptoms present in patients. This is significant primarily because it allows a more proactive response to a disease outbreak rather than the more passive treatment of the past.
Implementation of the Generation-3 automated detection system. This advancement is significant because it enables action to be taken within four to six hours, thanks to its automatic response system, whereas the previous system required aerosol detectors to be manually transported to laboratories. Resilience is a multifaceted issue as well, as addressed by OHA. One way in which this is ensured is through exercises that establish preparedness; programs like the Anthrax Response Exercise Series exist to ensure that, regardless of the incident, all emergency personnel will be aware of the role they must fill. Moreover, by providing information and education to public leaders, emergency medical services and all employees of the DHS, OHA suggests it can significantly decrease the impact of bioterrorism.
Enhancing the technological capabilities of first responders is accomplished through numerous strategies. The first of these strategies was developed by the Science and Technology Directorate (S&T) of DHS to ensure that the danger of suspicious powders can be effectively assessed (many dangerous biological agents, such as anthrax, exist as a white powder). By testing the accuracy and specificity of commercially available systems used by first responders, the hope is that all biologically harmful powders can be reliably detected.
Enhanced equipment for first responders. One recent advancement is the commercialization of a new form of Tyvek armor which protects first responders and patients from chemical and biological contaminants. There is also a new generation of self-contained breathing apparatus (SCBA) recently made more robust against bioterrorism agents. All of these technologies combine to form what appears to be a relatively strong deterrent to bioterrorism. In addition, New York City as an entity has numerous organizations and strategies that serve to deter and respond to bioterrorism, described below.
Excelsior Challenge. In the second week of September 2016, the state of New York held a large emergency response training exercise called the Excelsior Challenge, with over 100 emergency responders participating. According to WKTV, "This is the fourth year of the Excelsior Challenge, a training exercise designed for police and first responders to become familiar with techniques and practices should a real incident occur." The event was held over three days and hosted by the State Preparedness Training Center in Oriskany, New York. Participants included bomb squads, canine handlers, tactical team officers and emergency medical services. In an interview with Homeland Preparedness News, Bob Stallman, assistant director at the New York State Preparedness Training Center, said, "We're constantly seeing what’s happening around the world and we tailor our training courses and events for those types of real-world events." For the first time, the 2016 training program implemented New York's new electronic system. The system, called NY Responds, electronically connects every county in New York to aid in disaster response and recovery. As a result, "counties have access to a new technology known as Mutualink, which improves interoperability by integrating telephone, radio, video, and file-sharing into one application to allow local emergency staff to share real-time information with the state and other counties." The State Preparedness Training Center in Oriskany was designed by the State Division of Homeland Security and Emergency Services (DHSES) in 2006. It cost $42 million to construct, occupies over 1,100 acres, and is available for training 360 days a year. Students from SUNY Albany's College of Emergency Preparedness, Homeland Security and Cybersecurity were able to participate in the 2016 exercise and learn how "DHSES supports law enforcement specialty teams."
Project BioShield. The accrual of vaccines and treatments for potential biological threats, also known as medical countermeasures, has been an important aspect of preparing for a potential bioterrorist attack; this took the form of a program beginning in 2004, referred to as Project BioShield. The significance of this program should not be overlooked, as “there is currently enough smallpox vaccine to inoculate every United States citizen… and a variety of therapeutic drugs to treat the infected.” The Department of Defense also has a variety of laboratories currently working to increase the quantity and efficacy of the countermeasures that make up the national stockpile. Efforts have also been made to ensure that these medical countermeasures can be disseminated effectively in the event of a bioterrorist attack. The National Association of Chain Drug Stores championed this cause by encouraging the participation of the private sector in improving the distribution of such countermeasures if required.
On a CNN news broadcast in 2011, CNN chief medical correspondent Dr. Sanjay Gupta weighed in on the American government's approach to bioterrorist threats. He explained that, even though the United States would fend off a bioterrorist attack better now than it would have a decade ago, the money available to fight bioterrorism had been decreasing over the previous three years. Citing a detailed report that examined the funding decrease for bioterrorism in fifty-one American cities, Gupta said that the cities "wouldn’t be able to distribute vaccines as well" and "wouldn't be able to track viruses." He also said that film portrayals of global pandemics, such as Contagion, were actually quite possible and may occur in the United States under the right conditions.
A news broadcast by MSNBC in 2010 also stressed the low level of bioterrorism preparedness in the United States. The broadcast stated that a bipartisan report gave the Obama administration a failing grade for its efforts to respond to a bioterrorist attack. The broadcast invited the former New York City police commissioner, Howard Safir, to explain how the government would fare in combating such an attack. He said that "biological and chemical weapons are probable and relatively easy to disperse." Furthermore, Safir argued that efficiency in bioterrorism preparedness is not necessarily a question of money, but instead depends on putting resources in the right places. The broadcast suggested that the nation was not ready for something more serious.
In a September 2016 interview with Homeland Preparedness News, Daniel Gerstein, a senior policy researcher at the RAND Corporation, stressed the importance of preparing for potential bioterrorist attacks on the nation. He implored the U.S. government to take the necessary actions to implement a strategic plan of action to save as many lives as possible and to safeguard against potential chaos and confusion. He believes that because there have been no significant instances of bioterrorism since the anthrax attacks in 2001, the government has allowed itself to become complacent, making the country that much more vulnerable to unsuspected attacks and further endangering the lives of U.S. citizens.
Gerstein formerly served in the Science and Technology Directorate of the Department of Homeland Security from 2011 to 2014. He claims there has not been a serious plan of action since 2004, during George W. Bush's presidency, when a Homeland Security directive delegated responsibilities among various federal agencies. He also stated that the mishandling of the Ebola virus outbreak in 2014 attested to the government's lack of preparation. In May 2016, legislation that would create a national defense strategy was introduced in the Senate, coinciding with reports that ISIS-affiliated terrorist groups were getting closer to weaponizing biological agents. That same month, Kenyan officials apprehended two members of an Islamic extremist group who were preparing to set off a biological bomb containing anthrax. Mohammed Abdi Ali, the believed leader of the group and a medical intern, was arrested along with his wife, a medical student; the two were caught just before carrying out their plan. The Blue Ribbon Study Panel on Biodefense, a group of national security experts and government officials to which Gerstein had previously testified, submitted its National Blueprint for Biodefense to Congress in October 2015, listing its recommendations for devising an effective plan.
Bill Gates said in a February 18, 2017 Business Insider op-ed (published near the time of his Munich Security Conference speech) that it is possible for an airborne pathogen to kill at least 30 million people over the course of a year. In a New York Times report, the Gates Foundation predicted that a modern outbreak similar to the Spanish influenza pandemic (which killed between 50 million and 100 million people) could end up killing more than 360 million people worldwide, even considering the widespread availability of vaccines and other healthcare tools. The report cited increased globalization, rapid international air travel, and urbanization as reasons for increased concern. In a March 9, 2017, interview with CNBC, former U.S. Senator Joe Lieberman, co-chair of the bipartisan Blue Ribbon Study Panel on Biodefense, said a worldwide pandemic could end the lives of more people than a nuclear war. Lieberman also expressed worry that a terrorist group like ISIS could develop a synthetic influenza strain and introduce it to the world to kill civilians. In July 2017, Robert C. Hutchinson, a former agent at the Department of Homeland Security, called for a "whole-of-government" response to the next global health threat, which he described as including strict security procedures at U.S. borders and proper execution of government preparedness plans.
Novel approaches in biotechnology, such as synthetic biology, could also be used in the future to design new types of biological warfare agents. Special attention must be paid to future experiments of concern that:
Would demonstrate how to render a vaccine ineffective;
Would confer resistance to therapeutically useful antibiotics or antiviral agents;
Would enhance the virulence of a pathogen or render a nonpathogen virulent;
Would increase transmissibility of a pathogen;
Would alter the host range of a pathogen;
Would enable the evasion of diagnostic/detection tools;
Would enable the weaponization of a biological agent or toxin
Most of the biosecurity concerns in synthetic biology, however, are focused on the role of DNA synthesis and the risk of producing genetic material of lethal viruses (e.g. 1918 Spanish flu, polio) in the lab. The CRISPR/Cas system has emerged as a promising technique for gene editing. It was hailed by The Washington Post as "the most important innovation in the synthetic biology space in nearly 30 years." While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks. However, due to its ease of use and accessibility, it has raised a number of ethical concerns, especially surrounding its use in the biohacking space.
Biosurveillance
In 1999, the University of Pittsburgh's Center for Biomedical Informatics deployed the first automated bioterrorism detection system, called RODS (Real-Time Outbreak Disease Surveillance). RODS is designed to collect data from many data sources and use them to perform signal detection, that is, to detect a possible bioterrorism event at the earliest possible moment. RODS, and other systems like it, collect data from sources including clinic data, laboratory data, and data from over-the-counter drug sales. In 2000, Michael Wagner, the codirector of the RODS laboratory, and Ron Aryel, a subcontractor, conceived the idea of obtaining live data feeds from "non-traditional" (non-health-care) data sources. The RODS laboratory's first efforts eventually led to the establishment of the National Retail Data Monitor, a system which collects data from 20,000 retail locations nationwide.
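As an illustration of the kind of signal detection such systems perform (and not a description of RODS's actual algorithms), the following Python sketch flags days on which a monitored daily count, such as over-the-counter remedy sales at participating retailers, rises sharply above its recent baseline; the function name, smoothing factor, and threshold are illustrative assumptions.

```python
# Illustrative sketch only (not the RODS algorithm): flag days on which a
# daily count, e.g. over-the-counter cough-remedy sales, rises well above
# its recent baseline, using an exponentially weighted moving estimate of
# the mean and variance.

def ewma_alerts(daily_counts, alpha=0.3, threshold=3.0):
    """Return indices of days whose count exceeds the running baseline by
    more than `threshold` standard deviations."""
    alerts = []
    mean = float(daily_counts[0])
    var = 0.0
    for day, count in enumerate(daily_counts[1:], start=1):
        std = var ** 0.5
        if std > 0 and (count - mean) / std > threshold:
            alerts.append(day)
        # update the baseline after testing the new observation
        diff = count - mean
        mean += alpha * diff
        var = (1 - alpha) * (var + alpha * diff * diff)
    return alerts

# Example: a stable series of daily sales with a sudden spike on the last day.
sales = [110, 102, 98, 105, 99, 101, 103, 97, 100, 180]
print(ewma_alerts(sales))  # -> [9]
```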
On February 5, 2002, George W. Bush visited the RODS laboratory and used it as a model for a $300 million spending proposal to equip all 50 states with biosurveillance systems. In a speech delivered at the nearby Masonic temple, Bush compared the RODS system to a modern "DEW" line (referring to the Cold War-era Distant Early Warning radar line).
The principles and practices of biosurveillance, a new interdisciplinary science, were defined and described in the Handbook of Biosurveillance, edited by Michael Wagner, Andrew Moore and Ron Aryel, and published in 2006. Biosurveillance is the science of real-time disease outbreak detection. Its principles apply to both natural and man-made epidemics (bioterrorism).
Data which potentially could assist in early detection of a bioterrorism event include many categories of information. Health-related data such as that from hospital computer systems, clinical laboratories, electronic health record systems, medical examiner record-keeping systems, 911 call center computers, and veterinary medical record systems could be of help; researchers are also considering the utility of data generated by ranching and feedlot operations, food processors, drinking water systems, school attendance recording, and physiologic monitors, among others.
In Europe, disease surveillance is beginning to be organized on the continent-wide scale needed to track a biological emergency. The system not only monitors infected persons, but attempts to discern the origin of the outbreak.
Researchers have experimented with devices to detect the existence of a threat:
Tiny electronic chips that would contain living nerve cells to warn of the presence of bacterial toxins (identification of broad range toxins)
Fiber-optic tubes lined with antibodies coupled to light-emitting molecules (identification of specific pathogens, such as anthrax, botulinum, ricin)
Some research shows that ultraviolet avalanche photodiodes offer the high gain, reliability and robustness needed to detect anthrax and other bioterrorism agents in the air. The fabrication methods and device characteristics were described at the 50th Electronic Materials Conference in Santa Barbara on June 25, 2008. Details of the photodiodes were also published in the February 14, 2008, issue of the journal Electronics Letters and the November 2007 issue of the journal IEEE Photonics Technology Letters.
The United States Department of Defense conducts global biosurveillance through several programs, including the Global Emerging Infections Surveillance and Response System.
Another tool developed within New York City for countering bioterrorism is the New York City Syndromic Surveillance System. This system tracks disease progression throughout New York City and was developed by the New York City Department of Health and Mental Hygiene (NYC DOHMH) in the wake of the 9/11 attacks. The system works by tracking the symptoms of those taken into the emergency department—based on the location of the hospital to which they are taken and their home address—and assessing any patterns in symptoms. These trends can then be examined by medical epidemiologists to determine whether there are disease outbreaks in particular locales, and maps of disease prevalence can be created relatively easily. This is a useful tool in fighting bioterrorism because it provides a means of discovering attacks in their earliest stages; to the extent that a bioterrorist attack produces a recognizable cluster of symptoms, the system allows New York City to respond quickly to any threat it faces.
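The following hypothetical Python sketch illustrates the general idea of such syndromic aggregation (it is not based on the DOHMH's actual methods; the syndrome names, ZIP codes, and thresholds are invented for the example): emergency-department visits are grouped by syndrome and ZIP code, and unusually large daily counts are flagged for epidemiological review.

```python
from collections import Counter

def flag_clusters(todays_visits, historical_daily_mean, ratio=3.0, min_cases=5):
    """todays_visits: list of (syndrome, zip_code) tuples for one day.
    historical_daily_mean: dict mapping (syndrome, zip_code) -> mean daily count.
    Returns the (cell, count, baseline) triples that look anomalous."""
    counts = Counter(todays_visits)
    flagged = []
    for cell, count in counts.items():
        baseline = historical_daily_mean.get(cell, 0.5)  # small floor for rare cells
        if count >= min_cases and count > ratio * baseline:
            flagged.append((cell, count, baseline))
    return flagged

# Example with made-up data: a gastrointestinal cluster in one ZIP code.
visits = [("gastrointestinal", "10001")] * 12 + [("respiratory", "11201")] * 3
baseline = {("gastrointestinal", "10001"): 2.0, ("respiratory", "11201"): 2.5}
print(flag_clusters(visits, baseline))
# -> [(('gastrointestinal', '10001'), 12, 2.0)]
```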
Response to bioterrorism incident or threat
Government agencies which would be called on to respond to a bioterrorism incident would include law enforcement, hazardous materials and decontamination units, and emergency medical units, if available.
The US military has specialized units, which can respond to a bioterrorism event; among them are the United States Marine Corps' Chemical Biological Incident Response Force and the U.S. Army's 20th Support Command (CBRNE), which can detect, identify, and neutralize threats, and decontaminate victims exposed to bioterror agents. US response would include the Centers for Disease Control.
Historically, governments and authorities have relied on quarantines to protect their populations. International bodies such as the World Health Organization already devote some of their resources to monitoring epidemics and have served clearing-house roles in historical epidemics.
Media attention toward the seriousness of biological attacks increased in 2013 to 2014. In July 2013, Forbes published an article with the title "Bioterrorism: A Dirty Little Threat With Huge Potential Consequences." In November 2013, Fox News reported on a new strain of botulism, saying that the Centers for Disease Control and Prevention lists botulism as one of two agents that have "the highest risks of mortality and morbidity", noting that there is no antidote for botulism. USA Today reported that the U.S. military in November was trying to develop a vaccine for troops from the bacteria that cause the disease Q fever, an agent the military once used as a biological weapon. In February 2014, the former special assistant and senior director for biodefense policy to President George W. Bush called the bioterrorism risk imminent and uncertain and Congressman Bill Pascrell called for increasing federal measures against bioterrorism as a "matter of life or death." The New York Times wrote a story saying the United States would spend $40 million to help certain low and middle-income countries deal with the threats of bioterrorism and infectious diseases.
Bill Gates has warned that bioterrorism could kill more people than nuclear war.
In February 2018, a CNN employee discovered on an airplane a "sensitive, top-secret document in the seatback pouch explaining how the Department of Homeland Security would respond to a bioterrorism attack at the Super Bowl."
2017 U.S. budget proposal affecting bioterrorism programs
President Donald Trump promoted his first budget around keeping America safe. However, one aspect of defense would receive less money: "protecting the nation from deadly pathogens, man-made or natural," according to The New York Times. Agencies tasked with biosecurity get a decrease in funding under the Administration's budget proposal.
For example:
The Office of Public Health Preparedness and Response would be cut by $136 million, or 9.7 percent. The office tracks outbreaks of disease.
The National Center for Emerging and Zoonotic Infectious Diseases would be cut by $65 million, or 11 percent. The center is a branch of the Centers for Disease Control and Prevention that fights threats like anthrax and the Ebola virus; the cuts would additionally affect research on HIV/AIDS vaccines.
Within the National Institutes of Health, the National Institute of Allergy and Infectious Diseases (NIAID) would lose 18 percent of its budget. NIAID oversees responses to Zika, Ebola and HIV/AIDS vaccine research.
"The next weapon of mass destruction may not be a bomb," Lawrence O. Gostin, the director of the World Health Organization's Collaborating Center on Public Health Law and Human Rights, told The New York Times. "It may be a tiny pathogen that you can't see, smell or taste, and by the time we discover it, it'll be too late."
Lack of international standards on public health experiments
Tom Inglesby, the CEO and director of the Center for Health Security at the Johns Hopkins Bloomberg School of Public Health and an internationally recognized expert on public health preparedness, pandemics and emerging infectious diseases, said in 2017 that the lack of an internationally standardized approval process that could be used to guide countries in conducting public health experiments for resurrecting a disease that has already been eradicated increases the risk that the disease could be used in bioterrorism. This was in reference to the lab synthesis of horsepox in 2017 by researchers at the University of Alberta. The researchers recreated horsepox, an extinct cousin of the smallpox virus, in order to research new ways to treat cancer.
See also
Biodefence
Biological Weapons Convention
Biorisk
Biosecurity
Project Bacchus
Select agent
Further reading
Resolution 1540 "affirms that the proliferation of nuclear, chemical and biological weapons and their means of delivery constitutes a threat to international peace and security. The resolution obliges States, inter alia, to refrain from supporting by any means non-State actors from developing, acquiring, manufacturing, possessing, transporting, transferring or using nuclear, chemical or biological weapons and their means of delivery".
NOVA: Bioterror
Carus, W. Seth. Working Paper: Bioterrorism and Biocrimes: The Illicit Use of Biological Agents Since 1900, February 2001 revision.
United States
Recommended Policy Guidance for Departmental Development of Review Mechanisms for Potential Pandemic Pathogen Care and Oversight (P3CO). Obama Administration. January 9, 2017.
|
https://en.wikipedia.org/wiki/Brewing
|
Brewing is the production of beer by steeping a starch source (commonly cereal grains, the most popular of which is barley) in water and fermenting the resulting sweet liquid with yeast. It may be done in a brewery by a commercial brewer, at home by a homebrewer, or communally. Brewing has taken place since around the 6th millennium BC, and archaeological evidence suggests that emerging civilizations, including ancient Egypt, China, and Mesopotamia, brewed beer. Since the nineteenth century the brewing industry has been part of most western economies.
The basic ingredients of beer are water and a fermentable starch source such as malted barley. Most beer is fermented with a brewer's yeast and flavoured with hops. Less widely used starch sources include millet, sorghum and cassava. Secondary sources (adjuncts), such as maize (corn), rice, or sugar, may also be used, sometimes to reduce cost, or to add a feature, such as adding wheat to aid in retaining the foamy head of the beer. The most common starch source is ground cereal or "grist" - the proportion of the starch or cereal ingredients in a beer recipe may be called grist, grain bill, or simply mash ingredients.
Steps in the brewing process include malting, milling, mashing, lautering, boiling, fermenting, conditioning, filtering, and packaging. There are three main fermentation methods: warm, cool and spontaneous. Fermentation may take place in an open or closed fermenting vessel; a secondary fermentation may also occur in the cask or bottle. There are several additional brewing methods, such as Burtonisation, double dropping, and Yorkshire Square, as well as post-fermentation treatment such as filtering, and barrel-ageing.
History
Brewing has taken place since around the 6th millennium BC, and archaeological evidence suggests emerging civilizations including China, ancient Egypt, and Mesopotamia brewed beer. Descriptions of various beer recipes can be found in cuneiform (the oldest known writing) from ancient Mesopotamia. In Mesopotamia the brewer's craft was the only profession which derived social sanction and divine protection from female deities/goddesses, specifically: Ninkasi, who covered the production of beer, Siris, who was used in a metonymic way to refer to beer, and Siduri, who covered the enjoyment of beer. In pre-industrial times, and still in many developing countries, women have frequently been the main brewers.
As almost any cereal containing certain sugars can undergo spontaneous fermentation due to wild yeasts in the air, it is possible that beer-like beverages were independently developed throughout the world soon after a tribe or culture had domesticated cereal. Chemical tests of ancient pottery jars reveal that beer was produced as far back as about 7,000 years ago in what is today Iran. This discovery reveals one of the earliest known uses of fermentation and is the earliest evidence of brewing to date. In Mesopotamia, the oldest evidence of beer is believed to be a 6,000-year-old Sumerian tablet depicting people drinking a beverage through reed straws from a communal bowl. A 3900-year-old Sumerian poem honouring Ninkasi, the patron goddess of brewing, contains the oldest surviving beer recipe, describing the production of beer from barley via bread. The invention of bread and beer has been argued to be responsible for humanity's ability to develop technology and build civilization. The earliest chemically confirmed barley beer to date was discovered at Godin Tepe in the central Zagros Mountains of Iran, where fragments of a jug at least 5,000 years old were found to be coated with beerstone, a by-product of the brewing process. Beer may have been known in Neolithic Europe as far back as 5,000 years ago, and was mainly brewed on a domestic scale.
Ale produced before the Industrial Revolution continued to be made and sold on a domestic scale, although by the 7th century AD beer was also being produced and sold by European monasteries. During the Industrial Revolution, the production of beer moved from artisanal manufacture to industrial manufacture, and domestic manufacture ceased to be significant by the end of the 19th century. The development of hydrometers and thermometers changed brewing by allowing the brewer more control of the process, and greater knowledge of the results. Today, the brewing industry is a global business, consisting of several dominant multinational companies and many thousands of smaller producers ranging from brewpubs to regional breweries. More than 133 billion litres (35 billion gallons) are sold per year—producing total global revenues of $294.5 billion (£147.7 billion) in 2006.
Ingredients
The basic ingredients of beer are water; a starch source, such as malted barley, able to be fermented (converted into alcohol); a brewer's yeast to produce the fermentation; and a flavouring, such as hops, to offset the sweetness of the malt. A mixture of starch sources may be used, with a secondary saccharide, such as maize (corn), rice, or sugar, these often being termed adjuncts, especially when used as a lower-cost substitute for malted barley. Less widely used starch sources include millet, sorghum, and cassava root in Africa, potato in Brazil, and agave in Mexico, among others. The most common starch source is ground cereal or "grist" - the proportion of the starch or cereal ingredients in a beer recipe may be called grist, grain bill, or simply mash ingredients.
Water
Beer is composed mostly of water. Regions have water with different mineral components; as a result, different regions were originally better suited to making certain types of beer, thus giving them a regional character. For example, Dublin has hard water well suited to making stout, such as Guinness; while Pilsen has soft water well suited to making pale lager, such as Pilsner Urquell. The waters of Burton in England contain gypsum, which benefits making pale ale to such a degree that brewers of pale ales will add gypsum to the local water in a process known as Burtonisation.
Starch source
The starch source in a beer provides the fermentable material and is a key determinant of the strength and flavour of the beer. The most common starch source used in beer is malted grain. Grain is malted by soaking it in water, allowing it to begin germination, and then drying the partially germinated grain in a kiln. Malting grain produces enzymes that will allow conversion from starches in the grain into fermentable sugars during the mash process. Different roasting times and temperatures are used to produce different colours of malt from the same grain. Darker malts will produce darker beers.
Nearly all beer includes barley malt as the majority of the starch. This is because of its fibrous husk, which is important in the sparging stage of brewing (in which water is washed over the mashed barley grains to form the wort), and because malted barley is a rich source of amylase, a digestive enzyme that facilitates conversion of starch into sugars. Other malted and unmalted grains (including wheat, rice, oats, and rye, and, less frequently, maize (corn) and sorghum) may be used. In recent years, a few brewers have produced gluten-free beer made with sorghum with no barley malt for people who cannot digest gluten-containing grains like wheat, barley, and rye.
Hops
Hops are the female flower clusters or seed cones of the hop plant Humulus lupulus, which are used as a flavouring and preservative agent in nearly all beer made today. Hops had been used for medicinal and food flavouring purposes since Roman times; by the 7th century, in Carolingian monasteries in what is now Germany, beer was being made with hops, though it was not until the thirteenth century that widespread cultivation of hops for use in beer was recorded. Before the thirteenth century, beer was flavoured with plants such as yarrow, wild rosemary, and bog myrtle, and other ingredients such as juniper berries, aniseed and ginger, which would be combined into a mixture known as gruit and used as hops are now used; between the thirteenth and the sixteenth century, during which hops took over as the dominant flavouring, beer flavoured with gruit was known as ale, while beer flavoured with hops was known as beer. Some beers today, such as Fraoch by the Scottish Heather Ales company and Cervoise Lancelot by the French Brasserie-Lancelot company, use plants other than hops for flavouring.
Hops contain several characteristics that brewers desire in beer: they contribute a bitterness that balances the sweetness of the malt; they provide floral, citrus, and herbal aromas and flavours; they have an antibiotic effect that favours the activity of brewer's yeast over less desirable microorganisms; and they aid in "head retention", the length of time that the foam on top of the beer (the beer head) will last. The preservative in hops comes from the lupulin glands which contain soft resins with alpha and beta acids. Though much studied, the preservative nature of the soft resins is not yet fully understood, though it has been observed that unless stored at a cool temperature, the preservative nature will decrease. Brewing is the sole major commercial use of hops.
Yeast
Yeast is the microorganism that is responsible for fermentation in beer. Yeast metabolises the sugars extracted from grains, which produces alcohol and carbon dioxide, and thereby turns wort into beer. In addition to fermenting the beer, yeast influences the character and flavour.
The dominant types of yeast used to make beer are Saccharomyces cerevisiae, known as ale yeast, and Saccharomyces pastorianus, known as lager yeast; Brettanomyces ferments lambics, and Torulaspora delbrueckii ferments Bavarian weissbier. Before the role of yeast in fermentation was understood, fermentation involved wild or airborne yeasts, and a few styles such as lambics still use this method today. Emil Christian Hansen, a Danish biochemist employed by the Carlsberg Laboratory, developed pure yeast cultures which were introduced into the Carlsberg brewery in 1883, and pure yeast strains are now the main fermenting source used worldwide.
Clarifying agent
Some brewers add one or more clarifying agents to beer, which typically precipitate (collect as a solid) out of the beer along with protein solids and are found only in trace amounts in the finished product. This process makes the beer appear bright and clean, rather than the cloudy appearance of ethnic and older styles of beer such as wheat beers.
Examples of clarifying agents include isinglass, obtained from swim bladders of fish; Irish moss, a seaweed; kappa carrageenan, from the seaweed kappaphycus; polyclar (a commercial brand of clarifier); and gelatin. If a beer is marked "suitable for Vegans", it was generally clarified either with seaweed or with artificial agents, although the "Fast Cask" method invented by Marston's in 2009 may provide another method.
Brewing process
There are several steps in the brewing process, which may include malting, mashing, lautering, boiling, fermenting, conditioning, filtering, and packaging. The brewing equipment needed to make beer has grown more sophisticated over time, and now covers most aspects of the brewing process.
Malting is the process where barley grain is made ready for brewing. Malting is broken down into three steps in order to help to release the starches in the barley. First, during steeping, the grain is added to a vat with water and allowed to soak for approximately 40 hours. During germination, the grain is spread out on the floor of the germination room for around 5 days. The final part of malting is kilning, when the malt goes through a very high-temperature drying in a kiln, with a gradual temperature increase over several hours. When kilning is complete, the grains are now termed malt, and they will be milled or crushed to break apart the kernels and expose the cotyledon, which contains the majority of the carbohydrates and sugars; this makes it easier to extract the sugars during mashing.
Mashing converts the starches released during the malting stage into sugars that can be fermented. The milled grain is mixed with hot water in a large vessel known as a mash tun. In this vessel, the grain and water are mixed together to create a cereal mash. During the mash, naturally occurring enzymes present in the malt convert the starches (long chain carbohydrates) in the grain into smaller molecules or simple sugars (mono-, di-, and tri-saccharides). This "conversion" is called saccharification, and it occurs at temperatures of roughly 60–70 °C. The result of the mashing process is a sugar-rich liquid or "wort", which is then strained through the bottom of the mash tun in a process known as lautering. Prior to lautering, the mash temperature may be raised to about 75–78 °C (known as a mashout) to free up more starch and reduce mash viscosity. Additional water may be sprinkled on the grains to extract additional sugars (a process known as sparging).
The wort is moved into a large tank known as a "copper" or kettle where it is boiled with hops and sometimes other ingredients such as herbs or sugars. This stage is where many chemical reactions take place, and where important decisions about the flavour, colour, and aroma of the beer are made. The boiling process serves to terminate enzymatic processes, precipitate proteins, isomerize hop resins, and concentrate and sterilize the wort. Hops add flavour, aroma and bitterness to the beer. At the end of the boil, the hopped wort settles to clarify in a vessel called a "whirlpool", where the more solid particles in the wort are separated out.
After the whirlpool, the wort is drawn away from the compacted hop trub, and rapidly cooled via a heat exchanger to a temperature where yeast can be added. A variety of heat exchanger designs are used in breweries, with the most common being a plate-style exchanger. Water or glycol runs in channels in the opposite direction of the wort, causing a rapid drop in temperature. It is very important to quickly cool the wort to a level where yeast can be added safely, as yeast is unable to grow at very high temperatures and will start to die at temperatures well above its fermentation range. After the wort goes through the heat exchanger, the cooled wort goes into a fermentation tank. A type of yeast is selected and added, or "pitched", to the fermentation tank. When the yeast is added to the wort, the fermenting process begins, where the sugars turn into alcohol, carbon dioxide and other components. When the fermentation is complete, the brewer may rack the beer into a new tank, called a conditioning tank. Conditioning of the beer is the process in which the beer ages, the flavour becomes smoother, and flavours that are unwanted dissipate. After conditioning for a week to several months, the beer may be filtered and force carbonated for bottling, or fined in the cask.
Mashing
Mashing is the process of combining a mix of milled grain (typically malted barley with supplementary grains such as corn, sorghum, rye or wheat), known as the "grist" or "grain bill", and water, known as "liquor", and heating this mixture in a vessel called a "mash tun". Mashing is a form of steeping, and defines the act of brewing, such as with making tea, sake, and soy sauce. Technically, wine, cider and mead are not brewed but rather vinified, as there is no steeping process involving solids. Mashing allows the enzymes in the malt to break down the starch in the grain into sugars, typically maltose, to create a malty liquid called wort. There are two main methods – infusion mashing, in which the grains are heated in one vessel; and decoction mashing, in which a proportion of the grains are boiled and then returned to the mash, raising the temperature. Mashing involves pauses ("rests") at certain temperatures and takes place in a "mash tun" – an insulated brewing vessel with a false bottom. The end product of mashing is called a "mash".
Mashing usually takes 1 to 2 hours, and during this time the various temperature rests activate different enzymes depending upon the type of malt being used, its modification level, and the intention of the brewer. The activity of these enzymes converts the starches of the grains to dextrins and then to fermentable sugars such as maltose. A protein rest at roughly 45–55 °C activates various proteases, which break down proteins that might otherwise cause the beer to be hazy. This rest is generally used only with undermodified (i.e. undermalted) malts, which are decreasingly popular in Germany and the Czech Republic, or non-malted grains such as corn and rice, which are widely used in North American beers. A rest at around 40–45 °C activates β-glucanase, which breaks down gummy β-glucans in the mash, making the sugars flow out more freely later in the process. In the modern mashing process, commercial fungal-based β-glucanase may be added as a supplement. Finally, a saccharification rest at about 65–71 °C is used to convert the starches in the malt to sugar, which is then usable by the yeast later in the brewing process. Doing the latter rest at the lower end of the range favours β-amylase enzymes, producing more low-order sugars like maltotriose, maltose, and glucose, which are more fermentable by the yeast. This in turn creates a beer lower in body and higher in alcohol. A rest closer to the higher end of the range favours α-amylase enzymes, creating more higher-order sugars and dextrins which are less fermentable by the yeast, so a fuller-bodied beer with less alcohol is the result. Duration and pH variances also affect the sugar composition of the resulting wort.
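The heat balance behind an infusion mash can be sketched with a simple strike-water calculation: how hot the water must be so that water and room-temperature grist settle at the desired rest temperature. The 0.4 grain-to-water specific-heat ratio used below is a common approximation, and the sketch ignores heat absorbed by the mash tun itself.

```python
# Minimal sketch of the infusion-mash heat balance, assuming the grain's
# specific heat is roughly 0.4 times that of water and neglecting heat
# lost to the vessel.

def strike_temp_c(target_c, grain_temp_c, litres_per_kg, grain_heat_ratio=0.4):
    """Strike-water temperature (deg C) needed to hit the target rest."""
    return target_c + (grain_heat_ratio / litres_per_kg) * (target_c - grain_temp_c)

# Example: hit a 66 deg C saccharification rest with 20 deg C grist
# at 2.6 litres of water per kilogram of grain.
print(round(strike_temp_c(66, 20, 2.6), 1))  # -> 73.1 (deg C)
```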
Lautering
Lautering is the separation of the wort (the liquid containing the sugar extracted during mashing) from the grains. This is done either in a mash tun outfitted with a false bottom, in a lauter tun, or in a mash filter. Most separation processes have two stages: first wort run-off, during which the extract is separated in an undiluted state from the spent grains, and sparging, in which extract which remains with the grains is rinsed off with hot water. The lauter tun is a tank with holes in the bottom small enough to hold back the large bits of grist and hulls (the ground or milled cereal). The bed of grist that settles on it is the actual filter. Some lauter tuns have provision for rotating rakes or knives to cut into the bed of grist to maintain good flow. The knives can be turned so they push the grain, a feature used to drive the spent grain out of the vessel. The mash filter is a plate-and-frame filter. The empty frames contain the mash, including the spent grains, and have a capacity of around one hectoliter. The plates contain a support structure for the filter cloth. The plates, frames, and filter cloths are arranged in a carrier frame like so: frame, cloth, plate, cloth, with plates at each end of the structure. Newer mash filters have bladders that can press the liquid out of the grains between spargings. The grain does not act like a filtration medium in a mash filter.
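A rough sketch of the water accounting around lautering and sparging is shown below, using the common homebrewing rule of thumb that spent grain retains roughly one litre of wort per kilogram; the figures are illustrative assumptions rather than brewery specifications.

```python
# Rough water accounting for lautering: grain retains wort, so sparge water
# tops the run-off up to the target pre-boil volume.  The absorption figure
# is an assumed rule of thumb (~1 L/kg).

def sparge_volume(pre_boil_l, mash_water_l, grain_kg, absorption_l_per_kg=1.0):
    """Litres of sparge water needed to reach the target pre-boil volume."""
    first_runnings = mash_water_l - grain_kg * absorption_l_per_kg
    return max(0.0, pre_boil_l - first_runnings)

# Example: 5 kg of grist mashed with 15 L of water, aiming for 27 L pre-boil.
print(sparge_volume(27, 15, 5))  # -> 17.0 litres of sparge water
```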
Boiling
After mashing, the beer wort is boiled with hops (and other flavourings if used) in a large tank known as a "copper" or brew kettle – though historically the mash vessel was used and is still in some small breweries. The boiling process is where chemical reactions take place, including sterilization of the wort to remove unwanted bacteria, releasing of hop flavours, bitterness and aroma compounds through isomerization, stopping of enzymatic processes, precipitation of proteins, and concentration of the wort. Finally, the vapours produced during the boil volatilise off-flavours, including dimethyl sulfide precursors. The boil is conducted so that it is even and intense – a continuous "rolling boil". The boil on average lasts between 45 and 90 minutes, depending on its intensity, the hop addition schedule, and volume of water the brewer expects to evaporate. At the end of the boil, solid particles in the hopped wort are separated out, usually in a vessel called a "whirlpool".
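Brewers commonly estimate the bitterness contributed by each hop addition using empirical formulas such as Glenn Tinseth's; the sketch below applies that estimate and should be read as a rough guide only, since real-world hop utilization varies with equipment and process.

```python
import math

# Hedged sketch of the Tinseth estimate of hop bitterness (IBU) for a single
# boil addition; the example quantities are illustrative.

def tinseth_ibu(mass_g, alpha_acid_pct, boil_minutes, wort_gravity, volume_l):
    bigness = 1.65 * 0.000125 ** (wort_gravity - 1.0)             # gravity correction
    boil_factor = (1.0 - math.exp(-0.04 * boil_minutes)) / 4.15   # boil-time correction
    utilization = bigness * boil_factor
    return utilization * (alpha_acid_pct / 100.0) * mass_g * 1000.0 / volume_l

# Example: 40 g of 5.5% alpha-acid hops boiled for 60 minutes in 23 L of 1.050 wort.
print(round(tinseth_ibu(40, 5.5, 60, 1.050, 23), 1))  # -> roughly 22 IBU
```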
Brew kettle or copper
Copper is the traditional material for the boiling vessel for two main reasons: firstly because copper transfers heat quickly and evenly; secondly because the bubbles produced during boiling, which could act as an insulator against the heat, do not cling to the surface of copper, so the wort is heated in a consistent manner. The simplest boil kettles are direct-fired, with a burner underneath. These can produce a vigorous and favourable boil, but are also apt to scorch the wort where the flame touches the kettle, causing caramelisation and making cleanup difficult. Most breweries use a steam-fired kettle, which uses steam jackets in the kettle to boil the wort. Breweries usually have a boiling unit either inside or outside of the kettle, usually a tall, thin cylinder with vertical tubes, called a calandria, through which wort is pumped.
Whirlpool
At the end of the boil, solid particles in the hopped wort are separated out, usually in a vessel called a "whirlpool" or "settling tank". The whirlpool was devised by Henry Ranulph Hudston while working for the Molson Brewery in 1960 to utilise the so-called tea leaf paradox to force the denser solids known as "trub" (coagulated proteins, vegetable matter from hops) into a cone in the centre of the whirlpool tank. Whirlpool systems vary: smaller breweries tend to use the brew kettle, larger breweries use a separate tank, and design will differ, with tank floors either flat, sloped, conical or with a cup in the centre. The principle in all is that by swirling the wort the centripetal force will push the trub into a cone at the centre of the bottom of the tank, where it can be easily removed.
Hopback
A hopback is a traditional additional chamber that acts as a sieve or filter by using whole hops to clear debris (or "trub") from the unfermented (or "green") wort, as the whirlpool does, and also to increase hop aroma in the finished beer. It is a chamber between the brewing kettle and wort chiller. Hops are added to the chamber, the hot wort from the kettle is run through it, and then immediately cooled in the wort chiller before entering the fermentation chamber. Hopbacks utilizing a sealed chamber facilitate maximum retention of volatile hop aroma compounds that would normally be driven off when the hops contact the hot wort. While a hopback has a similar filtering effect as a whirlpool, it operates differently: a whirlpool uses centrifugal forces, a hopback uses a layer of whole hops to act as a filter bed. Furthermore, while a whirlpool is useful only for the removal of pelleted hops (as flowers do not tend to separate as easily), in general hopbacks are used only for the removal of whole flower hops (as the particles left by pellets tend to make it through the hopback). The hopback has mainly been substituted in modern breweries by the whirlpool.
Wort cooling
After the whirlpool, the wort must be brought down to fermentation temperatures before yeast is added. In modern breweries this is achieved through a plate heat exchanger. A plate heat exchanger has many ridged plates, which form two separate paths. The wort is pumped into the heat exchanger, and goes through every other gap between the plates. The cooling medium, usually water, goes through the other gaps. The ridges in the plates ensure turbulent flow. A good heat exchanger can drop hot wort to fermentation temperature while warming the cooling medium correspondingly. The last few plates often use a cooling medium which can be chilled to below the freezing point, which allows finer control over the wort-out temperature and also enables cooling to the lower temperatures used for cool-fermented beers. After cooling, oxygen is often dissolved into the wort to revitalize the yeast and aid its reproduction. Some craft breweries, particularly those wanting to create steam beer, use a coolship instead.
While boiling, it is useful to recover some of the energy used to boil the wort. On its way out of the brewery, the steam created during the boil is passed over a coil through which unheated water flows. By adjusting the rate of flow, the output temperature of the water can be controlled. This is also often done using a plate heat exchanger. The water is then stored for later use in the next mash, in equipment cleaning, or wherever necessary. Another common method of energy recovery takes place during the wort cooling. When cold water is used to cool the wort in a heat exchanger, the water is significantly warmed. In an efficient brewery, cold water is passed through the heat exchanger at a rate set to maximize the water's temperature upon exiting. This now-hot water is then stored in a hot water tank.
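As a rough illustration of the arithmetic behind this heat recovery (not taken from the text, and using made-up example figures with wort treated as having the specific heat of water), the cold-water flow that balances the wort's heat loss can be sketched as:

```c
#include <stdio.h>

int main(void)
{
    /* All figures are invented example values for illustration only. */
    double wort_flow = 100.0;  /* litres per minute of hot wort            */
    double wort_in   = 95.0;   /* degC, wort arriving from the whirlpool   */
    double wort_out  = 20.0;   /* degC, target pitching temperature        */
    double water_in  = 15.0;   /* degC, cold brewing water ("liquor")      */
    double water_out = 85.0;   /* degC, desired hot-water temperature      */

    /* Equal specific heats cancel, so only temperature differences matter:
       m_water * (water_out - water_in) = m_wort * (wort_in - wort_out).   */
    double water_flow = wort_flow * (wort_in - wort_out) / (water_out - water_in);

    printf("water flow for %.0f L/min of wort: %.1f L/min\n", wort_flow, water_flow);
    return 0;
}
```

Raising the desired hot-water temperature reduces the water flow the exchanger can heat to that level, which is the trade-off the brewer tunes when setting the flow rate.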
Fermenting
Fermentation takes place in fermentation vessels which come in various forms, from enormous cylindroconical vessels, through open stone vessels, to wooden vats. After the wort is cooled and aerated – usually with sterile air – yeast is added to it, and it begins to ferment. It is during this stage that sugars won from the malt are converted into alcohol and carbon dioxide, and the product can be called beer for the first time.
Most breweries today use cylindroconical vessels, or CCVs, which have a conical bottom and a cylindrical top. The cone's angle is typically around 60°, an angle that will allow the yeast to flow towards the cone's apex, but is not so steep as to take up too much vertical space. CCVs can handle both fermenting and conditioning in the same tank. At the end of fermentation, the yeast and other solids which have fallen to the cone's apex can be simply flushed out of a port at the apex. Open fermentation vessels are also used, often for show in brewpubs, and in Europe in wheat beer fermentation. These vessels have no tops, which makes harvesting top-fermenting yeasts very easy. The open tops of the vessels make the risk of infection greater, but with proper cleaning procedures and careful protocol about who enters fermentation chambers, the risk can be well controlled. Fermentation tanks are typically made of stainless steel. If they are simple cylindrical tanks with beveled ends, they are arranged vertically, as opposed to conditioning tanks which are usually laid out horizontally. Only a very few breweries still use wooden vats for fermentation as wood is difficult to keep clean and infection-free and must be repitched more or less yearly.
Fermentation methods
There are three main fermentation methods, warm, cool, and wild or spontaneous. Fermentation may take place in open or closed vessels. There may be a secondary fermentation which can take place in the brewery, in the cask or in the bottle.
Brewing yeasts are traditionally classed as "top-cropping" (or "top-fermenting") and "bottom-cropping" (or "bottom-fermenting"); the yeasts classed as top-fermenting are generally used in warm fermentations, where they ferment quickly, and the yeasts classed as bottom-fermenting are used in cooler fermentations where they ferment more slowly. Yeast were termed top or bottom cropping, because the yeast was collected from the top or bottom of the fermenting wort to be reused for the next brew. This terminology is somewhat inappropriate in the modern era; after the widespread application of brewing mycology it was discovered that the two separate collecting methods involved two different yeast species that favoured different temperature regimes, namely Saccharomyces cerevisiae in top-cropping at warmer temperatures and Saccharomyces pastorianus in bottom-cropping at cooler temperatures. As brewing methods changed in the 20th century, cylindro-conical fermenting vessels became the norm and the collection of yeast for both Saccharomyces species is done from the bottom of the fermenter. Thus the method of collection no longer implies a species association. There are a few remaining breweries who collect yeast in the top-cropping method, such as Samuel Smiths brewery in Yorkshire, Marstons in Staffordshire and several German hefeweizen producers.
For both types, yeast is fully distributed through the beer while it is fermenting, and both equally flocculate (clump together and precipitate to the bottom of the vessel) when fermentation is finished. By no means do all top-cropping yeasts demonstrate this behaviour, but it features strongly in many English yeasts that may also exhibit chain forming (the failure of budded cells to break from the mother cell), which is in the technical sense different from true flocculation. The most common top-cropping brewer's yeast, Saccharomyces cerevisiae, is the same species as the common baking yeast. However, baking and brewing yeasts typically belong to different strains, cultivated to favour different characteristics: baking yeast strains are more aggressive, in order to carbonate dough in the shortest amount of time; brewing yeast strains act slower, but tend to tolerate higher alcohol concentrations (normally 12–15% abv is the maximum, though under special treatment some ethanol-tolerant strains can be coaxed up to around 20%). Modern quantitative genomics has revealed the complexity of Saccharomyces species to the extent that yeasts involved in beer and wine production commonly involve hybrids of so-called pure species. As such, the yeasts involved in what has been typically called top-cropping or top-fermenting ale may be both Saccharomyces cerevisiae and complex hybrids of Saccharomyces cerevisiae and Saccharomyces kudriavzevii. Three notable ales, Chimay, Orval and Westmalle, are fermented with these hybrid strains, which are identical to wine yeasts from Switzerland.
Warm fermentation
In general, yeasts such as Saccharomyces cerevisiae are fermented at warm temperatures, with the yeast used by Brasserie Dupont for saison fermenting at even higher temperatures. They generally form a foam on the surface of the fermenting beer, which is called barm, as during the fermentation process its hydrophobic surface causes the flocs to adhere to CO2 and rise; because of this, they are often referred to as "top-cropping" or "top-fermenting" – though this distinction is less clear in modern brewing with the use of cylindro-conical tanks. Generally, warm-fermented beers, which are usually termed ale, are ready to drink within three weeks after the beginning of fermentation, although some brewers will condition or mature them for several months.
Cool fermentation
When a beer has been brewed using a cool fermentation, at temperatures well below those typical of warm fermentation, and then stored (or lagered) for typically several weeks (or months) at temperatures close to freezing point, it is termed a "lager". During the lagering or storage phase several flavour components developed during fermentation dissipate, resulting in a "cleaner" flavour. Though it is the slow, cool fermentation and cold conditioning (or lagering) that defines the character of lager, the main technical difference is with the yeast generally used, which is Saccharomyces pastorianus. Technical differences include the ability of lager yeast to metabolize melibiose, and the tendency to settle at the bottom of the fermenter (though ale yeasts can also become bottom settling by selection); though these technical differences are not considered by scientists to be influential in the character or flavour of the finished beer, brewers feel otherwise - sometimes cultivating their own yeast strains which may suit their brewing equipment or a particular purpose, such as brewing beers with a high abv.
Brewers in Bavaria had for centuries been selecting cold-fermenting yeasts by storing ("lagern") their beers in cold alpine caves. The process of natural selection meant that the wild yeasts that were most cold tolerant would be the ones that would remain actively fermenting in the beer that was stored in the caves. A sample of these Bavarian yeasts was sent from the Spaten brewery in Munich to the Carlsberg brewery in Copenhagen in 1845, which began brewing with it. In 1883 Emile Hansen completed a study on pure yeast culture isolation and the pure strain obtained from Spaten went into industrial production in 1884 as Carlsberg yeast No 1. Another specialized pure yeast production plant was installed at the Heineken Brewery in Rotterdam the following year and together they began the supply of pure cultured yeast to brewers across Europe. This yeast strain was originally classified as Saccharomyces carlsbergensis, a now defunct species name which has been superseded by the currently accepted taxonomic classification Saccharomyces pastorianus.
Spontaneous fermentation
Lambic beers are historically brewed in Brussels and the nearby Pajottenland region of Belgium without any yeast inoculation. The wort is cooled in open vats (called "coolships"), where the yeasts and microbiota present in the brewery (such as Brettanomyces) are allowed to settle to create a spontaneous fermentation, and are then conditioned or matured in oak barrels for typically one to three years.
Conditioning
After an initial or primary fermentation, beer is conditioned, matured or aged, in one of several ways, which can take from 2 to 4 weeks, several months, or several years, depending on the brewer's intention for the beer. The beer is usually transferred into a second container, so that it is no longer exposed to the dead yeast and other debris (also known as "trub") that have settled to the bottom of the primary fermenter. This prevents the formation of unwanted flavours and harmful compounds such as acetaldehyde.
Kräusening
Kräusening is a conditioning method in which fermenting wort is added to the finished beer. The active yeast will restart fermentation in the finished beer, and so introduce fresh carbon dioxide; the conditioning tank will then be sealed so that the carbon dioxide is dissolved into the beer, producing a lively "condition" or level of carbonation. The kräusening method may also be used to condition bottled beer.
Lagering
Lagers are stored at cellar temperature or below for 1–6 months while still on the yeast. The process of storing, or conditioning, or maturing, or aging a beer at a low temperature for a long period is called "lagering", and while it is associated with lagers, the process may also be done with ales, with the same result – that of cleaning up various chemicals, acids and compounds.
Secondary fermentation
During secondary fermentation, most of the remaining yeast will settle to the bottom of the second fermenter, yielding a less hazy product.
Bottle fermentation
Some beers undergo an additional fermentation in the bottle giving natural carbonation. This may be a second and/or third fermentation. They are bottled with a viable yeast population in suspension. If there is no residual fermentable sugar left, sugar or wort or both may be added in a process known as priming. The resulting fermentation generates CO2 that is trapped in the bottle, remaining in solution and providing natural carbonation. Bottle-conditioned beers may be either filled unfiltered direct from the fermentation or conditioning tank, or filtered and then reseeded with yeast.
Cask conditioning
Cask ale (or cask-conditioned beer) is unfiltered, unpasteurised beer that is conditioned by a secondary fermentation in a metal, plastic or wooden cask. It is dispensed from the cask by being either poured from a tap by gravity, or pumped up from a cellar via a beer engine (hand pump). Sometimes a cask breather is used to keep the beer fresh by allowing carbon dioxide to replace oxygen as the beer is drawn off the cask. Until 2018, the Campaign for Real Ale (CAMRA) defined real ale as beer "served without the use of extraneous carbon dioxide", which would disallow the use of a cask breather, a policy which was reversed in April 2018 to allow beer served with the use of cask breathers to meet its definition of real ale.
Barrel-ageing
Barrel-ageing (US: Barrel aging) is the process of ageing beer in wooden barrels to achieve a variety of effects in the final product. Sour beers such as lambics are fully fermented in wood, while other beers are aged in barrels which were previously used for maturing wines or spirits. In 2016 "Craft Beer and Brewing" wrote: "Barrel-aged beers are so trendy that nearly every taphouse and beer store has a section of them."
Filtering
Filtering stabilises the flavour of beer, holding it at a point acceptable to the brewer, and preventing further development from the yeast, which under poor conditions can release negative components and flavours. Filtering also removes haze, clearing the beer, and so giving it a "polished shine and brilliance". Beer with a clear appearance has been commercially desirable for brewers since the development of glass vessels for storing and drinking beer, along with the commercial success of pale lager, which - due to the lagering process in which haze and particles settle to the bottom of the tank and so the beer "drops bright" (clears) - has a natural bright appearance and shine.
There are several forms of filters; they may be in the form of sheets or "candles", or they may be a fine powder such as diatomaceous earth (also called kieselguhr), which is added to the beer to form a filtration bed which allows liquid to pass, but holds onto suspended particles such as yeast. Filters range from rough filters that remove much of the yeast and any solids (e.g., hops, grain particles) left in the beer, to filters tight enough to strain colour and body from the beer. Filtration ratings are divided into rough, fine, and sterile. Rough filtration leaves some cloudiness in the beer, but it is noticeably clearer than unfiltered beer. Fine filtration removes almost all cloudiness. Sterile filtration removes almost all microorganisms.
Sheet (pad) filters
These filters use sheets that allow only particles smaller than a given size to pass through. The sheets are placed into a filtering frame, sanitized (with boiling water, for example) and then used to filter the beer. The sheets can be flushed if the filter becomes blocked. The sheets are usually disposable and are replaced between filtration sessions. Often the sheets contain powdered filtration media to aid in filtration.
Pre-made filters have two sides: one with loose holes and the other with tight holes. Flow goes from the side with loose holes to the side with tight holes, with the intent that large particles get stuck in the large holes while leaving enough room around the particles and filter medium for smaller particles to pass through and get stuck in the tighter holes.
Sheets are sold in nominal ratings, and typically 90% of particles larger than the nominal rating are caught by the sheet.
Kieselguhr filters
Filters that use a powder medium are considerably more complicated to operate, but can filter much more beer before regeneration. Common media include diatomaceous earth and perlite.
By-products
Brewing by-products are "spent grain" and the sediment (or "dregs") from the filtration process which may be dried and resold as "brewers dried yeast" for poultry feed, or made into yeast extract which is used in brands such as Vegemite and Marmite. The process of turning the yeast sediment into edible yeast extract was discovered by German scientist Justus von Liebig.
Brewer's spent grain (also called spent grain, brewer's grain or draff) is the main by-product of the brewing process; it consists of the residue of malt and grain which remains in the lauter tun after the lautering process. It consists primarily of grain husks, pericarp, and fragments of endosperm. As it mainly consists of carbohydrates and proteins, and is readily consumed by animals, spent grain is used in animal feed. Spent grains can also be used as fertilizer, whole grains in bread, as well as in the production of flour and biogas. Spent grain is also an ideal medium for growing mushrooms, such as shiitake, and already some breweries are either growing their own mushrooms or supplying spent grain to mushroom farms. Spent grains can be used in the production of red bricks, to improve the open porosity and reduce thermal conductivity of the ceramic mass.
Brewing industry
The brewing industry is a global business, consisting of several dominant multinational companies and many thousands of other producers known as microbreweries, regional breweries or craft breweries depending on size, region, and marketing preference. Total global revenues from beer sales were $294.5 billion (£147.7 billion) as of 2006. SABMiller became the largest brewing company in the world when it acquired Royal Grolsch, brewer of Dutch premium beer brand Grolsch. InBev was the second-largest beer-producing company in the world and Anheuser-Busch held the third spot, but after the acquisition of Anheuser-Busch by InBev, the new Anheuser-Busch InBev company is currently the largest brewer in the world.
Brewing at home is subject to regulation and prohibition in many countries. Restrictions on homebrewing were lifted in the UK in 1963, Australia followed suit in 1972, and the US in 1978, though individual states were allowed to pass their own laws limiting production.
External links
An overview of the microbiology behind beer brewing from the Science Creative Quarterly
A pictorial overview of the brewing process at the Heriot-Watt University Pilot Brewery
|
https://en.wikipedia.org/wiki/BIOS
|
In computing, BIOS (, ; Basic Input/Output System, also known as the System BIOS, ROM BIOS, BIOS ROM or PC BIOS) is firmware used to provide runtime services for operating systems and programs and to perform hardware initialization during the booting process (power-on startup). The BIOS firmware comes pre-installed on an IBM PC or IBM PC compatible's system board and exists in some UEFI-based systems to maintain compatibility with operating systems that do not support UEFI native operation. The name originates from the Basic Input/Output System used in the CP/M operating system in 1975. The BIOS originally proprietary to the IBM PC has been reverse engineered by some companies (such as Phoenix Technologies) looking to create compatible systems. The interface of that original system serves as a de facto standard.
The BIOS in modern PCs initializes and tests the system hardware components (Power-on self-test), and loads a boot loader from a mass storage device which then initializes a kernel. In the era of DOS, the BIOS provided BIOS interrupt calls for the keyboard, display, storage, and other input/output (I/O) devices that standardized an interface to application programs and the operating system. More recent operating systems do not use the BIOS interrupt calls after startup.
Most BIOS implementations are specifically designed to work with a particular computer or motherboard model, by interfacing with various devices especially system chipset. Originally, BIOS firmware was stored in a ROM chip on the PC motherboard. In later computer systems, the BIOS contents are stored on flash memory so it can be rewritten without removing the chip from the motherboard. This allows easy, end-user updates to the BIOS firmware so new features can be added or bugs can be fixed, but it also creates a possibility for the computer to become infected with BIOS rootkits. Furthermore, a BIOS upgrade that fails could brick the motherboard. The last version of Microsoft Windows to officially support running on PCs which use legacy BIOS firmware is Windows 10 as Windows 11 requires a UEFI-compliant system.
Unified Extensible Firmware Interface (UEFI) is a successor to the legacy PC BIOS, aiming to address its technical limitations.
History
The term BIOS (Basic Input/Output System) was created by Gary Kildall and first appeared in the CP/M operating system in 1975, describing the machine-specific part of CP/M loaded during boot time that interfaces directly with the hardware. (A CP/M machine usually has only a simple boot loader in its ROM.)
Versions of MS-DOS, PC DOS or DR-DOS contain a file called variously "IO.SYS", "IBMBIO.COM", "IBMBIO.SYS", or "DRBIOS.SYS"; this file is known as the "DOS BIOS" (also known as the "DOS I/O System") and contains the lower-level hardware-specific part of the operating system. Together with the underlying hardware-specific but operating system-independent "System BIOS", which resides in ROM, it represents the analogue to the "CP/M BIOS".
The BIOS originally proprietary to the IBM PC has been reverse engineered by some companies (such as Phoenix Technologies) looking to create compatible systems.
With the introduction of PS/2 machines, IBM divided the System BIOS into real- and protected-mode portions. The real-mode portion was meant to provide backward compatibility with existing operating systems such as DOS, and therefore was named "CBIOS" (for "Compatibility BIOS"), whereas the "ABIOS" (for "Advanced BIOS") provided new interfaces specifically suited for multitasking operating systems such as OS/2.
User interface
The BIOS of the original IBM PC and XT had no interactive user interface. Error codes or messages were displayed on the screen, or coded series of sounds were generated to signal errors when the power-on self-test (POST) had not proceeded to the point of successfully initializing a video display adapter. Options on the IBM PC and XT were set by switches and jumpers on the main board and on expansion cards. Starting around the mid-1990s, it became typical for the BIOS ROM to include a "BIOS configuration utility" (BCU) or "BIOS setup utility", accessed at system power-up by a particular key sequence. This program allowed the user to set system configuration options, of the type formerly set using DIP switches, through an interactive menu system controlled through the keyboard. In the interim period, IBM-compatible PCs, including the IBM AT, held configuration settings in battery-backed RAM and used a bootable configuration program on floppy disk, not in the ROM, to set the configuration options contained in this memory. The floppy disk was supplied with the computer, and if it was lost the system settings could not be changed. The same applied in general to computers with an EISA bus, for which the configuration program was called an EISA Configuration Utility (ECU).
A modern Wintel-compatible computer provides a setup routine essentially unchanged in nature from the ROM-resident BIOS setup utilities of the late 1990s; the user can configure hardware options using the keyboard and video display. The modern Wintel machine may store the BIOS configuration settings in flash ROM, perhaps the same flash ROM that holds the BIOS itself.
Operation
System startup
Early Intel processors started at physical address 000FFFF0h. Systems with later processors provide logic to start running the BIOS from the system ROM.
If the system has just been powered up or the reset button was pressed ("cold boot"), the full power-on self-test (POST) is run. If Ctrl+Alt+Delete was pressed ("warm boot"), a special flag value stored in nonvolatile BIOS memory ("CMOS") tested by the BIOS allows bypass of the lengthy POST and memory detection.
The POST identifies, tests and initializes system devices such as the CPU, chipset, RAM, motherboard, video card, keyboard, mouse, hard disk drive, optical disc drive and other hardware, including integrated peripherals.
Early IBM PCs had a routine in the POST that would download a program into RAM through the keyboard port and run it. This feature was intended for factory test or diagnostic purposes.
Boot process
After the option ROM scan is completed and all detected ROM modules with valid checksums have been called, or immediately after POST in a BIOS version that does not scan for option ROMs, the BIOS calls INT 19h to start boot processing. After booting, loaded programs can also call INT 19h to reboot the system, but they must be careful to disable interrupts and other asynchronous hardware processes that may interfere with the BIOS rebooting process, or else the system may hang or crash while it is rebooting.
When INT 19h is called, the BIOS attempts to locate boot loader software on a "boot device", such as a hard disk, a floppy disk, CD, or DVD. It loads and executes the first boot software it finds, giving it control of the PC.
The BIOS uses the boot devices set in Nonvolatile BIOS memory (CMOS), or, in the earliest PCs, DIP switches. The BIOS checks each device in order to see if it is bootable by attempting to load the first sector (boot sector). If the sector cannot be read, the BIOS proceeds to the next device. If the sector is read successfully, some BIOSes will also check for the boot sector signature 0x55 0xAA in the last two bytes of the sector (which is 512 bytes long), before accepting a boot sector and considering the device bootable.
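As a minimal sketch, not taken from any particular BIOS implementation, the same signature test can be reproduced in C against a disk-image file:

```c
#include <stdio.h>

int main(int argc, char **argv)
{
    unsigned char sector[512];
    FILE *f;

    if (argc < 2 || !(f = fopen(argv[1], "rb"))) {
        fprintf(stderr, "usage: %s disk.img\n", argv[0]);
        return 1;
    }
    if (fread(sector, 1, sizeof sector, f) != sizeof sector) {
        fprintf(stderr, "could not read a full 512-byte first sector\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    /* The BIOS convention: bytes 510 and 511 must be 55h AAh. */
    if (sector[510] == 0x55 && sector[511] == 0xAA)
        puts("boot signature present: a BIOS would consider this device bootable");
    else
        puts("no 55h AAh signature: a BIOS would move on to the next boot device");
    return 0;
}
```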
When a bootable device is found, the BIOS transfers control to the loaded sector. The BIOS does not interpret the contents of the boot sector other than to possibly check for the boot sector signature in the last two bytes. Interpretation of data structures like partition tables and BIOS Parameter Blocks is done by the boot program in the boot sector itself or by other programs loaded through the boot process.
A non-disk device such as a network adapter attempts booting by a procedure that is defined by its option ROM or the equivalent integrated into the motherboard BIOS ROM. As such, option ROMs may also influence or supplant the boot process defined by the motherboard BIOS ROM.
With the El Torito optical media boot standard, the optical drive actually emulates a 3.5" high-density floppy disk to the BIOS for boot purposes. Reading the "first sector" of a CD-ROM or DVD-ROM is not a simply defined operation like it is on a floppy disk or a hard disk. Furthermore, the complexity of the medium makes it difficult to write a useful boot program in one sector. The bootable virtual floppy disk can contain software that provides access to the optical medium in its native format.
Boot priority
The user can select the boot priority implemented by the BIOS. For example, most computers have a hard disk that is bootable, but sometimes there is a removable-media drive that has higher boot priority, so the user can cause a removable disk to be booted.
In most modern BIOSes, the boot priority order can be configured by the user. In older BIOSes, limited boot priority options are selectable; in the earliest BIOSes, a fixed priority scheme was implemented, with floppy disk drives first, fixed disks (i.e., hard disks) second, and typically no other boot devices supported, subject to modification of these rules by installed option ROMs. The BIOS in an early PC also usually would only boot from the first floppy disk drive or the first hard disk drive, even if there were two drives installed.
Boot failure
On the original IBM PC and XT, if no bootable disk was found, ROM BASIC was started by calling INT 18h. Since few programs used BASIC in ROM, clone PC makers left it out; then a computer that failed to boot from a disk would display "No ROM BASIC" and halt (in response to INT 18h).
Later computers would display a message like "No bootable disk found"; some would prompt for a disk to be inserted and a key to be pressed to retry the boot process. A modern BIOS may display nothing or may automatically enter the BIOS configuration utility when the boot process fails.
Boot environment
The environment for the boot program is very simple: the CPU is in real mode and the general-purpose and segment registers are undefined, except SS, SP, CS, and DL. CS:IP always points to physical address 0x07C00. What values CS and IP actually have is not well defined. Some BIOSes use a CS:IP of 0x0000:0x7C00 while others may use 0x07C0:0x0000. Because boot programs are always loaded at this fixed address, there is no need for a boot program to be relocatable. DL may contain the drive number, as used with INT 13h, of the boot device. SS:SP points to a valid stack that is presumably large enough to support hardware interrupts, but otherwise SS and SP are undefined. (A stack must be already set up in order for interrupts to be serviced, and interrupts must be enabled in order for the system timer-tick interrupt, which BIOS always uses at least to maintain the time-of-day count and which it initializes during POST, to be active and for the keyboard to work. The keyboard works even if the BIOS keyboard service is not called; keystrokes are received and placed in the 15-character type-ahead buffer maintained by BIOS.) The boot program must set up its own stack, because the size of the stack set up by BIOS is unknown and its location is likewise variable; although the boot program can investigate the default stack by examining SS:SP, it is easier and shorter to just unconditionally set up a new stack.
At boot time, all BIOS services are available, and the memory below address 0x00400 contains the interrupt vector table. BIOS POST has initialized the system timers, interrupt controller(s), DMA controller(s), and other motherboard/chipset hardware as necessary to bring all BIOS services to ready status. DRAM refresh for all system DRAM in conventional memory and extended memory, but not necessarily expanded memory, has been set up and is running. The interrupt vectors corresponding to the BIOS interrupts have been set to point at the appropriate entry points in the BIOS, hardware interrupt vectors for devices initialized by the BIOS have been set to point to the BIOS-provided ISRs, and some other interrupts, including ones that BIOS generates for programs to hook, have been set to a default dummy ISR that immediately returns. The BIOS maintains a reserved block of system RAM at addresses 0x00400–0x004FF with various parameters initialized during the POST. All memory at and above address 0x00500 can be used by the boot program; it may even overwrite itself.
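The layout of this BIOS Data Area is not spelled out here, but a partial sketch as a C structure, with offsets taken from commonly published BIOS references (such as Ralf Brown's Interrupt List) rather than from this article, looks roughly like this:

```c
#include <stdint.h>

/* Partial, illustrative map of the BIOS Data Area at physical 0x00400.
   Only a few well-known fields are shown; the full area runs to 0x004FF. */
struct bios_data_area {
    uint16_t com_port_base[4];  /* 0x00: I/O base addresses of COM1-COM4        */
    uint16_t lpt_port_base[3];  /* 0x08: I/O base addresses of LPT1-LPT3        */
    uint16_t ebda_segment;      /* 0x0E: segment of the Extended BIOS Data Area */
    uint16_t equipment_word;    /* 0x10: installed-hardware flags               */
    uint8_t  reserved;          /* 0x12: reserved (use varies by machine)       */
    uint16_t base_memory_kib;   /* 0x13: conventional memory size in KiB        */
    /* ... keyboard, video, disk and timer-tick fields follow ... */
} __attribute__((packed));      /* packed: the BIOS layout has no padding */
```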
Extensions (option ROMs)
Peripheral cards such as hard disk drive host bus adapters and video cards have their own firmware, and a BIOS extension option ROM may be part of the expansion card firmware, providing additional functionality to the BIOS. Code in option ROMs runs before the BIOS boots the operating system from mass storage. These ROMs typically test and initialize hardware, add new BIOS services, or replace existing BIOS services with their own services. For example, a SCSI controller usually has a BIOS extension ROM that adds support for hard drives connected through that controller. An extension ROM could in principle contain an entire operating system, or it could implement an entirely different boot process such as network booting. Operation of an IBM-compatible computer system can be completely changed by removing or inserting an adapter card (or a ROM chip) that contains a BIOS extension ROM.
The motherboard BIOS typically contains code for initializing and bootstrapping integrated display and integrated storage. In addition, plug-in adapter cards such as SCSI, RAID, network interface cards, and video cards often include their own BIOS (e.g. Video BIOS), complementing or replacing the system BIOS code for the given component. Even devices built into the motherboard can behave in this way; their option ROMs can be a part of the motherboard BIOS.
An add-in card requires an option ROM if the card is not supported by the motherboard BIOS and the card needs to be initialized or made accessible through BIOS services before the operating system can be loaded (usually this means it is required in the boot process). An additional advantage of ROM on some early PC systems (notably including the IBM PCjr) was that ROM was faster than main system RAM. (On modern systems, the case is very much the reverse of this, and BIOS ROM code is usually copied ("shadowed") into RAM so it will run faster.)
Boot procedure
If an expansion ROM wishes to change the way the system boots (such as from a network device or a SCSI adapter) in a cooperative way, it can use the BIOS Boot Specification (BBS) API to register its ability to do so. Once the expansion ROMs have registered using the BBS APIs, the user can select among the available boot options from within the BIOS's user interface. This is why most BBS compliant PC BIOS implementations will not allow the user to enter the BIOS's user interface until the expansion ROMs have finished executing and registering themselves with the BBS API.
Also, if an expansion ROM wishes to change the way the system boots unilaterally, it can simply hook INT 19h or other interrupts normally called from interrupt 19h, such as INT 13h, the BIOS disk service, to intercept the BIOS boot process. Then it can replace the BIOS boot process with one of its own, or it can merely modify the boot sequence by inserting its own boot actions into it, by preventing the BIOS from detecting certain devices as bootable, or both. Before the BIOS Boot Specification was promulgated, this was the only way for expansion ROMs to implement boot capability for devices not supported for booting by the native BIOS of the motherboard.
Initialization
After the motherboard BIOS completes its POST, most BIOS versions search for option ROM modules, also called BIOS extension ROMs, and execute them. The motherboard BIOS scans for extension ROMs in a portion of the "upper memory area" (the part of the x86 real-mode address space at and above address 0xA0000) and runs each ROM found, in order. To discover memory-mapped option ROMs, a BIOS implementation scans the real-mode address space from 0x0C0000 to 0x0F0000 on 2 KB (2,048 bytes) boundaries, looking for a two-byte ROM signature: 0x55 followed by 0xAA. In a valid expansion ROM, this signature is followed by a single byte indicating the number of 512-byte blocks the expansion ROM occupies in real memory, and the next byte is the option ROM's entry point (also known as its "entry offset"). If the ROM has a valid checksum, the BIOS transfers control to the entry address, which in a normal BIOS extension ROM should be the beginning of the extension's initialization routine.
At this point, the extension ROM code takes over, typically testing and initializing the hardware it controls and registering interrupt vectors for use by post-boot applications. It may use BIOS services (including those provided by previously initialized option ROMs) to provide a user configuration interface, to display diagnostic information, or to do anything else that it requires. It is possible that an option ROM will not return to BIOS, pre-empting the BIOS's boot sequence altogether.
An option ROM should normally return to the BIOS after completing its initialization process. Once (and if) an option ROM returns, the BIOS continues searching for more option ROMs, calling each as it is found, until the entire option ROM area in the memory space has been scanned.
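A sketch of the scan described above, written in C and applied to a file holding a dump of the 0xC0000–0xEFFFF region (the file name and the use of a dump rather than live memory are assumptions of the example), could look like this:

```c
#include <stdio.h>
#include <stdlib.h>

#define REGION_BASE 0xC0000UL   /* physical address the dump starts at          */
#define REGION_SIZE 0x30000UL   /* 0xC0000 up to (not including) 0xF0000        */

int main(int argc, char **argv)
{
    FILE *f = argc > 1 ? fopen(argv[1], "rb") : NULL;
    if (!f) { fprintf(stderr, "usage: %s upper-memory.dump\n", argv[0]); return 1; }

    unsigned char *mem = malloc(REGION_SIZE);
    if (!mem) { fclose(f); return 1; }
    size_t size = fread(mem, 1, REGION_SIZE, f);
    fclose(f);

    for (size_t off = 0; off + 2 < size; off += 2048) {      /* 2 KiB boundaries */
        if (mem[off] != 0x55 || mem[off + 1] != 0xAA)
            continue;                                         /* no ROM signature */
        size_t rom_len = (size_t)mem[off + 2] * 512;          /* length byte, 512-byte units */
        if (rom_len == 0 || off + rom_len > size)
            continue;
        unsigned char sum = 0;
        for (size_t i = 0; i < rom_len; i++)                  /* 8-bit checksum must be zero */
            sum += mem[off + i];
        printf("option ROM at %05lX, %zu bytes, checksum %s\n",
               (unsigned long)(REGION_BASE + off), rom_len, sum == 0 ? "OK" : "bad");
    }
    free(mem);
    return 0;
}
```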
Physical placement
Option ROMs normally reside on adapter cards. However, the original PC, and perhaps also the PC XT, have a spare ROM socket on the motherboard (the "system board" in IBM's terms) into which an option ROM can be inserted, and the four ROMs that contain the BASIC interpreter can also be removed and replaced with custom ROMs which can be option ROMs. The IBM PCjr is unique among PCs in having two ROM cartridge slots on the front. Cartridges in these slots map into the same region of the upper memory area used for option ROMs, and the cartridges can contain option ROM modules that the BIOS would recognize. The cartridges can also contain other types of ROM modules, such as BASIC programs, that are handled differently. One PCjr cartridge can contain several ROM modules of different types, possibly stored together in one ROM chip.
Operating system services
The BIOS ROM is customized to the particular manufacturer's hardware, allowing low-level services (such as reading a keystroke or writing a sector of data to diskette) to be provided in a standardized way to programs, including operating systems. For example, an IBM PC might have either a monochrome or a color display adapter (using different display memory addresses and hardware), but a single, standard, BIOS system call may be invoked to display a character at a specified position on the screen in text mode or graphics mode.
The BIOS provides a small library of basic input/output functions to operate peripherals (such as the keyboard, rudimentary text and graphics display functions and so forth). When using MS-DOS, BIOS services could be accessed by an application program (or by MS-DOS) by executing an INT 13h interrupt instruction to access disk functions, or by executing one of a number of other documented BIOS interrupt calls to access video display, keyboard, cassette, and other device functions.
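For illustration, a real-mode DOS program built with a 16-bit compiler such as Turbo C (this sketch will not build or run on a modern protected-mode system) could invoke the INT 13h disk service through the compiler's int86x helper roughly as follows:

```c
#include <dos.h>
#include <stdio.h>

int main(void)
{
    static unsigned char buf[512];   /* destination for one sector */
    union REGS r;
    struct SREGS s;

    segread(&s);            /* fill SREGS with the current segment registers */
    r.h.ah = 0x02;          /* INT 13h function 02h: read sectors            */
    r.h.al = 1;             /* sector count                                  */
    r.h.ch = 0;             /* cylinder 0                                    */
    r.h.cl = 1;             /* sector 1 (sector numbers start at 1)          */
    r.h.dh = 0;             /* head 0                                        */
    r.h.dl = 0x00;          /* drive 00h = first floppy                      */
    s.es   = FP_SEG(buf);   /* ES:BX points at the buffer                    */
    r.x.bx = FP_OFF(buf);

    int86x(0x13, &r, &r, &s);

    if (r.x.cflag)          /* the BIOS sets the carry flag on error         */
        printf("INT 13h failed, status %02Xh\n", r.h.ah);
    else
        printf("first byte of the boot sector: %02Xh\n", buf[0]);
    return 0;
}
```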
Operating systems and executive software that are designed to supersede this basic firmware functionality provide replacement software interfaces to application software. Applications can also provide these services to themselves. This began even in the 1980s under MS-DOS, when programmers observed that using the BIOS video services for graphics display was very slow. To increase the speed of screen output, many programs bypassed the BIOS and programmed the video display hardware directly. Other graphics programmers, particularly but not exclusively in the demoscene, observed that there were technical capabilities of the PC display adapters that were not supported by the IBM BIOS and could not be taken advantage of without circumventing it. Since the AT-compatible BIOS ran in Intel real mode, operating systems that ran in protected mode on 286 and later processors required hardware device drivers compatible with protected mode operation to replace BIOS services.
In modern PCs running modern operating systems (such as Windows and Linux) the BIOS interrupt calls are used only during booting and initial loading of operating systems. Before the operating system's first graphical screen is displayed, input and output are typically handled through BIOS. A boot menu such as the textual menu of Windows, which allows users to choose an operating system to boot, to boot into the safe mode, or to use the last known good configuration, is displayed through BIOS and receives keyboard input through BIOS.
Many modern PCs can still boot and run legacy operating systems such as MS-DOS or DR-DOS that rely heavily on BIOS for their console and disk I/O, providing that the system has a BIOS, or a CSM-capable UEFI firmware.
Processor microcode updates
Intel processors have had reprogrammable microcode since the P6 microarchitecture, and AMD processors since the K7 microarchitecture. The BIOS contains patches to the processor microcode that fix errors in the initial processor microcode; the microcode is loaded into the processor's SRAM, so reprogramming is not persistent and the update must be loaded each time the system is powered up. Without reprogrammable microcode, an expensive processor swap would be required; for example, the Pentium FDIV bug became an expensive fiasco for Intel as it required a product recall because the original Pentium processor's defective microcode could not be reprogrammed. Operating systems can also update the main processor's microcode.
Identification
Some BIOSes contain a software licensing description table (SLIC), a digital signature placed inside the BIOS by the original equipment manufacturer (OEM), for example Dell. The SLIC is inserted into the ACPI data table and contains no active code.
Computer manufacturers that distribute OEM versions of Microsoft Windows and Microsoft application software can use the SLIC to authenticate licensing to the OEM Windows Installation disk and system recovery disc containing Windows software. Systems with a SLIC can be preactivated with an OEM product key, and they verify an XML formatted OEM certificate against the SLIC in the BIOS as a means of self-activating (see System Locked Preinstallation, SLP). If a user performs a fresh install of Windows, they will need to have possession of both the OEM key (either SLP or COA) and the digital certificate for their SLIC in order to bypass activation. This can be achieved if the user performs a restore using a pre-customised image provided by the OEM. Power users can copy the necessary certificate files from the OEM image, decode the SLP product key, then perform SLP activation manually.
Overclocking
Some BIOS implementations allow overclocking, an action in which the CPU is adjusted to a higher clock rate than its manufacturer rating for guaranteed capability. Overclocking may, however, seriously compromise system reliability in insufficiently cooled computers and generally shorten component lifespan. Overclocking, when incorrectly performed, may also cause components to overheat so quickly that they mechanically destroy themselves.
Modern use
Some older operating systems, for example MS-DOS, rely on the BIOS to carry out most input/output tasks within the PC.
Calling real mode BIOS services directly is inefficient for protected mode (and long mode) operating systems. BIOS interrupt calls are not used by modern multitasking operating systems after they initially load.
In the 1990s, the BIOS provided some protected mode interfaces for Microsoft Windows and Unix-like operating systems, such as Advanced Power Management (APM), Plug and Play BIOS, Desktop Management Interface (DMI), VESA BIOS Extensions (VBE), e820 and the MultiProcessor Specification (MPS). Starting in the 2000s, most BIOSes provide ACPI, SMBIOS, VBE and e820 interfaces for modern operating systems.
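As one concrete example, each entry returned by the e820 memory-map interface mentioned above is commonly laid out as a 20-byte record; the struct below is an illustrative sketch with field names of the author's choosing, following the widely documented convention:

```c
#include <stdint.h>

/* One entry of the memory map reported through INT 15h, EAX=E820h. */
struct e820_entry {
    uint64_t base;     /* physical start address of the range            */
    uint64_t length;   /* size of the range in bytes                     */
    uint32_t type;     /* commonly: 1 = usable RAM, 2 = reserved,        */
                       /*           3 = ACPI reclaimable, 4 = ACPI NVS,  */
                       /*           5 = bad memory                       */
} __attribute__((packed));
```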
After operating systems load, the System Management Mode code is still running in SMRAM. Since 2010, BIOS technology has been in a transitional process toward UEFI.
Configuration
Setup utility
Historically, the BIOS in the IBM PC and XT had no built-in user interface. The BIOS versions in earlier PCs (XT-class) were not software configurable; instead, users set the options via DIP switches on the motherboard. Later computers, including all IBM-compatibles with 80286 CPUs, had a battery-backed nonvolatile BIOS memory (CMOS RAM chip) that held BIOS settings. These settings, such as video-adapter type, memory size, and hard-disk parameters, could only be configured by running a configuration program from a disk, not built into the ROM. A special "reference diskette" was inserted in an IBM AT to configure settings such as memory size.
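This battery-backed CMOS RAM sits behind the classic index/data port pair 70h/71h. A minimal sketch, assuming a Linux/x86 host run with root privileges, reads the real-time-clock seconds register (a conventional location; vendor-specific BIOS settings live at vendor-specific offsets):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>

int main(void)
{
    if (ioperm(0x70, 2, 1) != 0) {        /* request access to ports 70h and 71h */
        perror("ioperm (requires root)");
        return EXIT_FAILURE;
    }
    outb(0x00, 0x70);                      /* select CMOS/RTC register 00h (seconds) */
    unsigned char seconds = inb(0x71);     /* read it; most RTCs report BCD          */
    printf("RTC seconds register: %02x\n", seconds);
    return 0;
}
```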
Early BIOS versions did not have passwords or boot-device selection options. The BIOS was hard-coded to boot from the first floppy drive, or, if that failed, the first hard disk. Access control in early AT-class machines was by a physical keylock switch (which was not hard to defeat if the computer case could be opened). Anyone who could switch on the computer could boot it.
Later, 386-class computers started integrating the BIOS setup utility in the ROM itself, alongside the BIOS code; these computers usually boot into the BIOS setup utility if a certain key or key combination is pressed, otherwise the BIOS POST and boot process are executed.
A modern BIOS setup utility has a text user interface (TUI) or graphical user interface (GUI) accessed by pressing a certain key on the keyboard when the PC starts. Usually, the key is advertised for a short time during early startup, for example "Press DEL to enter Setup". The actual key depends on the specific hardware. Features present in the BIOS setup utility typically include:
Configuring, enabling and disabling the hardware components
Setting the system time
Setting the boot order
Setting various passwords, such as a password for securing access to the BIOS user interface and preventing malicious users from booting the system from unauthorized portable storage devices, or a password for booting the system
Hardware monitoring
A modern BIOS setup screen often features a PC Health Status or a Hardware Monitoring tab, which directly interfaces with a Hardware Monitor chip of the mainboard. This makes it possible to monitor CPU and chassis temperature, the voltage provided by the power supply unit, as well as monitor and control the speed of the fans connected to the motherboard.
Once the system is booted, hardware monitoring and computer fan control is normally done directly by the Hardware Monitor chip itself, which can be a separate chip, interfaced through I2C or SMBus, or come as a part of a Super I/O solution, interfaced through Industry Standard Architecture (ISA) or Low Pin Count (LPC). Some operating systems, like NetBSD with envsys and OpenBSD with sysctl hw.sensors, feature integrated interfacing with hardware monitors.
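For comparison, though not mentioned in the text, the Linux kernel exposes the same mainboard Hardware Monitor chips through its hwmon sysfs interface; the path below is a placeholder, since the hwmon number and sensor names vary from board to board, and temperatures are reported in millidegrees Celsius:

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical path; the actual hwmon index and sensor differ per board. */
    FILE *f = fopen("/sys/class/hwmon/hwmon0/temp1_input", "r");
    long millideg;

    if (f && fscanf(f, "%ld", &millideg) == 1)
        printf("sensor temp1: %.1f degC\n", millideg / 1000.0);
    else
        perror("hwmon read");
    if (f)
        fclose(f);
    return 0;
}
```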
However, in some circumstances, the BIOS also provides the underlying information about hardware monitoring through ACPI, in which case, the operating system may be using ACPI to perform hardware monitoring.
Reprogramming
In modern PCs the BIOS is stored in rewritable EEPROM or NOR flash memory, allowing the contents to be replaced and modified. This rewriting of the contents is sometimes termed flashing. It can be done by a special program, usually provided by the system's manufacturer, or at POST, with a BIOS image in a hard drive or USB flash drive. A file containing such contents is sometimes termed "a BIOS image". A BIOS might be reflashed in order to upgrade to a newer version to fix bugs or provide improved performance or to support newer hardware.
Hardware
The original IBM PC BIOS (and cassette BASIC) was stored on mask-programmed read-only memory (ROM) chips in sockets on the motherboard. ROMs could be replaced, but not altered, by users. To allow for updates, many compatible computers used re-programmable BIOS memory devices such as EPROM, EEPROM and later flash memory (usually NOR flash) devices. According to Robert Braver, the president of the BIOS manufacturer Micro Firmware, Flash BIOS chips became common around 1995 because the electrically erasable PROM (EEPROM) chips are cheaper and easier to program than standard ultraviolet erasable PROM (EPROM) chips. Flash chips are programmed (and re-programmed) in-circuit, while EPROM chips need to be removed from the motherboard for re-programming. BIOS versions are upgraded to take advantage of newer versions of hardware and to correct bugs in previous revisions of BIOSes.
Beginning with the IBM AT, PCs supported a hardware clock settable through BIOS. It had a century bit which allowed for manually changing the century when the year 2000 happened. Most BIOS revisions created in 1995 and nearly all BIOS revisions in 1997 supported the year 2000 by setting the century bit automatically when the clock rolled past midnight, 31 December 1999.
The first flash chips were attached to the ISA bus. Starting in 1998, the BIOS flash moved to the LPC bus, following a new standard implementation known as "firmware hub" (FWH). In 2005, the BIOS flash memory moved to the SPI bus.
The size of the BIOS, and the capacity of the ROM, EEPROM, or other media it may be stored on, has increased over time as new features have been added to the code; BIOS versions now exist with sizes up to 32 megabytes. For contrast, the original IBM PC BIOS was contained in an 8 KB mask ROM. Some modern motherboards include even larger NAND flash memory ICs on board, capable of storing whole compact operating systems such as some Linux distributions. For example, some ASUS notebooks included Splashtop OS embedded into their NAND flash memory ICs. However, the idea of including an operating system along with BIOS in the ROM of a PC is not new; in the 1980s, Microsoft offered a ROM option for MS-DOS, and it was included in the ROMs of some PC clones such as the Tandy 1000 HX.
Another type of firmware chip was found on the IBM PC AT and early compatibles. In the AT, the keyboard interface was controlled by a microcontroller with its own programmable memory. On the IBM AT, that was a 40-pin socketed device, while some manufacturers used an EPROM version of this chip which resembled an EPROM. This controller was also assigned the A20 gate function to manage memory above the one-megabyte range; occasionally an upgrade of this "keyboard BIOS" was necessary to take advantage of software that could use upper memory.
The BIOS may contain components such as the Memory Reference Code (MRC), which is responsible for the memory initialization (e.g. SPD and memory timings initialization).
A modern BIOS also includes Intel Management Engine or AMD Platform Security Processor firmware.
Vendors and products
IBM published the entire listings of the BIOS for its original PC, PC XT, PC AT, and other contemporary PC models, in an appendix of the IBM PC Technical Reference Manual for each machine type. The effect of the publication of the BIOS listings is that anyone can see exactly what a definitive BIOS does and how it does it.
In May 1984 Phoenix Software Associates released its first ROM-BIOS, which enabled OEMs to build essentially fully compatible clones without having to reverse-engineer the IBM PC BIOS themselves, as Compaq had done for the Portable, helping fuel the growth in the PC-compatibles industry and sales of non-IBM versions of DOS. The first American Megatrends (AMI) BIOS was released in 1986.
New standards grafted onto the BIOS are usually without complete public documentation or any BIOS listings. As a result, it is not as easy to learn the intimate details about the many non-IBM additions to BIOS as about the core BIOS services.
Many PC motherboard suppliers licensed the BIOS "core" and toolkit from a commercial third party, known as an "independent BIOS vendor" or IBV. The motherboard manufacturer then customized this BIOS to suit its own hardware. For this reason, updated BIOSes are normally obtained directly from the motherboard manufacturer. Major IBVs have included American Megatrends (AMI), Insyde Software, Phoenix Technologies, and Byosoft. Microid Research and Award Software were acquired by Phoenix Technologies in 1998; Phoenix later phased out the Award brand name. General Software, which was also acquired by Phoenix in 2007, sold BIOS for embedded systems based on Intel processors.
Open-source BIOS firmware
The open-source community has increased its efforts to replace proprietary BIOSes and their future incarnations with open-source counterparts. Open Firmware was an early attempt to create an open standard for boot firmware. It was initially endorsed by the IEEE as its IEEE 1275-1994 standard, which was withdrawn in 2005. Later examples include the libreboot, coreboot and OpenBIOS/Open Firmware projects. AMD provided product specifications for some chipsets, and Google is sponsoring the project. Motherboard manufacturer Tyan offers coreboot next to the standard BIOS with their Opteron line of motherboards.
Security
EEPROM and Flash memory chips are advantageous because they can be easily updated by the user; it is customary for hardware manufacturers to issue BIOS updates to upgrade their products, improve compatibility and remove bugs. However, this advantage carries the risk that an improperly executed or aborted BIOS update could render the computer or device unusable. To avoid these situations, more recent BIOSes use a "boot block", a portion of the BIOS which runs first and must be updated separately. This code verifies that the rest of the BIOS is intact (using hash checksums or other methods) before transferring control to it. If the boot block detects any corruption in the main BIOS, it will typically warn the user that a recovery process must be initiated by booting from removable media (floppy, CD or USB flash drive) so the user can try flashing the BIOS again. Some motherboards have a backup BIOS (sometimes referred to as DualBIOS boards) to recover from BIOS corruptions.
There are at least five known viruses that attack the BIOS, two of which were for demonstration purposes. The first one found in the wild was Mebromi, targeting Chinese users.
The first BIOS virus was BIOS Meningitis, which, instead of erasing BIOS chips, infected them. BIOS Meningitis was relatively harmless compared to a virus like CIH.
The second BIOS virus was CIH, also known as the "Chernobyl Virus", which was able to erase flash ROM BIOS content on compatible chipsets. CIH appeared in mid-1998 and became active in April 1999. Often, infected computers could no longer boot, and people had to remove the flash ROM IC from the motherboard and reprogram it. CIH targeted the then-widespread Intel i430TX motherboard chipset and took advantage of the fact that the Windows 9x operating systems, also widespread at the time, allowed direct hardware access to all programs.
Modern systems are not vulnerable to CIH because of the variety of chipsets in use, which are incompatible with the Intel i430TX chipset, and the variety of other flash ROM IC types. There is also extra protection from accidental BIOS rewrites in the form of boot blocks which are protected from accidental overwrite, and of dual- and quad-BIOS systems which may, in the event of a crash, fall back to a backup BIOS. In addition, all modern operating systems, such as FreeBSD, Linux, macOS and Windows NT-based systems like Windows 2000, Windows XP and newer, use a hardware abstraction layer and do not allow user-mode programs direct hardware access.
As a result, as of 2008, CIH has become essentially harmless, at worst causing annoyance by infecting executable files and triggering antivirus software. Other BIOS viruses remain possible, however; since most Windows home users without Windows Vista/7's UAC run all applications with administrative privileges, a modern CIH-like virus could in principle still gain access to hardware without first using an exploit. The operating system OpenBSD prevents all users from having this access and the grsecurity patch for the Linux kernel also prevents this direct hardware access by default, the difference being an attacker requiring a much more difficult kernel level exploit or reboot of the machine.
The third BIOS virus was a technique presented by John Heasman, principal security consultant for UK-based Next-Generation Security Software. In 2006, at the Black Hat Security Conference, he showed how to elevate privileges and read physical memory, using malicious procedures that replaced normal ACPI functions stored in flash memory.
The fourth BIOS virus was a technique called "Persistent BIOS infection." It appeared in 2009 at the CanSecWest Security Conference in Vancouver, and at the SyScan Security Conference in Singapore. Researchers Anibal Sacco and Alfredo Ortega, from Core Security Technologies, demonstrated how to insert malicious code into the decompression routines in the BIOS, allowing for nearly full control of the PC at start-up, even before the operating system is booted. The proof-of-concept does not exploit a flaw in the BIOS implementation, but only involves the normal BIOS flashing procedures. Thus, it requires physical access to the machine, or for the user to be root. Despite these requirements, Ortega underlined the profound implications of his and Sacco's discovery: "We can patch a driver to drop a fully working rootkit. We even have a little code that can remove or disable antivirus."
Mebromi is a trojan which targets computers with AwardBIOS, Microsoft Windows, and antivirus software from two Chinese companies: Rising Antivirus and Jiangmin KV Antivirus. Mebromi installs a rootkit which infects the master boot record.
In a December 2013 interview with 60 Minutes, Deborah Plunkett, Information Assurance Director for the US National Security Agency, claimed the NSA had uncovered and thwarted a possible BIOS attack by a foreign nation state, targeting the US financial system. The program cited anonymous sources alleging it was a Chinese plot. However, follow-up articles in The Guardian, The Atlantic, Wired and The Register refuted the NSA's claims.
Newer Intel platforms have Intel Boot Guard (IBG) technology enabled. At startup this technology checks the BIOS digital signature against a public key fused into the PCH, and end users cannot disable this function.
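Conceptually, Boot Guard-style verification is a signature check against a key the end user cannot replace. The sketch below models only that idea: it relies on the third-party Python cryptography package (assumed to be installed) and uses an Ed25519 key pair as a stand-in, so it does not reflect Intel's actual algorithms, fuse layout, or boot flow.

```python
# Requires the third-party "cryptography" package (assumption: it is installed).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for the vendor's signing key; the matching public key plays the role
# of the key anchored in the platform at manufacturing time (simulated here).
_vendor_key = Ed25519PrivateKey.generate()
FUSED_PUBLIC_KEY = _vendor_key.public_key()

def sign_firmware(image: bytes) -> bytes:
    """Vendor-side step: produce a signature shipped alongside the BIOS image."""
    return _vendor_key.sign(image)

def boot_guard_check(image: bytes, signature: bytes) -> bool:
    """Platform-side step: accept the image only if the signature verifies."""
    try:
        FUSED_PUBLIC_KEY.verify(signature, image)
        return True
    except InvalidSignature:
        return False

firmware = b"\xf4" * 4096                               # placeholder BIOS image
sig = sign_firmware(firmware)
print(boot_guard_check(firmware, sig))                  # True
print(boot_guard_check(firmware + b"tampered", sig))    # False
```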
Alternatives and successors
Unified Extensible Firmware Interface (UEFI) supplements the BIOS in many new machines. Initially written for the Intel Itanium architecture, UEFI is now available for x86 and Arm platforms; the specification development is driven by the Unified EFI Forum, an industry Special Interest Group. EFI booting is supported only in Microsoft Windows versions that support GPT, in the Linux kernel 2.6.1 and later, and in macOS on Intel-based Macs. New PC hardware now predominantly ships with UEFI firmware. The architecture of the rootkit safeguard can also prevent the system from running the user's own software changes, which makes UEFI controversial as a legacy BIOS replacement in the open hardware community. Windows 11 requires UEFI to boot.
Other alternatives to the functionality of the "Legacy BIOS" in the x86 world include coreboot and libreboot.
Some servers and workstations use a platform-independent Open Firmware (IEEE-1275) based on the Forth programming language; it is included with Sun's SPARC computers, IBM's RS/6000 line, and other PowerPC systems such as the CHRP motherboards, along with the x86-based OLPC XO-1.
Since at least 2015, Apple has removed legacy BIOS support from MacBook Pro computers; the BIOS utility no longer supports the legacy option, and prints "Legacy mode not supported on this system". In 2017, Intel announced that it would remove legacy BIOS support by 2020. Since 2019, new Intel platform OEM PCs no longer support the legacy option.
See also
Double boot
Extended System Configuration Data (ESCD)
Input/Output Control System
Advanced Configuration and Power Interface (ACPI)
Ralf Brown's Interrupt List (RBIL) – interrupts, calls, interfaces, data structures, memory and port addresses, and processor opcodes for the x86 architecture
System Management BIOS (SMBIOS)
Unified Extensible Firmware Interface (UEFI)
Notes
References
Further reading
BIOS Disassembly Ninjutsu Uncovered, 1st edition, a freely available book in PDF format
More Power To Firmware, free bonus chapter to the Mac OS X Internals: A Systems Approach book
External links
CP/M technology
DOS technology
Windows technology
|
https://en.wikipedia.org/wiki/Bakelite
|
Bakelite, formally Polyoxybenzylmethyleneglycolanhydride, is a thermosetting phenol formaldehyde resin, formed from a condensation reaction of phenol with formaldehyde. The first plastic made from synthetic components, it was developed by Leo Baekeland in Yonkers, New York, in 1907, and patented on December 7, 1909.
Because of its electrical nonconductivity and heat-resistant properties, it became a great commercial success. It was used in electrical insulators, radio and telephone casings, and such diverse products as kitchenware, jewelry, pipe stems, children's toys, and firearms. The "retro" appeal of old Bakelite products has made them collectible.
The creation of a synthetic plastic was revolutionary for the chemical industry, which at the time made most of its income from cloth dyes and explosives. Bakelite's commercial success inspired the industry to develop other synthetic plastics. As the world's first commercial synthetic plastic, Bakelite was named a National Historic Chemical Landmark by the American Chemical Society.
History
Bakelite was produced for the first time in 1872 by Adolf von Baeyer, though its use as a commercial product was not considered at the time.
Leo Baekeland was already wealthy due to his invention of Velox photographic paper when he began to investigate the reactions of phenol and formaldehyde in his home laboratory. Chemists had begun to recognize that many natural resins and fibers were polymers. Baekeland's initial intent was to find a replacement for shellac, a material in limited supply because it was made naturally from the secretion of lac insects (specifically Kerria lacca). He produced a soluble phenol-formaldehyde shellac called "Novolak", but it was not a market success, even though it is still used to this day (e.g., as a photoresist).
He then began experimenting on strengthening wood by impregnating it with a synthetic resin rather than coating it. By controlling the pressure and temperature applied to phenol and formaldehyde, he produced a hard moldable material that he named Bakelite, after himself. It was the first synthetic thermosetting plastic produced, and Baekeland speculated on "the thousand and one ... articles" it could be used to make. He considered the possibilities of using a wide variety of filling materials, including cotton, powdered bronze, and slate dust, but was most successful with wood and asbestos fibers, though asbestos was gradually abandoned by all manufacturers due to stricter environmental laws.
Baekeland filed a substantial number of related patents. Bakelite, his "method of making insoluble products of phenol and formaldehyde," was filed on July 13, 1907 and granted on December 7, 1909. He also filed for patent protection in other countries, including Belgium, Canada, Denmark, Hungary, Japan, Mexico, Russia and Spain. He announced his invention at a meeting of the American Chemical Society on February 5, 1909.
Baekeland started semi-commercial production of his new material in his home laboratory, marketing it as a material for electrical insulators. In the summer of 1909 he licensed the continental European rights to Rütger AG. The subsidiary formed at that time, Bakelite AG, was the first to produce Bakelite on an industrial scale.
By 1910, Baekeland was producing enough material in the US to justify expansion. He formed the General Bakelite Company of Perth Amboy, NJ as a U.S. company to manufacture and market his new industrial material, and made overseas connections to produce it in other countries.
The Bakelite Company produced "transparent" cast resin (which did not include filler) for a small market during the 1910s and 1920s. Blocks or rods of cast resin, also known as "artificial amber", were machined and carved to create items such as pipe stems, cigarette holders, and jewelry. However, the demand for molded plastics led the company to concentrate on molding rather than cast solid resins.
The Bakelite Corporation was formed in 1922 after patent litigation favorable to Baekeland, from a merger of three companies: Baekeland's General Bakelite Company; the Condensite Company, founded by J. W. Aylesworth; and the Redmanol Chemical Products Company, founded by Lawrence V. Redman. Under director of advertising and public relations Allan Brown, who came to Bakelite from Condensite, Bakelite was aggressively marketed as "the material of a thousand uses". A filing for a trademark featuring the letter B above the mathematical symbol for infinity was made August 25, 1925, and claimed the mark was in use as of December 1, 1924. A wide variety of uses were listed in their trademark applications.
The first issue of Plastics magazine, October 1925, featured Bakelite on its cover and included the article "Bakelite – What It Is" by Allan Brown. The range of colors that were available included "black, brown, red, yellow, green, gray, blue, and blends of two or more of these". The article emphasized that Bakelite came in various forms. "Bakelite is manufactured in several forms to suit varying requirements. In all these forms the fundamental basis is the initial Bakelite resin. This variety includes clear material, for jewelry, smokers' articles, etc.; cement, for sealing electric light bulbs in metal bases; varnishes, for impregnating electric coils, etc.; lacquers, for protecting the surface of hardware; enamels, for giving resistive coating to industrial equipment; Laminated Bakelite, used for silent gears and insulation; and molding material, from which are formed innumerable articles of utility and beauty. The molding material is prepared ordinarily by the impregnation of cellulose substances with the initial 'uncured' resin." In a 1925 report, the United States Tariff Commission hailed the commercial manufacture of synthetic phenolic resin as "distinctly an American achievement", and noted that "the publication of figures, however, would be a virtual disclosure of the production of an individual company".
In England, Bakelite Limited, a merger of three British phenol formaldehyde resin suppliers (Damard Lacquer Company Limited of Birmingham, Mouldensite Limited of Darley Dale and Redmanol Chemical Products Company of London), was formed in 1926. A new Bakelite factory opened in Tyseley, Birmingham, around 1928. It was the "heart of Bakelite production in the UK" until it closed in 1987.
A factory to produce phenolic resins and precursors opened in Bound Brook, New Jersey in 1931.
In 1939, the companies were acquired by Union Carbide and Carbon Corporation.
In 2005 German Bakelite manufacturer Bakelite AG was acquired by Borden Chemical of Columbus, Ohio, now Hexion Inc.
In addition to the original Bakelite material, these companies eventually made a wide range of other products, many of which were marketed under the brand name "Bakelite plastics". These included other types of cast phenolic resins similar to Catalin, and urea-formaldehyde resins, which could be made in brighter colors than polyoxybenzylmethyleneglycolanhydride.
Once Baekeland's heat and pressure patents expired in 1927, Bakelite Corporation faced serious competition from other companies. Because molded Bakelite incorporated fillers to give it strength, it tended to be made in concealing dark colors. In 1927, beads, bangles, and earrings were produced by the Catalin company, through a different process which enabled them to introduce 15 new colors. Translucent jewelry, poker chips and other items made of phenolic resins were introduced in the 1930s or 1940s by the Catalin company under the Prystal name. The creation of marbled phenolic resins may also be attributable to the Catalin company.
Synthesis
Making Bakelite is a multi-stage process. It begins with the heating of phenol and formaldehyde in the presence of a catalyst such as hydrochloric acid, zinc chloride, or the base ammonia. This creates a liquid condensation product, referred to as Bakelite A, which is soluble in alcohol, acetone, or additional phenol. Heated further, the product becomes partially soluble and can still be softened by heat. Sustained heating results in an "insoluble hard gum". However, the high temperatures required to create this tend to cause violent foaming of the mixture when done at standard atmospheric pressure, which results in the cooled material being porous and breakable. Baekeland's innovative step was to put his "last condensation product" into an egg-shaped "Bakelizer". By heating it under pressure, at about , Baekeland was able to suppress the foaming that would otherwise occur. The resulting substance is extremely hard and both infusible and insoluble.
Compression molding
Molded Bakelite forms in a condensation reaction of phenol and formaldehyde, with wood flour or asbestos fiber as a filler, under high pressure and heat in a time frame of a few minutes of curing. The result is a hard plastic material. Asbestos was gradually abandoned as filler because many countries banned the production of asbestos.
Bakelite's molding process had a number of advantages. Bakelite resin could be provided either as powder or as preformed partially cured slugs, increasing the speed of the casting. Thermosetting resins such as Bakelite required heat and pressure during the molding cycle but could be removed from the molding process without being cooled, again making the molding process faster. Also, because of the smooth polished surface that resulted, Bakelite objects required less finishing. Millions of parts could be duplicated quickly and relatively cheaply.
Phenolic sheet
Another market for Bakelite resin was the creation of phenolic sheet materials. A phenolic sheet is a hard, dense material made by applying heat and pressure to layers of paper or glass cloth impregnated with synthetic resin. Paper, cotton fabrics, synthetic fabrics, glass fabrics, and unwoven fabrics are all possible materials used in lamination. When heat and pressure are applied, polymerization transforms the layers into thermosetting industrial laminated plastic.
Bakelite phenolic sheet is produced in many commercial grades and with various additives to meet diverse mechanical, electrical, and thermal requirements. Some common types include:
Paper reinforced NEMA XX per MIL-I-24768 PBG. Normal electrical applications, moderate mechanical strength, continuous operating temperature of .
Canvas-reinforced NEMA C per MIL-I-24768 TYPE FBM NEMA CE per MIL-I-24768 TYPE FBG. Good mechanical and impact strength with a continuous operating temperature of .
Linen-reinforced NEMA L per MIL-I-24768 TYPE FBI NEMA LE per MIL-I-24768 TYPE FEI. Good mechanical and electrical strength. Recommended for intricate high-strength parts. Continuous operating temperature .
Nylon reinforced NEMA N-1 per MIL-I-24768 TYPE NPG. Superior electrical properties under humid conditions, fungus resistant, continuous operating temperature of .
Properties
Bakelite has a number of important properties. It can be molded very quickly, decreasing production time. Moldings are smooth, retain their shape, and are resistant to heat, scratches, and destructive solvents. It is also resistant to electricity, and prized for its low conductivity. It is not flexible.
Phenolic resin products may swell slightly under conditions of extreme humidity or perpetual dampness. When rubbed or burnt, Bakelite has a distinctive, acrid, sickly-sweet or fishy odor.
Applications and uses
The characteristics of Bakelite made it particularly suitable as a molding compound, an adhesive or binding agent, a varnish, and a protective coating. Bakelite was particularly suitable for the emerging electrical and automobile industries because of its extraordinarily high resistance to electricity, heat, and chemical action.
The earliest commercial use of Bakelite in the electrical industry was the molding of tiny insulating bushings, made in 1908 for the Weston Electrical Instrument Corporation by Richard W. Seabury of the Boonton Rubber Company. Bakelite was soon used for non-conducting parts of telephones, radios, and other electrical devices, including bases and sockets for light bulbs and electron tubes (vacuum tubes), supports for any type of electrical components, automobile distributor caps, and other insulators. By 1912, it was being used to make billiard balls, since its elasticity and the sound it made were similar to ivory.
During World War I, Bakelite was used widely, particularly in electrical systems. Important projects included the Liberty airplane engine, the wireless telephone and radio phone, and the use of micarta-bakelite propellers in the NBS-1 bomber and the DH-4B aeroplane.
Bakelite's availability and ease and speed of molding helped to lower the costs and increase product availability so that telephones and radios became common household consumer goods. It was also very important to the developing automobile industry. It was soon found in myriad other consumer products ranging from pipe stems and buttons to saxophone mouthpieces, cameras, early machine guns, and appliance casings. Bakelite was also very commonly used in making molded grip panels on handguns, as furniture for submachine guns and machineguns, the classic Bakelite magazines for Kalashnikov rifles, as well as numerous knife handles and "scales" through the first half of the 20th century.
Beginning in the 1920s, it became a popular material for jewelry. Designer Coco Chanel included Bakelite bracelets in her costume jewelry collections. Designers such as Elsa Schiaparelli used it for jewelry and also for specially designed dress buttons. Later, Diana Vreeland, editor of Vogue, was enthusiastic about Bakelite. Bakelite was also used to make presentation boxes for Breitling watches.
By 1930, designer Paul T. Frankl considered Bakelite a "Materia Nova", "expressive of our own age". By the 1930s, Bakelite was used for game pieces like chessmen, poker chips, dominoes and mahjong sets. Kitchenware made with Bakelite, including canisters and tableware, was promoted for its resistance to heat and to chipping. In the mid-1930s, Northland marketed a line of skis with a black "Ebonite" base, a coating of Bakelite. By 1935, it was used in solid-body electric guitars. Performers such as Jerry Byrd loved the tone of Bakelite guitars but found them difficult to keep in tune.
Charles Plimpton patented BAYKO in 1933 and rushed out his first construction sets for Christmas 1934. He called the toy Bayko Light Constructional Sets, the words "Bayko Light" being a pun on the word "Bakelite."
During World War II, Bakelite was used in a variety of wartime equipment including pilots' goggles and field telephones. It was also used for patriotic wartime jewelry. In 1943, the thermosetting phenolic resin was even considered for the manufacture of coins, due to a shortage of traditional material. Bakelite and other non-metal materials were tested for usage for the one cent coin in the US before the Mint settled on zinc-coated steel.
During World War II, Bakelite buttons were part of British uniforms. These included brown buttons for the Army and black buttons for the RAF.
In 1947, Dutch art forger Han van Meegeren was convicted of forgery, after chemist and curator Paul B. Coremans proved that a purported Vermeer contained Bakelite, which van Meegeren had used as a paint hardener.
Bakelite was sometimes used in the pistol grip, hand guard, and buttstock of firearms. The AKM and some early AK-74 rifles are frequently mistakenly identified as using Bakelite, but most were made with AG-4S.
By the late 1940s, newer materials were superseding Bakelite in many areas. Phenolics are less frequently used in general consumer products today due to their cost and complexity of production and their brittle nature. They still appear in some applications where their specific properties are required, such as small precision-shaped components, molded disc brake cylinders, saucepan handles, electrical plugs, switches and parts for electrical irons, and printed circuit boards, as well as in the area of inexpensive board and tabletop games produced in China, Hong Kong, and India. Items such as billiard balls, dominoes and pieces for board games such as chess, checkers, and backgammon are constructed of Bakelite for its look, durability, fine polish, weight, and sound. Common dice are sometimes made of Bakelite for weight and sound, but the majority are made of a thermoplastic polymer such as acrylonitrile butadiene styrene (ABS).
Bakelite continues to be used for wire insulation, brake pads and related automotive components, and industrial electrical-related applications. Bakelite stock is still manufactured and produced in sheet, rod, and tube form for industrial applications in the electronics, power generation, and aerospace industries, and under a variety of commercial brand names.
Phenolic resins have been commonly used in ablative heat shields. Soviet heatshields for ICBM warheads and spacecraft reentry consisted of asbestos textolite, impregnated with Bakelite. Bakelite is also used in the mounting of metal samples in metallography.
Collectible status
Bakelite items, particularly jewelry and radios, have become popular collectibles. The term Bakelite is sometimes used in the resale market to indicate various types of early plastics, including Catalin and Faturan, which may be brightly colored, as well as items made of Bakelite material.
Patents
The United States Patent and Trademark Office granted Baekeland a patent for a "Method of making insoluble products of phenol and formaldehyde" on December 7, 1909. Producing hard, compact, insoluble, and infusible condensation products of phenols and formaldehyde marked the beginning of the modern plastics industry.
Similar plastics
Catalin is also a phenolic resin, similar to Bakelite, but contains different mineral fillers that allow the production of light colors.
Condensites are similar thermoset materials having much the same properties, characteristics, and uses.
Crystalate is an early plastic.
Faturan is a phenolic resin, also similar to Bakelite, that turns red over time, regardless of its original color.
Galalith is an early plastic derived from milk products.
Micarta is an early composite insulating plate that used Bakelite as a binding agent. It was developed in 1910 by Westinghouse Elec. & Mfg Co.
Novotext is a brand name for cotton textile-phenolic resin.
See also
Bakelite Museum, Williton, Somerset, England
Ericsson DBH 1001 telephone
Prodema, a construction material with a bakelite core.
References
External links
All Things Bakelite: The Age of Plastic—trailer for a film by John Maher, with additional video & resources
Amsterdam Bakelite Collection
Large Bakelite Collection
Bakelite: The Material of a Thousand Uses
Virtual Bakelite Museum of Ghent 1907–2007
1909 introductions
Belgian inventions
Composite materials
Dielectrics
Phenol formaldehyde resins
Plastic brands
Thermosetting plastics
|
https://en.wikipedia.org/wiki/Brick
|
A brick is a type of construction material used to build walls, pavements and other elements in masonry construction. Properly, the term brick denotes a unit primarily composed of clay, but is now also used informally to denote units made of other materials or other chemically cured construction blocks. Bricks can be joined using mortar, adhesives or by interlocking. Bricks are usually produced at brickworks in numerous classes, types, materials, and sizes which vary with region, and are produced in bulk quantities.
Block is a similar term referring to a rectangular building unit composed of clay or concrete, but is usually larger than a brick. Lightweight bricks (also called lightweight blocks) are made from expanded clay aggregate.
Fired bricks are one of the longest-lasting and strongest building materials, sometimes referred to as artificial stone, and have been used since circa 4000 BC. Air-dried bricks, also known as mud-bricks, have a history older than fired bricks, and have an additional ingredient of a mechanical binder such as straw.
Bricks are laid in courses and numerous patterns known as bonds, collectively known as brickwork, and may be laid in various kinds of mortar to hold the bricks together to make a durable structure.
History
Middle East and South Asia
The earliest bricks were dried mud-bricks, meaning that they were formed from clay-bearing earth or mud and dried (usually in the sun) until they were strong enough for use. The oldest discovered bricks, originally made from shaped mud and dating before 7500 BC, were found at Tell Aswad, in the upper Tigris region and in southeast Anatolia close to Diyarbakir.
Mud-brick construction was used at Çatalhöyük, from c. 7,400 BC.
Mud-brick structures dating to c. 7,200 BC have been located in Jericho, in the Jordan Valley. These structures were built from some of the earliest bricks of standardized dimensions, measuring 400 × 150 × 100 mm.
Between 5000 and 4500 BC, fired brick was discovered in Mesopotamia. The standard brick sizes in Mesopotamia followed a general rule: the width of the dried or burned brick would be twice its thickness, and its length would be double its width.
The South Asian inhabitants of Mehrgarh also constructed air-dried mud-brick structures between 7000 and 3300 BC, as did the later ancient Indus Valley cities of Mohenjo-daro, Harappa, and Mehrgarh. Ceramic, or fired, brick was used as early as 3000 BC in early Indus Valley cities like Kalibangan.
In the middle of the third millennium BC, there was a rise in monumental baked brick architecture in Indus cities. Examples included the Great Bath at Mohenjo-daro, the fire altars of Kalibangan, and the granary of Harappa. There was a uniformity to the brick sizes throughout the Indus Valley region, conforming to a 1:2:4 ratio of thickness, width, and length. As the Indus civilization began its decline at the start of the second millennium BC, Harappans migrated east, spreading their knowledge of brickmaking technology. This led to the rise of cities like Pataliputra, Kausambi, and Ujjain, where there was an enormous demand for kiln-made bricks.
By 604 BC, bricks were the construction materials for architectural wonders such as the Hanging Gardens of Babylon, where glazed fired bricks were put into practice.
China
The earliest fired bricks appeared in Neolithic China around 4400 BC at Chengtoushan, a walled settlement of the Daxi culture. These bricks were made of red clay, fired on all sides to above 600 °C, and used as flooring for houses. By the Qujialing period (3300 BC), fired bricks were being used to pave roads and as building foundations at Chengtoushan.
According to Lukas Nickel, the use of ceramic pieces for protecting and decorating floors and walls dates back to 3000–2000 BC, and perhaps even earlier, at various cultural sites, but these elements should rather be classified as tiles. For a long time builders relied on wood, mud and rammed earth, while fired brick and mud-brick played no structural role in architecture. Proper brick construction, for erecting walls and vaults, finally emerged in the third century BC, when baked bricks of regular shape began to be employed for vaulting underground tombs. Hollow brick tomb chambers rose in popularity as builders were forced to adapt due to a lack of readily available wood or stone. The oldest extant brick building above ground is possibly Songyue Pagoda, dated to 523 AD.
By the end of the third century BC in China, both hollow and small bricks were available for use in building walls and ceilings. Fired bricks were first mass-produced during the construction of the tomb of China's first Emperor, Qin Shi Huangdi. The floors of the three pits of the terracotta army were paved with an estimated 230,000 bricks, with the majority measuring 28 × 14 × 7 cm, following a 4:2:1 ratio. The use of fired bricks in Chinese city walls first appeared in the Eastern Han dynasty (25–220 AD). Up until the Middle Ages, buildings in Central Asia were typically built with unbaked bricks; only from the ninth century CE onwards were buildings constructed entirely of fired bricks.
The carpenter's manual Yingzao Fashi, published in 1103 at the time of the Song dynasty, described the brick-making process and glazing techniques then in use. Using the 17th-century encyclopaedic text Tiangong Kaiwu, historian Timothy Brook outlined the brick production process of Ming dynasty China.
Europe
Early civilisations around the Mediterranean, including the Ancient Greeks and Romans, adopted the use of fired bricks. By the early first century CE, standardised fired bricks were being heavily produced in Rome. The Roman legions operated mobile kilns, and built large brick structures throughout the Roman Empire, stamping the bricks with the seal of the legion. The Romans used brick for walls, arches, forts, aqueducts, etc. Notable mentions of Roman brick structures are the Herculaneum gate of Pompeii and the baths of Caracalla.
During the Early Middle Ages the use of bricks in construction became popular in Northern Europe, after being introduced there from Northwestern Italy. An independent style of brick architecture, known as brick Gothic (similar to Gothic architecture) flourished in places that lacked indigenous sources of rocks. Examples of this architectural style can be found in modern-day Denmark, Germany, Poland, and Kaliningrad (former East Prussia).
This style evolved into the Brick Renaissance as the stylistic changes associated with the Italian Renaissance spread to northern Europe, leading to the adoption of Renaissance elements into brick building. Identifiable attributes included a low-pitched hipped or flat roof, symmetrical facade, round arch entrances and windows, columns and pilasters, and more.
A clear distinction between the two styles only developed at the transition to Baroque architecture. In Lübeck, for example, Brick Renaissance is clearly recognisable in buildings equipped with terracotta reliefs by the artist Statius von Düren, who was also active at Schwerin (Schwerin Castle) and Wismar (Fürstenhof).
Long-distance bulk transport of bricks and other construction equipment remained prohibitively expensive until the development of modern transportation infrastructure, with the construction of canal, roads, and railways.
Industrial era
Production of bricks increased massively with the onset of the Industrial Revolution and the rise in factory building in England. For reasons of speed and economy, bricks were increasingly preferred as building material to stone, even in areas where the stone was readily available. It was at this time in London that bright red brick was chosen for construction to make the buildings more visible in the heavy fog and to help prevent traffic accidents.
The transition from the traditional method of production known as hand-moulding to a mechanised form of mass-production slowly took place during the first half of the nineteenth century. Possibly the first successful brick-making machine was patented by Henry Clayton, employed at the Atlas Works in Middlesex, England, in 1855, and was capable of producing up to 25,000 bricks daily with minimal supervision. His mechanical apparatus soon achieved widespread attention after it was adopted for use by the South Eastern Railway Company for brick-making at their factory near Folkestone. The Bradley & Craven Ltd 'Stiff-Plastic Brickmaking Machine' was patented in 1853, apparently predating Clayton. Bradley & Craven went on to be a dominant manufacturer of brickmaking machinery. Predating both Clayton and Bradley & Craven Ltd. however was the brick making machine patented by Richard A. Ver Valen of Haverstraw, New York, in 1852.
At the end of the 19th century, the Hudson River region of New York State would become the world's largest brick manufacturing region, with 130 brickyards lining the shores of the Hudson River from Mechanicsville to Haverstraw and employing 8,000 people. At its peak, about 1 billion bricks were produced a year, with many being sent to New York City for use in its construction industry.
The demand for high office building construction at the turn of the 20th century led to a much greater use of cast and wrought iron, and later, steel and concrete. The use of brick for skyscraper construction severely limited the size of the building – the Monadnock Building, built in 1896 in Chicago, required exceptionally thick walls to maintain the structural integrity of its 17 storeys.
Following pioneering work in the 1950s at the Swiss Federal Institute of Technology and the Building Research Establishment in Watford, UK, the use of improved masonry for the construction of tall structures up to 18 storeys high was made viable. However, the use of brick has largely remained restricted to small to medium-sized buildings, as steel and concrete remain superior materials for high-rise construction.
Bricks are often made of shale because it easily splits into thin layers.
Methods of manufacture
Four basic types of brick are un-fired, fired, chemically set bricks, and compressed earth blocks. Each type is manufactured differently for various purposes.
Mud-brick
Unfired bricks, also known as mud-bricks, are made from a mixture of silt, clay, sand and other earth materials like gravel and stone, combined with tempers and binding agents such as chopped straw, grasses, tree bark, or dung. Since these bricks are made up of natural materials and only require heat from the Sun to bake, mud-bricks have a relatively low embodied energy and carbon footprint.
The ingredients are first harvested and combined, with clay content ranging from 30% to 70%. The mixture is broken up with hoes or adzes and stirred with water to form a homogeneous blend. Next, the tempers and binding agents are added at a ratio of roughly one part straw to five parts earth, to reduce weight and to reinforce the brick by helping reduce shrinkage. Additional clay can be added to reduce the need for straw, which lessens the likelihood of insects deteriorating the organic material of the bricks and subsequently weakening the structure. These ingredients are thoroughly mixed together by hand or by treading and are then left to ferment for about a day.
The mix is then kneaded with water and molded into rectangular prisms of the desired size. Bricks are lined up and left to sun-dry for three days on each side. After these six days, the bricks continue drying until required for use. Typically, longer drying times are preferred, but the average is eight to nine days from the initial stages to application in structures. Unfired bricks could be made in the spring months and left to dry over the summer for use in the fall. Mud-bricks are commonly employed in arid environments, which allow for adequate air drying.
Fired brick
Fired bricks are burned in a kiln which makes them durable. Modern, fired, clay bricks are formed in one of three processes – soft mud, dry press, or extruded. Depending on the country, either the extruded or soft mud method is the most common, since they are the most economical.
Clay and shale are the raw ingredients in the recipe for a fired brick. They are the product of thousands of years of decomposition and erosion of rocks, such as pegmatite and granite, leading to a material that has properties of being highly chemically stable and inert. Within the clays and shales are the materials of aluminosilicate (pure clay), free silica (quartz), and decomposed rock.
One proposed optimal mix is:
Silica (sand) – 50% to 60% by weight
Alumina (clay) – 20% to 30% by weight
Lime – 2 to 5% by weight
Iron oxide – ≤ 7% by weight
Magnesia – less than 1% by weight
Shaping methods
Three main methods are used for shaping the raw materials into bricks to be fired:
Molded bricks – These bricks start with raw clay, preferably in a mix with 25–30% sand to reduce shrinkage. The clay is first ground and mixed with water to the desired consistency. The clay is then pressed into steel moulds with a hydraulic press. The shaped clay is then fired ("burned") at to achieve strength.
Dry-pressed bricks – The dry-press method is similar to the soft-mud moulded method, but starts with a much thicker clay mix, so it forms more accurate, sharper-edged bricks. The greater force in pressing and the longer burn make this method more expensive.
Extruded bricks – For extruded bricks the clay is mixed with 10–15% water (stiff extrusion) or 20–25% water (soft extrusion) in a pugmill. This mixture is forced through a die to create a long cable of material of the desired width and depth. This mass is then cut into bricks of the desired length by a wall of wires. Most structural bricks are made by this method as it produces hard, dense bricks, and suitable dies can produce perforations as well. The introduction of such holes reduces the volume of clay needed, and hence the cost. Hollow bricks are lighter and easier to handle, and have different thermal properties from solid bricks. The cut bricks are hardened by drying for 20 to 40 hours at before being fired. The heat for drying is often waste heat from the kiln.
Kilns
In many modern brickworks, bricks are usually fired in a continuously fired tunnel kiln, in which the bricks are fired as they move slowly through the kiln on conveyors, rails, or kiln cars, which achieves a more consistent brick product. The bricks often have lime, ash, and organic matter added, which accelerates the burning process.
The other major kiln type is the Bull's Trench Kiln (BTK), based on a design developed by British engineer W. Bull in the late 19th century.
An oval or circular trench is dug, wide, deep, and in circumference. A tall exhaust chimney is constructed in the centre. Half or more of the trench is filled with "green" (unfired) bricks which are stacked in an open lattice pattern to allow airflow. The lattice is capped with a roofing layer of finished brick.
In operation, new green bricks, along with roofing bricks, are stacked at one end of the brick pile. Historically, a stack of unfired bricks covered for protection from the weather was called a "hack". Cooled finished bricks are removed from the other end for transport to their destinations. In the middle, the brick workers create a firing zone by dropping fuel (coal, wood, oil, debris, etc.) through access holes in the roof above the trench. The constant source of fuel may be grown on woodlots.
The advantage of the BTK design is a much greater energy efficiency compared with clamp or scove kilns. Sheet metal or boards are used to route the airflow through the brick lattice so that fresh air flows first through the recently burned bricks, heating the air, then through the active burning zone. The air continues through the green brick zone (pre-heating and drying the bricks), and finally out the chimney, where the rising gases create suction that pulls air through the system. The reuse of heated air yields savings in fuel cost.
As with the rail process, the BTK process is continuous. A half-dozen labourers working around the clock can fire approximately 15,000–25,000 bricks a day. Unlike the rail process, in the BTK process the bricks do not move. Instead, the locations at which the bricks are loaded, fired, and unloaded gradually rotate through the trench.
Influences on colour
The colour of fired clay bricks is influenced by the chemical and mineral content of the raw materials, the firing temperature, and the atmosphere in the kiln. For example, pink bricks are the result of a high iron content, white or yellow bricks have a higher lime content. Most bricks burn to various red hues; as the temperature is increased the colour moves through dark red, purple, and then to brown or grey at around . The names of bricks may reflect their origin and colour, such as London stock brick and Cambridgeshire White. Brick tinting may be performed to change the colour of bricks to blend-in areas of brickwork with the surrounding masonry.
An impervious and ornamental surface may be laid on brick either by salt glazing, in which salt is added during the burning process, or by the use of a slip, which is a glaze material into which the bricks are dipped. Subsequent reheating in the kiln fuses the slip into a glazed surface integral with the brick base.
Chemically set bricks
Chemically set bricks are not fired but may have the curing process accelerated by the application of heat and pressure in an autoclave.
Calcium-silicate bricks
Calcium-silicate bricks are also called sandlime or flintlime bricks, depending on their ingredients. Rather than being made with clay they are made with lime binding the silicate material. The raw materials for calcium-silicate bricks include lime mixed in a proportion of about 1 to 10 with sand, quartz, crushed flint, or crushed siliceous rock together with mineral colourants. The materials are mixed and left until the lime is completely hydrated; the mixture is then pressed into moulds and cured in an autoclave for three to fourteen hours to speed the chemical hardening. The finished bricks are very accurate and uniform, although the sharp arrises need careful handling to avoid damage to brick and bricklayer. The bricks can be made in a variety of colours; white, black, buff, and grey-blues are common, and pastel shades can be achieved. This type of brick is common in Sweden as well as Russia and other post-Soviet countries, especially in houses built or renovated in the 1970s. A version known as fly ash bricks, manufactured using fly ash, lime, and gypsum (known as the FaL-G process) are common in South Asia. Calcium-silicate bricks are also manufactured in Canada and the United States, and meet the criteria set forth in ASTM C73 – 10 Standard Specification for Calcium Silicate Brick (Sand-Lime Brick).
Concrete bricks
Bricks formed from concrete are usually termed blocks or concrete masonry units, and are typically pale grey. They are made from a dry, small-aggregate concrete which is formed in steel moulds by vibration and compaction in either an "egglayer" or static machine. The finished blocks are cured, rather than fired, using low-pressure steam. Concrete bricks and blocks are manufactured in a wide range of shapes, sizes and face treatments – a number of which simulate the appearance of clay bricks.
Concrete bricks are available in many colours and as an engineering brick made with sulfate-resisting Portland cement or equivalent. When made with adequate amount of cement they are suitable for harsh environments such as wet conditions and retaining walls. They are made to standards BS 6073, EN 771-3 or ASTM C55. Concrete bricks contract or shrink so they need movement joints every 5 to 6 metres, but are similar to other bricks of similar density in thermal and sound resistance and fire resistance.
Compressed earth blocks
Compressed earth blocks are made mostly from slightly moistened local soils compressed with a mechanical hydraulic press or manual lever press. A small amount of a cement binder may be added, resulting in a stabilised compressed earth block.
Types
There are thousands of types of bricks that are named for their use, size, forming method, origin, quality, texture, and/or materials.
Categorized by manufacture method:
Extruded – made by being forced through an opening in a steel die, with a very consistent size and shape.
Wire-cut – cut to size after extrusion with a tensioned wire which may leave drag marks
Moulded – shaped in moulds rather than being extruded
Machine-moulded – clay is forced into moulds using pressure
Handmade – clay is forced into moulds by a person
Dry-pressed – similar to soft mud method, but starts with a much thicker clay mix and is compressed with great force.
Categorized by use:
Common or building – A brick not intended to be visible, used for internal structure
Face – A brick used on exterior surfaces to present a clean appearance
Hollow – not solid, the holes are less than 25% of the brick volume
Perforated – holes greater than 25% of the brick volume
Keyed – indentations in at least one face and end to be used with rendering and plastering
Paving – brick intended to be in ground contact as a walkway or roadway
Thin – brick with normal height and length but thin width to be used as a veneer
Specialized use bricks:
Chemically resistant – bricks made with resistance to chemical reactions
Acid brick – acid resistant bricks
Engineering – a type of hard, dense, brick used where strength, low water porosity or acid (flue gas) resistance are needed. Further classified as type A and type B based on their compressive strength
Accrington – a type of engineering brick from England
Fire or refractory – highly heat-resistant bricks
Clinker – a vitrified brick
Ceramic glazed – fire bricks with a decorative glazing
Bricks named for place of origin:
Chicago common brick – a soft brick made near Chicago, Illinois, with a range of colors, like buff yellow, salmon pink, or deep red
Cream City brick – a light yellow brick made in Milwaukee, Wisconsin
Dutch brick – a hard light coloured brick originally from the Netherlands
Fareham red brick – a type of construction brick
London stock brick – type of handmade brick which was used for the majority of building work in London and South East England until the growth in the use of machine-made bricks
Nanak Shahi bricks – a type of decorative brick in India
Roman brick – a long, flat brick typically used by the Romans
Staffordshire blue brick – a type of construction brick from England
Optimal dimensions, characteristics, and strength
For efficient handling and laying, bricks must be small enough and light enough to be picked up by the bricklayer using one hand (leaving the other hand free for the trowel). Bricks are usually laid flat, and as a result, the effective limit on the width of a brick is set by the distance which can conveniently be spanned between the thumb and fingers of one hand, normally about . In most cases, the length of a brick is twice its width plus the width of a mortar joint, about or slightly more. This allows bricks to be laid bonded in a structure which increases stability and strength (for an example, see the illustration of bricks laid in English bond, at the head of this article). The wall is built using alternating courses of stretchers, bricks laid longways, and headers, bricks laid crossways. The headers tie the wall together over its width. In fact, this wall is built in a variation of English bond called English cross bond where the successive layers of stretchers are displaced horizontally from each other by half a brick length. In true English bond, the perpendicular lines of the stretcher courses are in line with each other.
A bigger brick makes for a thicker (and thus more insulating) wall. Historically, this meant that bigger bricks were necessary in colder climates (see for instance the slightly larger size of the Russian brick in table below), while a smaller brick was adequate, and more economical, in warmer regions. A notable illustration of this correlation is the Green Gate in Gdansk; built in 1571 of imported Dutch brick, too small for the colder climate of Gdansk, it was notorious for being a chilly and drafty residence. Nowadays this is no longer an issue, as modern walls typically incorporate specialised insulation materials.
The correct brick for a job can be selected from a choice of colour, surface texture, density, weight, absorption, and pore structure, thermal characteristics, thermal and moisture movement, and fire resistance.
In England, the length and width of the common brick remained fairly constant from 1625 when the size was regulated by statute at 9 x x 3 inches (but see brick tax), but the depth has varied from about or smaller in earlier times to about more recently. In the United Kingdom, the usual size of a modern brick (from 1965) is , which, with a nominal mortar joint, forms a unit size of , for a ratio of 6:3:2.
In the United States, modern standard bricks are specified for various uses. The most commonly used is the modular brick, which has actual dimensions of × × inches (194 × 92 × 57 mm). With the standard inch mortar joint, this gives nominal dimensions of 8 x 4 x inches, which eases the calculation of the number of bricks in a given wall. The 2:1 ratio of modular bricks means that when they turn corners, a 1/2 running bond is formed without needing to cut the brick down or fill the gap with a cut brick; and the height of modular bricks means that a soldier course matches the height of three modular running courses, or one standard CMU course.
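Because nominal modular dimensions already include the mortar joint, estimating the bricks needed for a wall face reduces to dividing areas. The helper below is an illustrative sketch only: it assumes a nominal face of 8 inches long by 8/3 inches high (three courses per 8 inches of rise, consistent with the soldier-course relationship described above) and ignores openings, waste, and bond pattern.

```python
import math

NOMINAL_LENGTH_IN = 8.0        # nominal face length, mortar joint included (assumed)
NOMINAL_HEIGHT_IN = 8.0 / 3.0  # three courses rise 8 inches, so one course is 8/3 in (assumed)

def modular_bricks_needed(wall_length_in: float, wall_height_in: float) -> int:
    """Rough count of bricks for one wythe of wall face, nominal US modular sizing."""
    face_area = wall_length_in * wall_height_in
    brick_face = NOMINAL_LENGTH_IN * NOMINAL_HEIGHT_IN  # about 21.3 sq in per brick face
    return math.ceil(face_area / brick_face)

# Example: a 10 ft by 8 ft wall face (120 in by 96 in) needs about 540 bricks.
print(modular_bricks_needed(120, 96))  # 540
```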
Some brickmakers create innovative sizes and shapes for bricks used for plastering (and therefore not visible on the inside of the building) where their inherent mechanical properties are more important than their visual ones. These bricks are usually slightly larger, but not as large as blocks and offer the following advantages:
A slightly larger brick requires less mortar and handling (fewer bricks), which reduces cost
Their ribbed exterior aids plastering
More complex interior cavities allow improved insulation, while maintaining strength.
Blocks have a much greater range of sizes. Standard co-ordinating sizes in length and height (in mm) include 400×200, 450×150, 450×200, 450×225, 450×300, 600×150, 600×200, and 600×225; depths (work size, mm) include 60, 75, 90, 100, 115, 140, 150, 190, 200, 225, and 250. They are usable across this range as they are lighter than clay bricks. The density of solid clay bricks is around 2000 kg/m3: this is reduced by frogging, hollow bricks, and so on, but aerated autoclaved concrete, even as a solid brick, can have densities in the range of 450–850 kg/m3.
Bricks may also be classified as solid (less than 25% perforations by volume, although the brick may be "frogged," having indentations on one of the longer faces), perforated (containing a pattern of small holes through the brick, removing no more than 25% of the volume), cellular (containing a pattern of holes removing more than 20% of the volume, but closed on one face), or hollow (containing a pattern of large holes removing more than 25% of the brick's volume). Blocks may be solid, cellular or hollow.
The term "frog" can refer to the indentation or the implement used to make it. Modern brickmakers usually use plastic frogs but in the past they were made of wood.
The compressive strength of bricks produced in the United States ranges from about , varying according to the use to which the brick are to be put. In England clay bricks can have strengths of up to 100 MPa, although a common house brick is likely to show a range of 20–40 MPa.
Uses
Bricks are a versatile building material, able to participate in a wide variety of applications, including:
Structural walls, exterior and interior walls
Bearing and non-bearing sound proof partitions
The fireproofing of structural-steel members in the form of firewalls, party walls, enclosures and fire towers
Foundations for stucco
Chimneys and fireplaces
Porches and terraces
Outdoor steps, brick walks and paved floors
Swimming pools
In the United States, bricks have been used for both buildings and pavement. Examples of brick use in buildings can be seen in colonial era buildings and other notable structures around the country. Bricks have been used in paving roads and sidewalks especially during the late 19th century and early 20th century. The introduction of asphalt and concrete reduced the use of brick for paving, but they are still sometimes installed as a method of traffic calming or as a decorative surface in pedestrian precincts. For example, in the early 1900s, most of the streets in the city of Grand Rapids, Michigan, were paved with bricks. Today, there are only about 20 blocks of brick-paved streets remaining (totalling less than 0.5 percent of all the streets in the city limits). Much like in Grand Rapids, municipalities across the United States began replacing brick streets with inexpensive asphalt concrete by the mid-20th century.
In Northwest Europe, bricks have been used in construction for centuries. Until recently, almost all houses were built almost entirely from bricks. Although many houses are now built using a mixture of concrete blocks and other materials, many houses are skinned with a layer of bricks on the outside for aesthetic appeal.
Bricks in the metallurgy and glass industries are often used for lining furnaces, in particular refractory bricks such as silica, magnesia, chamotte and neutral (chromomagnesite) refractory bricks. This type of brick must have good thermal shock resistance, refractoriness under load, high melting point, and satisfactory porosity. There is a large refractory brick industry, especially in the United Kingdom, Japan, the United States, Belgium and the Netherlands.
Engineering bricks are used where strength, low water porosity or acid (flue gas) resistance are needed.
In the UK a red brick university is one founded in the late 19th or early 20th century. The term is used to refer to such institutions collectively to distinguish them from the older Oxbridge institutions, and refers to the use of bricks, as opposed to stone, in their buildings.
Colombian architect Rogelio Salmona was noted for his extensive use of red bricks in his buildings and for using natural shapes like spirals, radial geometry and curves in his designs.
Limitations
Starting in the 20th century, the use of brickwork declined in some areas due to concerns about earthquakes. Earthquakes such as the San Francisco earthquake of 1906 and the 1933 Long Beach earthquake revealed the weaknesses of unreinforced brick masonry in earthquake-prone areas. During seismic events, the mortar cracks and crumbles, so that the bricks are no longer held together. Brick masonry with steel reinforcement, which helps hold the masonry together during earthquakes, has been used to replace unreinforced bricks in many buildings. Retrofitting older unreinforced masonry structures has been mandated in many jurisdictions. However, similar to steel corrosion in reinforced concrete, rebar rusting will compromise the structural integrity of reinforced brick and ultimately limit the expected lifetime, so there is a trade-off between earthquake safety and longevity to a certain extent.
Gallery
See also
References
Further reading
Hudson, Kenneth (1972) Building Materials; chap. 3: Bricks and tiles. London: Longman; pp. 28–42
External links
Brick in 20th-Century Architecture
Brick Industry Association United States
Brick Development Association UK
Think Brick Australia
International Brick Collectors Association
Building materials
Masonry
Soil-based building materials
|
https://en.wikipedia.org/wiki/Blizzard
|
A blizzard is a severe snowstorm characterized by strong sustained winds and low visibility, lasting for a prolonged period of time—typically at least three or four hours. A ground blizzard is a weather condition where snow is not falling but loose snow on the ground is lifted and blown by strong winds. Blizzards can have an immense size and usually stretch to hundreds or thousands of kilometres.
Definition and etymology
In the United States, the National Weather Service defines a blizzard as a severe snow storm characterized by strong winds causing blowing snow that results in low visibilities. The difference between a blizzard and a snowstorm is the strength of the wind, not the amount of snow. To be a blizzard, a snow storm must have sustained winds or frequent gusts that are greater than or equal to with blowing or drifting snow which reduces visibility to or less and must last for a prolonged period of time—typically three hours or more.
Environment Canada defines a blizzard as a storm with wind speeds exceeding accompanied by visibility of or less, resulting from snowfall, blowing snow, or a combination of the two. These conditions must persist for a period of at least four hours for the storm to be classified as a blizzard, except north of the arctic tree line, where that threshold is raised to six hours.
The Australian Bureau of Meteorology describes a blizzard as "Violent and very cold wind which is laden with snow, some part, at least, of which has been raised from snow covered ground."
While severe cold and large amounts of drifting snow may accompany blizzards, they are not required. Blizzards can bring whiteout conditions, and can paralyze regions for days at a time, particularly where snowfall is unusual or rare.
A severe blizzard has winds over , near zero visibility, and temperatures of or lower. In Antarctica, blizzards are associated with winds spilling over the edge of the ice plateau at an average velocity of .
Ground blizzard refers to a weather condition where loose snow or ice on the ground is lifted and blown by strong winds. The primary difference between a ground blizzard and a regular blizzard is that in a ground blizzard no precipitation is produced at the time; rather, all the precipitation is already present in the form of snow or ice at the surface.
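Since the agency definitions above differ mainly in their numeric thresholds, the same check can be applied with either set of criteria. The sketch below is illustrative only; the threshold values are the commonly cited figures (roughly 56 km/h winds, 400 m visibility and 3 hours for the NWS, and 40 km/h, 400 m and 4 hours for Environment Canada) and are supplied here as assumptions rather than taken from the text.

```python
from dataclasses import dataclass

@dataclass
class Criteria:
    min_wind_kmh: float       # sustained wind or frequent gusts at or above this speed
    max_visibility_km: float  # visibility at or below this distance
    min_duration_h: float     # conditions must persist at least this long

# Threshold values below are assumptions based on commonly cited criteria,
# not figures reproduced in this article.
NWS = Criteria(min_wind_kmh=56.0, max_visibility_km=0.4, min_duration_h=3.0)            # ~35 mph, ~1/4 mile
ENVIRONMENT_CANADA = Criteria(min_wind_kmh=40.0, max_visibility_km=0.4, min_duration_h=4.0)

def is_blizzard(wind_kmh: float, visibility_km: float, duration_h: float,
                criteria: Criteria = NWS) -> bool:
    """True if the observed conditions meet the given blizzard criteria."""
    return (wind_kmh >= criteria.min_wind_kmh
            and visibility_km <= criteria.max_visibility_km
            and duration_h >= criteria.min_duration_h)

# A storm with 60 km/h winds, 300 m visibility, lasting 5 hours qualifies under both.
print(is_blizzard(60, 0.3, 5))                        # True
print(is_blizzard(60, 0.3, 5, ENVIRONMENT_CANADA))    # True
```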
The Oxford English Dictionary concludes the term blizzard is likely onomatopoeic, derived from the same sense as blow, blast, blister, and bluster; the first recorded use of it for weather dates to 1829, when it was defined as a "violent blow". It achieved its modern definition by 1859, when it was in use in the western United States. The term became common in the press during the harsh winter of 1880–81.
United States storm systems
In the United States, storm systems powerful enough to cause blizzards usually form when the jet stream dips far to the south, allowing cold, dry polar air from the north to clash with warm, humid air moving up from the south.
When cold, moist air from the Pacific Ocean moves eastward to the Rocky Mountains and the Great Plains, and warmer, moist air moves north from the Gulf of Mexico, all that is needed is a southward push of cold polar air to create potential blizzard conditions that may extend from the Texas Panhandle to the Great Lakes and Midwest. A blizzard may also form when a cold front and a warm front meet, with the blizzard developing along the boundary between them.
Another storm system occurs when a cold core low over the Hudson Bay area in Canada is displaced southward over southeastern Canada, the Great Lakes, and New England. When the rapidly moving cold front collides with warmer air coming north from the Gulf of Mexico, strong surface winds, significant cold air advection, and extensive wintry precipitation occur.
Low pressure systems moving out of the Rocky Mountains onto the Great Plains, a broad expanse of flat land, much of it covered in prairie, steppe and grassland, can cause thunderstorms and rain to the south and heavy snows and strong winds to the north. With few trees or other obstructions to reduce wind and blowing snow, this part of the country is particularly vulnerable to blizzards with very low temperatures and whiteout conditions. In a true whiteout there is no visible horizon. People can become lost in their own front yards, only steps from the door, and have to feel their way back. Motorists have to stop their cars where they are, as the road is impossible to see.
Nor'easter blizzards
A nor'easter is a macro-scale storm that occurs off the New England and Atlantic Canada coastlines. It gets its name from the direction the wind is coming from. The usage of the term in North America comes from the wind associated with many different types of storms, some of which can form in the North Atlantic Ocean and some of which form as far south as the Gulf of Mexico. The term is most often used in the coastal areas of New England and Atlantic Canada. This type of storm has characteristics similar to a hurricane. More specifically, it describes a low-pressure area whose center of rotation is just off the coast and whose leading winds in the left-forward quadrant rotate onto land from the northeast. High storm waves may sink ships at sea and cause coastal flooding and beach erosion. Notable nor'easters include the Great Blizzard of 1888, one of the worst blizzards in U.S. history. It dropped heavy snow and brought sustained gale-force winds that produced enormous snowdrifts. Railroads were shut down and people were confined to their houses for up to a week. It killed 400 people, mostly in New York.
Historic events
1972 Iran blizzard
The 1972 Iran blizzard, which caused 4,000 reported deaths, was the deadliest blizzard in recorded history. Dropping as much as 26 feet (7.9 m) of snow, it completely covered 200 villages. After a snowfall lasting nearly a week, an area the size of Wisconsin was entirely buried in snow.
2008 Afghanistan blizzard
The 2008 Afghanistan blizzard was a fierce blizzard that struck Afghanistan on 10 January 2008. Temperatures plunged, and heavy snow fell in the more mountainous regions, killing at least 926 people. It was the third deadliest blizzard in history. The weather also claimed more than 100,000 sheep and goats, and nearly 315,000 cattle.
The Snow Winter of 1880–1881
The winter of 1880–1881 is widely considered the most severe winter ever known in many parts of the United States.
The initial blizzard in October of 1880 brought snowfalls so deep that two-story homes had accumulations, as opposed to drifts, up to their second-floor windows. No one was prepared for deep snow so early in the winter. Farmers from North Dakota to Virginia were caught off guard, with fields unharvested, what grain had been harvested unmilled, and their suddenly all-important winter stocks of wood fuel only partially collected. By January, train service was almost entirely suspended from the region. Railroads hired scores of men to dig out the tracks, but as soon as they had finished shoveling a stretch of line, a new storm arrived, burying it again.
There were no winter thaws and on February 2, 1881, a second massive blizzard struck that lasted for nine days. In towns the streets were filled with solid drifts to the tops of the buildings and tunneling was necessary to move about. Homes and barns were completely covered, compelling farmers to construct fragile tunnels in order to feed their stock.
When the snow finally melted in late spring of 1881 huge sections of the plains experienced flooding. Massive ice jams clogged the Missouri River and when they broke the downstream areas were inundated. Most of the town of Yankton, in what is now South Dakota, was washed away when the river overflowed its banks after the thaw.
Novelization
Many children—and their parents—learned of "The Snow Winter" through the children's book The Long Winter by Laura Ingalls Wilder, in which the author tells of her family's efforts to survive. The snow arrived in October 1880 and blizzard followed blizzard throughout the winter and into March 1881, leaving many areas snowbound throughout the entire winter. Accurate details in Wilder's novel include the blizzards' frequency and the deep cold, the Chicago and North Western Railway stopping trains until the spring thaw because the snow made the tracks impassable, the near-starvation of the townspeople, and the courage of her future husband Almanzo and another man, Cap Garland, who ventured out on the open prairie in search of a cache of wheat that no one was even sure existed.
The Storm of the Century
The Storm of the Century, also known as the Great Blizzard of 1993, was a large cyclonic storm that formed over the Gulf of Mexico on March 12, 1993, and dissipated in the North Atlantic Ocean on March 15. It is unique for its intensity, massive size and wide-reaching effects. At its height, the storm stretched from Canada towards Central America, but its main impact was on the United States and Cuba. The cyclone moved through the Gulf of Mexico and then through the Eastern United States before moving into Canada. Areas as far south as northern Alabama and Georgia received a dusting of snow, and areas such as Birmingham, Alabama, received heavy snow along with hurricane-force wind gusts and record low barometric pressures. Between Louisiana and Cuba, hurricane-force winds produced high storm surges across northwestern Florida, which along with scattered tornadoes killed dozens of people. In the United States, the storm was responsible for the loss of electric power to over 10 million customers. It is purported to have been directly experienced by nearly 40 percent of the country's population at that time. A total of 310 people, including 10 from Cuba, perished during this storm. The storm cost $6 to $10 billion in damages.
List of blizzards
North America
1700 to 1799
The Great Snow of 1717 – a series of four snowstorms between February 27 and March 7, 1717. There were reports of about five feet of snow already on the ground when the first of the storms hit. By the end, there were about ten feet of snow, with some drifts burying houses entirely. In the colonial era, this storm made travel impossible until the snow simply melted.
Blizzard of 1765. March 24, 1765. Affected the area from Philadelphia to Massachusetts. High winds and heavy snowfall recorded in some areas.
Blizzard of 1772. "The Washington and Jefferson Snowstorm of 1772". January 26–29, 1772. One of the largest D.C. and Virginia area snowstorms ever recorded, with deep snow accumulations.
The "Hessian Storm of 1778". December 26, 1778. Severe blizzard with high winds, heavy snows and bitter cold extending from Pennsylvania to New England. Deep snow drifts were reported in Rhode Island. The storm was named for the Hessian troops stationed in Rhode Island during the Revolutionary War, who were stranded in the deep snow.
The Great Snow of 1786. December 4–10, 1786. Blizzard conditions and a succession of three harsh snowstorms produced deep snow from Pennsylvania to New England, reportedly of similar magnitude to the 1717 snowstorms.
The Long Storm of 1798. November 19–21, 1798. Heavy snowstorm produced snow from Maryland to Maine.
1800 to 1850
Blizzard of 1805. January 26–28, 1805. A cyclone brought a heavy snowstorm to New York City and New England. Snow fell continuously for two days, producing deep accumulations.
New York City Blizzard of 1811. December 23–24, 1811. Severe blizzard conditions reported on Long Island, in New York City, and southern New England. Strong winds and tides caused damage to shipping in harbor.
Luminous Blizzard of 1817. January 17, 1817. In Massachusetts and Vermont, a severe snowstorm was accompanied by frequent lightning and heavy thunder. St. Elmo's fire reportedly lit up trees, fence posts, house roofs, and even people. John Farrar, a professor at Harvard, recorded the event in his memoir in 1821.
Great Snowstorm of 1821. January 5–7, 1821. Extensive snowstorm and blizzard spread from Virginia to New England.
Winter of Deep Snow in 1830. December 29, 1830. A blizzard dumped deep snow on Kansas City and Illinois. The region experienced repeated storms through mid-February 1831.
"The Great Snowstorm of 1831" January 14–16, 1831. Produced snowfall over the widest geographic area of any storm, rivaled or exceeded only by the 1993 blizzard. The blizzard raged from Georgia to the Ohio Valley and all the way to Maine.
"The Big Snow of 1836" January 8–10, 1836. Produced heavy snowfall in interior New York, northern Pennsylvania, and western New England; Philadelphia and New York City also reported substantial snow.
1851 to 1900
Plains Blizzard of 1856. December 3–5, 1856. Severe blizzard-like storm raged for three days in Kansas and Iowa. Early pioneers suffered.
"The Cold Storm of 1857" January 18–19, 1857. Produced severe blizzard conditions from North Carolina to Maine. Heavy snowfalls reported in east coast cities.
Midwest Blizzard of 1864. January 1, 1864. Gale-force winds, driving snow, and low temperatures all struck simultaneously around Chicago, Wisconsin and Minnesota.
Plains Blizzard of 1873. January 7, 1873. Severe blizzard struck the Great Plains. Many pioneers from the east were unprepared for the storm and perished in Minnesota and Iowa.
Great Plains Easter Blizzard of 1873. April 13, 1873
Seattle Blizzard of 1880. January 6, 1880. The Seattle area's greatest snowstorm to date; a large amount of snow fell around the town. Many barns collapsed and all transportation halted.
The Hard Winter of 1880-81. October 15, 1880. A blizzard in eastern South Dakota marked the beginning of this historically difficult season. Laura Ingalls Wilder's book The Long Winter details the effects of this season on early settlers.
In the three-year winter period from December 1885 to March 1888, the Great Plains and Eastern United States suffered a series of the worst blizzards in the nation's history, ending with the Schoolhouse Blizzard and the Great Blizzard of 1888. The massive eruption of the volcano Krakatoa in Indonesia late in August 1883 is a suspected cause of these huge blizzards during these years. The clouds of ash it emitted continued to circulate around the world for many years, weather patterns remained chaotic, and temperatures did not return to normal until 1888. Record rainfall was experienced in Southern California from July 1883 to June 1884. The Krakatoa eruption injected an unusually large amount of sulfur dioxide (SO2) gas high into the stratosphere, which reflected sunlight and helped cool the planet over the next few years until the suspended atmospheric sulfur fell to the ground.
Plains Blizzard of late 1885. In Kansas, the heavy snows of late 1885 piled into high drifts.
Kansas Blizzard of 1886. First week of January 1886. An estimated 80 percent of the cattle in the state were reported frozen to death from the cold and snow.
January 1886 Blizzard. January 9, 1886. Same system as Kansas 1886 Blizzard that traveled eastward.
Great Plains Blizzards of late 1886. On November 13, 1886, it reportedly began to snow and did not stop for a month in the Great Plains region.
Great Plains Blizzard of 1887. January 9–11, 1887. A reported 72-hour blizzard that covered parts of the Great Plains in deep snow. Winds howled and temperatures plummeted. Many cattle that were not killed by the cold soon died from starvation. When spring arrived, millions of the animals were dead, with around 90 percent of the open range's cattle rotting where they fell. Those present reported carcasses as far as the eye could see. Dead cattle clogged up rivers and spoiled drinking water. Many ranchers went bankrupt and others simply called it quits and moved back east. The "Great Die-Up" caused by the blizzard effectively ended the romantic era of the great Plains cattle drives.
Schoolhouse Blizzard of 1888 North American Great Plains. January 12–13, 1888. What made the storm so deadly was the timing (during work and school hours), the suddenness, and the brief spell of warmer weather that preceded it. In addition, the very strong wind fields behind the cold front and the powdery nature of the snow reduced visibilities on the open plains to zero. People ventured from the safety of their homes to do chores, go to town, attend school, or simply enjoy the relative warmth of the day. As a result, thousands of people—including many schoolchildren—got caught in the blizzard.
Great Blizzard of March 1888 March 11–14, 1888. One of the most severe recorded blizzards in the history of the United States. On March 12, an unexpected nor'easter hit New England and the mid-Atlantic, dropping heavy snow over the space of three days. New York City experienced its heaviest snowfall recorded up to that time, all street railcars were stranded, and the storm helped spur the creation of the New York City subway system. Snowdrifts reached up to the second story of some buildings. Some 400 people died in this blizzard, including many sailors aboard vessels that were beset by gale-force winds and turbulent seas.
Great Blizzard of 1899 February 11–14, 1899. An extremely unusual blizzard in that it reached into the far southern states of the US. It hit in February, and the area around Washington, D.C., experienced 51 straight hours of snowfall. The port of New Orleans was totally iced over; revelers participating in the New Orleans Mardi Gras had to wait for the parade routes to be shoveled free of snow. Concurrent with this blizzard was extremely cold arctic air. Many city and state record low temperatures date back to this event, including all-time records for locations in the Midwest and South; state record lows were set in Nebraska, Ohio, and Louisiana, and Florida dipped below zero Fahrenheit.
1901 to 1939
Great Lakes Storm of 1913 November 7–10, 1913. "The White Hurricane" of 1913 was the deadliest and most destructive natural disaster ever to hit the Great Lakes Basin in the Midwestern United States and the Canadian province of Ontario. It produced hurricane-force wind gusts, towering waves, and whiteout snowsqualls. It killed more than 250 people, destroyed 19 ships, and stranded 19 others.
Blizzard of 1918. January 11, 1918. Vast blizzard-like storm moved through Great Lakes and Ohio Valley.
1920 North Dakota blizzard March 15–18, 1920
Knickerbocker Storm January 27–28, 1922
1940 to 1949
Armistice Day Blizzard of 1940 November 10–12, 1940. Took place in the Midwest region of the United States on Armistice Day. This "Panhandle hook" winter storm cut a path through the middle of the country from Kansas to Michigan. The morning of the storm was unseasonably warm, but by mid-afternoon conditions quickly deteriorated into a raging blizzard that would last into the next day. A total of 145 deaths were blamed on the storm, almost a third of them duck hunters who had taken time off to take advantage of the ideal hunting conditions. Weather forecasters had not predicted the severity of the oncoming storm, and as a result the hunters were not dressed for cold weather. When the storm began, many hunters took shelter on small islands in the Mississippi River, and the winds and waves overcame their encampments. Some became stranded on the islands and froze to death in the single-digit temperatures that moved in overnight. Others tried to make it to shore and drowned.
North American blizzard of 1947 December 25–26, 1947. A record-breaking snowfall that began on Christmas Day and brought the Northeast United States to a standstill. Central Park in New York City recorded a record amount of snow in 24 hours, with deeper snow in the suburbs. It was not accompanied by high winds, but the snow fell steadily, with deep drifts forming. Seventy-seven deaths were attributed to the blizzard.
The Blizzard of 1949 - The first blizzard started on Sunday, January 2, 1949, and lasted for three days. It was followed by two more months of blizzard after blizzard, with high winds and bitter cold. Deep drifts isolated southeast Wyoming, northern Colorado, western South Dakota and western Nebraska for weeks. Railroad tracks and roads were buried under massive drifts. Hundreds of people who had been traveling on trains were stranded. Motorists who had set out on January 2 found their way to private farm homes in rural areas and to hotels and other buildings in towns; some dwellings were so crowded that there was not enough room for everyone to sleep at once. It would be weeks before they were plowed out. The federal government quickly responded with aid, airlifting food and hay for livestock. The total rescue effort involved numerous volunteers and local agencies plus at least ten major state and federal agencies, from the U.S. Army to the National Park Service. Private businesses, including railroad and oil companies, also lent manpower and heavy equipment to the work of plowing out. The official death toll was 76 people and one million livestock.
1950 to 1959
Great Appalachian Storm of November 1950 November 24–30, 1950
March 1958 Nor'easter blizzard March 18–21, 1958.
The Mount Shasta California Snowstorm of 1959 – The storm dumped an estimated 189 inches (4.8 m) of snow on Mount Shasta. The bulk of the snow fell on unpopulated mountainous areas, barely disrupting the residents of the Mount Shasta area. The amount of snow recorded is the largest snowfall from a single storm in North America.
1960 to 1969
March 1960 Nor'easter blizzard March 2–5, 1960
December 1960 Nor'easter blizzard December 12–14, 1960. Strong wind gusts reported.
March 1962 Nor'easter Great March Storm of 1962 – Ash Wednesday. North Carolina and Virginia blizzards. Struck during the spring high-tide season and remained nearly stationary for almost five days, causing significant damage along the eastern coast; Assateague Island was under water, and the storm dumped deep snow in Virginia.
North American blizzard of 1966 January 27–31, 1966
Chicago Blizzard of 1967 January 26–27, 1967
February 1969 nor'easter February 8–10, 1969
March 1969 Nor'easter blizzard March 9, 1969
December 1969 Nor'easter blizzard December 25–28, 1969.
1970 to 1979
The Great Storm of 1975, known as the "Super Bowl Blizzard" or "Minnesota's Storm of the Century". January 9–12, 1975. Extremely low wind chills and deep snowfalls recorded.
Groundhog Day gale of 1976 February 2, 1976
Buffalo Blizzard of 1977 January 28 – February 1, 1977. There were several feet of packed snow already on the ground, and the blizzard brought with it enough snow to reach Buffalo's record for the most snow in one season.
Great Blizzard of 1978, also called the "Cleveland Superbomb". January 25–27, 1978. One of the worst snowstorms the Midwest has ever seen. Strong wind gusts caused snowdrifts to reach great heights in some areas, making roadways impassable. The storm reached maximum intensity over southern Ontario, Canada.
Northeastern United States Blizzard of 1978 – February 6–7, 1978. Just one week after the Cleveland Superbomb blizzard, New England was hit with its most severe blizzard in the 90 years since 1888.
Chicago Blizzard of 1979 January 13–14, 1979
1980 to 1989
February 1987 Nor'easter blizzard February 22–24, 1987
1990 to 1999
1991 Halloween blizzard Upper Mid-West US, October 31 – November 3, 1991
December 1992 Nor'easter blizzard December 10–12, 1992
1993 Storm of the Century March 12–15, 1993. While the southern and eastern U.S. and Cuba received the brunt of this massive blizzard, the Storm of the Century impacted a wider area than any in recorded history.
February 1995 Nor'easter blizzard February 3–6, 1995
Blizzard of 1996 January 6–10, 1996
April Fool's Day Blizzard March 31 – April 1, 1997. US East Coast
1997 Western Plains winter storms October 24–26, 1997
Mid West Blizzard of 1999 January 2–4, 1999
2000 to 2009
January 25, 2000 Southeastern United States winter storm January 25, 2000. North Carolina and Virginia
December 2000 Nor'easter blizzard December 27–31, 2000
North American blizzard of 2003 February 14–19, 2003 (Presidents' Day Storm II)
December 2003 Nor'easter blizzard December 6–7, 2003
North American blizzard of 2005 January 20–23, 2005
North American blizzard of 2006 February 11–13, 2006
Early winter 2006 North American storm complex Late November 2006
Colorado Holiday Blizzards (2006–07) December 20–29, 2006 Colorado
February 2007 North America blizzard February 12–20, 2007
January 2008 North American storm complex January, 2008 West Coast US
North American blizzard of 2008 March 6–10, 2008
2009 Midwest Blizzard 6–8 December 2009, a bomb cyclogenesis event that also affected parts of Canada
North American blizzard of 2009 December 16–20, 2009
2009 North American Christmas blizzard December 22–28, 2009
2010 to 2019
February 5–6, 2010 North American blizzard February 5–6, 2010. Referred to at the time as "Snowmageddon", it was a Category 3 ("major") nor'easter and severe weather event.
February 9–10, 2010 North American blizzard February 9–10, 2010
February 25–27, 2010 North American blizzard February 25–27, 2010
October 2010 North American storm complex October 23–28, 2010
December 2010 North American blizzard December 26–29, 2010
January 31 – February 2, 2011 North American blizzard January 31 – February 2, 2011. Groundhog Day Blizzard of 2011
2011 Halloween nor'easter October 28 – Nov 1, 2011
Hurricane Sandy October 29–31, 2012. West Virginia, western North Carolina, and southwest Pennsylvania received heavy snowfall and blizzard conditions from this hurricane
November 2012 nor'easter November 7–10, 2012
December 17–22, 2012 North American blizzard December 17–22, 2012
Late December 2012 North American storm complex December 25–28, 2012
February 2013 nor'easter February 7–20, 2013
February 2013 Great Plains blizzard February 19 – March 6, 2013
March 2013 nor'easter March 6, 2013
October 2013 North American storm complex October 3–5, 2013
Buffalo, NY blizzard of 2014. Buffalo got several feet of snow during November 18–20, 2014.
January 2015 North American blizzard January 26–27, 2015
Late December 2015 North American storm complex December 26–27, 2015. One of the most notorious blizzards ever reported in New Mexico and West Texas. It had strong sustained winds and continuous snowfall that lasted over 30 hours. Dozens of vehicles were stranded on small county roads in the areas of Hobbs, Roswell, and Carlsbad, New Mexico. Strong sustained winds destroyed various mobile homes.
January 2016 United States blizzard January 20–23, 2016
February 2016 North American storm complex February 1–8, 2016
February 2017 North American blizzard February 6–11, 2017
March 2017 North American blizzard March 9–16, 2017
Early January 2018 nor’easter January 3–6, 2018
March 2019 North American blizzard March 8–16, 2019
April 2019 North American blizzard April 10–14, 2019
2020 to present
December 5–6, 2020 nor'easter December 5–6, 2020
January 31 – February 3, 2021 nor'easter January 31 – February 3, 2021
February 13–17, 2021 North American winter storm February 13–17, 2021
March 2021 North American blizzard March 11–14, 2021
January 2022 North American blizzard January 27–30, 2022
Late December 2022 North American winter storm December 21–26, 2022
Canada
The Eastern Canadian Blizzard of 1971 – Dumped a foot and a half (45.7 cm) of snow on Montreal and even more elsewhere in the region. The blizzard caused the cancellation of a Montreal Canadiens hockey game for the first time since 1918.
Saskatchewan blizzard of 2007 – January 10, 2007 Canada
United Kingdom
Great Frost of 1709
Blizzard of January 1881
Winter of 1894–95 in the United Kingdom
Winter of 1946–1947 in the United Kingdom
Winter of 1962–1963 in the United Kingdom
January 1987 Southeast England snowfall
Winter of 1990–91 in Western Europe
February 2009 Great Britain and Ireland snowfall
Winter of 2009–10 in Great Britain and Ireland
Winter of 2010–11 in Great Britain and Ireland
Early 2012 European cold wave
Other locations
1954 Romanian blizzard
1972 Iran blizzard
Winter of 1990–1991 in Western Europe
2008 Afghanistan blizzard
2008 Chinese winter storms
Winter storms of 2009–2010 in East Asia
See also
Cold wave
Lake-effect snow
Nor'easter
European windstorm
Whiteout (weather)
Blowing snow advisory
Ground blizzard
Severe weather terminology (Canada)
Snowsquall
Blowing snow
List of blizzards
References
External links
Digital Snow Museum Photos of historic blizzards and snowstorms.
Farmers Almanac List of Worst Blizzards in the United States
United States Search and Rescue Task Force: About Blizzards
A Historical Review On The Origin and Definition of the Word Blizzard Dr Richard Wild
Snow or ice weather phenomena
Storm
Weather hazards
Hazards of outdoor recreation
|
https://en.wikipedia.org/wiki/Baryon
|
In particle physics, a baryon is a type of composite subatomic particle which contains an odd number of valence quarks (at least 3). Baryons belong to the hadron family of particles; hadrons are composed of quarks. Baryons are also classified as fermions because they have half-integer spin.
The name "baryon", introduced by Abraham Pais, comes from the Greek word for "heavy" (βαρύς, barýs), because, at the time of their naming, most known elementary particles had lower masses than the baryons. Each baryon has a corresponding antiparticle (antibaryon) where their corresponding antiquarks replace quarks. For example, a proton is made of two up quarks and one down quark; and its corresponding antiparticle, the antiproton, is made of two up antiquarks and one down antiquark.
Baryons participate in the residual strong force, which is mediated by particles known as mesons. The most familiar baryons are protons and neutrons, both of which contain three quarks, and for this reason they are sometimes called triquarks. These particles make up most of the mass of the visible matter in the universe and compose the nucleus of every atom (electrons, the other major component of the atom, are members of a different family of particles called leptons; leptons do not interact via the strong force). Exotic baryons containing five quarks, called pentaquarks, have also been discovered and studied.
A census of the Universe's baryons indicates that 10% of them could be found inside galaxies, 50 to 60% in the circumgalactic medium, and the remaining 30 to 40% could be located in the warm–hot intergalactic medium (WHIM).
Background
Baryons are strongly interacting fermions; that is, they are acted on by the strong nuclear force and are described by Fermi–Dirac statistics, which apply to all particles obeying the Pauli exclusion principle. This is in contrast to the bosons, which do not obey the exclusion principle.
Baryons, along with mesons, are hadrons, particles composed of quarks. Quarks have baryon numbers of B = 1/3 and antiquarks have baryon numbers of B = −1/3. The term "baryon" usually refers to triquarks—baryons made of three quarks (B = 1/3 + 1/3 + 1/3 = 1).
Other exotic baryons have been proposed, such as pentaquarks—baryons made of four quarks and one antiquark (B = 1/3 + 1/3 + 1/3 + 1/3 − 1/3 = 1), but their existence is not generally accepted. The particle physics community as a whole did not view their existence as likely in 2006, and in 2008, considered evidence to be overwhelmingly against the existence of the reported pentaquarks. However, in July 2015, the LHCb experiment observed two resonances consistent with pentaquark states in the Λb0 → J/ψK−p decay, with a combined statistical significance of 15σ.
In theory, heptaquarks (5 quarks, 2 antiquarks), nonaquarks (6 quarks, 3 antiquarks), etc. could also exist.
Baryonic matter
Nearly all matter that may be encountered or experienced in everyday life is baryonic matter, which includes atoms of any sort, and provides them with the property of mass. Non-baryonic matter, as implied by the name, is any sort of matter that is not composed primarily of baryons. This might include neutrinos and free electrons, dark matter, supersymmetric particles, axions, and black holes.
The very existence of baryons is also a significant issue in cosmology because it is assumed that the Big Bang produced a state with equal amounts of baryons and antibaryons. The process by which baryons came to outnumber their antiparticles is called baryogenesis.
Baryogenesis
Experiments are consistent with the number of quarks in the universe being a constant and, to be more specific, the number of baryons being a constant (if antimatter is counted as negative); in technical language, the total baryon number appears to be conserved. Within the prevailing Standard Model of particle physics, the number of baryons may change in multiples of three due to the action of sphalerons, although this is rare and has not been observed under experiment. Some grand unified theories of particle physics also predict that a single proton can decay, changing the baryon number by one; however, this has not yet been observed under experiment. The excess of baryons over antibaryons in the present universe is thought to be due to non-conservation of baryon number in the very early universe, though this is not well understood.
Properties
Isospin and charge
The concept of isospin was first proposed by Werner Heisenberg in 1932 to explain the similarities between protons and neutrons under the strong interaction. Although they had different electric charges, their masses were so similar that physicists believed they were the same particle. The different electric charges were explained as being the result of some unknown excitation similar to spin. This unknown excitation was later dubbed isospin by Eugene Wigner in 1937.
This belief lasted until Murray Gell-Mann proposed the quark model in 1964 (containing originally only the u, d, and s quarks). The success of the isospin model is now understood to be the result of the similar masses of u and d quarks. Since u and d quarks have similar masses, particles made of the same number of them also have similar masses. The exact u and d quark composition determines the charge, as u quarks carry charge +2/3 while d quarks carry charge −1/3. For example, the four Deltas all have different charges (Δ++ (uuu), Δ+ (uud), Δ0 (udd), Δ− (ddd)), but have similar masses (~1,232 MeV/c2) as they are each made of a combination of three u or d quarks. Under the isospin model, they were considered to be a single particle in different charged states.
The mathematics of isospin was modeled after that of spin. Isospin projections varied in increments of 1 just like those of spin, and to each projection was associated a "charged state". Since the "Delta particle" had four "charged states", it was said to be of isospin I = 3/2. Its "charged states" Δ++, Δ+, Δ0, and Δ− corresponded to the isospin projections I3 = +3/2, I3 = +1/2, I3 = −1/2, and I3 = −3/2, respectively. Another example is the "nucleon particle". As there were two nucleon "charged states", it was said to be of isospin 1/2. The positive nucleon (proton) was identified with I3 = +1/2 and the neutral nucleon (neutron) with I3 = −1/2. It was later noted that the isospin projections were related to the up and down quark content of particles by the relation:

I3 = 1/2 [(nu − nū) − (nd − nd̄)],

where the n's are the numbers of up and down quarks and antiquarks.
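As a worked example (added here for illustration, not part of the original article), applying this relation to the proton (uud) and the neutron (udd) recovers their familiar isospin projections:

```latex
I_3(\mathrm{p}) = \tfrac{1}{2}\big[(2-0)-(1-0)\big] = +\tfrac{1}{2},
\qquad
I_3(\mathrm{n}) = \tfrac{1}{2}\big[(1-0)-(2-0)\big] = -\tfrac{1}{2}.
```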
In the "isospin picture", the four Deltas and the two nucleons were thought to be the different states of two particles. However, in the quark model, Deltas are different states of nucleons (the N++ or N− are forbidden by Pauli's exclusion principle). Isospin, although conveying an inaccurate picture of things, is still used to classify baryons, leading to unnatural and often confusing nomenclature.
Flavour quantum numbers
The strangeness flavour quantum number S (not to be confused with spin) was noticed to go up and down along with particle mass. The higher the mass, the lower the strangeness (the more s quarks). Particles could be described with isospin projections (related to charge) and strangeness (mass) (see the uds octet and decuplet figures on the right). As other quarks were discovered, new quantum numbers were made to have similar description of udc and udb octets and decuplets. Since only the u and d mass are similar, this description of particle mass and charge in terms of isospin and flavour quantum numbers works well only for octet and decuplet made of one u, one d, and one other quark, and breaks down for the other octets and decuplets (for example, ucb octet and decuplet). If the quarks all had the same mass, their behaviour would be called symmetric, as they would all behave in the same way to the strong interaction. Since quarks do not have the same mass, they do not interact in the same way (exactly like an electron placed in an electric field will accelerate more than a proton placed in the same field because of its lighter mass), and the symmetry is said to be broken.
It was noted that charge (Q) was related to the isospin projection (I3), the baryon number (B) and flavour quantum numbers (S, C, B′, T) by the Gell-Mann–Nishijima formula:

Q = I3 + 1/2 (B + S + C + B′ + T),

where S, C, B′, and T represent the strangeness, charm, bottomness and topness flavour quantum numbers, respectively. They are related to the numbers of strange, charm, bottom, and top quarks and antiquarks according to the relations:

S = −(ns − ns̄),  C = +(nc − nc̄),  B′ = −(nb − nb̄),  T = +(nt − nt̄),

meaning that the Gell-Mann–Nishijima formula is equivalent to the expression of charge in terms of quark content:

Q = 2/3 [(nu − nū) + (nc − nc̄) + (nt − nt̄)] − 1/3 [(nd − nd̄) + (ns − ns̄) + (nb − nb̄)].
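For illustration (a worked example added here, not from the original text), the proton has I3 = +1/2, B = 1, and all flavour quantum numbers zero, so both forms of the formula give the same charge:

```latex
Q_\mathrm{p} = I_3 + \tfrac{1}{2}(B + S + C + B' + T)
             = \tfrac{1}{2} + \tfrac{1}{2}(1) = +1,
\qquad
Q_\mathrm{p} = \tfrac{2}{3}(2) - \tfrac{1}{3}(1) = +1 \quad \text{(uud quark content).}
```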
Spin, orbital angular momentum, and total angular momentum
Spin (quantum number S) is a vector quantity that represents the "intrinsic" angular momentum of a particle. It comes in increments of 1/2 ħ (pronounced "h-bar"). The ħ is often dropped because it is the "fundamental" unit of spin, and it is implied that "spin 1" means "spin 1 ħ". In some systems of natural units, ħ is chosen to be 1, and therefore does not appear anywhere.
Quarks are fermionic particles of spin 1/2 (S = 1/2). Because spin projections vary in increments of 1 (that is, 1 ħ), a single quark has a spin vector of length 1/2, and has two spin projections (Sz = +1/2 and Sz = −1/2). Two quarks can have their spins aligned, in which case the two spin vectors add to make a vector of length S = 1 with three spin projections (Sz = +1, Sz = 0, and Sz = −1). If two quarks have unaligned spins, the spin vectors add up to make a vector of length S = 0, which has only one spin projection (Sz = 0), etc. Since baryons are made of three quarks, their spin vectors can add to make a vector of length S = 3/2, which has four spin projections (Sz = +3/2, Sz = +1/2, Sz = −1/2, and Sz = −3/2), or a vector of length S = 1/2 with two spin projections (Sz = +1/2 and Sz = −1/2).
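The two possibilities described above follow from the usual rules for adding angular momenta; as a compact summary (an illustration added here), three spin-1/2 quarks decompose as:

```latex
\tfrac{1}{2} \otimes \tfrac{1}{2} \otimes \tfrac{1}{2}
  = \tfrac{3}{2} \oplus \tfrac{1}{2} \oplus \tfrac{1}{2},
\qquad
2 \times 2 \times 2 = 4 + 2 + 2 = 8 \ \text{spin states.}
```

That is, one S = 3/2 combination and two distinct S = 1/2 combinations.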
There is another quantity of angular momentum, called the orbital angular momentum (azimuthal quantum number L), that comes in increments of 1 ħ, which represents the angular momentum due to quarks orbiting around each other. The total angular momentum (total angular momentum quantum number J) of a particle is therefore the combination of intrinsic angular momentum (spin) and orbital angular momentum. It can take any value from J = |L − S| to J = L + S, in increments of 1.
Particle physicists are most interested in baryons with no orbital angular momentum (L = 0), as they correspond to ground states—states of minimal energy. Therefore, the two groups of baryons most studied are the S = 1/2; L = 0 and S = 3/2; L = 0 groups, which correspond to J = 1/2+ and J = 3/2+, respectively, although they are not the only ones. It is also possible to obtain J = 3/2+ particles from S = 1/2 and L = 2, as well as from S = 3/2 and L = 2. This phenomenon of having multiple particles in the same total angular momentum configuration is called degeneracy. How to distinguish between these degenerate baryons is an active area of research in baryon spectroscopy (D. M. Manley, 2005).
Parity
If the universe were reflected in a mirror, most of the laws of physics would be identical—things would behave the same way regardless of what we call "left" and what we call "right". This concept of mirror reflection is called "intrinsic parity" or simply "parity" (P). Gravity, the electromagnetic force, and the strong interaction all behave in the same way regardless of whether or not the universe is reflected in a mirror, and thus are said to conserve parity (P-symmetry). However, the weak interaction does distinguish "left" from "right", a phenomenon called parity violation (P-violation).
Based on this, if the wavefunction for each particle (in more precise terms, the quantum field for each particle type) were simultaneously mirror-reversed, then the new set of wavefunctions would perfectly satisfy the laws of physics (apart from the weak interaction). It turns out that this is not quite true: for the equations to be satisfied, the wavefunctions of certain types of particles have to be multiplied by −1, in addition to being mirror-reversed. Such particle types are said to have negative or odd parity (P = −1, or alternatively P = –), while the other particles are said to have positive or even parity (P = +1, or alternatively P = +).
For baryons, the parity is related to the orbital angular momentum by the relation P = (−1)^L, since each of the three quarks carries intrinsic parity +1.
As a consequence, baryons with no orbital angular momentum (L = 0) all have even parity (P = +).
Nomenclature
Baryons are classified into groups according to their isospin (I) values and quark (q) content. There are six groups of baryons: nucleon (N), Delta (Δ), Lambda (Λ), Sigma (Σ), Xi (Ξ), and Omega (Ω). The rules for classification are defined by the Particle Data Group. These rules consider the up (u), down (d) and strange (s) quarks to be light and the charm (c), bottom (b), and top (t) quarks to be heavy. The rules cover all the particles that can be made from three of each of the six quarks, even though baryons made of top quarks are not expected to exist because of the top quark's short lifetime. The rules do not cover pentaquarks.
Baryons with (any combination of) three u and/or d quarks are N's (I = 1/2) or Δ baryons (I = 3/2).
Baryons containing two u and/or d quarks are Λ baryons (I = 0) or Σ baryons (I = 1). If the third quark is heavy, its identity is given by a subscript.
Baryons containing one u or d quark are Ξ baryons (I = 1/2). One or two subscripts are used if one or both of the remaining quarks are heavy.
Baryons containing no u or d quarks are Ω baryons (I = 0), and subscripts indicate any heavy quark content.
Baryons that decay strongly have their masses as part of their names. For example, Σ0 does not decay strongly, but Δ++(1232) does.
It is also a widespread (but not universal) practice to follow some additional rules when distinguishing between some states that would otherwise have the same symbol.
Baryons in total angular momentum J = 3/2 configuration that have the same symbols as their J = 1/2 counterparts are denoted by an asterisk ( * ).
Two baryons can be made of three different quarks in J = 1/2 configuration. In this case, a prime ( ′ ) is used to distinguish between them.
Exception: When two of the three quarks are one up and one down quark, one baryon is dubbed Λ while the other is dubbed Σ.
Quarks carry a charge, so knowing the charge of a particle indirectly gives the quark content. For example, the rules above say that a Λc+ contains a c quark and some combination of two u and/or d quarks. The c quark has a charge of Q = +2/3; therefore the other two must be a u quark (Q = +2/3) and a d quark (Q = −1/3) to give the correct total charge (Q = +1).
See also
Eightfold way
List of baryons
Meson
Timeline of particle discoveries
Citations
General references
External links
Particle Data Group—Review of Particle Physics (2018).
Georgia State University—HyperPhysics
Baryons made thinkable, an interactive visualisation allowing physical properties to be compared
|
https://en.wikipedia.org/wiki/Bogie
|
A bogie ( ) (or truck in North American English) is a chassis or framework that carries a wheelset, attached to a vehicle—a modular subassembly of wheels and axles. Bogies take various forms in various modes of transport. A bogie may remain normally attached (as on many railroad cars and semi-trailers) or be quickly detachable (as the dolly in a road train or in railway bogie exchange); it may contain a suspension within it (as most rail and trucking bogies do), or be solid and in turn be suspended (as most bogies of tracked vehicles are); it may be mounted on a swivel, as traditionally on a railway carriage or locomotive, additionally jointed and sprung (as in the landing gear of an airliner), or held in place by other means (centreless bogies).
In Northern England (particularly Yorkshire), Scotland and some parts of Wales, the term is used for a child's (usually home-made) wooden cart.
While bogie is the preferred spelling and first-listed variant in various dictionaries, bogey and bogy are also used.
Railway
A bogie in the UK, or a railroad truck, wheel truck, or simply truck in North America, is a structure underneath a railway vehicle (wagon, coach or locomotive) to which axles (and, hence, wheels) are attached through bearings. In Indian English, bogie may also refer to an entire railway carriage. In South Africa, the term bogie is often alternatively used to refer to a freight or goods wagon (shortened from bogie wagon).
The first locomotive with a bogie was built by the engineer William Chapman in 1812; it hauled itself along by chains and was not successful, but in 1814 he built a more successful locomotive with two gear-driven bogies.
The bogie was first used in America for wagons on the Quincy Granite Railroad in 1829, and for locomotives by John B. Jervis, who used it on his 4-2-0 locomotive in the early 1830s to support the smokebox; even so, the idea was not widely accepted for decades. The first use of bogie coaches in Britain was in 1872 by the Festiniog Railway. The first standard gauge British railway to build coaches with bogies, instead of rigidly mounted axles, was the Midland Railway in 1874.
Purpose
Bogies serve a number of purposes:
Support of the rail vehicle body
Stability on both straight and curved track
Improve ride quality by absorbing vibration and minimizing the impact of centrifugal forces when the train runs on curves at high speed
Minimizing generation of track irregularities and rail abrasion
Usually, two bogies are fitted to each carriage, wagon or locomotive, one at each end. Another configuration is often used in articulated vehicles, which places the bogies (often Jacobs bogies) under the connection between the carriages or wagons.
Most bogies have two axles, but some cars designed for heavy loads have more axles per bogie. Heavy-duty cars may have more than two bogies using span bolsters to equalize the load and connect the bogies to the cars.
Usually, the train floor is at a level above the bogies, but the floor of the car may be lower between bogies, such as for a bilevel rail car to increase interior space while staying within height restrictions, or in easy-access, stepless-entry, low-floor trains.
Components
Key components of a bogie include:
The bogie frame: This can be of inside frame type where the main frame and bearings are between the wheels, or (more commonly) of outside frame type where the main frame and bearings are outside the wheels.
Suspension to absorb shocks between the bogie frame and the rail vehicle body. Common types are coil springs, leaf springs and rubber airbags.
At least one wheelset, composed of an axle with bearings and a wheel at each end.
The bolster, the main crossmember, connected to the bogie frame through the secondary suspension. The railway car is supported at the pivot point on the bolster.
Axle box suspensions absorb shocks between the axle bearings and the bogie frame. The axle box suspension usually consists of a spring between the bogie frame and axle bearings to permit up-and-down movement, and sliders to prevent lateral movement. A more modern design uses solid rubber springs.
Brake equipment: Two main types are used: brake shoes that are pressed against the tread of the wheel, and disc brakes and pads.
In powered vehicles, some form of transmission, usually electrically powered traction motors with a single speed gearbox or a hydraulically powered torque converter.
The connections of the bogie with the rail vehicle allow a certain degree of rotational movement around a vertical axis pivot (bolster), with side bearers preventing excessive movement. More modern, bolsterless bogie designs omit these features, instead taking advantage of the sideways movement of the suspension to permit rotational movement.
Locomotives
Diesel and electric
Modern diesel and electric locomotives are mounted on bogies. Those commonly used in North America include Type A, Blomberg, HT-C and Flexicoil trucks.
Steam
On a steam locomotive, the leading and trailing wheels may be mounted on bogies like Bissel trucks (also known as pony trucks). Articulated locomotives (e.g. Fairlie, Garratt or Mallet locomotives) have power bogies similar to those on diesel and electric locomotives.
Rollbock
A rollbock is a specialized type of bogie that is inserted under the wheels of a rail wagon/car, usually to convert for another track gauge. Transporter wagons carry the same concept to the level of a flatcar specialized to take other cars as its load.
Archbar bogies
In archbar or diamond frame bogies, the side frames are fabricated rather than cast.
Tramway
Modern
Tram bogies are much simpler in design because of their axle load, and the tighter curves found on tramways mean tram bogies almost never have more than two axles. Furthermore, some tramways have steeper gradients and vertical as well as horizontal curves, which means tram bogies often need to pivot on the horizontal axis, as well.
Some articulated trams have bogies located under articulations, a setup referred to as a Jacobs bogie. Often, low-floor trams are fitted with nonpivoting bogies; many tramway enthusiasts see this as a retrograde step, as it leads to more wear of both track and wheels and also significantly reduces the speed at which a tram can round a curve.
Historic
In the past, many different types of bogie (truck) have been used under tramcars (e.g. Brill, Peckham, maximum traction). A maximum traction truck has one driving axle with large wheels and one nondriving axle with smaller wheels. The bogie pivot is located off-centre, so more than half the weight rests on the driving axle.
Hybrid systems
The retractable stadium roof on Toronto's Rogers Centre used modified off-the-shelf train bogies on a circular rail. The system was chosen for its proven reliability.
Rubber-tyred metro trains use a specialised version of railway bogies. Special flanged steel wheels are behind the rubber-tired running wheels, with additional horizontal guide wheels in front of and behind the running wheels, as well. The unusually large flanges on the steel wheels guide the bogie through standard railroad switches, and in addition keep the train from derailing in case the tires deflate.
Variable gauge axles
To overcome breaks of gauge some bogies are being fitted with variable gauge axles (VGA) so that they can operate on two different gauges. These include the SUW 2000 system from ZNTK Poznań.
Cleminson system
The Cleminson system is not a true bogie, but serves a similar purpose. It was based on a patent of 1883 by James Cleminson, and was once popular on narrow-gauge rolling stock, e.g. on the Isle of Man and Manx Northern Railways. The vehicle would have three axles and the outer two could pivot to adapt to curvature of the track. The pivoting was controlled by levers attached to the third (centre) axle, which could slide sideways.
Tracked vehicles
Some tanks and other tracked vehicles have bogies as external suspension components (see armoured fighting vehicle suspension). This type of bogie usually has two or more road wheels and some type of sprung suspension to smooth the ride across rough terrain. Bogie suspensions keep much of their components on the outside of the vehicle, saving internal space. Although vulnerable to antitank fire, they can often be repaired or replaced in the field.
Articulated bogie
An articulated bogie is any one of a number of bogie designs that allow railway equipment to safely turn sharp corners, while reducing or eliminating the "screeching" normally associated with metal wheels rounding a bend in the rails. There are a number of such designs, and the term is also applied to train sets that incorporate articulation in the vehicle, as opposed to the bogies themselves.
If one considers a single bogie "up close", it resembles a small rail car with axles at either end. The same effect that causes the bogie frame to rub against the rails on curves also causes each of its pairs of wheels to rub on the rails, producing the screeching. Articulated bogies add a second pivot point between the two axles (wheelsets) to allow them to rotate to the correct angle even in these cases.
Articulated lorries (tractor-trailers)
In trucking, a bogie is the subassembly of axles and wheels that supports a semi-trailer, whether permanently attached to the frame (as on a single trailer) or making up the dolly that can be hitched and unhitched as needed when hitching up a second or third semi-trailer (as when pulling doubles or triples).
Bogie (aircraft)
Radial steering truck
Radial steering trucks, also known as radial bogies, allow the individual axles to align with curves in addition to the bogie frame as a whole pivoting. For non-radial bogies, the more axles in the assembly, the more difficulty it has negotiating curves, due to wheel flange to rail friction. For radial bogies, the wheel sets actively "steer" through curves, thus reducing wear at the wheel flange to rail interface and improving adhesion.
In the US, this has been implemented for locomotives both by EMD and GE. The EMD version, designated HTCR, was made standard equipment for the SD70 series, first sold in 1993. However, the HTCR in actual operation had mixed results and relatively high purchase and maintenance costs. Thus EMD introduced the HTSC truck in 2003, which basically is the HTCR stripped of radial components. GE introduced their version in 1995 as a buyer option for the AC4400CW and later Evolution Series locomotives. However it also met with limited acceptance due to relatively high purchase and maintenance costs, and customers have generally chosen GE Hi-Ad standard trucks for newer and rebuilt locomotives.
See also
Articles on bogies and trucks
Arnoux system
Bissel bogie
Blomberg B
Gölsdorf axle
Jacobs bogie
Krauss-Helmholtz bogie
Lateral motion device
Mason Bogie
Pony truck
Rocker-bogie
Scheffel bogie
Schwartzkopff-Eckhardt bogie
Syntegra
Related topics
Caster
Dolly
Flange
List of railroad truck parts
Luttermöller axle
Road–rail vehicle
Skateboard truck
Spring (device)
Timmis system, an early form of coil spring used on railway axles.
Trailing wheel
Wheel arrangement
Wheelbase
Wheelset
References
Further reading
External links
Truck (bogie) with tyres
Track modelling
Bogies/Trucks
Barber truck parts
Suspension systems
Locomotive’s Bogies & Components
Locomotive parts
Rail technologies
Vehicle technology
|
https://en.wikipedia.org/wiki/Bookkeeping
|
Bookkeeping is the recording of financial transactions, and is part of the process of accounting in business and other organizations. It involves preparing source documents for all transactions, operations, and other events of a business. Transactions include purchases, sales, receipts and payments by an individual person or an organization/corporation. There are several standard methods of bookkeeping, including the single-entry and double-entry bookkeeping systems. While these may be viewed as "real" bookkeeping, any process for recording financial transactions is a bookkeeping process.
The person in an organisation who is employed to perform bookkeeping functions is usually called the bookkeeper (or book-keeper). They usually write the daybooks (which contain records of sales, purchases, receipts, and payments), and document each financial transaction, whether cash or credit, into the correct daybook—that is, petty cash book, suppliers ledger, customer ledger, etc.—and the general ledger. Thereafter, an accountant can create financial reports from the information recorded by the bookkeeper. The bookkeeper brings the books to the trial balance stage, from which an accountant may prepare financial reports for the organisation, such as the income statement and balance sheet.
History
The origin of book-keeping is lost in obscurity, but recent research indicates that methods of keeping accounts have existed from the remotest times of human life in cities. Babylonian records written with styli on small slabs of clay have been found dating to 2600 BC. Mesopotamian bookkeepers kept records on clay tablets that may date back as far as 7,000 years. Use of the modern double entry bookkeeping system was described by Luca Pacioli in 1494.
The term "waste book" was used in colonial America, referring to the documenting of daily transactions of receipts and expenditures. Records were made in chronological order, and for temporary use only. Daily records were then transferred to a daybook or account ledger to balance the accounts and to create a permanent journal; then the waste book could be discarded, hence the name.
Process
The primary purpose of bookkeeping is to record the financial effects of transactions. An important difference between a manual and an electronic accounting system is the former's latency between the recording of a financial transaction and its posting in the relevant account. This delay, which is absent in electronic accounting systems due to nearly instantaneous posting to relevant accounts, is characteristic of manual systems, and gave rise to the primary books of accounts—cash book, purchase book, sales book, etc.—for immediately documenting a financial transaction.
In the normal course of business, a document is produced each time a transaction occurs. Sales and purchases usually have invoices or receipts. Historically, deposit slips were produced when lodgements (deposits) were made to a bank account; and checks (spelled "cheques" in the UK and several other countries) were written to pay money out of the account. Nowadays such transactions are mostly made electronically. Bookkeeping first involves recording the details of all of these source documents into multi-column journals (also known as books of first entry or daybooks). For example, all credit sales are recorded in the sales journal; all cash payments are recorded in the cash payments journal. Each column in a journal normally corresponds to an account. In the single entry system, each transaction is recorded only once. Most individuals who balance their check-book each month are using such a system, and most personal-finance software follows this approach.
After a certain period, typically a month, each column in each journal is totalled to give a summary for that period. Using the rules of double-entry, these journal summaries are then transferred to their respective accounts in the ledger, or account book. For example, the entries in the Sales Journal are taken and a debit entry is made in each customer's account (showing that the customer now owes us money), and a credit entry might be made in the account for "Sale of class 2 widgets" (showing that this activity has generated revenue for us). This process of transferring summaries or individual transactions to the ledger is called posting. Once the posting process is complete, accounts kept using the "T" format (debits on the left side of the "T" and credits on the right side) undergo balancing, which is simply a process to arrive at the balance of the account.
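As a rough sketch of the posting step just described (account names, amounts, and the journal layout are invented for this example; real bookkeeping software would differ):

```python
from collections import defaultdict

# One page of a sales journal: (account, debit, credit) per line -- sample figures only.
sales_journal = [
    ("Accounts receivable - J. Smith", 120.00, 0.00),   # debit: the customer now owes us money
    ("Sale of class 2 widgets",          0.00, 120.00), # credit: revenue earned from the sale
]

ledger = defaultdict(lambda: {"debit": 0.0, "credit": 0.0})

def post(journal, ledger):
    """Post (transfer) journal lines to their ledger accounts."""
    for account, debit, credit in journal:
        ledger[account]["debit"] += debit
        ledger[account]["credit"] += credit

post(sales_journal, ledger)

# Balancing a "T" account: total debits on the left less total credits on the right.
for account, t in sorted(ledger.items()):
    print(account, t["debit"] - t["credit"])
```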
As a partial check that the posting process was done correctly, a working document called an unadjusted trial balance is created. In its simplest form, this is a three-column list. Column One contains the names of those accounts in the ledger which have a non-zero balance. If an account has a debit balance, the balance amount is copied into Column Two (the debit column); if an account has a credit balance, the amount is copied into Column Three (the credit column). The debit column is then totalled, and then the credit column is totalled. The two totals must agree—which is not by chance—because under the double-entry rules, whenever there is a posting, the debits of the posting equal the credits of the posting. If the two totals do not agree, an error has been made, either in the journals or during the posting process. The error must be located and rectified, and the totals of the debit column and the credit column recalculated to check for agreement before any further processing can take place.
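Continuing the sketch above (still illustrative only, with the same invented accounts), an unadjusted trial balance can be drawn from the ledger and the two columns checked for agreement:

```python
# A tiny ledger in the same {account: {"debit": ..., "credit": ...}} shape used in the posting sketch.
ledger = {
    "Accounts receivable - J. Smith": {"debit": 120.00, "credit": 0.00},
    "Sale of class 2 widgets":        {"debit": 0.00,   "credit": 120.00},
}

def unadjusted_trial_balance(ledger):
    """Return (account, debit_balance, credit_balance) rows for accounts with non-zero balances."""
    rows = []
    for account, t in ledger.items():
        balance = t["debit"] - t["credit"]
        if balance > 0:
            rows.append((account, balance, 0.0))    # debit balance goes in the debit column
        elif balance < 0:
            rows.append((account, 0.0, -balance))   # credit balance goes in the credit column
    return rows

rows = unadjusted_trial_balance(ledger)
total_debits = sum(debit for _, debit, _ in rows)
total_credits = sum(credit for _, _, credit in rows)
# Under double entry the two totals must agree; a difference signals a journal or posting error.
assert abs(total_debits - total_credits) < 0.005, "debit and credit columns disagree"
```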
Once the accounts balance, the accountant makes a number of adjustments and changes the balance amounts of some of the accounts. These adjustments must still obey the double-entry rule: for example, the inventory account and asset account might be changed to bring them into line with the actual numbers counted during a stocktake. At the same time, the expense account associated with use of inventory is adjusted by an equal and opposite amount. Other adjustments such as posting depreciation and prepayments are also done at this time. This results in a listing called the adjusted trial balance. It is the accounts in this list, and their corresponding debit or credit balances, that are used to prepare the financial statements.
Finally, financial statements are drawn from the trial balance; these may include:
the income statement, also known as the statement of financial results, profit and loss account, or P&L
the balance sheet, also known as the statement of financial position
the cash flow statement
the statement of changes in equity, also known as the statement of total recognised gains and losses
Single-entry system
The primary bookkeeping record in single-entry bookkeeping is the cash book, which is similar to a checking account register (in UK: cheque account, current account), except all entries are allocated among several categories of income and expense accounts. Separate account records are maintained for petty cash, accounts payable and accounts receivable, and other relevant transactions such as inventory and travel expenses. To save time and avoid the errors of manual calculations, single-entry bookkeeping can be done today with do-it-yourself bookkeeping software.
Double-entry system
A double-entry bookkeeping system is a set of rules for recording financial information in a financial accounting system in which every transaction or event changes at least two different nominal ledger accounts.
Daybooks
A daybook is a descriptive and chronological (diary-like) record of day-to-day financial transactions; it is also called a book of original entry. The daybook's details must be transcribed formally into journals to enable posting to ledgers. Daybooks include:
Sales daybook, for recording sales invoices.
Sales credits daybook, for recording sales credit notes.
Purchases daybook, for recording purchase invoices.
Purchases debits daybook, for recording purchase debit notes.
Cash daybook, usually known as the cash book, for recording all monies received and all monies paid out. It may be split into two daybooks: a receipts daybook documenting every money-amount received, and a payments daybook recording every payment made.
General Journal daybook, for recording journal entries.
Petty cash book
A petty cash book is a record of small-value purchases before they are later transferred to the ledger and final accounts; it is maintained by a petty or junior cashier. This type of cash book usually uses the imprest system: a certain amount of money is provided to the petty cashier by the senior cashier. This money is to cater for minor expenditures (hospitality, minor stationery, casual postage, and so on) and is reimbursed periodically on satisfactory explanation of how it was spent.
The balance of the petty cash book is treated as an asset.
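As an arithmetic illustration of the imprest system described above, the short Python sketch below uses a hypothetical float and a few invented expense items; the figures have no significance beyond the example.

```python
# Imprest system: the petty cashier holds a fixed float, and periodic
# reimbursement restores the float once the spending has been explained.
IMPREST_FLOAT = 100.00  # fixed amount advanced by the senior cashier (hypothetical)

expenses = [            # invented minor expenditures
    ("postage", 7.50),
    ("stationery", 12.25),
    ("hospitality", 18.00),
]

spent = sum(amount for _, amount in expenses)
cash_in_hand = IMPREST_FLOAT - spent
reimbursement = spent   # amount needed to restore the float to its fixed level

print(f"spent {spent:.2f}, cash in hand {cash_in_hand:.2f}, reimburse {reimbursement:.2f}")
```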
Journals
Journals are recorded in the general journal daybook. A journal is a formal and chronological record of financial transactions before their values are accounted for in the general ledger as debits and credits. A company can maintain one journal for all transactions, or keep several journals based on similar activity (e.g., sales, cash receipts, revenue, etc.), making transactions easier to summarize and reference later. For every debit journal entry recorded, there must be an equivalent credit journal entry to maintain a balanced accounting equation.
Ledgers
A ledger is a record of accounts. The ledger is a permanent summary of all amounts entered in supporting Journals which list individual transactions by date. These accounts are recorded separately, showing their beginning/ending balance. A journal lists financial transactions in chronological order, without showing their balance but showing how much is going to be charged in each account. A ledger takes each financial transaction from the journal and records it into the corresponding account for every transaction listed. The ledger also sums up the total of every account, which is transferred into the balance sheet and the income statement. There are three different kinds of ledgers that deal with book-keeping:
Sales ledger, which deals mostly with the accounts receivable account. This ledger consists of the records of the financial transactions made by customers to the business.
Purchase ledger, the record of the company's purchasing transactions; it goes hand in hand with the accounts payable account.
General ledger, representing the original five main account types: assets, liabilities, equity, income, and expenses.
Chart of accounts
A chart of accounts is a list of accounts, each identified by a numeric, alphabetical, or alphanumeric code that allows the account to be located in the general ledger. The equity section of the chart of accounts depends on the legal structure of the entity, such as a sole trader, partnership, trust, or company.
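The code-based structure of a chart of accounts can be sketched as follows. The Python example below uses a common but purely illustrative numbering convention (1xxx assets, 2xxx liabilities, 3xxx equity, 4xxx income, 5xxx expenses); the codes and account names are hypothetical.

```python
# Hypothetical chart of accounts keyed by numeric code.
chart_of_accounts = {
    "1000": "Cash at bank",
    "1100": "Accounts receivable",
    "2000": "Accounts payable",
    "3000": "Owner's equity",
    "4000": "Sales revenue",
    "5000": "Office expenses",
}

def account_section(code: str) -> str:
    """Classify an account by the leading digit of its code (illustrative rule)."""
    return {"1": "Asset", "2": "Liability", "3": "Equity",
            "4": "Income", "5": "Expense"}[code[0]]

for code, name in sorted(chart_of_accounts.items()):
    print(code, account_section(code), name)
```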
Computerized bookkeeping
Computerized bookkeeping removes many of the paper "books" that are used to record the financial transactions of a business entity; instead, relational databases are used today, but typically, these still enforce the norms of bookkeeping including the single-entry and double-entry bookkeeping systems. Certified Public Accountants (CPAs) supervise the internal controls for computerized bookkeeping systems, which serve to minimize errors in documenting the numerous activities a business entity may initiate or complete over an accounting period.
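As a sketch of how a relational database can enforce the norms of double-entry bookkeeping, the following Python example uses the standard sqlite3 module. The schema, table names, and figures are assumptions made for illustration, not a description of any particular bookkeeping product.

```python
import sqlite3

# Minimal schema: a transactions table and a journal_lines table holding the
# debit and credit legs of each transaction.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions (id INTEGER PRIMARY KEY, description TEXT);
    CREATE TABLE journal_lines (
        id INTEGER PRIMARY KEY,
        transaction_id INTEGER NOT NULL REFERENCES transactions(id),
        account TEXT NOT NULL,
        debit REAL NOT NULL DEFAULT 0,
        credit REAL NOT NULL DEFAULT 0
    );
""")

conn.execute("INSERT INTO transactions VALUES (1, 'Cash sale of widgets')")
conn.executemany(
    "INSERT INTO journal_lines (transaction_id, account, debit, credit) VALUES (?, ?, ?, ?)",
    [(1, "Cash", 120.0, 0.0), (1, "Sales", 0.0, 120.0)],
)

# Internal-control style check: every transaction's debits must equal its credits.
unbalanced = conn.execute("""
    SELECT transaction_id, SUM(debit) - SUM(credit) AS difference
    FROM journal_lines
    GROUP BY transaction_id
    HAVING ABS(SUM(debit) - SUM(credit)) > 0.005
""").fetchall()
print("Unbalanced transactions:", unbalanced)  # expected: []
```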
See also
Accounting
Comparison of accounting software
POS system: records sales and updates stock levels
Bookkeeping Associations
References
External links
Guide to the Account Book from Italy 1515–1520
Accounting systems
Accounting
|
https://en.wikipedia.org/wiki/Blissymbols
|
Blissymbols or Blissymbolics is a constructed language conceived as an ideographic writing system called Semantography consisting of several hundred basic symbols, each representing a concept, which can be composed together to generate new symbols that represent new concepts. Blissymbols differ from most of the world's major writing systems in that the characters do not correspond at all to the sounds of any spoken language.
Semantography was published by Charles K. Bliss in 1949 and found use in the education of people with communication difficulties.
History
Semantography was invented by Charles K. Bliss (1897–1985), born Karl Kasiel Blitz to a Jewish family in Chernivtsi (then Czernowitz, Austria-Hungary), which had a mixture of different nationalities that "hated each other, mainly because they spoke and thought in different languages."
Bliss graduated as a chemical engineer at the Vienna University of Technology, and joined an electronics company. After the Nazi annexation of Austria in 1938, Bliss was sent to concentration camps but his German wife Claire managed to get him released, and they finally became exiles in Shanghai, where Bliss had a cousin.
Bliss devised the symbols while a refugee at the Shanghai Ghetto and Sydney, from 1942 to 1949. He wanted to create an easy-to-learn international auxiliary language to allow communication between different linguistic communities. He was inspired by Chinese characters, with which he became familiar at Shanghai.
Bliss published his system in Semantography (1949; expanded 2nd ed. 1965; 3rd ed. 1978).
The system went by several names during its development.
As the "tourist explosion" took place in the 1960s, a number of researchers were looking for new standard symbols to be used at roads, stations, airports, etc. Bliss then adopted the name Blissymbolics in order that no researcher could plagiarize his system of symbols.
Since the 1960s/1970s, Blissymbols have become popular as a method to teach disabled people to communicate.
In 1971 Shirley McNaughton started a pioneering program at the Ontario Crippled Children's Centre (OCCC), aimed at children with cerebral palsy, from the approach of augmentative and alternative communication (AAC). According to Arika Okrent, Bliss used to complain about the way the teachers at the OCCC were using the symbols, in relation to the proportions of the symbols and other questions: for example, they used "fancy" terms like "nouns" and "verbs" to describe what Bliss called "things" and "actions" (Okrent 2009, pp. 173–174).
The ultimate objective of the OCCC program was to use Blissymbols as a practical way to teach the children to express themselves in their mother tongue, since the Blissymbols provided visual keys to understand the meaning of the English words, especially the abstract words.
In Semantography Bliss had not provided a systematic set of definitions for his symbols (there was a provisional vocabulary index instead (1965, pp. 827–67)), so McNaughton's team might often interpret a certain symbol in a way that Bliss would later criticize as a "misinterpretation". For example, they might interpret a tomato as a vegetable (according to the English definition of tomato) even though the ideal Blissymbol for vegetable was restricted by Bliss to just vegetables growing underground. Eventually the OCCC staff modified and adapted Bliss's system in order to make it serve as a bridge to English (Okrent 2009, p. 189). Bliss's complaints about his symbols "being abused" by the OCCC became so intense that the director of the OCCC told Bliss, on his 1974 visit, never to come back. In spite of this, in 1975 Bliss granted an exclusive world license, for use with disabled children, to the new Blissymbolics Communication Foundation directed by Shirley McNaughton (later called Blissymbolics Communication International, BCI). Nevertheless, in 1977 Bliss claimed that this agreement was violated so that he was deprived of effective control of his symbol system.
According to Okrent (2009, p. 190), there was a final period of conflict, as Bliss made continual criticisms of McNaughton, often followed by apologies. Bliss finally brought his lawyers back to the OCCC, and the two sides reached a settlement.
Blissymbolics Communication International now claims an exclusive license from Bliss for the use and publication of Blissymbols for persons with communication, language, and learning difficulties.
The Blissymbol method has been used in Canada, Sweden, and a few other countries. Practitioners of Blissymbolics (that is, speech and language therapists and users) maintain that some users who have learned to communicate with Blissymbolics find it easier to learn to read and write traditional orthography in the local spoken language than do users who did not know Blissymbolics.
The speech question
Unlike similar constructed languages like aUI, Blissymbolics was conceived as a written language with no phonology, on the premise that "interlinguistic communication is mainly carried on by reading and writing". Nevertheless, Bliss suggested that a set of international words could be adopted, so that "a kind of spoken language could be established – as a travelling aid only". (1965, p. 89–90).
Whether Blissymbolics constitutes an unspoken language is a controversial question, whatever its practical utility may be. Some linguists, such as John DeFrancis and J. Marshall Unger, have argued that genuine ideographic writing systems with the same capacities as natural languages do not exist.
Semantics
Bliss's concern about semantics finds an early referent in John Locke, whose Essay Concerning Human Understanding warned readers against those "vague and insignificant forms of speech" that may give the impression of deep learning.
Another vital referent is Gottfried Wilhelm Leibniz's project of an ideographic language "characteristica universalis", based on the principles of Chinese characters. It would contain small figures representing "visible things by their lines, and the invisible, by the visible which accompany them", adding "certain additional marks, suitable to make understood the flexions and the particles." Bliss stated that his own work was an attempt to take up the thread of Leibniz's project.
Finally, there is a strong influence from The Meaning of Meaning (1923) by C. K. Ogden and I. A. Richards, which was considered a standard work on semantics. Bliss found especially useful their "triangle of reference": the physical thing or "referent" that we perceive would be represented at the right vertex; the meaning that we know by experience (our implicit definition of the thing), at the top vertex; and the physical word that we speak or symbol we write, at the left vertex. The reverse process would happen when we read or listen to words: from the words, we recall meanings, related to referents which may be real things or unreal "fictions". Bliss was particularly concerned with political propaganda, whose discourses would tend to contain words that correspond to unreal or ambiguous referents.
Grammar
The grammar of Blissymbols is based on a certain interpretation of nature, dividing it into matter (material things), energy (actions), and human values (mental evaluations). In a natural language, these would correspond respectively to nouns, verbs, and adjectives. In Blissymbols, they are marked respectively by a small square symbol, a small cone symbol, and a small V or inverted cone. These symbols may be placed above any other symbol, turning it respectively into a "thing", an "action", and an "evaluation":
When a symbol is not marked by any of the three grammar symbols (square, cone, inverted cone), it may refer to a non-material thing, a grammatical particle, etc.
Examples
The preceding symbol represents the expression "world language", which was a first tentative name for Blissymbols. It combines the symbol for "writing tool" or "pen" (an inclined line, like a pen in use) with the symbol for "world", which in its turn combines "ground" or "earth" (a horizontal line below) and its counterpart derivate "sky" (a horizontal line above). Thus the world would be seen as "what lies between the ground and the sky", and "Blissymbols" would be seen as "the writing tool to express the world". This is clearly distinct from the symbol for "language", which is a combination of "mouth" and "ear". Thus natural languages are mainly oral, while Blissymbols is just a writing system dealing with semantics, not phonetics.
The 900 individual symbols of the system are called "Bliss-characters"; these may be "ideographic" – representing abstract concepts, "pictographic" – a direct representation of objects, or "composite" – in which two or more existing Bliss-characters are superimposed to represent a new meaning. Size, orientation and relation to the "skyline" and "earthline" affects the meaning of each symbol. A single concept is called a "Bliss-word", which can consist of one or more Bliss-characters. In multiple-character Bliss-words, the main character is called the "classifier" which "indicates the semantic or grammatical category to which the Bliss-word belongs". To this can be added Bliss-characters as prefixes or suffixes called "modifiers" which amend the meaning of the first symbol. A further symbol called an "indicator" can be added above one of the characters in the Bliss-word (typically the classifier); these are used as "grammatical and/or semantic markers."
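The classifier–modifier–indicator structure just described can be modelled with a small data structure. The Python sketch below is a hypothetical illustration; the character names are plain-text labels standing in for the graphic Bliss-characters, not an official encoding.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BlissWord:
    classifier: str                                      # main character: semantic/grammatical category
    modifiers: List[str] = field(default_factory=list)   # prefix/suffix characters amending the meaning
    indicator: Optional[str] = None                      # grammatical/semantic marker placed above

    def describe(self) -> str:
        parts = " + ".join([self.classifier] + self.modifiers)
        return f"{parts} [{self.indicator}]" if self.indicator else parts

# "to want": classifier "feeling" (heart) plus modifier "fire", with the action indicator.
to_want = BlissWord(classifier="feeling", modifiers=["fire"], indicator="action")
print(to_want.describe())  # feeling + fire [action]
```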
The example sentence "I want to go to the cinema." shows several features of Blissymbolics:
The pronoun "I" is formed of the Bliss-character for "person" and the number 1 (the first person). Using the number 2 would give the symbol for singular "You"; adding the plural indicator (a small cross at the top) would produce the pronouns "We" and plural "You".
The Bliss-word for "to want" contains the heart which symbolizes "feeling" (the classifier), plus the serpentine line which symbolizes "fire" (the modifier), and the verb (called "action") indicator at the top.
The Bliss-word for "to go" is composed of the Bliss-character for "leg" and the verb indicator.
The Bliss-word for "cinema" is composed of the Bliss-character for "house" (the classifier), and "film" (the modifier); "film" is a composite character composed of "camera" and the arrow indicating movement.
Towards the international standardization of the script
Blissymbolics was used in 1971 to help children at the Ontario Crippled Children's Centre (OCCC, now the Holland Bloorview Kids Rehabilitation Hospital) in Toronto, Ontario, Canada. Since it was important that the children see consistent pictures, OCCC had a draftsman named Jim Grice draw the symbols. Both Charles K. Bliss and Margrit Beesley at the OCCC worked with Grice to ensure consistency. In 1975, a new organization named Blissymbolics Communication Foundation directed by Shirley McNaughton led this effort. Over the years, this organization changed its name to Blissymbolics Communication Institute, Easter Seal Communication Institute, and ultimately to Blissymbolics Communication International (BCI).
BCI is an international group of people who act as an authority regarding the standardization of the Blissymbolics language. It has taken responsibility for any extensions of the Blissymbolics language as well as any maintenance needed for the language. BCI has coordinated usage of the language since 1971 for augmentative and alternative communication. BCI received a licence and copyright through legal agreements with Charles K. Bliss in 1975 and 1982. Limiting the number of Bliss-characters (there are currently about 900) is very useful for the user community, and it also helps when implementing Blissymbolics using technology such as computers.
In 1991, BCI published a reference guide containing 2300 vocabulary items and detailed rules for the graphic design of additional characters, thereby establishing a first set of approved Bliss-words for general use.
The Standards Council of Canada then sponsored, on January 21, 1993, the registration of an encoded character set for use in ISO/IEC 2022, in the ISO-IR international registry of coded character sets.
After many years of requests, the Blissymbolics language was finally approved as an encoded language, with its own language code, in the ISO 639-2 and ISO 639-3 standards.
A proposal was posted by Michael Everson for the Blissymbolics script to be included in the Universal Character Set (UCS) and encoded for use with the ISO/IEC 10646 and Unicode standards. BCI would cooperate with the Unicode Technical Committee (UTC) and the ISO Working Group.
The proposed encoding does not use the lexical encoding model used in the existing ISO-IR/169 registered character set, but instead applies the Unicode and ISO character-glyph model to the Bliss-character model already adopted by BCI, since this would significantly reduce the number of needed characters. Bliss-characters can now be used in a creative way to create many new arbitrary concepts, by surrounding the invented words with special Bliss indicators (similar to punctuation), something which was not possible in the ISO-IR/169 encoding.
However, by the end of 2009, the Blissymbolic script was not encoded in the UCS. Some questions are still unanswered, such as the inclusion in the BCI repertoire of some characters (currently about 24) that are already encoded in the UCS (like digits, punctuation signs, spaces and some markers), but whose unification may cause problems due to the very strict graphical layouts required by the published Bliss reference guides. In addition, the character metrics use a specific layout where the usual baseline is not used, and the ideographic em-square is not relevant for Bliss character designs that use additional "earth line" and "sky line" to define the composition square.
Some fonts supporting the BCI repertoire are available and usable with texts encoded with private-use assignments (PUA) within the UCS. But only the private BCI encoding based on ISO-IR/169 registration is available for text interchange.
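Pending a standard UCS encoding, text interchange of this kind relies on private-use assignments. The Python sketch below shows the general mechanism with hypothetical code points in the Basic Multilingual Plane Private Use Area (U+E000 onward); it does not reproduce BCI's actual private mapping, and real interchange requires both ends to agree on the same assignments and on a font that supports them.

```python
PUA_BASE = 0xE000  # start of the BMP Private Use Area

# Hypothetical mapping of three Bliss-characters to private-use code points.
private_map = {
    "person": chr(PUA_BASE + 0),
    "feeling": chr(PUA_BASE + 1),
    "fire": chr(PUA_BASE + 2),
}

def encode(character_names):
    """Encode a sequence of named Bliss-characters as a private-use string."""
    return "".join(private_map[name] for name in character_names)

text = encode(["person", "feeling", "fire"])
print([hex(ord(ch)) for ch in text])  # ['0xe000', '0xe001', '0xe002']
```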
See also
Egyptian hieroglyphs
Esperanto
iConji
Isotype
Kanji
LoCoS (language)
References
External links
Blissymbol Communication UK
An Introduction to Blissymbols (PDF file)
Standard two-byte encoded character set for Blissymbols, from the ISO-IR international registry of character sets, registration number 169 (1993-01-21).
Michael Everson's First proposed encoding into Unicode and ISO/IEC 10646 of Blissymbolics characters, based on the decomposition of the ISO-IR/169 repertoire.
Preliminary proposal for encoding Blissymbols (WG2 N5228)
Radiolab program about Charles Bliss – Broadcast December 2012 – the item about Charles Bliss starts after 5 minutes and is approx 30 mins long.
Engineered languages
Auxiliary and educational artificial scripts
International auxiliary languages
Pictograms
Augmentative and alternative communication
Writing systems introduced in 1949
Constructed languages
Constructed languages introduced in the 1940s
|
https://en.wikipedia.org/wiki/Bestiary
|
A bestiary (from bestiarum vocabulum) is a compendium of beasts. Originating in the ancient world, bestiaries were made popular in the Middle Ages in illustrated volumes that described various animals and even rocks. The natural history and illustration of each beast was usually accompanied by a moral lesson. This reflected the belief that the world itself was the Word of God and that every living thing had its own special meaning. For example, the pelican, which was believed to tear open its breast to bring its young to life with its own blood, was a living representation of Jesus. Thus the bestiary is also a reference to the symbolic language of animals in Western Christian art and literature.
History
The bestiary — the medieval book of beasts — was among the most popular illuminated texts in northern Europe during the Middle Ages (about 500–1500). Medieval Christians understood every element of the world as a manifestation of God, and bestiaries largely focused on each animal's religious meaning. Much of what is in the bestiary came from the ancient Greeks and their philosophers. The earliest bestiary in the form in which it was later popularized was an anonymous 2nd-century Greek volume called the Physiologus, which itself summarized ancient knowledge and wisdom about animals from the writings of classical authors such as Aristotle (in his Historia Animalium), Herodotus, Pliny the Elder, Solinus, Aelian and other naturalists.
Following the Physiologus, Saint Isidore of Seville (Book XII of the Etymologiae) and Saint Ambrose expanded the religious message with reference to passages from the Bible and the Septuagint. They and other authors freely expanded or modified pre-existing models, constantly refining the moral content while showing little interest in, or having little access to, further factual detail. Nevertheless, the often fanciful accounts of these beasts were widely read and generally believed to be true. A few observations found in bestiaries, such as the migration of birds, were discounted by the natural philosophers of later centuries, only to be rediscovered in the modern scientific era.
Medieval bestiaries are remarkably similar in the sequence of animals they treat. Bestiaries were particularly popular in England and France around the 12th century and were mainly compilations of earlier texts. The Aberdeen Bestiary is one of the best known of over 50 manuscript bestiaries surviving today.
Bestiary imagery remained influential from the Middle Ages, through the Renaissance (generally said to have begun around the 14th century in Italy), and into modern times. Bestiaries influenced early heraldry in the Middle Ages, giving ideas for charges and for the artistic form, and they continue to give inspiration to coats of arms created in our time.
Two illuminated Psalters, the Queen Mary Psalter (British Library Ms. Royal 2B, vii) and the Isabella Psalter (State Library, Munich), contain full Bestiary cycles. The bestiary in the Queen Mary Psalter is found in the "marginal" decorations that occupy about the bottom quarter of the page, and are unusually extensive and coherent in this work. In fact the bestiary has been expanded beyond the source in the Norman bestiary of Guillaume le Clerc to ninety animals. Some are placed in the text to make correspondences with the psalm they are illustrating.
Many compilers chose to make their own bestiaries, combining their own observations with knowledge drawn from earlier works; these observations could be recorded in text as well as in illustration. The Italian artist Leonardo da Vinci also made his own bestiary.
A volucrary is a similar collection of the symbols of birds that is sometimes found in conjunction with bestiaries. The most widely known volucrary in the Renaissance was Johannes de Cuba's Gart der Gesundheit which describes 122 birds and which was printed in 1485.
Bestiary content
The contents of medieval bestiaries were often obtained and created from combining older textual sources and accounts of animals, such as the Physiologus.
Medieval bestiaries contained detailed descriptions and illustrations of species native to Western Europe, exotic animals and what in modern times are considered to be imaginary animals. Descriptions of the animals included the physical characteristics associated with the creature, although these were often physiologically incorrect, along with the Christian morals that the animal represented. The description was then often accompanied by an artistic illustration of the animal as described in the bestiary. For example, in one bestiary the eagle is depicted in an illustration and is said to be the “king of birds.”
Bestiaries were organized in different ways based upon the sources they drew upon. The descriptions could be organized by animal groupings, such as terrestrial and marine creatures, or presented in an alphabetical manner. However, the texts gave no distinction between existing and imaginary animals. Descriptions of creatures such as dragons, unicorns, basilisks, griffins and the caladrius were common in such works and found intermingled amongst accounts of bears, boars, deer, lions, and elephants. In one source, the author explains how fables and bestiaries are closely linked to one another, as "each chapter of a bestiary, each fable in a collection, has a text and has a meaning."
This lack of separation has often been associated with the assumption that people during this time believed in what the modern period classifies as nonexistent or "imaginary creatures". However, this assumption is currently under debate, with various explanations being offered. Some scholars, such as Pamela Gravestock, have written on the theory that medieval people did not actually think such creatures existed but instead focused on the belief in the importance of the Christian morals these creatures represented, and that the importance of the moral did not change regardless of whether the animal existed. The historian of science David C. Lindberg pointed out that medieval bestiaries were rich in symbolism and allegory, so as to teach moral lessons and entertain, rather than to convey knowledge of the natural world.
Religious significance
The association between animals and religion began long before bestiaries came into use. Many ancient civilizations attached religious or mythological meaning to animals: Egyptian gods were depicted with the faces of animals, and Greek deities had symbolic animals, such as the eagle of Zeus. Because animals already carried religious significance, bestiaries drew on these older civilizations and their interpretations as well as on earlier observations of meaning.
As most of those who read bestiaries were monks and clerics, the books carried major religious significance. The bestiary was used to educate young men on the correct morals they should display, and every animal presented conveys some lesson or meaning. Because of the religious character of the texts and the period in which they were produced, much of the animal symbolism in bestiaries also echoes older pagan traditions.
One of the main creatures mentioned in some of the bestiaries is the dragon, which holds much significance in terms of religion and meaning. The unnatural elements of the dragon's history show how important the church was during this period, and the dragon's treatment in the bestiaries offers a glimpse of the religious significance that runs through many of these tales.
Bestiaries held much content of religious significance: for almost every animal there is some way to connect it to a lesson from the church or a familiar religious story. With animals having held significance since ancient times, bestiaries and their contents reinforced the meanings attached to the animals they described, whether real or mythical.
Modern bestiaries
In modern times, artists such as Henri de Toulouse-Lautrec and Saul Steinberg have produced their own bestiaries. Jorge Luis Borges wrote a contemporary bestiary of sorts, the Book of Imaginary Beings, which collects imaginary beasts from bestiaries and fiction. Nicholas Christopher wrote a literary novel called "The Bestiary" (Dial, 2007) that describes a lonely young man's efforts to track down the world's most complete bestiary. John Henry Fleming's Fearsome Creatures of Florida (Pocol Press, 2009) borrows from the medieval bestiary tradition to impart moral lessons about the environment. Caspar Henderson's The Book of Barely Imagined Beings (Granta 2012, University of Chicago Press 2013), subtitled "A 21st Century Bestiary", explores how humans imagine animals in a time of rapid environmental change. In July 2014, Jonathan Scott wrote The Blessed Book of Beasts, Eastern Christian Publications, featuring 101 animals from the various translations of the Bible, in keeping with the tradition of the bestiary found in the writings of the Saints, including Saint John Chrysostom. The modern discipline of cryptozoology, the study of animals whose existence is unsubstantiated, has been linked to medieval bestiaries, since both catalogue creatures of uncertain reality along with the meanings and significance attached to them.
The lists of monsters to be found in computer games (for example NetHack, Monster Hunter and Pokémon) are often termed bestiaries.
See also
Allegory in the Middle Ages
List of medieval bestiaries
Marine counterparts of land creatures
References
“Animal Symbolism (Illustrated).” OpenSIUC, https://opensiuc.lib.siu.edu/cgi/viewcontent.cgi?article=2505&context=ocj. Accessed 5 March 2022.
Morrison, Elizabeth, and Larisa Grollemond. “An Introduction to the Bestiary, Book of Beasts in the Medieval World (article).” Khan Academy, https://www.khanacademy.org/humanities/medieval-world/beginners-guide-to-medieval-europe/manuscripts/a/an-introduction-to-the-bestiary-book-of-beasts-in-the-medieval-world. Accessed 2 March 2022.
Morrison, Elizabeth. “Beastly tales from the medieval bestiary.” The British Library, https://www.bl.uk/medieval-english-french-manuscripts/articles/beastly-tales-from-the-medieval-bestiary. Accessed 2 March 2022.
“The Renaissance | Boundless World History.” Lumen Learning, LumenCandela, https://courses.lumenlearning.com/boundless-worldhistory/chapter/the-renaissance/. Accessed 5 March 2022.
"The Medieval Bestiary", by James Grout, part of the Encyclopædia Romana.
McCulloch, Florence. (1962) Medieval Latin and French Bestiaries.
Clark, Willene B. and Meradith T. McMunn. eds. (1989) Beasts and Birds of the Middle Ages. The Bestiary and its Legacy.
Payne, Ann. (1990) Mediaeval Beasts.
George, Wilma and Brunsdon Yapp. (1991) The Naming of the Beasts: Natural History in the Medieval Bestiary.
Benton, Janetta Rebold. (1992) The Medieval Menagerie: Animals in the Art of the Middle Ages.
Lindberg, David C. (1992) The Beginnings of Western Science: The European Tradition in Philosophical, Religious and Institutional Context, 600 B.C. to A.D. 1450.
Flores, Nona C. (1993) "The Mirror of Nature Distorted: The Medieval Artist's Dilemma in Depicting Animals".
Hassig, Debra. (1995) Medieval Bestiaries: Text, Image, Ideology.
Gravestock, Pamela. (1999) "Did Imaginary Animals Exist?"
Hassig, Debra, ed. (1999) The Mark of the Beast: The Medieval Bestiary in Art, Life, and Literature.
Notes
External links
The Bestiary: The Book of Beasts, T.H. White's translation of a medieval bestiary in the Cambridge University library; digitized by the University of Wisconsin–Madison libraries.
The Medieval Bestiary online, edited by David Badke.
The Bestiaire of Philippe de Thaon at the National Library of Denmark.
The Bestiary of Anne Walshe at the National Library of Denmark.
The Aberdeen Bestiary at the University of Aberdeen.
Exhibition (in English, but French version is fuller) at the Bibliothèque nationale de France
Christian Symbology Animals and their meanings in Christian texts.
Bestiary - Monsters & Fabulous Creatures of Greek Myth & Legend with pictures
Types of illuminated manuscript
Medieval European legendary creatures
Medieval literature
Zoology
|
https://en.wikipedia.org/wiki/Benzodiazepine
|
Benzodiazepines (BZD, BDZ, BZs), colloquially called "benzos", are a class of depressant drugs whose core chemical structure is the fusion of a benzene ring and a diazepine ring. They are prescribed to treat conditions such as anxiety disorders, insomnia, and seizures. The first benzodiazepine, chlordiazepoxide (Librium), was discovered accidentally by Leo Sternbach in 1955 and was made available in 1960 by Hoffmann–La Roche, who soon followed with diazepam (Valium) in 1963. By 1977, benzodiazepines were the most prescribed medications globally; the introduction of selective serotonin reuptake inhibitors (SSRIs), among other factors, decreased rates of prescription, but they remain frequently used worldwide.
Benzodiazepines are depressants that enhance the effect of the neurotransmitter gamma-aminobutyric acid (GABA) at the GABAA receptor, resulting in sedative, hypnotic (sleep-inducing), anxiolytic (anti-anxiety), anticonvulsant, and muscle relaxant properties. High doses of many shorter-acting benzodiazepines may also cause anterograde amnesia and dissociation. These properties make benzodiazepines useful in treating anxiety, panic disorder, insomnia, agitation, seizures, muscle spasms, alcohol withdrawal and as a premedication for medical or dental procedures. Benzodiazepines are categorized as short, intermediate, or long-acting. Short- and intermediate-acting benzodiazepines are preferred for the treatment of insomnia; longer-acting benzodiazepines are recommended for the treatment of anxiety.
Benzodiazepines are generally viewed as safe and effective for short-term use—about two to four weeks—although cognitive impairment and paradoxical effects such as aggression or behavioral disinhibition can occur. A minority of people have paradoxical reactions after taking benzodiazepines such as worsened agitation or panic. Benzodiazepines are associated with an increased risk of suicide due to aggression, impulsivity, and negative withdrawal effects. Long-term use is controversial because of concerns about decreasing effectiveness, physical dependence, benzodiazepine withdrawal syndrome, and an increased risk of dementia and cancer. The elderly are at an increased risk of both short- and long-term adverse effects, and as a result, all benzodiazepines are listed in the Beers List of inappropriate medications for older adults. There is controversy concerning the safety of benzodiazepines in pregnancy. While they are not major teratogens, uncertainty remains as to whether they cause cleft palate in a small number of babies and whether neurobehavioural effects occur as a result of prenatal exposure; they are known to cause withdrawal symptoms in the newborn.
Taken in overdose, benzodiazepines can cause dangerous deep unconsciousness, but they are less toxic than their predecessors, the barbiturates, and death rarely results when a benzodiazepine is the only drug taken. Combined with other central nervous system (CNS) depressants such as alcohol and opioids, the potential for toxicity and fatal overdose increases significantly. Benzodiazepines are commonly used recreationally and also often taken in combination with other addictive substances, and are controlled in most countries.
Medical uses
Benzodiazepines possess psycholeptic, sedative, hypnotic, anxiolytic, anticonvulsant, muscle relaxant, and amnesic actions, which are useful in a variety of indications such as alcohol dependence, seizures, anxiety disorders, panic, agitation, and insomnia. Most are administered orally; however, they can also be given intravenously, intramuscularly, or rectally. In general, benzodiazepines are well tolerated and are safe and effective drugs in the short term for a wide range of conditions. Tolerance can develop to their effects and there is also a risk of dependence, and upon discontinuation a withdrawal syndrome may occur. These factors, combined with other possible secondary effects after prolonged use such as psychomotor, cognitive, or memory impairments, limit their long-term applicability. The effects of long-term use or misuse include the tendency to cause or worsen cognitive deficits, depression, and anxiety. The College of Physicians and Surgeons of British Columbia recommends discontinuing the usage of benzodiazepines in those on opioids and those who have used them long term. Benzodiazepines can have serious adverse health outcomes, and these findings support clinical and regulatory efforts to reduce usage, especially in combination with non-benzodiazepine receptor agonists.
Panic disorder
Because of their effectiveness, tolerability, and rapid onset of anxiolytic action, benzodiazepines are frequently used for the treatment of anxiety associated with panic disorder. However, there is disagreement among expert bodies regarding the long-term use of benzodiazepines for panic disorder. The views range from those holding benzodiazepines are not effective long-term and should be reserved for treatment-resistant cases to those holding they are as effective in the long term as selective serotonin reuptake inhibitors (SSRIs).
The American Psychiatric Association (APA) guidelines note that, in general, benzodiazepines are well tolerated, and their use for the initial treatment for panic disorder is strongly supported by numerous controlled trials. APA states that there is insufficient evidence to recommend any of the established panic disorder treatments over another. The choice of treatment between benzodiazepines, SSRIs, serotonin–norepinephrine reuptake inhibitors (SNRIs), tricyclic antidepressants, and psychotherapy should be based on the patient's history, preference, and other individual characteristics. Selective serotonin reuptake inhibitors are likely to be the best choice of pharmacotherapy for many patients with panic disorder, but benzodiazepines are also often used, and some studies suggest that these medications are still used with greater frequency than the SSRIs. One advantage of benzodiazepines is that they alleviate the anxiety symptoms much faster than antidepressants, and therefore may be preferred in patients for whom rapid symptom control is critical. However, this advantage is offset by the possibility of developing benzodiazepine dependence. APA does not recommend benzodiazepines for persons with depressive symptoms or a recent history of substance use disorder. The APA guidelines state that, in general, pharmacotherapy of panic disorder should be continued for at least a year, and that clinical experience supports continuing benzodiazepine treatment to prevent recurrence. Although major concerns about benzodiazepine tolerance and withdrawal have been raised, there is no evidence for significant dose escalation in patients using benzodiazepines long-term. For many such patients, stable doses of benzodiazepines retain their efficacy over several years.
The UK-based National Institute for Health and Clinical Excellence (NICE), which carried out a systematic review using different methodology, came to a different conclusion. It questioned the accuracy of studies that were not placebo-controlled and, based on the findings of placebo-controlled studies, does not recommend use of benzodiazepines beyond two to four weeks, as tolerance and physical dependence develop rapidly, with withdrawal symptoms including rebound anxiety occurring after six weeks or more of use. Nevertheless, benzodiazepines are still prescribed for long-term treatment of anxiety disorders, although specific antidepressants and psychological therapies are recommended as the first-line treatment options, with the anticonvulsant drug pregabalin indicated as a second- or third-line treatment and suitable for long-term use. NICE stated that long-term use of benzodiazepines for panic disorder with or without agoraphobia is an unlicensed indication, does not have long-term efficacy, and is, therefore, not recommended by clinical guidelines. Psychological therapies such as cognitive behavioural therapy are recommended as a first-line therapy for panic disorder; benzodiazepine use has been found to interfere with therapeutic gains from these therapies.
Benzodiazepines are usually administered orally; however, very occasionally lorazepam or diazepam may be given intravenously for the treatment of panic attacks.
Generalized anxiety disorder
Benzodiazepines have robust efficacy in the short-term management of generalized anxiety disorder (GAD), but were not shown effective in producing long-term improvement overall. According to National Institute for Health and Clinical Excellence (NICE), benzodiazepines can be used in the immediate management of GAD, if necessary. However, they should not usually be given for longer than 2–4 weeks. The only medications NICE recommends for the longer term management of GAD are antidepressants.
Likewise, Canadian Psychiatric Association (CPA) recommends benzodiazepines alprazolam, bromazepam, lorazepam, and diazepam only as a second-line choice, if the treatment with two different antidepressants was unsuccessful. Although they are second-line agents, benzodiazepines can be used for a limited time to relieve severe anxiety and agitation. CPA guidelines note that after 4–6 weeks the effect of benzodiazepines may decrease to the level of placebo, and that benzodiazepines are less effective than antidepressants in alleviating ruminative worry, the core symptom of GAD. However, in some cases, a prolonged treatment with benzodiazepines as the add-on to an antidepressant may be justified.
A 2015 review found a larger effect with medications than talk therapy. Medications with benefit include serotonin-noradrenaline reuptake inhibitors, benzodiazepines, and selective serotonin reuptake inhibitors.
Anxiety
Benzodiazepines are sometimes used in the treatment of acute anxiety, as they bring about rapid and marked relief of symptoms in most individuals; however, they are not recommended beyond 2–4 weeks of use due to risks of tolerance and dependence and a lack of long-term effectiveness. As for insomnia, they may also be used on an irregular/"as-needed" basis, such as in cases where said anxiety is at its worst. Compared to other pharmacological treatments, benzodiazepines are twice as likely to lead to a relapse of the underlying condition upon discontinuation. Psychological therapies and other pharmacological therapies are recommended for the long-term treatment of generalized anxiety disorder. Antidepressants have higher remission rates and are, in general, safe and effective in the short and long term.
Insomnia
Benzodiazepines can be useful for short-term treatment of insomnia. Their use beyond 2 to 4 weeks is not recommended due to the risk of dependence. The Committee on Safety of Medicines report recommended that, where long-term use of benzodiazepines for insomnia is indicated, treatment should be intermittent wherever possible. It is preferred that benzodiazepines be taken intermittently and at the lowest effective dose. They improve sleep-related problems by shortening the time spent in bed before falling asleep, prolonging the sleep time, and, in general, reducing wakefulness. However, they worsen sleep quality by increasing light sleep and decreasing deep sleep. Other drawbacks of hypnotics, including benzodiazepines, are possible tolerance to their effects, reduced slow-wave sleep, and a withdrawal period typified by rebound insomnia and a prolonged period of anxiety and agitation.
The list of benzodiazepines approved for the treatment of insomnia is fairly similar among most countries, but which benzodiazepines are officially designated as first-line hypnotics prescribed for the treatment of insomnia varies between countries. Longer-acting benzodiazepines such as nitrazepam and diazepam have residual effects that may persist into the next day and are, in general, not recommended.
Since the release of nonbenzodiazepines, also known as Z-drugs, in 1992 in response to safety concerns, individuals with insomnia and other sleep disorders have increasingly been prescribed nonbenzodiazepines (rising from 2.3% in 1993 to 13.7% of Americans in 2010) and less often prescribed benzodiazepines (falling from 23.5% in 1993 to 10.8% in 2010). It is not clear whether the new nonbenzodiazepine hypnotics (Z-drugs) are better than the short-acting benzodiazepines. The efficacy of these two groups of medications is similar. According to the US Agency for Healthcare Research and Quality, indirect comparison indicates that side-effects from benzodiazepines may be about twice as frequent as from nonbenzodiazepines. Some experts suggest using nonbenzodiazepines preferentially as a first-line long-term treatment of insomnia. However, the UK National Institute for Health and Clinical Excellence did not find any convincing evidence in favor of Z-drugs. The NICE review pointed out that short-acting Z-drugs were inappropriately compared in clinical trials with long-acting benzodiazepines. There have been no trials comparing short-acting Z-drugs with appropriate doses of short-acting benzodiazepines. Based on this, NICE recommended choosing the hypnotic based on cost and the patient's preference.
Older adults should not use benzodiazepines to treat insomnia unless other treatments have failed. When benzodiazepines are used, patients, their caretakers, and their physician should discuss the increased risk of harms, including evidence that shows twice the incidence of traffic collisions among driving patients, and falls and hip fracture for older patients.
Seizures
Prolonged convulsive epileptic seizures are a medical emergency that can usually be dealt with effectively by administering fast-acting benzodiazepines, which are potent anticonvulsants. In a hospital environment, intravenous clonazepam, lorazepam, and diazepam are first-line choices. In the community, intravenous administration is not practical and so rectal diazepam or buccal midazolam are used, with a preference for midazolam as its administration is easier and more socially acceptable.
When benzodiazepines were first introduced, they were enthusiastically adopted for treating all forms of epilepsy. However, drowsiness and tolerance become problems with continued use and none are now considered first-line choices for long-term epilepsy therapy. Clobazam is widely used by specialist epilepsy clinics worldwide and clonazepam is popular in the Netherlands, Belgium and France. Clobazam was approved for use in the United States in 2011. In the UK, both clobazam and clonazepam are second-line choices for treating many forms of epilepsy. Clobazam also has a useful role for very short-term seizure prophylaxis and in catamenial epilepsy. Discontinuation after long-term use in epilepsy requires additional caution because of the risks of rebound seizures. Therefore, the dose is slowly tapered over a period of up to six months or longer.
Alcohol withdrawal
Chlordiazepoxide is the most commonly used benzodiazepine for alcohol detoxification, but diazepam may be used as an alternative. Both are used in the detoxification of individuals who are motivated to stop drinking, and are prescribed for a short period of time to reduce the risks of developing tolerance and dependence to the benzodiazepine medication itself. The benzodiazepines with a longer half-life make detoxification more tolerable, and dangerous (and potentially lethal) alcohol withdrawal effects are less likely to occur. On the other hand, short-acting benzodiazepines may lead to breakthrough seizures, and are, therefore, not recommended for detoxification in an outpatient setting. Oxazepam and lorazepam are often used in patients at risk of drug accumulation, in particular, the elderly and those with cirrhosis, because they are metabolized differently from other benzodiazepines, through conjugation.
Benzodiazepines are the preferred choice in the management of alcohol withdrawal syndrome, in particular, for the prevention and treatment of the dangerous complication of seizures and in subduing severe delirium. Lorazepam is the only benzodiazepine with predictable intramuscular absorption and it is the most effective in preventing and controlling acute seizures.
Other indications
Benzodiazepines are often prescribed for a wide range of conditions:
They can sedate patients receiving mechanical ventilation or those in extreme distress. Caution is exercised in this situation due to the risk of respiratory depression, and it is recommended that benzodiazepine overdose treatment facilities should be available. They have also been found to increase the likelihood of later PTSD after people have been removed from ventilators.
Benzodiazepines are indicated in the management of breathlessness (shortness of breath) in advanced diseases, in particular where other treatments have failed to adequately control symptoms.
Benzodiazepines are effective as medication given a couple of hours before surgery to relieve anxiety. They also produce amnesia, which can be useful, as patients may not remember unpleasantness from the procedure. They are also used in patients with dental phobia as well as some ophthalmic procedures like refractive surgery; although such use is controversial and only recommended for those who are very anxious. Midazolam is the most commonly prescribed for this use because of its strong sedative actions and fast recovery time, as well as its water solubility, which reduces pain upon injection. Diazepam and lorazepam are sometimes used. Lorazepam has particularly marked amnesic properties that may make it more effective when amnesia is the desired effect.
Benzodiazepines are well known for their strong muscle-relaxing properties and can be useful in the treatment of muscle spasms, although tolerance often develops to their muscle relaxant effects. Baclofen or tizanidine are sometimes used as an alternative to benzodiazepines. Tizanidine has been found to have superior tolerability compared to diazepam and baclofen.
Benzodiazepines are also used to treat the acute panic caused by hallucinogen intoxication. Benzodiazepines are also used to calm the acutely agitated individual and can, if required, be given via an intramuscular injection. They can sometimes be effective in the short-term treatment of psychiatric emergencies such as acute psychosis as in schizophrenia or mania, bringing about rapid tranquillization and sedation until the effects of lithium or neuroleptics (antipsychotics) take effect. Lorazepam is most commonly used but clonazepam is sometimes prescribed for acute psychosis or mania; their long-term use is not recommended due to risks of dependence. Further research investigating the use of benzodiazepines alone and in combination with antipsychotic medications for treating acute psychosis is warranted.
Clonazepam, a benzodiazepine, is used to treat many forms of parasomnia. Rapid eye movement behavior disorder responds well to low doses of clonazepam. Restless legs syndrome can be treated with clonazepam as a third-line treatment option, as its use for this indication is still investigational.
Benzodiazepines are sometimes used for obsessive–compulsive disorder (OCD), although they are generally believed ineffective for this indication. Effectiveness was, however, found in one small study. Benzodiazepines can be considered a treatment option in treatment-resistant cases.
Antipsychotics are generally a first-line treatment for delirium; however, when delirium is caused by alcohol or sedative hypnotic withdrawal, benzodiazepines are a first-line treatment.
There is some evidence that low doses of benzodiazepines reduce adverse effects of electroconvulsive therapy.
Contraindications
Because of their muscle relaxant action, benzodiazepines may cause respiratory depression in susceptible individuals. For that reason, they are contraindicated in people with myasthenia gravis, sleep apnea, bronchitis, and COPD. Caution is required when benzodiazepines are used in people with personality disorders or intellectual disability because of frequent paradoxical reactions. In major depression, they may precipitate suicidal tendencies and are sometimes used for suicidal overdoses. Individuals with a history of excessive alcohol use or non-medical use of opioids or barbiturates should avoid benzodiazepines, as there is a risk of life-threatening interactions with these drugs.
Pregnancy
In the United States, the Food and Drug Administration has categorized benzodiazepines into either category D or X, meaning that potential for harm to the unborn has been demonstrated.
Exposure to benzodiazepines during pregnancy has been associated with a slightly increased (from 0.06 to 0.07%) risk of cleft palate in newborns, a controversial conclusion as some studies find no association between benzodiazepines and cleft palate. Their use by expectant mothers shortly before the delivery may result in a floppy infant syndrome. Newborns with this condition tend to have hypotonia, hypothermia, lethargy, and breathing and feeding difficulties. Cases of neonatal withdrawal syndrome have been described in infants chronically exposed to benzodiazepines in utero. This syndrome may be hard to recognize, as it starts several days after delivery, for example, as late as 21 days for chlordiazepoxide. The symptoms include tremors, hypertonia, hyperreflexia, hyperactivity, and vomiting and may last for up to three to six months. Tapering down the dose during pregnancy may lessen its severity. If used in pregnancy, those benzodiazepines with a better and longer safety record, such as diazepam or chlordiazepoxide, are recommended over potentially more harmful benzodiazepines, such as temazepam or triazolam. Using the lowest effective dose for the shortest period of time minimizes the risks to the unborn child.
Elderly
The benefits of benzodiazepines are least and the risks are greatest in the elderly. They are listed as a potentially inappropriate medication for older adults by the American Geriatrics Society. The elderly are at an increased risk of dependence and are more sensitive to the adverse effects such as memory problems, daytime sedation, impaired motor coordination, and increased risk of motor vehicle accidents and falls, and an increased risk of hip fractures. The long-term effects of benzodiazepines and benzodiazepine dependence in the elderly can resemble dementia, depression, or anxiety syndromes, and progressively worsen over time. Adverse effects on cognition can be mistaken for the effects of old age. The benefits of withdrawal include improved cognition, alertness, mobility, reduced risk of incontinence, and a reduced risk of falls and fractures. The success of gradual tapering of benzodiazepines is as great in the elderly as in younger people. Benzodiazepines should be prescribed to the elderly only with caution and only for a short period at low doses. Short- to intermediate-acting benzodiazepines, such as oxazepam and temazepam, are preferred in the elderly. The high-potency benzodiazepines alprazolam and triazolam and long-acting benzodiazepines are not recommended in the elderly due to increased adverse effects. Nonbenzodiazepines such as zaleplon and zolpidem and low doses of sedating antidepressants are sometimes used as alternatives to benzodiazepines.
Long-term use of benzodiazepines is associated with increased risk of cognitive impairment and dementia, and reduction in prescribing levels is likely to reduce dementia risk. The association of a history of benzodiazepine use and cognitive decline is unclear, with some studies reporting a lower risk of cognitive decline in former users, some finding no association and some indicating an increased risk of cognitive decline.
Benzodiazepines are sometimes prescribed to treat behavioral symptoms of dementia. However, like antidepressants, they have little evidence of effectiveness, although antipsychotics have shown some benefit. Cognitive impairing effects of benzodiazepines that occur frequently in the elderly can also worsen dementia.
Adverse effects
The most common side-effects of benzodiazepines are related to their sedating and muscle-relaxing action. They include drowsiness, dizziness, and decreased alertness and concentration. Lack of coordination may result in falls and injuries, particularly in the elderly. Another result is impairment of driving skills and an increased likelihood of road traffic accidents. Decreased libido and erection problems are common side effects. Depression and disinhibition may emerge. Hypotension and suppressed breathing (hypoventilation) may be encountered with intravenous use. Less common side effects include nausea and changes in appetite, blurred vision, confusion, euphoria, depersonalization and nightmares. Cases of liver toxicity have been described but are very rare.
The long-term effects of benzodiazepine use can include cognitive impairment as well as affective and behavioural problems. Feelings of turmoil, difficulty in thinking constructively, loss of sex-drive, agoraphobia and social phobia, increasing anxiety and depression, loss of interest in leisure pursuits and interests, and an inability to experience or express feelings can also occur. Not everyone, however, experiences problems with long-term use. Additionally, an altered perception of self, environment and relationships may occur. A study published in 2020 found that long-term use of prescription benzodiazepines is associated with an increase in all-cause mortality among those age 65 or younger, but not those older than 65. The study also found that all-cause mortality was increased further in cases in which benzodiazepines are co-prescribed with opioids, relative to cases in which benzodiazepines are prescribed without opioids, but again only in those age 65 or younger.
Compared to other sedative-hypnotics, hospital visits involving benzodiazepines had 66% greater odds of a serious adverse health outcome, defined as hospitalization, patient transfer, or death; visits involving a combination of benzodiazepines and non-benzodiazepine receptor agonists had almost four times the odds of a serious health outcome.
In September 2020, the U.S. Food and Drug Administration (FDA) required the boxed warning be updated for all benzodiazepine medicines to describe the risks of abuse, misuse, addiction, physical dependence, and withdrawal reactions consistently across all the medicines in the class.
Cognitive effects
The short-term use of benzodiazepines adversely affects multiple areas of cognition, the most notable one being that it interferes with the formation and consolidation of memories of new material and may induce complete anterograde amnesia. However, researchers hold contrary opinions regarding the effects of long-term administration. One view is that many of the short-term effects continue into the long-term and may even worsen, and are not resolved after stopping benzodiazepine usage. Another view maintains that cognitive deficits in chronic benzodiazepine users occur only for a short period after the dose, or that the anxiety disorder is the cause of these deficits.
While definitive studies are lacking, the former view received support from a 2004 meta-analysis of 13 small studies. This meta-analysis found that long-term use of benzodiazepines was associated with moderate to large adverse effects on all areas of cognition, with visuospatial memory being the most commonly detected impairment. Some of the other impairments reported were decreased IQ, visuomotor coordination, information processing, verbal learning and concentration. The authors of the meta-analysis and a later reviewer noted that the applicability of this meta-analysis is limited because the subjects were drawn mostly from withdrawal clinics; coexisting drug use, alcohol use, and psychiatric disorders were not defined; and several of the included studies conducted the cognitive measurements during the withdrawal period.
Paradoxical effects
Paradoxical reactions, such as increased seizures in epileptics, aggression, violence, impulsivity, irritability and suicidal behavior sometimes occur. These reactions have been explained as consequences of disinhibition and the subsequent loss of control over socially unacceptable behavior. Paradoxical reactions are rare in the general population, with an incidence rate below 1% and similar to placebo. However, they occur with greater frequency in recreational abusers, individuals with borderline personality disorder, children, and patients on high-dosage regimes. In these groups, impulse control problems are perhaps the most important risk factor for disinhibition; learning disabilities and neurological disorders are also significant risks. Most reports of disinhibition involve high doses of high-potency benzodiazepines. Paradoxical effects may also appear after chronic use of benzodiazepines.
Long-term worsening of psychiatric symptoms
While benzodiazepines may have short-term benefits for anxiety, sleep and agitation in some patients, long-term (i.e., greater than 2–4 weeks) use can result in a worsening of the very symptoms the medications are meant to treat. Potential explanations include exacerbating cognitive problems that are already common in anxiety disorders, causing or worsening depression and suicidality, disrupting sleep architecture by inhibiting deep-stage sleep, withdrawal or rebound symptoms between doses mimicking or exacerbating underlying anxiety or sleep disorders, inhibiting the benefits of psychotherapy by inhibiting memory consolidation and reducing fear extinction, and reducing coping with trauma or stress and increasing vulnerability to future stress. The latter two explanations may be why benzodiazepines are ineffective and/or potentially harmful in PTSD and phobias. Anxiety, insomnia and irritability may be temporarily exacerbated during withdrawal, but psychiatric symptoms after discontinuation are usually milder than they were while taking benzodiazepines. Functioning significantly improves within 1 year of discontinuation.
Physical dependence, withdrawal and post-withdrawal syndromes
Tolerance
The main problem of the chronic use of benzodiazepines is the development of tolerance and dependence. Tolerance manifests itself as a diminished pharmacological effect and develops relatively quickly to the sedative, hypnotic, anticonvulsant, and muscle relaxant actions of benzodiazepines. Tolerance to the anti-anxiety effects develops more slowly, with little evidence of continued effectiveness beyond four to six months of continued use. Controversy exists as to tolerance to the anxiolytic effects: some evidence indicates that benzodiazepines retain efficacy, while opposing evidence from a systematic review of the literature indicates that tolerance frequently occurs, and some evidence suggests that anxiety may worsen with long-term use. The question of tolerance to the amnesic effects is likewise unclear: tolerance is generally thought not to occur, although some evidence suggests that partial tolerance does develop and that "memory impairment is limited to a narrow window within 90 minutes after each dose".
A major disadvantage of benzodiazepines is that tolerance to therapeutic effects develops relatively quickly while many adverse effects persist. Tolerance develops to hypnotic and myorelaxant effects within days to weeks, and to anticonvulsant and anxiolytic effects within weeks to months. Therefore, benzodiazepines are unlikely to be effective long-term treatments for sleep and anxiety. While the therapeutic effects of benzodiazepines disappear with tolerance, depression and impulsivity with high suicidal risk commonly persist. Several studies have confirmed that long-term benzodiazepines are not significantly different from placebo for sleep or anxiety. This may explain why patients commonly increase doses over time and many eventually take more than one type of benzodiazepine after the first loses effectiveness. Additionally, because tolerance to benzodiazepine sedating effects develops more quickly than does tolerance to brainstem depressant effects, those taking more benzodiazepines to achieve desired effects may experience sudden respiratory depression, hypotension or death. Most patients with anxiety disorders and PTSD have symptoms that persist for at least several months, making tolerance to therapeutic effects a distinct problem for them and creating a need for more effective long-term treatment (e.g., psychotherapy, serotonergic antidepressants).
Withdrawal symptoms and management
Discontinuation of benzodiazepines or abrupt reduction of the dose, even after a relatively short course of treatment (two to four weeks), may result in two groups of symptoms—rebound and withdrawal. Rebound symptoms are the return of the symptoms for which the patient was treated but worse than before. Withdrawal symptoms are the new symptoms that occur when the benzodiazepine is stopped. They are the main sign of physical dependence.
The most frequent symptoms of withdrawal from benzodiazepines are insomnia, gastric problems, tremors, agitation, fearfulness, and muscle spasms. The less frequent effects are irritability, sweating, depersonalization, derealization, hypersensitivity to stimuli, depression, suicidal behavior, psychosis, seizures, and delirium tremens. Severe symptoms usually occur as a result of abrupt or over-rapid withdrawal. Abrupt withdrawal can be dangerous and lead to excitotoxicity, causing damage and even death to nerve cells as a result of excessive levels of the excitatory neurotransmitter glutamate. Increased glutamatergic activity is thought to be part of a compensatory mechanism to chronic GABAergic inhibition from benzodiazepines. Therefore, a gradual reduction regimen is recommended.
Symptoms may also occur during a gradual dosage reduction, but are typically less severe and may persist as part of a protracted withdrawal syndrome for months after cessation of benzodiazepines. Approximately 10% of patients experience a notable protracted withdrawal syndrome, which can persist for many months or in some cases a year or longer. Protracted symptoms tend to resemble those seen during the first couple of months of withdrawal but usually are of a sub-acute level of severity. Such symptoms do gradually lessen over time, eventually disappearing altogether.
Benzodiazepines have a reputation with patients and doctors for causing a severe and traumatic withdrawal; however, this is in large part due to the withdrawal process being poorly managed. Over-rapid withdrawal from benzodiazepines increases the severity of the withdrawal syndrome and increases the failure rate. A slow and gradual withdrawal customised to the individual and, if indicated, psychological support is the most effective way of managing the withdrawal. Opinion as to the time needed to complete withdrawal ranges from four weeks to several years. A goal of less than six months has been suggested, but due to factors such as dosage and type of benzodiazepine, reasons for prescription, lifestyle, personality, environmental stresses, and amount of available support, a year or more may be needed to withdraw.
Withdrawal is best managed by transferring the physically dependent patient to an equivalent dose of diazepam because it has the longest half-life of all of the benzodiazepines, is metabolised into long-acting active metabolites and is available in low-potency tablets, which can be quartered for smaller doses. A further benefit is that it is available in liquid form, which allows for even smaller reductions. Chlordiazepoxide, which also has a long half-life and long-acting active metabolites, can be used as an alternative.
Nonbenzodiazepines are contraindicated during benzodiazepine withdrawal as they are cross-tolerant with benzodiazepines and can induce dependence. Alcohol is also cross-tolerant with benzodiazepines and more toxic, so caution is needed to avoid replacing one dependence with another. During withdrawal, fluoroquinolone-based antibiotics are best avoided if possible; they displace benzodiazepines from their binding site and reduce GABA function and, thus, may aggravate withdrawal symptoms. Antipsychotics are not recommended for benzodiazepine withdrawal (or other CNS depressant withdrawal states), especially clozapine, olanzapine, or low-potency phenothiazines such as chlorpromazine, as they lower the seizure threshold and can worsen withdrawal effects; if they are used, extreme caution is required.
Withdrawal from long-term benzodiazepine use is beneficial for most individuals. Withdrawal of benzodiazepines from long-term users generally leads to improved physical and mental health, particularly in the elderly; although some long-term users report continued benefit from taking benzodiazepines, this may be the result of suppression of withdrawal effects.
Controversial associations
Beyond the well-established link between benzodiazepines and psychomotor impairment resulting in motor vehicle accidents and falls leading to fracture, research in the 2000s and 2010s has raised associations between benzodiazepines (and Z-drugs) and other, as yet unproven, adverse effects including dementia, cancer, infections, pancreatitis and respiratory disease exacerbations.
Dementia
A number of studies have drawn an association between long-term benzodiazepine use and neurodegenerative disease, particularly Alzheimer's disease. It has been determined that long-term use of benzodiazepines is associated with increased dementia risk, even after controlling for protopathic bias.
Infections
Some observational studies have detected significant associations between benzodiazepines and respiratory infections such as pneumonia, whereas others have not. A large meta-analysis of pre-marketing randomized controlled trials of the pharmacologically related Z-drugs suggests a small increase in infection risk as well. An immunodeficiency effect from the action of benzodiazepines on GABA-A receptors has been postulated from animal studies.
Cancer
A meta-analysis of observational studies has determined an association between benzodiazepine use and cancer, though the risk across different agents and different cancers varied significantly. In terms of experimental basic science evidence, an analysis of carcinogenicity and genotoxicity data for various benzodiazepines has suggested a small possibility of carcinogenesis for a small number of benzodiazepines.
Pancreatitis
The evidence suggesting a link between benzodiazepines (and Z-Drugs) and pancreatic inflammation is very sparse and limited to a few observational studies from Taiwan. A criticism of confounding can be applied to these findings as with the other controversial associations above. Further well-designed research from other populations as well as a biologically plausible mechanism is required to confirm this association.
Overdose
Although benzodiazepines are much safer in overdose than their predecessors, the barbiturates, they can still cause problems in overdose. Taken alone, they rarely cause severe complications in overdose; statistics in England showed that benzodiazepines were responsible for 3.8% of all deaths by poisoning from a single drug. However, combining these drugs with alcohol, opiates or tricyclic antidepressants markedly raises the toxicity. The elderly are more sensitive to the side effects of benzodiazepines, and poisoning may even occur from their long-term use. The various benzodiazepines differ in their toxicity; temazepam appears most toxic in overdose and when used with other drugs. The symptoms of a benzodiazepine overdose may include drowsiness, slurred speech, nystagmus, hypotension, ataxia, coma, respiratory depression, and cardiorespiratory arrest.
A reversal agent for benzodiazepines exists, flumazenil (Anexate). Its use as an antidote is not routinely recommended because of the high risk of resedation and seizures. In a double-blind, placebo-controlled trial of 326 people, 4 people had serious adverse events and 61% became resedated following the use of flumazenil. Numerous contraindications to its use exist. It is contraindicated in people with a history of long-term use of benzodiazepines, those having ingested a substance that lowers the seizure threshold or may cause an arrhythmia, and in those with abnormal vital signs. One study found that only 10% of the people presenting with a benzodiazepine overdose are suitable candidates for treatment with flumazenil.
Interactions
Individual benzodiazepines may have different interactions with certain drugs. Depending on their metabolism pathway, benzodiazepines can be divided roughly into two groups. The largest group consists of those that are metabolized by cytochrome P450 (CYP450) enzymes and possess significant potential for interactions with other drugs. The other group comprises those that are metabolized through glucuronidation, such as lorazepam, oxazepam, and temazepam, and, in general, have few drug interactions.
Many drugs, including oral contraceptives, some antibiotics, antidepressants, and antifungal agents, inhibit cytochrome enzymes in the liver. They reduce the rate of elimination of the benzodiazepines that are metabolized by CYP450, leading to possibly excessive drug accumulation and increased side-effects. In contrast, drugs that induce cytochrome P450 enzymes, such as St John's wort, the antibiotic rifampicin, and the anticonvulsants carbamazepine and phenytoin, accelerate elimination of many benzodiazepines and decrease their action. Taking benzodiazepines with alcohol, opioids and other central nervous system depressants potentiates their action. This often results in increased sedation, impaired motor coordination, suppressed breathing, and other adverse effects that have potential to be lethal. Antacids can slow down absorption of some benzodiazepines; however, this effect is marginal and inconsistent.
Pharmacology
Pharmacodynamics
Benzodiazepines work by increasing the effectiveness of the endogenous chemical, GABA, to decrease the excitability of neurons. This reduces the communication between neurons and, therefore, has a calming effect on many of the functions of the brain.
GABA controls the excitability of neurons by binding to the GABAA receptor. The GABAA receptor is a protein complex located in the synapses between neurons. All GABAA receptors contain an ion channel that conducts chloride ions across neuronal cell membranes and two binding sites for the neurotransmitter gamma-aminobutyric acid (GABA), while a subset of GABAA receptor complexes also contain a single binding site for benzodiazepines. Binding of benzodiazepines to this receptor complex does not alter binding of GABA. Unlike other positive allosteric modulators that increase ligand binding, benzodiazepine binding acts as a positive allosteric modulator by increasing the total conduction of chloride ions across the neuronal cell membrane when GABA is already bound to its receptor. This increased chloride ion influx hyperpolarizes the neuron's membrane potential. As a result, the difference between resting potential and threshold potential is increased and firing is less likely.
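As a worked illustration with hypothetical but physiologically typical values (not figures from this article): if the enhanced chloride influx shifts a neuron's resting potential from about −70 mV to −75 mV while the firing threshold stays near −55 mV, the depolarization needed to reach threshold grows from (−55) − (−70) = 15 mV to (−55) − (−75) = 20 mV, so the same excitatory input is less likely to make the cell fire.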
Different GABAA receptor subtypes have varying distributions within different regions of the brain and, therefore, control distinct neuronal circuits. Hence, activation of different GABAA receptor subtypes by benzodiazepines may result in distinct pharmacological actions. In terms of the mechanism of action of benzodiazepines, their similarities are too great to separate them into individual categories such as anxiolytic or hypnotic. For example, a hypnotic administered in low doses produces anxiety-relieving effects, whereas a benzodiazepine marketed as an anti-anxiety drug at higher doses induces sleep.
The subset of GABAA receptors that also bind benzodiazepines are referred to as benzodiazepine receptors (BzR). The GABAA receptor is a heteromer composed of five subunits, the most common ones being two αs, two βs, and one γ (α2β2γ1). For each subunit, many subtypes exist (α1–6, β1–3, and γ1–3). GABAA receptors that are made up of different combinations of subunit subtypes have different properties, different distributions in the brain and different activities relative to pharmacological and clinical effects. Benzodiazepines bind at the interface of the α and γ subunits on the GABAA receptor. Binding also requires that alpha subunits contain a histidine amino acid residue, (i.e., α1, α2, α3, and α5 containing GABAA receptors). For this reason, benzodiazepines show no affinity for GABAA receptors containing α4 and α6 subunits with an arginine instead of a histidine residue. Once bound to the benzodiazepine receptor, the benzodiazepine ligand locks the benzodiazepine receptor into a conformation in which it has a greater affinity for the GABA neurotransmitter. This increases the frequency of the opening of the associated chloride ion channel and hyperpolarizes the membrane of the associated neuron. The inhibitory effect of the available GABA is potentiated, leading to sedative and anxiolytic effects. For instance, those ligands with high activity at the α1 are associated with stronger hypnotic effects, whereas those with higher affinity for GABAA receptors containing α2 and/or α3 subunits have good anti-anxiety activity.
GABAA receptors participate in the regulation of synaptic pruning by prompting microglial spine engulfment. Benzodiazepines have been shown to upregulate microglial spine engulfment and prompt overzealous eradication of synaptic connections. This mechanism may help explain the increased risk of dementia associated with long-term benzodiazepine treatment.
The benzodiazepine class of drugs also interacts with peripheral benzodiazepine receptors. Peripheral benzodiazepine receptors are present in peripheral nervous system tissues, glial cells, and, to a lesser extent, the central nervous system. These peripheral receptors are not structurally related or coupled to GABAA receptors. They modulate the immune system and are involved in the body's response to injury. Benzodiazepines also function as weak adenosine reuptake inhibitors, and it has been suggested that some of their anticonvulsant, anxiolytic, and muscle relaxant effects may be in part mediated by this action. Although benzodiazepines have binding sites in the periphery, including in immune cells and the gastrointestinal tract, their effects on muscle tone are not mediated through these peripheral receptors.
Pharmacokinetics
A benzodiazepine can be placed into one of three groups by its elimination half-life, the time it takes for the body to eliminate half of the dose. Some benzodiazepines, such as diazepam and chlordiazepoxide, have long-acting active metabolites; both are metabolised into desmethyldiazepam, which has a half-life of 36–200 hours, while flurazepam's main active metabolite, desalkylflurazepam, has a half-life of 40–250 hours. These long-acting metabolites are partial agonists.
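For orientation, elimination half-life implies first-order (exponential) elimination, so the fraction of a dose remaining after a time t is

    C(t) = C_0 \cdot (1/2)^{t / t_{1/2}}

As an illustrative calculation (the numbers are chosen for the example, not taken from a particular study), a metabolite with a 100-hour half-life is still at about 72% of its level 48 hours after the last dose, since (1/2)^{48/100} ≈ 0.72, which is why long-acting compounds and their metabolites accumulate with repeated daily dosing.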
Short-acting compounds have a median half-life of 1–12 hours. They have few residual effects if taken before bedtime, but rebound insomnia may occur upon discontinuation, and with prolonged use they may cause daytime withdrawal symptoms such as next-day rebound anxiety. Examples are brotizolam, midazolam, and triazolam.
Intermediate-acting compounds have a median half-life of 12–40 hours. They may have some residual effects in the first half of the day if used as a hypnotic. Rebound insomnia, however, is more common upon discontinuation of intermediate-acting benzodiazepines than longer-acting benzodiazepines. Examples are alprazolam, estazolam, flunitrazepam, clonazepam, lormetazepam, lorazepam, nitrazepam, and temazepam.
Long-acting compounds have a half-life of 40–250 hours. They have a risk of accumulation in the elderly and in individuals with severely impaired liver function, but they have a reduced severity of rebound effects and withdrawal. Examples are diazepam, clorazepate, chlordiazepoxide, and flurazepam.
Chemistry
Benzodiazepines share a similar chemical structure, and their effects in humans are mainly produced by the allosteric modification of a specific kind of neurotransmitter receptor, the GABAA receptor, which increases the overall conductance of these inhibitory channels; this results in the various therapeutic effects as well as adverse effects of benzodiazepines. Other less important modes of action are also known.
The term benzodiazepine is the chemical name for the heterocyclic ring system (see figure to the right), which is a fusion between the benzene and diazepine ring systems. Under Hantzsch–Widman nomenclature, a diazepine is a heterocycle with two nitrogen atoms, five carbon atoms and the maximum possible number of noncumulative double bonds. The "benzo" prefix indicates the benzene ring fused onto the diazepine ring.
Benzodiazepine drugs are substituted 1,4-benzodiazepines, although the chemical term can refer to many other compounds that do not have useful pharmacological properties. Different benzodiazepine drugs have different side groups attached to this central structure. The different side groups affect the binding of the molecule to the GABAA receptor and so modulate the pharmacological properties. Many of the pharmacologically active "classical" benzodiazepine drugs contain the 5-phenyl-1H-benzo[e][1,4]diazepin-2(3H)-one substructure (see figure to the right). Benzodiazepines have been found to structurally mimic protein reverse turns, which in many cases enables their biological activity.
Nonbenzodiazepines also bind to the benzodiazepine binding site on the GABAA receptor and possess similar pharmacological properties. While the nonbenzodiazepines are by definition structurally unrelated to the benzodiazepines, both classes of drugs possess a common pharmacophore (see figure to the lower-right), which explains their binding to a common receptor site.
Types
2-keto compounds:
clorazepate, diazepam, flurazepam, halazepam, prazepam, and others
3-hydroxy compounds:
lorazepam, lormetazepam, oxazepam, temazepam
7-nitro compounds:
clonazepam, flunitrazepam, nimetazepam, nitrazepam
Triazolo compounds:
adinazolam, alprazolam, estazolam, triazolam
Imidazo compounds:
climazolam, loprazolam, midazolam
1,5-benzodiazepines:
clobazam
History
The first benzodiazepine, chlordiazepoxide (Librium), was synthesized in 1955 by Leo Sternbach while working at Hoffmann–La Roche on the development of tranquilizers. The pharmacological properties of the compounds prepared initially were disappointing, and Sternbach abandoned the project. Two years later, in April 1957, co-worker Earl Reeder noticed a "nicely crystalline" compound left over from the discontinued project while spring-cleaning in the lab. This compound, later named chlordiazepoxide, had not been tested in 1955 because of Sternbach's focus on other issues. Expecting pharmacology results to be negative, and hoping to publish the chemistry-related findings, researchers submitted it for a standard battery of animal tests. The compound showed very strong sedative, anticonvulsant, and muscle relaxant effects. These impressive clinical findings led to its speedy introduction throughout the world in 1960 under the brand name Librium. Following chlordiazepoxide, diazepam was marketed by Hoffmann–La Roche under the brand name Valium in 1963, and for a while the two were the most commercially successful drugs. The introduction of benzodiazepines led to a decrease in the prescription of barbiturates, and by the 1970s they had largely replaced the older drugs for sedative and hypnotic uses.
The new group of drugs was initially greeted with optimism by the medical profession, but gradually concerns arose; in particular, the risk of dependence became evident in the 1980s. Benzodiazepines have a unique history in that they were responsible for the largest-ever class-action lawsuit against drug manufacturers in the United Kingdom, involving 14,000 patients and 1,800 law firms that alleged the manufacturers knew of the dependence potential but intentionally withheld this information from doctors. At the same time, 117 general practitioners and 50 health authorities were sued by patients to recover damages for the harmful effects of dependence and withdrawal. This led some doctors to require a signed consent form from their patients and to recommend that all patients be adequately warned of the risks of dependence and withdrawal before starting treatment with benzodiazepines. The court case against the drug manufacturers never reached a verdict; legal aid had been withdrawn and there were allegations that the consultant psychiatrists, the expert witnesses, had a conflict of interest. The court case fell through, at a cost of £30 million, and led to more cautious legal aid funding for future cases. This made future class action lawsuits less likely to succeed, due to the high cost of financing a smaller number of cases and the increased charges each person involved would face for losing.
Although antidepressants with anxiolytic properties have been introduced, and there is increasing awareness of the adverse effects of benzodiazepines, prescriptions for short-term anxiety relief have not significantly dropped. For treatment of insomnia, benzodiazepines are now less popular than nonbenzodiazepines, which include zolpidem, zaleplon and eszopiclone. Nonbenzodiazepines are molecularly distinct, but nonetheless, they work on the same benzodiazepine receptors and produce similar sedative effects.
Benzodiazepines have been detected in plant specimens and brain samples of animals not exposed to synthetic sources, including a human brain from the 1940s. However, it is unclear whether these compounds are biosynthesized by microbes or by plants and animals themselves. A microbial biosynthetic pathway has been proposed.
Society and culture
Legal status
In the United States, benzodiazepines are Schedule IV drugs under the Federal Controlled Substances Act, even when not on the market (for example, nitrazepam and bromazepam). Flunitrazepam is subject to more stringent regulations in certain states and temazepam prescriptions require specially coded pads in certain states.
In Canada, possession of benzodiazepines is legal for personal use. All benzodiazepines are categorized as Schedule IV substances under the Controlled Drugs and Substances Act.
In the United Kingdom, benzodiazepines are Class C controlled drugs, carrying a maximum penalty of 7 years' imprisonment, an unlimited fine, or both for possession, and a maximum penalty of 14 years' imprisonment, an unlimited fine, or both for supplying benzodiazepines to others.
In the Netherlands, since October 1993, benzodiazepines, including formulations containing less than 20 mg of temazepam, are all placed on List 2 of the Opium Law. A prescription is needed for possession of all benzodiazepines. Temazepam formulations containing 20 mg or greater of the drug are placed on List 1, thus requiring doctors to write prescriptions in the List 1 format.
In East Asia and Southeast Asia, temazepam and nimetazepam are often heavily controlled and restricted. In certain countries, triazolam, flunitrazepam, flutoprazepam and midazolam are also restricted or controlled to certain degrees. In Hong Kong, all benzodiazepines are regulated under Schedule 1 of Hong Kong's Chapter 134 Dangerous Drugs Ordinance. Previously only brotizolam, flunitrazepam and triazolam were classed as dangerous drugs.
Internationally, benzodiazepines are categorized as Schedule IV controlled drugs, apart from flunitrazepam, which is a Schedule III drug under the Convention on Psychotropic Substances.
Recreational use
Benzodiazepines are considered major addictive substances. Non-medical benzodiazepine use is mostly limited to individuals who use other substances, i.e., people who engage in polysubstance use. On the international scene, benzodiazepines are categorized as Schedule IV controlled drugs by the INCB, apart from flunitrazepam, which is a Schedule III drug under the Convention on Psychotropic Substances. Some variation in drug scheduling exists in individual countries; for example, in the United Kingdom, midazolam and temazepam are Schedule III controlled drugs.
British law requires that temazepam (but not midazolam) be stored in safe custody. Safe custody requirements ensure that pharmacists and doctors holding stock of temazepam must store it in securely fixed, double-locked steel safety cabinets and maintain a written register, which must be bound, contain separate entries for temazepam, and be written in ink with no use of correction fluid (although a written register is not required for temazepam in the United Kingdom). Disposal of expired stock must be witnessed by a designated inspector (either a local drug-enforcement police officer or an official from the health authority). Benzodiazepine use ranges from occasional binges on large doses to chronic and compulsive use of high doses.
Benzodiazepines are commonly used recreationally by poly-drug users. Mortality is higher among poly-drug users who also use benzodiazepines. Heavy alcohol use also increases mortality among poly-drug users. Poly-drug use involving benzodiazepines and alcohol can result in an increased risk of blackouts, risk-taking behaviours, seizures, and overdose. Dependence on and tolerance to benzodiazepines, often coupled with dosage escalation, can develop rapidly among people who misuse drugs; a withdrawal syndrome may appear after as little as three weeks of continuous use. Long-term use has the potential to cause both physical and psychological dependence and severe withdrawal symptoms such as depression, anxiety (often to the point of panic attacks), and agoraphobia. Benzodiazepines and, in particular, temazepam are sometimes used intravenously, which, if done incorrectly or in an unsterile manner, can lead to medical complications including abscesses, cellulitis, thrombophlebitis, arterial puncture, deep vein thrombosis, and gangrene. Sharing syringes and needles for this purpose also carries the risk of transmission of hepatitis, HIV, and other diseases. Benzodiazepines are also misused intranasally, which may have additional health consequences. Once benzodiazepine dependence has been established, a clinician usually converts the patient to an equivalent dose of diazepam before beginning a gradual reduction program.
A 1999–2005 Australian police survey of detainees reported preliminary findings that self-reported users of benzodiazepines were less likely than non-user detainees to work full-time and more likely to receive government benefits, use methamphetamine or heroin, and be arrested or imprisoned. Benzodiazepines are sometimes used for criminal purposes; they serve to incapacitate a victim in cases of drug assisted rape or robbery.
Overall, anecdotal evidence suggests that temazepam may be the most psychologically habit-forming (addictive) benzodiazepine. Non-medical temazepam use reached epidemic proportions in some parts of the world, particularly in Europe and Australia, and temazepam is a major drug of misuse in many Southeast Asian countries. This led authorities in various countries to place temazepam under a more restrictive legal status. Some countries, such as Sweden, banned the drug outright. Temazepam also has certain pharmacokinetic properties of absorption, distribution, elimination, and clearance that make it more prone to non-medical use than many other benzodiazepines.
Veterinary use
Benzodiazepines are used in veterinary practice in the treatment of various disorders and conditions. As in humans, they are used in the first-line management of seizures, status epilepticus, and tetanus, and as maintenance therapy in epilepsy (in particular, in cats). They are widely used in small and large animals (including horses, swine, cattle and exotic and wild animals) for their anxiolytic and sedative effects, as pre-medication before surgery, for induction of anesthesia and as adjuncts to anesthesia.
References
External links
National Institute on Drug Abuse: "NIDA for Teens: Prescription Depressant Medications".
Benzodiazepines – information from mental health charity The Royal College of Psychiatrists
Chemical classes of psychoactive drugs
GABAA receptor positive allosteric modulators
Glycine receptor antagonists
Sedatives
Hypnotics
Muscle relaxants
|
https://en.wikipedia.org/wiki/BeOS
|
BeOS is an operating system for personal computers first developed by Be Inc. in 1990. It was first written to run on BeBox hardware.
BeOS was positioned as a multimedia platform that could be used by a substantial population of desktop users and a competitor to Classic Mac OS and Microsoft Windows. It was ultimately unable to achieve a significant market share, and did not prove commercially viable for Be Inc. The company was acquired by Palm, Inc. Today BeOS is mainly used, and derivatives developed, by a small population of enthusiasts.
The open-source operating system Haiku is a continuation of BeOS concepts and most of the application level compatibility. The latest version, Beta 4 released December 2022, still retains BeOS 5 compatibility in its x86 32-bit images.
History
Initially designed to run on AT&T Hobbit-based hardware, BeOS was later modified to run on PowerPC-based processors: first Be's own systems, later Apple Computer's PowerPC Reference Platform and Common Hardware Reference Platform, with the hope that Apple would purchase or license BeOS as a replacement for its aging Classic Mac OS.
Toward the end of 1996, Apple was still looking for a replacement to Copland in their operating system strategy. Amidst rumours of Apple's interest in purchasing BeOS, Be wanted to increase their user base, to try to convince software developers to write software for the operating system. Be courted Macintosh clone vendors to ship BeOS with their hardware.
Apple CEO Gil Amelio started negotiations to buy Be Inc., but negotiations stalled when Be CEO Jean-Louis Gassée wanted $300 million; Apple was unwilling to offer any more than $125 million. Apple's board of directors decided NeXTSTEP was a better choice and purchased NeXT in 1996 for $429 million, bringing back Apple co-founder Steve Jobs.
In 1997, Power Computing began bundling BeOS (on a CD for optional installation) with its line of PowerPC-based Macintosh clones. These systems could dual boot either the Classic Mac OS or BeOS, with a start-up screen offering the choice. Motorola also announced in February 1997 that it would bundle BeOS with its Macintosh clones, the Motorola StarMax, along with Mac OS.
Due to Apple's moves and the mounting debt of Be Inc., BeOS was soon ported to the Intel x86 platform with its R3 release in March 1998. Through the late 1990s, BeOS managed to create a niche of followers, but the company failed to remain viable. Be Inc. also released a stripped-down, but free, copy of BeOS R5 known as BeOS Personal Edition (BeOS PE). BeOS PE could be started from within Microsoft Windows or Linux, and was intended to nurture consumer interest in its product and give developers something to tinker with. Be Inc. also released a stripped-down version of BeOS for Internet appliances (BeIA), which soon became the company's business focus in place of BeOS.
In 2001, Be's copyrights were sold to Palm, Inc. for some $11 million. BeOS R5 is considered the last official version, but BeOS R5.1 "Dano", which was under development before Be's sale to Palm and included the BeOS Networking Environment (BONE) networking stack, was leaked to the public shortly after the company's demise.
In 2002, Be Inc. sued Microsoft claiming that Hitachi had been dissuaded from selling PCs loaded with BeOS, and that Compaq had been pressured not to market an Internet appliance in partnership with Be. Be also claimed that Microsoft acted to artificially depress Be Inc.'s initial public offering (IPO). The case was eventually settled out of court for $23.25 million with no admission of liability on Microsoft's part.
After the split from Palm, PalmSource used parts of BeOS's multimedia framework for its failed Palm OS Cobalt product. With the takeover of PalmSource, the BeOS rights now belong to Access Co.
Version history
Features
BeOS was built for digital media work and was written to take advantage of modern hardware facilities such as symmetric multiprocessing by utilizing modular I/O bandwidth, pervasive multithreading, preemptive multitasking and a 64-bit journaling file system known as BFS. The BeOS GUI was developed on the principles of clarity and a clean, uncluttered design.
The API was written in C++ for ease of programming. The GUI was largely multithreaded: each window ran in its own thread, relying heavily on sending messages to communicate between threads; and these concepts are reflected into the API.
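To make the threading and messaging model concrete, the sketch below uses the public Be C++ classes (BApplication, BWindow, BButton, BMessage). It is an illustration written for this article rather than code taken from BeOS itself; the class names HelloApp and HelloWindow, the application signature, and the message constant are invented for the example. Each BWindow runs its own message loop in its own thread, and other threads interact with it by sending it messages.

    #include <stdio.h>
    #include <Application.h>
    #include <Button.h>
    #include <Message.h>
    #include <Window.h>

    // Hypothetical message code used only in this example.
    const uint32 kMsgHello = 'helo';

    class HelloWindow : public BWindow {
    public:
        HelloWindow()
            : BWindow(BRect(100, 100, 300, 180), "Hello", B_TITLED_WINDOW,
                      B_QUIT_ON_WINDOW_CLOSE)
        {
            // Clicking the button posts a BMessage to this window's thread.
            AddChild(new BButton(Bounds(), "hello", "Say hello",
                                 new BMessage(kMsgHello)));
        }

        virtual void MessageReceived(BMessage* message)
        {
            // Runs in the window's own thread; other threads communicate
            // with the window only by sending it messages.
            switch (message->what) {
                case kMsgHello:
                    printf("Hello from the window thread\n");
                    break;
                default:
                    BWindow::MessageReceived(message);
            }
        }
    };

    class HelloApp : public BApplication {
    public:
        // The MIME-style signature identifies the application; this one is invented.
        HelloApp() : BApplication("application/x-vnd.example-hello") {}
        virtual void ReadyToRun() { (new HelloWindow())->Show(); }
    };

    int main()
    {
        HelloApp app;
        app.Run();   // enters the application's message loop
        return 0;
    }

Showing the window with Show() starts its message loop; the design pushes application code into per-window message handlers rather than a single event loop, which is the pervasive multithreading the previous paragraph describes.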
It has partial POSIX compatibility and access to a command-line interface through Bash, although internally it is not a Unix-derived operating system. Many Unix applications were ported to the BeOS command-line interface.
BeOS uses Unicode as the default encoding in the GUI, though support for input methods such as bidirectional text input was never realized.
Legacy
Products
BeOS (and later Zeta) has been used in media appliances, such as the Edirol DV-7 video editors from Roland Corporation, which run on top of a modified BeOS, and the Tunetracker radio automation software, which ran on BeOS and Zeta and was also sold as a "Station-in-a-Box" with the Zeta operating system included. In 2015, Tunetracker released a Haiku distribution bundled with its broadcasting software.
The Tascam SX-1 digital audio recorder runs a heavily modified version of BeOS that will only launch the recording interface software.
The RADAR 24, RADAR V and RADAR 6, hard disk-based, 24-track professional audio recorders from iZ Technology Corporation were based on BeOS 5.
Magicbox, a manufacturer of signage and broadcast display machines, uses BeOS to power their Aavelin product line.
Final Scratch, a 12-inch vinyl timecode record-driven DJ software and hardware system, was first developed on BeOS. The "ProFS" version was sold to a few dozen DJs prior to the 1.0 release, which ran on a Linux virtual partition.
Continuation
After the closing of Be Inc., a few projects formed to recreate BeOS or its key elements with the eventual goal of then continuing where Be Inc. left off. This was facilitated by Be Inc. having released some components of BeOS under a free license.
Haiku is a complete reimplementation of BeOS not based on Linux. Unlike Cosmoe and BlueEyedOS, it is directly compatible with BeOS applications. It is open source software. As of 2022, it was the only BeOS clone still under development, with the fourth beta (December 2022) still keeping BeOS 5 compatibility in its x86 32-bit images, with an increased number of modern drivers and GTK apps ported.
Zeta is a commercially available operating system based on the BeOS R5.1 codebase. Originally developed by yellowTAB, the operating system was then distributed by magnussoft. During development by yellowTAB, the company received criticism from the BeOS community for refusing to discuss its legal position with regard to the BeOS codebase (perhaps for contractual reasons). Access Co. (which bought PalmSource, until then the holder of the intellectual property associated with BeOS) has since declared that yellowTAB had no right to distribute a modified version of BeOS, and magnussoft has ceased distribution of the operating system.
See also
Access Co.
BeIA
Comparison of operating systems
Gobe Productive
Hitachi Flora Prius
References
Further reading
External links
The Dawn of Haiku, by Ryan Leavengood, IEEE Spectrum, May 2012, pp. 40–43, 51–54.
Mirror of the old www.be.com site
Another mirror of the old www.be.com site
BeOS Celebrating Ten Years
BeGroovy A blog dedicated to all things BeOS
BeOS: The Mac OS X might-have-been, reghardware.co.uk
Programming the Be Operating System: An O'Reilly Open Book (out of print, but can be downloaded)
Discontinued operating systems
Object-oriented operating systems
PowerPC operating systems
X86 operating systems
|
https://en.wikipedia.org/wiki/Biosphere
|
The biosphere (from Greek βίος bíos "life" and σφαῖρα sphaira "sphere"), also known as the ecosphere (from Greek οἶκος oîkos "environment" and σφαῖρα), is the worldwide sum of all ecosystems. It can also be termed the zone of life on Earth. The biosphere (which is technically a spherical shell) is virtually a closed system with regard to matter, with minimal inputs and outputs. Regarding energy, it is an open system, with photosynthesis capturing solar energy at a rate of around 100 terawatts. By the most general biophysiological definition, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere, cryosphere, hydrosphere, and atmosphere. The biosphere is postulated to have evolved, beginning with a process of biopoiesis (life created naturally from matter, such as simple organic compounds) or biogenesis (life created from living matter), at least some 3.5 billion years ago.
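As a back-of-the-envelope conversion of that power figure (an illustration, not a value quoted from a source): capturing solar energy at a sustained rate of about 100 terawatts over one year corresponds to roughly

    100 \times 10^{12}\,\mathrm{W} \times 3.15 \times 10^{7}\,\mathrm{s} \approx 3 \times 10^{21}\,\mathrm{J}

of energy fixed by photosynthesis per year.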
In a general sense, biospheres are any closed, self-regulating systems containing ecosystems. This includes artificial biospheres such as Biosphere 2 and BIOS-3, and potentially ones on other planets or moons.
Origin and use of the term
The term "biosphere" was coined in 1875 by geologist Eduard Suess, who defined it as the place on Earth's surface where life dwells.
While the concept has a geological origin, it is an indication of the effect of both Charles Darwin and Matthew F. Maury on the Earth sciences. The biosphere's ecological context comes from the 1920s (see Vladimir I. Vernadsky), preceding the 1935 introduction of the term "ecosystem" by Sir Arthur Tansley (see ecology history). Vernadsky defined ecology as the science of the biosphere. It is an interdisciplinary concept for integrating astronomy, geophysics, meteorology, biogeography, evolution, geology, geochemistry, hydrology and, generally speaking, all life and Earth sciences.
Narrow definition
Geochemists define the biosphere as being the total sum of living organisms (the "biomass" or "biota" as referred to by biologists and ecologists). In this sense, the biosphere is but one of four separate components of the geochemical model, the other three being geosphere, hydrosphere, and atmosphere. When these four component spheres are combined into one system, it is known as the ecosphere. This term was coined during the 1960s and encompasses both biological and physical components of the planet.
The Second International Conference on Closed Life Systems defined biospherics as the science and technology of analogs and models of Earth's biosphere; i.e., artificial Earth-like biospheres. Others may include the creation of artificial non-Earth biospheres—for example, human-centered biospheres or a native Martian biosphere—as part of the topic of biospherics.
Earth's biosphere
Age
The earliest evidence for life on Earth includes biogenic graphite found in 3.7 billion-year-old metasedimentary rocks from Western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone from Western Australia. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. In 2017, putative fossilized microorganisms (or microfossils) were announced to have been discovered in hydrothermal vent precipitates in the Nuvvuagittuq Belt of Quebec, Canada that were as old as 4.28 billion years, the oldest record of life on earth, suggesting "an almost instantaneous emergence of life" after ocean formation 4.4 billion years ago, and not long after the formation of the Earth 4.54 billion years ago. According to biologist Stephen Blair Hedges, "If life arose relatively quickly on Earth ... then it could be common in the universe."
Extent
Every part of the planet, from the polar ice caps to the equator, features life of some kind. Recent advances in microbiology have demonstrated that microbes live deep beneath the Earth's terrestrial surface, and that the total mass of microbial life in so-called "uninhabitable zones" may, in biomass, exceed all animal and plant life on the surface. The actual thickness of the biosphere on earth is difficult to measure. Birds typically fly at altitudes as high as and fish live as much as underwater in the Puerto Rico Trench.
There are more extreme examples for life on the planet: Rüppell's vulture has been found at altitudes of ; bar-headed geese migrate at altitudes of at least ; yaks live at elevations as high as above sea level; mountain goats live up to . Herbivorous animals at these elevations depend on lichens, grasses, and herbs.
Life forms live in every part of the Earth's biosphere, including soil, hot springs, inside rocks at least deep underground, and at least high in the atmosphere. Marine life under many forms has been found in the deepest reaches of the world ocean while much of the deep sea remains to be explored.
Microorganisms, under certain test conditions, have been observed to survive the vacuum of outer space. The total amount of soil and subsurface bacterial carbon is estimated as 5 × 1017 g. The mass of prokaryote microorganisms—which includes bacteria and archaea, but not the nucleated eukaryote microorganisms—may be as much as 0.8 trillion tons of carbon (of the total biosphere mass, estimated at between 1 and 4 trillion tons). Barophilic marine microbes have been found at more than a depth of in the Mariana Trench, the deepest spot in the Earth's oceans. In fact, single-celled life forms have been found in the deepest part of the Mariana Trench, by the Challenger Deep, at depths of . Other researchers reported related studies that microorganisms thrive inside rocks up to below the sea floor under of ocean off the coast of the northwestern United States, as well as beneath the seabed off Japan. Culturable thermophilic microbes have been extracted from cores drilled more than into the Earth's crust in Sweden, from rocks between . Temperature increases with increasing depth into the Earth's crust. The rate at which the temperature increases depends on many factors, including type of crust (continental vs. oceanic), rock type, geographic location, etc. The greatest known temperature at which microbial life can exist is (Methanopyrus kandleri Strain 116), and it is likely that the limit of life in the "deep biosphere" is defined by temperature rather than absolute depth. On 20 August 2014, scientists confirmed the existence of microorganisms living below the ice of Antarctica.
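As a rough illustration of that depth–temperature relationship (the gradient and surface temperature here are typical assumed values, not figures from this article): taking an average continental geothermal gradient of about 25 °C per kilometre and a surface temperature of 15 °C gives

    T(d) \approx 15\,^{\circ}\mathrm{C} + (25\,^{\circ}\mathrm{C/km}) \cdot d

so rock at a depth of 4 km is already near 115 °C, which is consistent with temperature, rather than absolute depth, setting the lower boundary of the deep biosphere.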
Earth's biosphere is divided into a number of biomes, inhabited by fairly similar flora and fauna. On land, biomes are separated primarily by latitude. Terrestrial biomes lying within the Arctic and Antarctic Circles are relatively barren of plant and animal life, while most of the more populous biomes lie near the equator.
Annual variation
Artificial biospheres
Experimental biospheres, also called closed ecological systems, have been created to study ecosystems and the potential for supporting life outside the Earth. These include spacecraft and the following terrestrial laboratories:
Biosphere 2 in Arizona, United States, 3.15 acres (13,000 m2).
BIOS-1, BIOS-2 and BIOS-3 at the Institute of Biophysics in Krasnoyarsk, Siberia, in what was then the Soviet Union.
Biosphere J (CEEF, Closed Ecology Experiment Facilities), an experiment in Japan.
Micro-Ecological Life Support System Alternative (MELiSSA) at Universitat Autònoma de Barcelona
Extraterrestrial biospheres
No biospheres have been detected beyond the Earth; therefore, the existence of extraterrestrial biospheres remains hypothetical. The rare Earth hypothesis suggests they should be very rare, save ones composed of microbial life only. On the other hand, Earth analogs may be quite numerous, at least in the Milky Way galaxy, given the large number of planets. Three of the planets discovered orbiting TRAPPIST-1 could possibly contain biospheres. Given limited understanding of abiogenesis, it is currently unknown what percentage of these planets actually develop biospheres.
Based on observations by the Kepler Space Telescope team, it has been calculated that, provided the probability of abiogenesis is higher than 1 in 1,000, the closest alien biosphere should be within 100 light-years of the Earth.
It is also possible that artificial biospheres will be created in the future, for example with the terraforming of Mars.
See also
Climate system
Cryosphere
Thomas Gold
Circumstellar habitable zone
Homeostasis
Life-support system
Man and the Biosphere Programme
Montreal Biosphere
Noosphere
Rare biosphere
Shadow biosphere
Simple biosphere model
Soil biomantle
Wardian case
Winogradsky column
References
Further reading
The Biosphere (A Scientific American Book), San Francisco, W.H. Freeman and Co., 1970. This book, originally the December 1970 Scientific American issue, covers virtually every major concern and concept since debated regarding materials and energy resources (including solar energy), population trends, and environmental degradation (including global warming).
External links
Article on the Biosphere at Encyclopedia of Earth
GLOBIO.info, an ongoing programme to map the past, current and future impacts of human activities on the biosphere
Paul Crutzen Interview, freeview video of Paul Crutzen Nobel Laureate for his work on decomposition of ozone talking to Harry Kroto Nobel Laureate by the Vega Science Trust.
Atlas of the Biosphere
Oceanography
Superorganisms
Biological systems
Biosphere
|
https://en.wikipedia.org/wiki/Bacardi
|
Bacardi Limited is the largest privately held, family-owned spirits company in the world. Originally known for its Bacardí brand of white rum, it now has a portfolio of more than 200 brands and labels. Founded in Cuba in 1862 by Facundo Bacardí Massó, Bacardi Limited has been family-owned for seven generations, and employs more than 8,000 people with sales in approximately 170 countries. Bacardi Limited is the group of companies as a whole and includes Bacardi International Limited.
Bacardi Limited is headquartered in Hamilton, Bermuda, and has a board of directors led by the original founder's great-great grandson, Facundo L. Bacardí, the chairman of the board.
History
Early history
Facundo Bacardí Massó, a Spanish wine merchant, was born in Sitges, Catalonia, Spain, on October 16, 1814, and immigrated to Santiago, Cuba, in 1830. At the time, rum was cheaply made and not considered a refined drink, and was rarely sold in upmarket taverns or purchased by the island's growing middle class. Facundo began attempting to "tame" rum by isolating a proprietary strain of yeast, harvested from local sugar cane, that is still used in Bacardí production today. This yeast gives Bacardí rum its flavour profile. After experimenting with several techniques for close to ten years, Facundo pioneered charcoal rum filtration, which removed impurities from his rum. Facundo then created two separate distillates that he could blend together, balancing a variety of flavors: Aguardiente (a robust, flavorful distillate) and Redestillado (a refined, delicate distillate). Once Facundo achieved the desired balance of flavors by marrying the two distillates together, he purposefully aged the rum in white oak barrels to develop subtle flavors and characteristics while mellowing out those that were unwanted. The final product was the first clear, light-bodied and mixable "white" rum in the world.
Moving from the experimental stage to a more commercial endeavour as local sales began to grow, Facundo and his brother José purchased a Santiago de Cuba distillery on October 16, 1862, which housed a still made of copper and cast iron. In the rafters of this building lived fruit bats – the inspiration for the Bacardi bat logo. It was the idea of Doña Amalia, Facundo's wife, to adopt the bat for the rum bottle, as she recognized the bat as a symbol of family unity, good health, and good fortune in her husband's homeland of Spain. The logo was also pragmatic: given the high illiteracy rate of the 19th century, it enabled customers to easily identify the product.
The 1880s and 1890s were turbulent times for Cuba and the company. Emilio Bacardí, Don Facundo's eldest son, known for his forward thinking in both his professional and personal life and a passionate advocate for Cuban independence, was imprisoned twice for having fought in the rebel army against Spain in the Cuban War of Independence.
Emilio's brothers, Facundo and José, and their brother-in-law Enrique 'Henri' Schueg, remained in Cuba with the difficult task of sustaining the company during a period of war. With Don Facundo's passing in 1886, Doña Amalia sought refuge in exile in Kingston, Jamaica. At the end of the Cuban War of Independence, during the US occupation of Cuba, "The Original Cuba Libre" and the Daiquiri cocktails were both created with the then Cuban-based Bacardí rum. In 1899, Emilio Bacardí was appointed mayor of Santiago by US General Leonard Wood; he later became the city's first democratically elected mayor.
During his time in public office, Emilio established schools and hospitals, completed municipal projects such as the famous Padre Pico Street and the Bacardi Dam, financed the creation of parks, and decorated the city of Santiago with monuments and sculptures. In 1912, Emilio and his wife travelled to Egypt, where he purchased a mummy (still on display) for the future Emilio Bacardi Moreau Municipal Museum in Santiago de Cuba. In Santiago, his brother Facundo M. Bacardí continued to manage the company along with Schueg, who began the company's international expansion by opening bottling plants in Barcelona (1910) and New York City (1916). The New York plant was soon shut down due to Prohibition, yet during this time Cuba became a hotspot for US tourists, kicking off a period of rapid growth for the Bacardi company and the onset of cocktail culture in America.
In 1922 the family completed the expansion and renovation of the original distillery in Santiago, increasing the site's rum production capacity. In 1930 Schueg oversaw the construction and opening of Edificio Bacardí in Havana, regarded as one of the finest Art Deco buildings in Latin America, as the third generation of the Bacardí family entered the business. Earlier, in 1927, Bacardi had ventured outside the realm of spirits for the first time with the introduction of an authentic Cuban malt beer: Hatuey beer.
Bacardi's success in transitioning into an international brand and company was due mostly to Schueg, who branded Cuba as "The home of rum", and Bacardí as "The king of rums and the rum of Kings". Expansion began overseas, first to Mexico in 1931; during the 1950s the company had architects Ludwig Mies van der Rohe and Felix Candela design office buildings and a bottling plant in Mexico City. The building complex was added to UNESCO's tentative World Heritage list on 20 November 2001. In 1936, Bacardi began producing rum on U.S. territory in Puerto Rico after Prohibition, which enabled the company to sell rum tariff-free in the United States. The company later expanded to the United States in 1944 with the opening of Bacardi Imports, Inc. in Manhattan, New York City.
During World War II, the company was led by Schueg's son-in-law, José "Pepin" Bosch. Pepin founded Bacardi Imports in New York City, and became Cuba's Minister of the Treasury in 1949.
Cuban Revolution
During the Cuban Revolution in 1959, the Bacardí family (and hence the company) supported and aided the rebels. However, after the triumph of the revolutionaries and their turn to Communism, the family maintained a fierce opposition to Fidel Castro's policies in Cuba in the 1960s. In his book, Bacardi and the Long Fight for Cuba, Tom Gjelten describes how the Bacardí family and the company left Cuba in exile after the Cuban government confiscated the company's Cuban assets without compensation on 14 October 1960, as part of a broader nationalization of private property and bank accounts on the island. However, due to concerns over the previous Cuban leader, Fulgencio Batista, the company had started foreign branches a few years before the revolution; it had moved the ownership of its trademarks, assets and proprietary formulas out of the country to the Bahamas and already produced Bacardí rum at other distillery sites in Puerto Rico and Mexico. This foresight helped the company survive the confiscation of its Cuban assets.
In 1965, over 100 years after the company was established in Cuba, Bacardi established new roots and found a new home with global headquarters in Hamilton, Bermuda. In February 2019, Bacardi's CEO, Mahesh Madhavan, stated that Bacardí's global headquarters would remain in Bermuda for the next "500 years" and that "Bermuda is our home now."
In 1999, Otto Reich, a lobbyist in Washington on behalf of Bacardí, drafted section 211 of the Omnibus Consolidated and Emergency Appropriations Act, FY1999, a bill that became known as the Bacardi Act. Section 211 denied trademark protection to products of Cuban businesses expropriated after the Cuban revolution, a provision sought by Bacardí. The act was aimed primarily at the Havana Club brand in the United States. The brand had been created by José Arechabala S.A. and was nationalised without compensation during the Cuban revolution; the Arechabala family left Cuba, stopped producing rum, and therefore allowed the US trademark registration for "Havana Club" to lapse in 1973. Taking advantage of the lapse, the Cuban government registered the mark in the United States in 1976, and in 1993 assigned the brand to Pernod Ricard. Section 211 was drafted to invalidate this trademark registration. It has been challenged unsuccessfully by the Cuban government and the European Union in US courts, and was ruled illegal by the WTO in 2001 and 2002; the US Congress has yet to re-examine the matter.
Bacardi rekindled the story of the Arechabala family and Havana Club in the United States when it launched the AMPARO Experience in 2018, an immersive play experience based in Miami, the city with the highest population of Cuban exiles. AMPARO “is the story of the family’s entire history being erased and their heritage ‘stolen’” according to playwright Vanessa Garcia.
Bacardi in the United States
In 1964, Bacardi opened new US offices in Miami, Florida. Exiled Cuban architect Enrique Gutierrez created a building that was hurricane-proof, using a system of steel cables and pulleys which allow the building to move slightly in the event of a strong shock. The steel cables are anchored into the bedrock and extend through marble-covered shafts up to the top floor, where they are led over large pulleys. Outside, on both sides of the eight-story building, more than 28,000 tiles painted and fired by Brazilian artist Francisco Brennand, depicting abstract blue flowers, were placed on the walls according to the artist's exact specifications.
In 1973, the company commissioned the square building in the plaza. Architect Ignacio Carrera-Justiz used cantilevered construction, a style invented by Frank Lloyd Wright. Wright observed how well trees with taproots withstood hurricane-force winds. The building, raised 47 feet off the ground around a central core, features four massive walls, made of sections of inch-thick hammered glass mural tapestries, designed and manufactured in France. The striking design of the annex, affectionately known as the 'Jewel Box' building, came from a painting by German artist Johannes M. Dietz.
In 2006, Bacardi USA leased a 15-story headquarters complex in Coral Gables, Florida. Bacardi had employees in seven buildings across Miami-Dade County at the time.
Bacardi vacated its former headquarters buildings on Biscayne Boulevard in Midtown Miami. The building currently serves as the headquarters of the National YoungArts Foundation. Miami citizens began a campaign to label the buildings as "historic". The Bacardi Buildings Complex has been a locally protected historic resource since Oct. 6, 2009, when it was designated by unanimous decision by the Historic and Environmental Preservation Board.
In 2007 Chad Oppenheim, the head of Oppenheim Architecture + Design, described the Bacardi buildings as "elegant, with a Modernist [look combined with] a local flavour". In April 2009, University of Miami professor of architecture Allan Schulman said "Miami's brand is its identity as a tropical city. The Bacardi buildings are exactly the sort that resonate with our consciousness of what Miami is about."
The American headquarters is in Coral Gables, Florida.
Bacardi and Cuba today
Bacardi drinks are not easily found in Cuba today. The main brand of rum in Cuba is Havana Club, produced by a company that was confiscated and nationalized by the government following the revolution. Bacardi later bought the brand from the original owners, the Arechabala family. The Cuban government, in partnership with the French company Pernod Ricard, sells its Havana Club products internationally, except in the United States and its territories. Bacardi created the Real Havana Club rum based on the original recipe from the Arechabala family, manufactures it in Puerto Rico, and sells it in the United States. Bacardi continues to fight in the courts, attempting to legalize their own Havana Club trademark outside the United States.
Brands
Bacardi Limited has made numerous acquisitions to diversify away from the eponymous Bacardí rum brand. In 1993, Bacardi merged with Martini & Rossi, the Italian producer of Martini vermouth and sparkling wines, creating the Bacardi-Martini group.
In 1998, the company acquired Dewar's scotch whisky (including Royal Brackla) and Bombay Sapphire gin from Diageo for $2 billion. Bacardi acquired the Cazadores tequila brand in 2002 and in 2004 purchased Grey Goose, a French-made vodka, from Sidney Frank for $2 billion. In 2006 Bacardi Limited purchased the New Zealand vodka brand 42 Below. In 2018, Bacardi Limited purchased the tequila manufacturer Patrón for $5.1 billion. In 2023, Bacardi acquired the super-premium mezcal brand Ilegal Mezcal.
Other associated brands include the Real Havana Club, Drambuie Scotch whisky liqueur, DiSaronno Amaretto, Eristoff vodka, Cazadores Tequila, B&B and Bénédictine liqueurs.
American Whiskey: Stillhouse
Bourbon: Angel's Envy, Stillhouse Black Bourbon
Cachaça: Leblon Cachaça
Cognac: Otard, D'ussé Cognac, Gaston De LaGrange
Gin: Bombay Sapphire, Bosford, Oxley
Liqueur: Bénédictine, St-Germain, Get 27, Get 31, Nassau Royale, Martini Spirito, Patrón liqueurs
Rum: Bacardí, Havana Club (USA only), Castillo, Banks, Pyrat XO Reserve, Oakheart Spice Rum
Scotch whisky: Aberfeldy, Aultmore, Craigellachie, Deveron, Royal Brackla, Dewar's, William Lawson's
Sparkling wine: Martini Prosecco, Martini Asti, Martini rosé
Tequila: Patrón, Corzo, Cazadores, Camino Real
Vermouth: Martini & Rossi, Noilly Prat
Vodka: Grey Goose, Eristoff, Ultimat Vodka, Stillhouse Classic American Vodka, 42 Below, Plume & Petal
Main brand
Bacardi Superior
Bacardi 8
Bacardi Gran Reserva
Bacardi Dark Rum
Bacardi White Rum
Bacardi Spiced Rum
Bacardi Gold Rum
Bacardi 151
Bacardi Gold
Bacardi Mojito
Bacardi Breezers
Bacardi Apple
Bacardi Lemon
Bacardi Carta Blanca
Awards
Bacardí rums have been entered for a number of international spirit ratings awards. Several Bacardí spirits have performed notably well. In 2020, Bacardí Superior, Bacardí Gold, Bacardí Black, Bacardí Añejo Cuatro were each awarded a gold medal by the International Quality Institute Monde Selection. In addition, both Bacardí Reserva Ocho and Bacardí Gran Reserva Diez were awarded the top honor of Grand Gold quality award.
Hemingway connection
Ernest Hemingway lived in Cuba from 1939 until shortly after the Cuban Revolution. He lived at Finca Vigía, in the small town of San Francisco de Paula, located very close to Bacardi's Modelo Brewery for Hatuey Beer in Cotorro, Havana.
In 1954, Compañía Ron Bacardi S.A. threw Hemingway a party when he was awarded the Nobel Prize in Literature – soon after the publication of his novel The Old Man and the Sea (1952) – in which he honored the company by mentioning its Hatuey beer. Hemingway also mentioned Bacardí and Hatuey in his novels To Have and Have Not (1937) and For Whom the Bell Tolls (1940). Guillermo Cabrera Infante wrote an account of the festivities for the periodical Ciclón, titled "El Viejo y la Marca" ("The Old Man and the Brand", a play on "El Viejo y el Mar", the book's Spanish title). In his account he described how "on one side there was a wooden stage with two streamers – Hatuey beer and Bacardi rum – on each end and a Cuban flag in the middle. Next to the stage was a bar, at which people crowded, ordering daiquiris and beer, all free." A sign at the event read "Bacardi rum welcomes the author of The Old Man and the Sea".
In his article "The Old Man and the Daiquiri", Wayne Curtis writes about how Hemingway's "home bar also held a bottle of Bacardí rum". Hemingway wrote in Islands in the Stream, "...this frozen daiquirí, so well beaten as it is, looks like the sea where the wave falls away from the bow of a ship when she is doing thirty knots."
Controversies
Controversy over sponsoring Russia's war
In March 2022, after Russia's invasion of Ukraine, Bacardi announced that it would halt all exports to Russia and freeze investment and advertising programs. Instead of keeping this promise, the company increased its exports to Russia and tripled its profits. As of summer 2023, Bacardi was continuing to expand its business in Russia and to recruit new employees for its Russian branch. When the broken promise gained international media attention, the pledge disappeared from the company's website. On August 10, 2023, Ukrainian authorities added Bacardi to their list of International Sponsors of War.
See also
Lubee Bat Conservancy, an organization in Gainesville, Florida, founded by Facundo's great-grandson Luis
References
External links
Map of Distillery in Puerto Rico from Google Maps
Food and drink companies established in 1862
Food and drink companies of Bermuda
Distilleries
Privately held companies of Bermuda
Rums
1862 establishments in Cuba
Food and drink companies of Cuba
|
https://en.wikipedia.org/wiki/Bain-marie
|
A bain-marie, also known as a water bath or double boiler, is a type of heated bath: a piece of equipment used in science, industry, and cooking to heat materials gently or to keep materials warm over a period of time. A bain-marie is also used to melt ingredients for cooking.
History
The name comes from the French bain de Marie (or bain-marie), in turn derived from the medieval Latin balneum Mariae and ultimately an Arabic phrase, all meaning 'Mary's bath'. In his books, the alchemist Zosimos of Panopolis, writing around 300 AD, credits the invention of the device to Mary the Jewess, an ancient alchemist. However, the water bath was known many centuries earlier (Hippocrates and Theophrastus).
Description
The double boiler comes in a wide variety of shapes, sizes, and types, but traditionally is a wide, cylindrical, usually metal container made of three or four basic parts: a handle, an outer (or lower) container that holds the working fluid, an inner (or upper), smaller container that fits inside the outer one and which holds the material to be heated or cooked, and sometimes a base underneath. Under the outer container of the bain-marie (or built into its base) is a heat source.
Typically, the inner container is immersed about halfway into the working fluid.
The inner container, filled with the substance to be heated, fits inside the outer container filled with the working fluid (often water, but alternatively steam or oil). The outer container is heated at or below the base, causing the temperature of the working fluid to rise and thus transferring heat to the inner container. The maximum obtainable temperature of the fluid is dictated by its composition and boiling point at the ambient pressure. Since the surface of the inner container is always in contact with the fluid, the double boiler serves as a constant-temperature heat source for the substance being heated, without hot or cold spots that can affect its properties.
When the working fluid is water and the bain-marie is used at sea level, the maximum temperature of the working fluid in the lower container – and therefore of the material being heated – will not exceed 100 °C (212 °F), the boiling point of water at sea level. Using different working fluids, such as oil, in the outer container will result in different maximum temperatures obtainable in the inner container.
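Because the working fluid's boiling point sets this temperature ceiling, the ceiling shifts with ambient pressure. As a rough illustration, the sketch below estimates the boiling point of water from pressure using the Antoine equation; the constants are a commonly quoted approximation valid between roughly 1 °C and 100 °C, and the altitude pressure shown is an indicative value only, not a measured one.

```python
import math

# Antoine equation for water (approximate constants, ~1-100 degC range):
#   log10(P_mmHg) = A - B / (C + T_degC)
A, B, C = 8.07131, 1730.63, 233.426

def water_boiling_point_c(pressure_mmhg):
    """Temperature at which water's vapour pressure equals the ambient pressure."""
    return B / (A - math.log10(pressure_mmhg)) - C

# Indicative ambient pressures (mmHg): sea level and roughly 2,000 m altitude.
for label, p in [("sea level (760 mmHg)", 760.0), ("~2,000 m altitude (~600 mmHg)", 600.0)]:
    print(f"{label}: water-bath ceiling ~ {water_boiling_point_c(p):.1f} °C")
```

This is why a water bain-marie tops out a few degrees lower at altitude, and why switching to an oil bath raises the ceiling.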
Alternatives
A contemporary alternative to the traditional, liquid-filled bain-marie is the electric "dry-heat" bain-marie, heated by elements below both pots. The dry-heat form of electric bains-marie often consumes less energy, requires little cleaning, and can be heated more quickly than traditional versions. They can also operate at higher temperatures, and are often much less expensive than their traditional counterparts.
Electric bains-marie can also be wet, using either hot water or vapour (steam) in the heating process. The open, bath-type bain-marie heats via a small, hot-water tub (or "bath"), and the vapour-type bain-marie heats with scalding-hot steam.
Culinary applications
In cooking applications, a bain-marie usually consists of a pan of water in which another container or containers of food to be cooked is/are placed.
Chocolate can be melted in a bain-marie to avoid splitting (separation of cocoa butter and cocoa solids, breaking emulsion) and caking onto the pot. Special dessert bains-marie usually have a thermally insulated container and can be used as a chocolate fondue for the purposes of dipping foods (typically fruits) at the table.
Cheesecake is often baked in a bain-marie to prevent the top from cracking in the centre.
Baked custard desserts such as custard tarts may be cooked in a bain-marie to keep a crust from forming on the outside of the custard before the interior is fully cooked. In the case of the crème brûlée, placing the ramekins in a roasting pan and filling the pan with hot water until it is half to two-thirds of the way up the sides of the ramekins transfers the heat to the custard gently, which prevents the custard from curdling. The humidity from the steam that rises as the water heats helps keep the top of the custard from becoming too dry.
Classic warm high-fat sauces, such as Hollandaise and beurre blanc, are often cooked using a bain-marie as they require enough heat to emulsify the mixture of fats and water but not enough to curdle or split the sauce.
Some charcuterie such as terrines and pâtés are cooked in an "oven-type" bain-marie.
Clotted cream is traditionally made by heating cream gently in a water bath.
Thickening of condensed milk, such as in confection-making, is done in a bain-marie.
Controlled-temperature bains-marie can be used to heat frozen breast milk before feedings.
Bains-marie can be used in place of chafing dishes for keeping foods warm for long periods of time, where stovetops or hot plates are inconvenient or too powerful.
A bain-marie can be used to re-liquefy hardened honey by placing a glass jar on top of any improvised platform sitting at the bottom of a pot of gently boiling water.
Other uses
In small scale soap-making, a bain-marie's inherent control over maximum temperature makes it optimal for liquefying melt-and-pour soap bases prior to molding them into bars. It offers the advantage of maintaining the base in a liquid state, or reliquefying a solidified base, with minimal deterioration. Similarly, using a water bath, traditional wood glue can be melted and kept in a stable liquid state over many hours without damage to the animal proteins it incorporates.
See also
Double steaming
Heated bath
Laboratory water bath
References
Sources
External links
Vessels
Cooking vessels
Culinary terminology
|
https://en.wikipedia.org/wiki/Zebrafish
|
The zebrafish (Danio rerio) is a freshwater fish belonging to the minnow family (Cyprinidae) of the order Cypriniformes. Native to South Asia, it is a popular aquarium fish, frequently sold under the trade name zebra danio (and thus often called a "tropical fish", although it is both tropical and subtropical). It is also found in private ponds.
The zebrafish is an important and widely used vertebrate model organism in scientific research, particularly in biomedicine and developmental biology. It is also used to study neurobehavioral phenomena and psychological conditions such as substance abuse, cognitive disorders, and affective disorders, as well as behaviour more generally.
Taxonomy
The zebrafish is a derived member of the genus Brachydanio, of the family Cyprinidae. It has a sister-group relationship with Danio aesculapii. Zebrafish are also closely related to the genus Devario, as demonstrated by a phylogenetic tree of close species.
Distribution
Range
The zebrafish is native to freshwater habitats in South Asia, where it is found in India, Pakistan, Bangladesh, Nepal and Bhutan. The northern limit is in the South Himalayas, ranging from the Sutlej river basin in the Pakistan–India border region to the state of Arunachal Pradesh in northeast India. Its range is concentrated in the Ganges and Brahmaputra River basins, and the species was first described from the Kosi River (lower Ganges basin) of India. Its range further south is more local, with scattered records from the Western and Eastern Ghats regions. It has frequently been said to occur in Myanmar (Burma), but this is entirely based on pre-1930 records and likely refers to close relatives only described later, notably Danio kyathit. Likewise, old records from Sri Lanka are highly questionable and remain unconfirmed.
Zebrafish have been introduced to California, Connecticut, Florida and New Mexico in the United States, presumably by deliberate release by aquarists or by escape from fish farms. The New Mexico population had been extirpated by 2003 and it is unclear if the others survive, as the last published records were decades ago. Elsewhere the species has been introduced to Colombia and Malaysia.
Habitats
Zebrafish typically inhabit moderately flowing to stagnant clear water of quite shallow depth in streams, canals, ditches, oxbow lakes, ponds and rice paddies. There is usually some vegetation, either submerged or overhanging from the banks, and the bottom is sandy, muddy or silty, often mixed with pebbles or gravel. In surveys of zebrafish locations throughout much of its Bangladeshi and Indian distribution, the water had a near-neutral to somewhat basic pH and mostly ranged from in temperature. One unusually cold site was only and another unusually warm site was , but the zebrafish still appeared healthy. The unusually cold temperature was at one of the highest known zebrafish locations at above sea level, although the species has been recorded to .
Description
The zebrafish is named for the five uniform, pigmented, horizontal, blue stripes on the side of the body, which are reminiscent of a zebra's stripes, and which extend to the end of the caudal fin. Its shape is fusiform and laterally compressed, with its mouth directed upwards. The male is torpedo-shaped, with gold stripes between the blue stripes; the female has a larger, whitish belly and silver stripes instead of gold. Adult females exhibit a small genital papilla in front of the anal fin origin. The zebrafish can reach up to in length, although they typically are in the wild with some variations depending on location. Its lifespan in captivity is around two to three years, although in ideal conditions, this may be extended to over five years. In the wild it is typically an annual species.
Psychology
In 2015, a study was published about the zebrafish's capacity for episodic memory. The individuals showed a capacity to remember context with respect to objects, locations and occasions (what, when, where). Episodic memory is a capacity of explicit memory systems, typically associated with conscious experience.
The Mauthner cells integrate a wide array of sensory stimuli to produce the escape reflex. These stimuli have been found to include lateral line signals (McHenry et al. 2009) and visual signals consistent with looming objects (Temizer et al. 2015; Dunn et al. 2016; Yao et al. 2016).
Reproduction
The approximate generation time for Danio rerio is three months. A male must be present for ovulation and spawning to occur. Zebrafish are asynchronous spawners and under optimal conditions (such as food availability and favorable water parameters) can spawn successfully frequently, even on a daily basis. Females are able to spawn at intervals of two to three days, laying hundreds of eggs in each clutch. Upon release, embryonic development begins; in absence of sperm, growth stops after the first few cell divisions. Fertilized eggs almost immediately become transparent, a characteristic that makes D. rerio a convenient research model species. Sex determination of common laboratory strains was shown to be a complex genetic trait, rather than to follow a simple ZW or XY system.
The zebrafish embryo develops rapidly, with precursors to all major organs appearing within 36 hours of fertilization. The embryo begins as a yolk with a single enormous cell on top, which divides into two by about 45 minutes after fertilization and continues dividing until there are thousands of small cells by roughly 3.25 hours. The cells then migrate down the sides of the yolk (around 8 hours) and begin forming a head and tail (around 16 hours). The tail then grows and separates from the body by about 24 hours. The yolk shrinks over time because the fish uses it for food as it matures during the first few days (about 72 hours). After a few months, the adult fish reaches reproductive maturity.
To encourage the fish to spawn, some researchers use a fish tank with a sliding bottom insert, which reduces the depth of the pool to simulate the shore of a river. Zebrafish spawn best in the morning due to their circadian rhythms. Researchers have been able to collect 10,000 embryos in 10 minutes using this method. In particular, one pair of adult fish is capable of laying 200–300 eggs in one morning, in batches of approximately 5 to 10 at a time. Male zebrafish are furthermore known to respond to more pronounced markings on females, i.e., "good stripes", but in a group, males will mate with whichever females they can find. What attracts females is not currently understood. The presence of plants, even plastic plants, also apparently encourages spawning.
Exposure to environmentally relevant concentrations of diisononyl phthalate (DINP), commonly used in a large variety of plastic items, disrupts the endocannabinoid system and thereby affects reproduction in a sex-specific manner.
Feeding
Zebrafish are omnivorous, primarily eating zooplankton, phytoplankton, insects and insect larvae, although they can eat a variety of other foods, such as worms and small crustaceans, if their preferred food sources are not readily available.
In research, adult zebrafish are often fed with brine shrimp, or paramecia.
In the aquarium
Zebrafish are hardy fish and considered good for beginner aquarists. Their enduring popularity can be attributed to their playful disposition, as well as their rapid breeding, aesthetics, cheap price and broad availability. They also do well in schools or shoals of six or more, and interact well with other fish species in the aquarium. However, they are susceptible to Oodinium or velvet disease, microsporidia (Pseudoloma neurophilia), and Mycobacterium species. Given the opportunity, adults eat hatchlings, which may be protected by separating the two groups with a net, breeding box or separate tank.
In captivity, zebrafish live approximately forty-two months. Some captive zebrafish can develop a curved spine.
The zebra danio was also used to make genetically modified fish and was the first species to be sold as GloFish (fluorescent colored fish).
Strains
In late 2003, transgenic zebrafish that express green, red, and yellow fluorescent proteins became commercially available in the United States. The fluorescent strains are tradenamed GloFish; other cultivated varieties include "golden", "sandy", "longfin" and "leopard".
The leopard danio, previously known as Danio frankei, is a spotted colour morph of the zebrafish which arose due to a pigment mutation. Xanthistic forms of both the zebra and leopard pattern, along with long-finned strains, have been obtained via selective breeding programs for the aquarium trade.
Various transgenic and mutant strains of zebrafish were stored at the China Zebrafish Resource Center (CZRC), a non-profit organization, which was jointly supported by the Ministry of Science and Technology of China and the Chinese Academy of Sciences.
Wild-type strains
The Zebrafish Information Network (ZFIN) provides up-to-date information about current known wild-type (WT) strains of D. rerio, some of which are listed below.
AB (AB)
AB/C32 (AB/C32)
AB/TL (AB/TL)
AB/Tuebingen (AB/TU)
C32 (C32)
Cologne (KOLN)
Darjeeling (DAR)
Ekkwill (EKW)
HK/AB (HK/AB)
HK/Sing (HK/SING)
Hong Kong (HK)
India (IND)
Indonesia (INDO)
Nadia (NA)
RIKEN WT (RW)
Singapore (SING)
SJA (SJA)
SJD (SJD)
SJD/C32 (SJD/C32)
Tuebingen (TU)
Tupfel long fin (TL)
Tupfel long fin nacre (TLN)
WIK (WIK)
WIK/AB (WIK/AB)
Hybrids
Hybrids between different Danio species may be fertile: for example, between D. rerio and D. nigrofasciatus.
Scientific research
D. rerio is a common and useful scientific model organism for studies of vertebrate development and gene function. Its use as a laboratory animal was pioneered by the American molecular biologist George Streisinger and his colleagues at the University of Oregon in the 1970s and 1980s; Streisinger's zebrafish clones were among the earliest successful vertebrate clones created. Its importance has been consolidated by successful large-scale forward genetic screens (commonly referred to as the Tübingen/Boston screens). The fish has a dedicated online database of genetic, genomic, and developmental information, the Zebrafish Information Network (ZFIN). The Zebrafish International Resource Center (ZIRC) is a genetic resource repository with 29,250 alleles available for distribution to the research community. D. rerio is also one of the few fish species to have been sent into space.
Research with D. rerio has yielded advances in the fields of developmental biology, oncology, toxicology, reproductive studies, teratology, genetics, neurobiology, environmental sciences, stem cell research, regenerative medicine, muscular dystrophies and evolutionary theory.
Model characteristics
As a model biological system, the zebrafish possesses numerous advantages for scientists. Its genome has been fully sequenced, and it has well-understood, easily observable and testable developmental behaviors. Its embryonic development is very rapid, and its embryos are relatively large, robust, and transparent, and able to develop outside their mother. Furthermore, well-characterized mutant strains are readily available.
Other advantages include the species' nearly constant size during early development, which enables simple staining techniques to be used, and the fact that its two-celled embryo can be fused into a single cell to create a homozygous embryo. The zebrafish is also demonstrably similar to mammalian models and humans in toxicity testing, and exhibits a diurnal sleep cycle with similarities to mammalian sleep behavior. However, zebrafish are not a universally ideal research model; there are a number of disadvantages to their scientific use, such as the absence of a standard diet and the presence of small but important differences between zebrafish and mammals in the roles of some genes related to human disorders.
Regeneration
Zebrafish have the ability to regenerate their heart and lateral line hair cells during their larval stages. The cardiac regenerative process likely involves signaling pathways such as Notch and Wnt; hemodynamic changes in the damaged heart are sensed by ventricular endothelial cells and their associated cardiac cilia by way of the mechanosensitive ion channel TRPV4, subsequently facilitating the Notch signaling pathway via KLF2 and activating various downstream effectors such as BMP-2 and HER2/neu. In 2011, the British Heart Foundation ran an advertising campaign publicising its intention to study the applicability of this ability to humans, stating that it aimed to raise £50 million in research funding.
Zebrafish have also been found to regenerate photoreceptor cells and retinal neurons following injury, which has been shown to be mediated by the dedifferentiation and proliferation of Müller glia. Researchers frequently amputate the dorsal and ventral tail fins and analyze their regrowth to test for mutations. It has been found that histone demethylation occurs at the site of the amputation, switching the zebrafish's cells to an "active", regenerative, stem cell-like state. In 2012, Australian scientists published a study revealing that zebrafish use a specialised protein, known as fibroblast growth factor, to ensure their spinal cords heal without glial scarring after injury. In addition, hair cells of the posterior lateral line have also been found to regenerate following damage or developmental disruption. Study of gene expression during regeneration has allowed for the identification of several important signaling pathways involved in the process, such as Wnt signaling and Fibroblast growth factor.
In probing disorders of the nervous system, including neurodegenerative diseases, movement disorders, psychiatric disorders and deafness, researchers are using the zebrafish to understand how the genetic defects underlying these conditions cause functional abnormalities in the human brain, spinal cord and sensory organs. Researchers have also studied the zebrafish to gain new insights into the complexities of human musculoskeletal diseases, such as muscular dystrophy. Another focus of zebrafish research is to understand how a gene called Hedgehog, a biological signal that underlies a number of human cancers, controls cell growth.
Genetics
Background genetics
Inbred strains and traditional outbred stocks have not been developed for laboratory zebrafish, and the genetic variability of wild-type lines among institutions may contribute to the replication crisis in biomedical research. Genetic differences in wild-type lines among populations maintained at different research institutions have been demonstrated using both Single-nucleotide polymorphisms and microsatellite analysis.
Gene expression
Due to their fast and short life cycles and relatively large clutch sizes, D. rerio or zebrafish are a useful model for genetic studies. A common reverse genetics technique is to reduce gene expression or modify splicing using Morpholino antisense technology. Morpholino oligonucleotides (MO) are stable, synthetic macromolecules that contain the same bases as DNA or RNA; by binding to complementary RNA sequences, they can reduce the expression of specific genes or block other processes from occurring on RNA. MO can be injected into one cell of an embryo after the 32-cell stage, reducing gene expression only in cells descended from that cell. However, cells in the early embryo (less than 32 cells) are interpermeable to large molecules, allowing diffusion between cells. Guidelines for using Morpholinos in zebrafish describe appropriate control strategies. Morpholinos are commonly microinjected in approximately 500 pL directly into 1–2-cell-stage zebrafish embryos, from which the morpholino is able to distribute into most cells of the embryo.
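Since a morpholino works by base-pairing with a complementary RNA sequence, the central design step is taking the reverse complement of the target region. The sketch below shows only that step, on a made-up target sequence; real morpholino design (typically around 25 bases, placed near the start codon or a splice junction, and screened for off-targets and self-complementarity) involves much more and is normally done with dedicated design tools.

```python
# Hypothetical 25-base target region of an mRNA (5'->3'); not a real zebrafish gene.
target_rna = "GCCACCAUGGCUAGCAAAGGAGAAG"

RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(seq):
    """Reverse complement of an RNA sequence: the sequence an antisense oligo would mimic."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(seq))

print(antisense(target_rna))  # the antisense (morpholino-like) sequence, written 5'->3'
```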
A known problem with gene knockdowns is that, because the genome underwent a duplication after the divergence of ray-finned fishes and lobe-finned fishes, it is not always easy to silence the activity of one of the two gene paralogs reliably due to complementation by the other paralog. Despite the complications of the zebrafish genome, a number of commercially available global platforms exist for analysis of both gene expression by microarrays and promoter regulation using ChIP-on-chip.
Genome sequencing
The Wellcome Trust Sanger Institute started the zebrafish genome sequencing project in 2001, and the full genome sequence of the Tuebingen reference strain is publicly available at the National Center for Biotechnology Information (NCBI)'s Zebrafish Genome Page. The zebrafish reference genome sequence is annotated as part of the Ensembl project, and is maintained by the Genome Reference Consortium.
In 2009, researchers at the Institute of Genomics and Integrative Biology in Delhi, India, announced the sequencing of the genome of a wild zebrafish strain, containing an estimated 1.7 billion genetic letters. The genome of the wild zebrafish was sequenced at 39-fold coverage. Comparative analysis with the zebrafish reference genome revealed over 5 million single nucleotide variations and over 1.6 million insertion–deletion variations. The zebrafish reference genome sequence, spanning 1.4 Gb and containing over 26,000 protein-coding genes, was published by Kerstin Howe et al. in 2013.
Mitochondrial DNA
In October 2001, researchers from the University of Oklahoma published D. rerio's complete mitochondrial DNA sequence. Its length is 16,596 base pairs. This is within 100 base pairs of other related species of fish, and it is notably only 18 pairs longer than the goldfish (Carassius auratus) and 21 longer than the carp (Cyprinus carpio). Its gene order and content are identical to the common vertebrate form of mitochondrial DNA. It contains 13 protein-coding genes and a noncoding control region containing the origin of replication for the heavy strand. In between a grouping of five tRNA genes, a sequence resembling vertebrate origin of light strand replication is found. It is difficult to draw evolutionary conclusions because it is difficult to determine whether base pair changes have adaptive significance via comparisons with other vertebrates' nucleotide sequences.
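As a quick sanity check of the figures quoted above, the lengths of the related mitochondrial genomes follow directly from the stated differences (treating "18 pairs longer" and "21 longer" as exact):

```python
zebrafish_mtdna_bp = 16_596                    # D. rerio mitochondrial genome length (bp)
goldfish_mtdna_bp = zebrafish_mtdna_bp - 18    # zebrafish stated as 18 bp longer than goldfish
carp_mtdna_bp = zebrafish_mtdna_bp - 21        # zebrafish stated as 21 bp longer than carp

print(goldfish_mtdna_bp)  # 16578
print(carp_mtdna_bp)      # 16575
```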
Developmental genetics
T-boxes and homeoboxes are vital in Danio, as in other vertebrates. Bruce et al. are known for this area; in Bruce et al. 2003 and Bruce et al. 2005 they uncovered the role of two of these elements in oocytes of this species. By interfering via a dominant nonfunctional allele and a morpholino, they found that the T-box transcription activator Eomesodermin and its target mtx2 – a transcription factor – are vital to epiboly. (In Bruce et al. 2003 they failed to support the possibility that Eomesodermin behaves like VegT. Neither they nor anyone else has been able to locate any mutation which – in the mother – will prevent initiation of the mesoderm or endoderm development processes in this species.)
Pigmentation genes
In 1999, the nacre mutation was identified in the zebrafish ortholog of the mammalian MITF transcription factor. Mutations in human MITF result in eye defects and loss of pigment, a type of Waardenburg Syndrome. In December 2005, a study of the golden strain identified the gene responsible for its unusual pigmentation as SLC24A5, a solute carrier that appeared to be required for melanin production, and confirmed its function with a Morpholino knockdown. The orthologous gene was then characterized in humans and a one base pair difference was found to strongly segregate fair-skinned Europeans and dark-skinned Africans. Zebrafish with the nacre mutation have since been bred with fish with a roy orbison (roy) mutation to make Casper strain fish that have no melanophores or iridophores, and are transparent into adulthood. These fish are characterized by uniformly pigmented eyes and translucent skin.
Transgenesis
Transgenesis is a popular approach to study the function of genes in zebrafish. Construction of transgenic zebrafish is rather easy using the Tol2 transposon system. The Tol2 element encodes a gene for a fully functional transposase capable of catalyzing transposition in the zebrafish germ lineage, and it is the only natural DNA transposable element in vertebrates from which an autonomous member has been identified. Examples include the artificial interaction produced between LEF1 and Catenin beta-1/β-catenin/CTNNB1: Dorsky et al. 2002 investigated the developmental role of Wnt by transgenically expressing a Lef1/β-catenin reporter.
There are well-established protocols for editing zebrafish genes using CRISPR-Cas9 and this tool has been used to generate genetically modified models.
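At the heart of such protocols is choosing a guide RNA target: for Cas9 from Streptococcus pyogenes, a 20-nucleotide protospacer immediately followed by an NGG PAM on the same strand. The sketch below scans a made-up stretch of sequence for such sites on one strand only; actual guide selection for zebrafish also checks the reverse strand, scores off-targets, and relies on dedicated design tools, none of which is shown here.

```python
# Hypothetical genomic snippet (one strand only, 5'->3'); not a real zebrafish locus.
sequence = "ATGCTGACCGTTAGGCATCCAGGATTCACGTGCAAGGTTGACCTAGGAAC"

def find_spcas9_targets(seq, protospacer_len=20):
    """Return (protospacer, PAM, position) for every NGG PAM with enough upstream sequence."""
    hits = []
    for i in range(protospacer_len, len(seq) - 2):
        pam = seq[i:i + 3]
        if pam[1:] == "GG":                              # NGG protospacer-adjacent motif
            hits.append((seq[i - protospacer_len:i], pam, i))
    return hits

for protospacer, pam, pos in find_spcas9_targets(sequence):
    print(f"PAM at position {pos}: protospacer {protospacer} | PAM {pam}")
```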
Transparent adult bodies
In 2008, researchers at Boston Children's Hospital developed a new strain of zebrafish, named Casper, whose adult bodies had transparent skin. This allows for detailed visualization of cellular activity, circulation, metastasis and many other phenomena. In 2019 researchers published a crossing of a prkdc-/- and a IL2rga-/- strain that produced transparent, immunodeficient offspring, lacking natural killer cells as well as B- and T-cells. This strain can be adapted to warm water and the absence of an immune system makes the use of patient derived xenografts possible. In January 2013, Japanese scientists genetically modified a transparent zebrafish specimen to produce a visible glow during periods of intense brain activity.
In January 2007, Chinese researchers at Fudan University genetically modified zebrafish to detect oestrogen pollution in lakes and rivers, which is linked to male infertility. The researchers cloned oestrogen-sensitive genes and injected them into the fertile eggs of zebrafish. The modified fish turned green if placed into water that was polluted by oestrogen.
RNA splicing
In 2015, researchers at Brown University discovered that 10% of zebrafish genes do not need to rely on the U2AF2 protein to initiate RNA splicing. These genes have the DNA base pairs AC and TG as repeated sequences at the ends of each intron. On the 3'ss (3' splicing site), the base pairs adenine and cytosine alternate and repeat, and on the 5'ss (5' splicing site), their complements thymine and guanine alternate and repeat as well. They found that there was less reliance on U2AF2 protein than in humans, in which the protein is required for the splicing process to occur. The pattern of repeating base pairs around introns that alters RNA secondary structure was found in other teleosts, but not in tetrapods. This indicates that an evolutionary change in tetrapods may have led to humans relying on the U2AF2 protein for RNA splicing while these genes in zebrafish undergo splicing regardless of the presence of the protein.
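A minimal way to picture the pattern described above is to count how many alternating TG units sit just inside the 5' splice site of an intron and how many AC units sit just before its 3' splice site. The sketch below does only that, on an invented intron-like sequence; it illustrates the described sequence feature, not the analysis pipeline used in the study.

```python
def leading_repeats(seq, unit):
    """Number of consecutive copies of `unit` at the start of `seq`."""
    n = 0
    while seq.startswith(unit, n * len(unit)):
        n += 1
    return n

def intron_repeat_signature(intron):
    """(TG repeats after the 5' splice site, AC repeats before the 3' splice site)."""
    five_prime_tg = leading_repeats(intron, "TG")
    three_prime_ac = leading_repeats(intron[::-1], "CA")  # reversed, so trailing AC reads as CA
    return five_prime_tg, three_prime_ac

# Invented intron-like sequence with TG repeats at its start and AC repeats at its end.
example_intron = "TGTGTGTGCCATTAGGCTTACACACAC"
print(intron_repeat_signature(example_intron))  # (4, 4)
```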
Orthology
D. rerio has three transferrins, all of which cluster closely with other vertebrates.
Inbreeding depression
When close relatives mate, progeny may exhibit the detrimental effects of inbreeding depression. Inbreeding depression is predominantly caused by the homozygous expression of recessive deleterious alleles. For zebrafish, inbreeding depression might be expected to be more severe in stressful environments, including those caused by anthropogenic pollution. Exposure of zebrafish to environmental stress induced by the chemical clotrimazole, an imidazole fungicide used in agriculture and in veterinary and human medicine, amplified the effects of inbreeding on key reproductive traits. Embryo viability was significantly reduced in inbred exposed fish and there was a tendency for inbred males to sire fewer offspring.
Aquaculture research
Zebrafish are common models for research into fish farming, including pathogens and parasites causing yield loss and/or spread to adjacent wild populations.
This usefulness is limited by Danio's taxonomic distance from the most common aquaculture species. The most commonly farmed fish are salmonids and cod, in the Protacanthopterygii, and sea bass, sea bream, tilapia, and flatfish, in the Percomorpha, so zebrafish results may not be perfectly applicable to them. Various other models – goldfish (Carassius auratus), medaka (Oryzias latipes), stickleback (Gasterosteus aculeatus), roach (Rutilus rutilus), pufferfish (Takifugu rubripes) and swordtail (Xiphophorus hellerii) – are less commonly used but would be closer to particular target species.
The main exceptions are the carps (including the grass carp, Ctenopharyngodon idella), which, like Danio, are in the Cyprinidae, and the milkfish (Chanos chanos), which is also a comparatively close relative. Even so, Danio consistently proves to be a useful model for mammals in many cases, despite there being dramatically more genetic distance between Danio and mammals than between Danio and any farmed fish.
Neurochemistry
In a glucocorticoid receptor-defective mutant with reduced exploratory behavior, fluoxetine rescued the normal exploratory behavior. This demonstrates relationships between glucocorticoids, fluoxetine, and exploration in this fish.
Drug discovery and development
The zebrafish and zebrafish larva is a suitable model organism for drug discovery and development. As a vertebrate with 70% genetic homology with humans, it can be predictive of human health and disease, while its small size and fast development facilitates experiments on a larger and quicker scale than with more traditional in vivo studies, including the development of higher-throughput, automated investigative tools. As demonstrated through ongoing research programmes, the zebrafish model enables researchers not only to identify genes that might underlie human disease, but also to develop novel therapeutic agents in drug discovery programmes. Zebrafish embryos have proven to be a rapid, cost-efficient, and reliable teratology assay model.
Drug screens
Drug screens in zebrafish can be used to identify novel classes of compounds with biological effects, or to repurpose existing drugs for novel uses; an example of the latter would be a screen which found that a commonly used statin (rosuvastatin) can suppress the growth of prostate cancer. To date, 65 small-molecule screens have been carried out and at least one has led to clinical trials. Within these screens, many technical challenges remain to be resolved, including differing rates of drug absorption resulting in levels of internal exposure that cannot be extrapolated from the water concentration, and high levels of natural variation between individual animals.
Toxico- or pharmacokinetics
To understand drug effects, the internal drug exposure is essential, as this drives the pharmacological effect. Translating experimental results from zebrafish to higher vertebrates (like humans) requires concentration-effect relationships, which can be derived from pharmacokinetic and pharmacodynamic analysis.
Because of its small size, however, it is very challenging to quantify the internal drug exposure. Traditionally multiple blood samples would be drawn to characterize the drug concentration profile over time, but this technique remains to be developed. To date, only a single pharmacokinetic model for paracetamol has been developed in zebrafish larvae.
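To make the idea of such a model concrete, the sketch below simulates a generic one-compartment model with first-order absorption and elimination, the simplest structure an analysis like this might use. All parameter values are invented for illustration; they are not the published zebrafish paracetamol parameters.

```python
import math

def one_compartment(dose, ka, ke, volume, t):
    """Concentration at time t for first-order absorption (ka) and elimination (ke)."""
    if math.isclose(ka, ke):
        raise ValueError("ka and ke must differ for this closed-form solution")
    return (dose * ka) / (volume * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

# Invented illustrative parameters (arbitrary units), not fitted zebrafish values.
dose, ka, ke, volume = 1.0, 1.5, 0.3, 0.5

for t in (0.5, 1, 2, 4, 8):
    print(f"t = {t:>4} h  concentration = {one_compartment(dose, ka, ke, volume, t):.3f}")
```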
Computational data analysis
Using smart data analysis methods, pathophysiological and pharmacological processes can be understood and subsequently translated to higher vertebrates, including humans. An example is the use of systems pharmacology, which is the integration of systems biology and pharmacometrics.
Systems biology characterizes (part of) an organism by a mathematical description of all relevant processes. These can be for example different signal transduction pathways that upon a specific signal lead to a certain response. By quantifying these processes, their behaviour in healthy and diseased situation can be understood and predicted.
Pharmacometrics uses data from preclinical experiments and clinical trials to characterize the pharmacological processes that are underlying the relation between the drug dose and its response or clinical outcome. These can be for example the drug absorption in or clearance from the body, or its interaction with the target to achieve a certain effect. By quantifying these processes, their behaviour after different doses or in different patients can be understood and predicted to new doses or patients.
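A typical building block of such a pharmacometric description is a concentration–effect relation like the Emax model, sketched below with invented parameter values purely to show the shape of the relationship (a baseline effect plus a saturable, concentration-driven component).

```python
def emax_effect(concentration, e0, emax, ec50):
    """Emax model: baseline effect plus a saturable, concentration-driven effect."""
    return e0 + emax * concentration / (ec50 + concentration)

# Invented parameters: baseline 10, maximal added effect 80, half-maximal at C = 2.
for c in (0, 0.5, 2, 8, 32):
    print(c, round(emax_effect(c, e0=10, emax=80, ec50=2), 1))
```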
By integrating these two fields, systems pharmacology has the potential to improve the understanding of the interaction of the drug with the biological system by mathematical quantification and subsequent prediction to new situations, like new drugs or new organisms or patients.
Using these computational methods, the previously mentioned analysis of paracetamol internal exposure in zebrafish larvae showed reasonable correlation between paracetamol clearance in zebrafish with that of higher vertebrates, including humans.
Medical research
Cancer
Zebrafish have been used to make several transgenic models of cancer, including melanoma, leukemia, pancreatic cancer and hepatocellular carcinoma. Zebrafish expressing mutated forms of either the BRAF or NRAS oncogenes develop melanoma when placed onto a p53 deficient background. Histologically, these tumors strongly resemble the human disease, are fully transplantable, and exhibit large-scale genomic alterations. The BRAF melanoma model was utilized as a platform for two screens published in March 2011 in the journal Nature. In one study, the model was used as a tool to understand the functional importance of genes known to be amplified and overexpressed in human melanoma. One gene, SETDB1, markedly accelerated tumor formation in the zebrafish system, demonstrating its importance as a new melanoma oncogene. This was particularly significant because SETDB1 is known to be involved in the epigenetic regulation that is increasingly appreciated to be central to tumor cell biology.
In another study, an effort was made to therapeutically target the genetic program present in the tumor's origin neural crest cell using a chemical screening approach. This revealed that an inhibition of the DHODH protein (by a small molecule called leflunomide) prevented development of the neural crest stem cells which ultimately give rise to melanoma via interference with the process of transcriptional elongation. Because this approach would aim to target the "identity" of the melanoma cell rather than a single genetic mutation, leflunomide may have utility in treating human melanoma.
Cardiovascular disease
In cardiovascular research, the zebrafish has been used to model human myocardial infarction. The zebrafish heart completely regenerates within about two months of injury, without any scar formation. Zebrafish are also used as a model for blood clotting, blood vessel development, and congenital heart and kidney disease.
Immune system
In programmes of research into acute inflammation, a major underpinning process in many diseases, researchers have established a zebrafish model of inflammation, and its resolution. This approach allows detailed study of the genetic controls of inflammation and the possibility of identifying potential new drugs.
Zebrafish has been extensively used as a model organism to study vertebrate innate immunity. The innate immune system is capable of phagocytic activity by 28 to 30 h postfertilization (hpf) while adaptive immunity is not functionally mature until at least 4 weeks postfertilization.
Infectious diseases
As the immune system is relatively conserved between zebrafish and humans, many human infectious diseases can be modeled in zebrafish. The transparent early life stages are well suited for in vivo imaging and genetic dissection of host-pathogen interactions. Zebrafish models for a wide range of bacterial, viral and parasitic pathogens have already been established; for example, the zebrafish model for tuberculosis provides fundamental insights into the mechanisms of pathogenesis of mycobacteria. Furthermore, robotic technology has been developed for high-throughput antimicrobial drug screening using zebrafish infection models.
Repairing retinal damage
Another notable characteristic of the zebrafish is that it possesses four types of cone cell, with ultraviolet-sensitive cells supplementing the red, green and blue cone cell subtypes found in humans. Zebrafish can thus observe a very wide spectrum of colours. The species is also studied to better understand the development of the retina; in particular, how the cone cells of the retina become arranged into the so-called 'cone mosaic'. Zebrafish, in addition to certain other teleost fish, are particularly noted for having extreme precision of cone cell arrangement.
This study of the zebrafish's retinal characteristics has also extrapolated into medical enquiry. In 2007, researchers at University College London grew a type of zebrafish adult stem cell found in the eyes of fish and mammals that develops into neurons in the retina. These could be injected into the eye to treat diseases that damage retinal neurons—nearly every disease of the eye, including macular degeneration, glaucoma, and diabetes-related blindness. The researchers studied Müller glial cells in the eyes of humans aged from 18 months to 91 years, and were able to develop them into all types of retinal neurons. They were also able to grow them easily in the lab. The stem cells successfully migrated into diseased rats' retinas, and took on the characteristics of the surrounding neurons. The team stated that they intended to develop the same approach in humans.
Muscular dystrophies
Muscular dystrophies (MD) are a heterogeneous group of genetic disorders that cause muscle weakness, abnormal contractions and muscle wasting, often leading to premature death. The zebrafish is widely used as a model organism to study muscular dystrophies. For example, the sapje (sap) mutant is the zebrafish orthologue of human Duchenne muscular dystrophy (DMD). Machuca-Tzili and co-workers used zebrafish to determine the role of the alternative splicing factor MBNL in myotonic dystrophy type 1 (DM1) pathogenesis. More recently, Todd et al. described a new zebrafish model designed to explore the impact of CUG repeat expression during early development in DM1 disease. The zebrafish is also an excellent animal model for studying congenital muscular dystrophies, including CMD type 1A (CMD 1A), caused by mutation in the human laminin α2 (LAMA2) gene. The zebrafish, because of its advantages discussed above, and in particular the ability of zebrafish embryos to absorb chemicals, has become a model of choice in screening and testing new drugs against muscular dystrophies.
Bone physiology and pathology
Zebrafish have been used as model organisms for bone metabolism, tissue turnover, and resorbing activity. These processes are largely evolutionarily conserved. They have been used to study osteogenesis (bone formation), evaluating differentiation, matrix deposition activity, and cross-talk of skeletal cells, to create and isolate mutants modeling human bone diseases, and to test new chemical compounds for the ability to revert bone defects. The larvae can be used to follow new (de novo) osteoblast formation during bone development. They start mineralising bone elements as early as 4 days post-fertilisation. Recently, adult zebrafish have been used to study complex age-related bone diseases such as osteoporosis and osteogenesis imperfecta. The (elasmoid) scales of zebrafish function as a protective external layer and are little bony plates made by osteoblasts. These exoskeletal structures are formed by bone matrix-depositing osteoblasts and are remodeled by osteoclasts. The scales also act as the main calcium storage of the fish. They can be cultured ex vivo (kept alive outside of the organism) in a multi-well plate, which allows manipulation with drugs and even screening for new drugs that could change bone metabolism (between osteoblasts and osteoclasts).
Diabetes
Zebrafish pancreas development is highly homologous to that of mammals, such as mice. The signaling mechanisms and the way the pancreas functions are very similar. The pancreas has an endocrine compartment, which contains a variety of cells. Pancreatic PP cells, which produce pancreatic polypeptide, and β-cells, which produce insulin, are two examples of such cells. This structure of the pancreas, along with the glucose homeostasis system, is helpful in studying diseases, such as diabetes, that are related to the pancreas. Models for pancreas function, such as fluorescent staining of proteins, are useful in determining the processes of glucose homeostasis and the development of the pancreas. Glucose tolerance tests have been developed using zebrafish, and can now be used to test for glucose intolerance or diabetes in humans. The function of insulin is also being tested in zebrafish, which will further contribute to human medicine. Much of what is known about glucose homeostasis has come from work on zebrafish that was later transferred to humans.
Obesity
Zebrafish have been used as a model system to study obesity, with research into both genetic obesity and over-nutrition-induced obesity. Obese zebrafish, similar to obese mammals, show dysregulation of lipid-controlling metabolic pathways, which leads to weight gain without normal lipid metabolism. Also like mammals, zebrafish store excess lipids in visceral, intramuscular, and subcutaneous adipose deposits. These reasons and others make zebrafish good models for studying obesity in humans and other species. Genetic obesity is usually studied in transgenic or mutated zebrafish with obesogenic genes. As an example, transgenic zebrafish with overexpressed AgRP, an endogenous melanocortin antagonist, showed increased body weight and adipose deposition during growth. Though zebrafish genes may not be exactly the same as human genes, these tests could provide important insight into possible genetic causes and treatments for human genetic obesity. Diet-induced obesity zebrafish models are useful, as diet can be modified from a very early age. High-fat diets and general overfeeding diets both show rapid increases in adipose deposition, increased BMI, hepatosteatosis, and hypertriglyceridemia. However, the normal-fat, overfed specimens are still metabolically healthy, while high-fat diet specimens are not. Understanding differences between types of feeding-induced obesity could prove useful in human treatment of obesity and related health conditions.
Environmental toxicology
Zebrafish have been used as a model system in environmental toxicology studies.
Epilepsy
Zebrafish have been used as a model system to study epilepsy. Mammalian seizures can be recapitulated molecularly, behaviorally, and electrophysiologically, using a fraction of the resources required for experiments in mammals.
See also
Japanese rice fish or medaka, another fish used for genetic, developmental, and biomedical research
List of freshwater aquarium fish species
Denison barb
References
Further reading
External links
British Association of Zebrafish Husbandry
International Zebrafish Society (IZFS)
European Society for Fish Models in Biology and Medicine (EuFishBioMed)
The Zebrafish Information Network (ZFIN)
The Zebrafish International Resource Center (ZIRC)
The European Zebrafish Resource Center (EZRC)
The China Zebrafish Resource Center (CZRC)
The Zebrafish Genome Sequencing Project at the Wellcome Trust Sanger Institute
FishMap: The Zebrafish Community Genomics Browser at the Institute of Genomics and Integrative Biology (IGIB)
WebHome Zebrafish GenomeWiki Beta Preview at the IGIB
Genome sequencing initiative at the IGIB
Danio rerio at Danios.info
Sanger Institute Zebrafish Mutation Resource
Zebrafish genome via Ensembl
FishforScience.com – using zebrafish for medical research
FishForPharma
Breeding Zebrafish
|
https://en.wikipedia.org/wiki/Bistability
|
In a dynamical system, bistability means the system has two stable equilibrium states. A bistable structure can be resting in either of two states. An example of a mechanical device which is bistable is a light switch. The switch lever is designed to rest in the "on" or "off" position, but not between the two. Bistable behavior can occur in mechanical linkages, electronic circuits, nonlinear optical systems, chemical reactions, and physiological and biological systems.
In a conservative force field, bistability stems from the fact that the potential energy has two local minima, which are the stable equilibrium points. These rest states need not have equal potential energy. By mathematical arguments, a local maximum, an unstable equilibrium point, must lie between the two minima. At rest, a particle will be in one of the minimum equilibrium positions, because that corresponds to the state of lowest energy. The maximum can be visualized as a barrier between them.
A system can transition from one state of minimal energy to the other if it is given enough activation energy to penetrate the barrier (compare activation energy and Arrhenius equation for the chemical case). After the barrier has been reached, assuming the system has damping, it will relax into the other minimum state in a time called the relaxation time.
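As a concrete worked example (a standard textbook choice, not tied to any particular device mentioned above), take the symmetric quartic double-well potential. The short calculation below locates its two minima, the intervening maximum, and the barrier height that the activation energy must overcome:

```latex
% Double-well potential and its critical points
V(x) = \frac{x^{4}}{4} - \frac{x^{2}}{2}, \qquad
V'(x) = x^{3} - x = x(x-1)(x+1) = 0 \;\Rightarrow\; x \in \{-1,\,0,\,+1\}.
% Second-derivative test
V''(x) = 3x^{2} - 1:\quad V''(\pm 1) = 2 > 0 \ \text{(the two stable minima)}, \qquad
V''(0) = -1 < 0 \ \text{(unstable maximum, the barrier)}.
% Barrier height seen from either well
\Delta V = V(0) - V(\pm 1) = 0 - \left(-\tfrac{1}{4}\right) = \tfrac{1}{4}.
```

Any energy input smaller than this barrier height leaves the system in its current well; anything larger can tip it into the other well.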
Bistability is widely used in digital electronics devices to store binary data. It is the essential characteristic of the flip-flop, a circuit which is a fundamental building block of computers and some types of semiconductor memory. A bistable device can store one bit of binary data, with one state representing a "0" and the other state a "1". It is also used in relaxation oscillators, multivibrators, and the Schmitt trigger.
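In circuit terms, the canonical bistable storage element is a pair of cross-coupled NOR gates (an SR latch). The toy simulation below is a sketch of our own, not hardware-description code or any library's API; it shows the two self-reinforcing states and how a Set or Reset pulse switches between them:

```python
# Minimal sketch of a cross-coupled NOR SR latch, the bistable element behind
# flip-flop storage. All names here are our own illustrative choices.

def nor(a: int, b: int) -> int:
    return 0 if (a or b) else 1

def settle(s: int, r: int, q: int, q_bar: int):
    """Iterate the two NOR gates until the outputs stop changing."""
    for _ in range(10):                              # a few passes are enough to settle
        new_q = nor(r, q_bar)
        new_q_bar = nor(s, new_q)
        if (new_q, new_q_bar) == (q, q_bar):
            break
        q, q_bar = new_q, new_q_bar
    return q, q_bar

q, q_bar = settle(s=1, r=0, q=0, q_bar=1)            # pulse Set: stores a 1
q, q_bar = settle(s=0, r=0, q=q, q_bar=q_bar)        # hold: the state persists (bistability)
print(q)                                             # 1
q, q_bar = settle(s=0, r=1, q=q, q_bar=q_bar)        # pulse Reset: stores a 0
q, q_bar = settle(s=0, r=0, q=q, q_bar=q_bar)        # hold again
print(q)                                             # 0
```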
Optical bistability is an attribute of certain optical devices where two resonant transmission states are possible and stable, dependent on the input.
Bistability can also arise in biochemical systems, where it creates digital, switch-like outputs from the constituent chemical concentrations and activities. It is often associated with hysteresis in such systems.
Mathematical modelling
In the mathematical language of dynamical systems analysis, one of the simplest bistable systems is
$$\frac{dy}{dt} = y - y^{3}.$$
This system describes a ball rolling down a curve with shape $V(y) = \frac{y^{4}}{4} - \frac{y^{2}}{2}$, and has three equilibrium points: $y = 1$, $y = 0$, and $y = -1$. The middle point $y = 0$ is unstable (it sits at the local maximum of the potential, so any small perturbation grows), while the other two points are stable. The direction of change of $y$ over time depends on the initial condition $y(0)$. If the initial condition is positive ($y(0) > 0$), then the solution approaches 1 over time, but if the initial condition is negative ($y(0) < 0$), then it approaches −1 over time. Thus, the dynamics are "bistable". The final state of the system can be either $y = 1$ or $y = -1$, depending on the initial conditions.
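A few lines of numerical integration illustrate this directly. The sketch below uses forward Euler with an arbitrary illustrative step size and duration:

```python
# Minimal forward-Euler sketch of dy/dt = y - y^3, showing that the final state
# depends only on the sign of the initial condition.

def simulate(y0: float, dt: float = 0.01, steps: int = 2000) -> float:
    y = y0
    for _ in range(steps):
        y += dt * (y - y**3)      # dy/dt = y - y^3
    return y

print(round(simulate(+0.1), 3))   # about  1.0
print(round(simulate(-0.1), 3))   # about -1.0
print(round(simulate(0.0), 3))    # exactly 0.0 (unstable equilibrium; any perturbation tips it)
```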
The appearance of a bistable region can be understood for the model system
$$\frac{dy}{dt} = y\,(r - y^{2}),$$
which undergoes a supercritical pitchfork bifurcation with bifurcation parameter $r$.
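A short stability calculation (standard bifurcation-theory bookkeeping for the normal form written above) makes the appearance of the bistable region explicit: the fixed point at the origin loses stability exactly where the two new stable branches appear.

```latex
% Fixed points of dy/dt = f(y) = y(r - y^2) and their stability, using f'(y) = r - 3y^2
y^{*} = 0:\qquad f'(0) = r
  \;\Rightarrow\; \text{stable for } r < 0,\ \text{unstable for } r > 0.
y^{*} = \pm\sqrt{r}\ \ (r > 0):\qquad f'(\pm\sqrt{r}) = r - 3r = -2r < 0
  \;\Rightarrow\; \text{both branches stable.}
% Hence for r > 0 the system is bistable between y = +\sqrt{r} and y = -\sqrt{r}.
```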
In biological and chemical systems
Bistability is key for understanding basic phenomena of cellular functioning, such as decision-making processes in cell cycle progression, cellular differentiation, and apoptosis. It is also involved in loss of cellular homeostasis associated with early events in cancer onset and in prion diseases as well as in the origin of new species (speciation).
Bistability can be generated by a positive feedback loop with an ultrasensitive regulatory step. Positive feedback loops, such as the simple X activates Y and Y activates X motif, essentially link output signals to their input signals and have been noted to be an important regulatory motif in cellular signal transduction because positive feedback loops can create switches with an all-or-nothing decision. Studies have shown that numerous biological systems, such as Xenopus oocyte maturation, mammalian calcium signal transduction, and polarity in budding yeast, incorporate multiple positive feedback loops with different time scales (slow and fast). Having multiple linked positive feedback loops with different time scales ("dual-time switches") allows for (a) increased regulation: two switches that have independent changeable activation and deactivation times; and (b) noise filtering.
Bistability can also arise in a biochemical system only for a particular range of parameter values, where the parameter can often be interpreted as the strength of the feedback. In several typical examples, the system has only one stable fixed point at low values of the parameter. At a critical value of the parameter, a saddle-node bifurcation gives rise to a pair of new fixed points, one stable and the other unstable. The unstable solution can then collide with the initial stable solution in a second saddle-node bifurcation at a higher value of the parameter, leaving only the upper stable solution. Thus, at values of the parameter between the two critical values, the system has two stable solutions. A one-variable dynamical system that demonstrates these features can be written with $y$ as the output and $r$ as the parameter acting as the input; a concrete form is sketched below.
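One concrete form with exactly these ingredients, our own illustrative choice rather than a formula taken from the literature, is dy/dt = r + y^5/(1 + y^5) − y: a basal input r, an ultrasensitive positive-feedback term, and linear decay. Sweeping r slowly up and then back down exposes the two saddle-node points and the hysteresis loop between them:

```python
# Hedged sketch: slowly sweep the input r up and then down while letting
#   dy/dt = r + y**5 / (1 + y**5) - y
# relax at each step. Over a range of r the up-sweep and down-sweep settle on
# different branches, which is the hysteresis that accompanies bistability.

def relax(y: float, r: float, dt: float = 0.01, steps: int = 8000) -> float:
    for _ in range(steps):
        y += dt * (r + y**5 / (1 + y**5) - y)
    return y

r_values = [i / 100 for i in range(0, 71)]          # r from 0.00 to 0.70

y = 0.0
up = []
for r in r_values:                                   # sweep r upward
    y = relax(y, r)
    up.append((r, y))

down = []
for r in reversed(r_values):                         # sweep r back down
    y = relax(y, r)
    down.append((r, y))

# Where the two sweeps disagree, the system is bistable for that value of r.
bistable = [r for (r, yu), (_, yd) in zip(up, reversed(down)) if abs(yu - yd) > 0.2]
print(f"hysteresis (bistable) range of r: roughly {min(bistable):.2f} to {max(bistable):.2f}")
```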
Bistability can be modified to be more robust and to tolerate significant changes in concentrations of reactants, while still maintaining its "switch-like" character. Feedback on both the activator of a system and inhibitor make the system able to tolerate a wide range of concentrations. An example of this in cell biology is that activated CDK1 (Cyclin Dependent Kinase 1) activates its activator Cdc25 while at the same time inactivating its inactivator, Wee1, thus allowing for progression of a cell into mitosis. Without this double feedback, the system would still be bistable, but would not be able to tolerate such a wide range of concentrations.
Bistability has also been described in the embryonic development of Drosophila melanogaster (the fruit fly). Examples are anterior-posterior and dorso-ventral axis formation and eye development.
A prime example of bistability in biological systems is that of Sonic hedgehog (Shh), a secreted signaling molecule, which plays a critical role in development. Shh functions in diverse processes in development, including patterning limb bud tissue differentiation. The Shh signaling network behaves as a bistable switch, allowing the cell to abruptly switch states at precise Shh concentrations. gli1 and gli2 transcription is activated by Shh, and their gene products act as transcriptional activators for their own expression and for targets downstream of Shh signaling. Simultaneously, the Shh signaling network is controlled by a negative feedback loop wherein the Gli transcription factors activate the enhanced transcription of a repressor (Ptc). This signaling network illustrates the simultaneous positive and negative feedback loops whose exquisite sensitivity helps create a bistable switch.
Bistability can only arise in biological and chemical systems if three necessary conditions are fulfilled: positive feedback, a mechanism to filter out small stimuli and a mechanism to prevent increase without bound.
Bistable chemical systems have been studied extensively to analyze relaxation kinetics, non-equilibrium thermodynamics, stochastic resonance, as well as climate change. In bistable spatially extended systems the onset of local correlations and propagation of traveling waves have been analyzed.
Bistability is often accompanied by hysteresis. On a population level, if many realisations of a bistable system are considered (e.g. many bistable cells (speciation)), one typically observes bimodal distributions. In an ensemble average over the population, the result may simply look like a smooth transition, thus showing the value of single-cell resolution.
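A toy ensemble simulation, again a hedged sketch with arbitrary parameter choices, makes the point: individual "cells" governed by the bistable dynamics above settle near −1 or +1, giving a bimodal distribution, while the population average is an uninformative intermediate value.

```python
import random

# Hedged toy sketch: an ensemble of independent "cells", each following the
# bistable dynamics dy/dt = y - y^3 with a little noise. Individually, cells end
# up near -1 or +1 (a bimodal distribution); the population average sits in
# between and hides the underlying switch-like behaviour.

random.seed(0)

def run_cell(y0: float, dt: float = 0.01, steps: int = 2000, noise: float = 0.05) -> float:
    y = y0
    for _ in range(steps):
        y += dt * (y - y**3) + noise * (dt ** 0.5) * random.gauss(0, 1)
    return y

finals = [run_cell(random.uniform(-0.5, 0.5)) for _ in range(300)]

low  = sum(1 for y in finals if y < 0)
high = sum(1 for y in finals if y > 0)
mean = sum(finals) / len(finals)

print(f"cells near -1: {low}, cells near +1: {high}")   # two clear sub-populations
print(f"population average: {mean:.2f}")                 # a bland intermediate value
```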
A specific type of instability is known as mode hopping, which is bistability in frequency space. Here trajectories can shoot between two stable limit cycles, and thus show similar characteristics to normal bistability when measured inside a Poincaré section.
In mechanical systems
Bistability as applied in the design of mechanical systems is more commonly said to be "over centre"—that is, work is done on the system to move it just past the peak, at which point the mechanism goes "over centre" to its secondary stable position. The result is a toggle-type action: work applied to the system below a threshold sufficient to send it 'over center' results in no change to the mechanism's state.
Springs are a common method of achieving an "over centre" action. A spring attached to a simple two position ratchet-type mechanism can create a button or plunger that is clicked or toggled between two mechanical states. Many ballpoint and rollerball retractable pens employ this type of bistable mechanism.
An even more common example of an over-center device is an ordinary electric wall switch. These switches are often designed to snap firmly into the "on" or "off" position once the toggle handle has been moved a certain distance past the center-point.
A ratchet-and-pawl is an elaboration—a multi-stable "over center" system used to create irreversible motion. The pawl goes over center as it is turned in the forward direction. In this case, "over center" refers to the ratchet being stable and "locked" in a given position until clicked forward again; it has nothing to do with the ratchet being unable to turn in the reverse direction.
See also
Multistability – the generalized case of more than two stable points
In psychology
ferroelectric, ferromagnetic, hysteresis, bistable perception
Schmitt trigger
strong Allee effect
Interferometric modulator display, a bistable reflective display technology found in mirasol displays by Qualcomm
References
External links
BiStable Reed Sensor
|
https://en.wikipedia.org/wiki/Chess
|
Chess is a board game for two players, called White and Black, each controlling an army of chess pieces, with the objective to checkmate the opponent's king. It is sometimes called international chess or Western chess to distinguish it from related games such as xiangqi (Chinese chess) and shogi (Japanese chess). The recorded history of chess goes back at least to the emergence of a similar game, chaturanga, in seventh-century India. The rules of chess as they are known today emerged in Europe at the end of the 15th century, with standardization and universal acceptance by the end of the 19th century. Today, chess is one of the world's most popular games, played by millions of people worldwide.
Chess is an abstract strategy game that involves no hidden information and no elements of chance. It is played on a chessboard with 64 squares arranged in an 8×8 grid. At the start, each player controls sixteen pieces: one king, one queen, two rooks, two bishops, two knights, and eight pawns. White moves first, followed by Black. The game is won by checkmating the opponent's king, i.e. threatening it with inescapable capture. There are also several ways a game can end in a draw.
Organized chess arose in the 19th century. Chess competition today is governed internationally by FIDE (the International Chess Federation). The first universally recognized World Chess Champion, Wilhelm Steinitz, claimed his title in 1886; Ding Liren is the current World Champion. A huge body of chess theory has developed since the game's inception. Aspects of art are found in chess composition, and chess in its turn influenced Western culture and the arts, and has connections with other fields such as mathematics, computer science, and psychology.
One of the goals of early computer scientists was to create a chess-playing machine. In 1997, Deep Blue became the first computer to beat the reigning World Champion in a match when it defeated Garry Kasparov. Today's chess engines are significantly stronger than the best human players and have deeply influenced the development of chess theory; however, chess is not a solved game.
Rules
The rules of chess are published by FIDE (Fédération Internationale des Échecs; "International Chess Federation"), chess's world governing body, in its Handbook. Rules published by national governing bodies, or by unaffiliated chess organizations, commercial publishers, etc., may differ in some details. FIDE's rules were most recently revised in 2023.
Setup
Chess sets come in a wide variety of styles. The Staunton pattern is the most common, and is usually required for competition. Chess pieces are divided into two sets, usually light and dark colored, referred to as white and black, regardless of the actual color or design. The players of the sets are referred to as White and Black, respectively. Each set consists of sixteen pieces: one king, one queen, two rooks, two bishops, two knights, and eight pawns.
The game is played on a square board of eight rows (called ranks) and eight columns (called files). By convention, the 64 squares alternate in color and are referred to as light and dark squares; common colors for chessboards are white and brown, or white and green.
The pieces are set out as shown in the diagram and photo. Thus, on White's first rank, from left to right, the pieces are placed as follows: rook, knight, bishop, queen, king, bishop, knight, rook. Eight pawns are placed on the second rank. Black's position mirrors White's, with an equivalent piece on the same file. The board is placed with a light square at the right-hand corner nearest to each player. The correct positions of the king and queen may be remembered by the phrase "queen on her own color" (i.e. the white queen begins on a light square, and the black queen on a dark square).
In competitive games, the piece colors are allocated to players by the organizers; in informal games, the colors are usually decided randomly, for example by a coin toss, or by one player concealing a white pawn in one hand and a black pawn in the other, and having the opponent choose.
Movement
White moves first, after which players alternate turns, moving one piece per turn (except for castling, when two pieces are moved). A piece is moved to either an unoccupied square or one occupied by an opponent's piece, which is captured and removed from play. With the sole exception of en passant, all pieces capture by moving to the square that the opponent's piece occupies. Moving is compulsory; a player may not skip a turn, even when having to move is detrimental.
Each piece has its own way of moving. In the diagrams, crosses mark the squares to which the piece can move if there are no intervening piece(s) of either color (except the knight, which leaps over any intervening pieces). All pieces except the pawn can capture an enemy piece if it is on a square to which they could move if the square were unoccupied.
The king moves one square in any direction. There is also a special move called castling that involves moving the king and a rook. The king is the most valuable piece—attacks on the king must be immediately countered, and if this is impossible, the game is immediately lost (see Check and checkmate below).
A rook can move any number of squares along a rank or file, but cannot leap over other pieces. Along with the king, a rook is involved during the king's castling move.
A bishop can move any number of squares diagonally, but cannot leap over other pieces.
A queen combines the power of a rook and bishop and can move any number of squares along a rank, file, or diagonal, but cannot leap over other pieces.
A knight moves to any of the closest squares that are not on the same rank, file, or diagonal. (Thus the move forms an "L"-shape: two squares vertically and one square horizontally, or two squares horizontally and one square vertically.) The knight is the only piece that can leap over other pieces.
A pawn can move forward to the unoccupied square immediately in front of it on the same file, or on its first move it can advance two squares along the same file, provided both squares are unoccupied (black dots in the diagram). A pawn can capture an opponent's piece on a square diagonally in front of it by moving to that square (black crosses). It cannot capture a piece while advancing along the same file. A pawn has two special moves: the en passant capture and promotion.
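The geometric movement rules above translate directly into coordinate arithmetic. The sketch below is illustrative only, not a full legal-move generator (it ignores occupancy, checks, and the special moves); it enumerates the knight's destinations from a given square:

```python
# Squares are (file, rank) pairs with 0 <= file, rank <= 7; names like "g1"
# follow algebraic notation. Helper names here are our own.

FILES = "abcdefgh"

def square_name(file: int, rank: int) -> str:
    return f"{FILES[file]}{rank + 1}"

def knight_moves(file: int, rank: int):
    """All knight destinations from (file, rank) that stay on the board."""
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return [square_name(file + df, rank + dr)
            for df, dr in jumps
            if 0 <= file + df <= 7 and 0 <= rank + dr <= 7]

print(knight_moves(6, 0))   # knight on g1 -> ['h3', 'e2', 'f3']
```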
Check and checkmate
When a king is under immediate attack, it is said to be in check. A move in response to a check is legal only if it results in a position where the king is no longer in check. There are three ways to counter a check:
Capture the checking piece.
Interpose a piece between the checking piece and the king (which is possible only if the attacking piece is a queen, rook, or bishop and there is a square between it and the king).
Move the king to a square where it is not under attack.
Castling is not a permissible response to a check.
The object of the game is to checkmate the opponent; this occurs when the opponent's king is in check, and there is no legal way to get it out of check. It is never legal for a player to make a move that puts or leaves the player's own king in check. In casual games, it is common to announce "check" when putting the opponent's king in check, but this is not required by the rules of chess and is usually not done in tournaments.
Castling
Once per game, each king can make a move known as castling. Castling consists of moving the king two squares toward a rook of the same color on the same rank, and then placing the rook on the square that the king crossed.
Castling is permissible if the following conditions are met:
Neither the king nor the rook has previously moved during the game.
There are no pieces between the king and the rook.
The king is not in check and does not pass through or finish on a square attacked by an enemy piece.
Castling is still permitted if the rook is under attack, or if the rook crosses an attacked square.
En passant
When a pawn makes a two-step advance from its starting position and there is an opponent's pawn on a square next to the destination square on an adjacent file, then the opponent's pawn can capture it en passant ("in passing"), moving to the square the pawn passed over. This can be done only on the turn immediately following the enemy pawn's two-square advance; otherwise, the right to do so is forfeited. For example, in the animated diagram, the black pawn advances two squares from g7 to g5, and the white pawn on f5 can take it en passant on g6 (but only immediately after the black pawn's advance).
Promotion
When a pawn advances to its eighth rank, as part of the move, it is promoted and must be exchanged for the player's choice of queen, rook, bishop, or knight of the same color. Usually, the pawn is chosen to be promoted to a queen, but in some cases, another piece is chosen; this is called underpromotion. In the animated diagram, the pawn on c7 can be advanced to the eighth rank and be promoted. There is no restriction on the piece promoted to, so it is possible to have more pieces of the same type than at the start of the game (e.g., two or more queens). If the required piece is not available (e.g. a second queen) an inverted rook is sometimes used as a substitute, but this is not recognized in FIDE-sanctioned games.
End of the game
Win
A game can be won in the following ways:
Checkmate: The king is in check and the player has no legal move. (See check and checkmate above.)
Resignation: A player may resign, conceding the game to the opponent. If, however, the opponent has no way of checkmating the resigned player, this is a draw under FIDE Laws. Most tournament players consider it good etiquette to resign in a hopeless position.
Win on time: In games with a time control, a player wins if the opponent runs out of time, even if the opponent has a superior position, as long as the player has a theoretical possibility to checkmate the opponent were the game to continue.
Forfeit: A player who cheats, violates the rules, or violates the rules of conduct specified for the particular tournament can be forfeited. Occasionally, both players are forfeited.
Draw
There are several ways a game can end in a draw:
Stalemate: If the player to move has no legal move, but is not in check, the position is a stalemate, and the game is drawn.
Dead position: If neither player is able to checkmate the other by any legal sequence of moves, the game is drawn. For example, if only the kings are on the board, all other pieces having been captured, checkmate is impossible, and the game is drawn by this rule. On the other hand, if both players still have a knight, there is a highly unlikely yet theoretical possibility of checkmate, so this rule does not apply. The dead position rule supersedes the previous rule which referred to "insufficient material", extending it to include other positions where checkmate is impossible, such as blocked pawn endings where the pawns cannot be attacked.
Draw by agreement: In tournament chess, draws are most commonly reached by mutual agreement between the players. The correct procedure is to verbally offer the draw, make a move, then start the opponent's clock. Traditionally, players have been allowed to agree to a draw at any point in the game, occasionally even without playing a move. More recently efforts have been made to discourage short draws, for example by forbidding draw offers before move thirty.
Threefold repetition: This most commonly occurs when neither side is able to avoid repeating moves without incurring a disadvantage. In this situation, either player can claim a draw; this requires the players to keep a valid written record of the game so that the claim can be verified by the arbiter if challenged. The three occurrences of the position need not occur on consecutive moves for a claim to be valid. The addition of the fivefold repetition rule in 2014 requires the arbiter to intervene immediately and declare the game a draw after five occurrences of the same position, consecutive or otherwise, without requiring a claim by either player. FIDE rules make no mention of perpetual check; this is merely a specific type of draw by threefold repetition. (A sketch of the position counting behind these repetition rules appears after this list.)
Fifty-move rule: If during the previous 50 moves no pawn has been moved and no capture has been made, either player can claim a draw. The addition of the seventy-five-move rule in 2014 requires the arbiter to intervene and immediately declare the game drawn after 75 moves without a pawn move or capture, without requiring a claim by either player. There are several known endgames where it is possible to force a mate but it requires more than 50 moves before a pawn move or capture is made; examples include some endgames with two knights against a pawn and some pawnless endgames such as queen against two bishops. Historically, FIDE has sometimes revised the fifty-move rule to make exceptions for these endgames, but these have since been repealed. Some correspondence chess organizations do not enforce the fifty-move rule.
Draw on time: In games with a time control, the game is drawn if a player is out of time and no sequence of legal moves would allow the opponent to checkmate the player.
Draw by resignation: Under FIDE Laws, a game is drawn if a player resigns and no sequence of legal moves would allow the opponent to checkmate that player.
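The repetition rules above amount to simple bookkeeping: count how many times each position has occurred. The sketch below is illustrative only; the position key is a placeholder string, whereas a real implementation would encode the piece placement together with the side to move, castling rights, and en passant availability.

```python
from collections import Counter

position_counts: Counter = Counter()

def record_position(position_key: str) -> str:
    """Record a position and report which repetition rule, if any, it triggers."""
    position_counts[position_key] += 1
    n = position_counts[position_key]
    if n >= 5:
        return "fivefold repetition: arbiter declares a draw"
    if n >= 3:
        return "threefold repetition: either player may claim a draw"
    return "no repetition claim available"

# Toy usage: the same position key recurring after a back-and-forth shuffle.
for _ in range(3):
    result = record_position("some-position-key")
print(result)   # threefold repetition: either player may claim a draw
```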
Time control
In competition, chess games are played with a time control. If a player's time runs out before the game is completed, the game is automatically lost (provided the opponent has enough pieces left to deliver checkmate). The duration of a game ranges from long (or "classical") games, which can take up to seven hours (even longer if adjournments are permitted), to bullet chess (under 3 minutes per player for the entire game). Intermediate between these are rapid chess games, lasting between one and two hours per game, a popular time control in amateur weekend tournaments.
Time is controlled using a chess clock that has two displays, one for each player's remaining time. Analog chess clocks have been largely replaced by digital clocks, which allow for time controls with increments.
Time controls are also enforced in correspondence chess competitions. A typical time control is 50 days for every 10 moves.
Notation
Historically, many different notation systems have been used to record chess moves; the standard system today is short-form algebraic notation. In this system, each square is uniquely identified by a set of coordinates, a–h for the files followed by 1–8 for the ranks. The usual format is
initial of the piece moved – file of destination square – rank of destination square
The pieces are identified by their initials. In English, these are K (king), Q (queen), R (rook), B (bishop), and N (knight; N is used to avoid confusion with K for king). For example, Qg5 means "queen moves to the g-file, 5th rank" (that is, to the square g5). Different initials may be used for other languages. In chess literature, figurine algebraic notation (FAN) is frequently used to aid understanding independent of language.
To resolve ambiguities, an additional letter or number is added to indicate the file or rank from which the piece moved (e.g. Ngf3 means "knight from the g-file moves to the square f3"; R1e2 means "rook on the first rank moves to e2"). For pawns, no letter initial is used; so e4 means "pawn moves to the square e4".
If the piece makes a capture, "x" is usually inserted before the destination square. Thus Bxf3 means "bishop captures on f3". When a pawn makes a capture, the file from which the pawn departed is used to identify the pawn making the capture, for example, exd5 (pawn on the e-file captures the piece on d5). Ranks may be omitted if unambiguous, for example, exd (pawn on the e-file captures a piece somewhere on the d-file). A minority of publications use ":" to indicate a capture, and some omit the capture symbol altogether. In its most abbreviated form, exd5 may be rendered simply as ed. An en passant capture may optionally be marked with the notation "e.p."
If a pawn moves to its last rank, achieving promotion, the piece chosen is indicated after the move (for example, e1=Q or e1Q). Castling is indicated by the special notations 0-0 (or O-O) for kingside castling and 0-0-0 (or O-O-O) for queenside castling. A move that places the opponent's king in check usually has the notation "+" added. There are no specific notations for discovered check or double check. Checkmate can be indicated by "#". At the end of the game, "1–0" means White won, "0–1" means Black won, and "½–½" indicates a draw.
Chess moves can be annotated with punctuation marks and other symbols. For example: "!" indicates a good move; "!!" an excellent move; "?" a mistake; "??" a blunder; "!?" an interesting move that may not be best; or "?!" a dubious move not easily refuted.
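The notation just described is regular enough to be captured by a short pattern. The regular expression below is a sketch of our own (the field names and pattern choices are ours, not an official grammar, and it checks only the shape of a move, not its legality):

```python
import re

SAN = re.compile(
    r"^(?:"
    r"(?P<castle>O-O(?:-O)?)"                                  # O-O / O-O-O
    r"|(?P<piece>[KQRBN])?(?P<disambig>[a-h]?[1-8]?)"          # optional piece + from-square hint
    r"(?P<capture>x)?(?P<dest>[a-h][1-8])"                     # capture marker + destination
    r"(?:=(?P<promo>[QRBN]))?"                                 # promotion piece
    r")(?P<check>[+#])?$"                                      # check / checkmate suffix
)

for move in ["e4", "Nf3", "exd5", "Ngf3", "R1e2", "e8=Q", "O-O-O", "Qxf7#"]:
    m = SAN.match(move)
    print(move, "->", m.groupdict() if m else "no match")
```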
For example, one variation of a simple trap known as the Scholar's mate (see animated diagram) can be recorded:
1. e4 e5 2. Qh5 Nc6 3. Bc4 Nf6 4. Qxf7
Variants of algebraic notation include long algebraic, in which both the departure and destination square are indicated; abbreviated algebraic, in which capture signs, check signs, and ranks of pawn captures may be omitted; and Figurine Algebraic Notation, used in chess publications for universal readability regardless of language.
Portable Game Notation (PGN) is a text-based file format for recording chess games, based on short form English algebraic notation with a small amount of markup. PGN files (suffix .pgn) can be processed by most chess software, as well as being easily readable by humans.
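A minimal reader, sketched here for well-formed input only (the sample game is the Scholar's mate line given above; the tag values are invented placeholders), separates the two layers of a PGN file, the tag pairs and the movetext:

```python
import re

# Simplified PGN reader: extracts the tag pairs and the bare SAN moves.
# It ignores comments, variations, and NAGs, and assumes a single game.

SAMPLE_PGN = """\
[Event "Casual game"]
[White "Player A"]
[Black "Player B"]
[Result "1-0"]

1. e4 e5 2. Qh5 Nc6 3. Bc4 Nf6 4. Qxf7# 1-0
"""

def read_pgn(text: str):
    tags = dict(re.findall(r'\[(\w+)\s+"([^"]*)"\]', text))
    movetext = text.split("\n\n", 1)[1].strip()
    # Drop move numbers and the game result, leaving just the SAN moves.
    tokens = [t for t in movetext.split()
              if not t.endswith(".") and t not in ("1-0", "0-1", "1/2-1/2", "*")]
    return tags, tokens

tags, moves = read_pgn(SAMPLE_PGN)
print(tags["White"], "vs", tags["Black"], "->", tags["Result"])
print(moves)   # ['e4', 'e5', 'Qh5', 'Nc6', 'Bc4', 'Nf6', 'Qxf7#']
```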
Until about 1980, the majority of English language chess publications used descriptive notation, in which files are identified by the initial letter of the piece that occupies the first rank at the beginning of the game. In descriptive notation, the common opening move 1.e4 is rendered as "1.P-K4" ("pawn to king four"). Another system is ICCF numeric notation, recognized by the International Correspondence Chess Federation though its use is in decline.
In tournament games, players are normally required to keep a score (record of the game). For this purpose, only algebraic notation is recognized in FIDE-sanctioned events; game scores recorded in a different notation system may not be used as evidence in the event of a dispute.
Chess in public spaces
Chess is often played casually in public spaces such as parks and town squares.
Organized competition
Tournaments and matches
Contemporary chess is an organized sport with structured international and national leagues, tournaments, and congresses. Thousands of chess tournaments, matches, and festivals are held around the world every year catering to players of all levels.
Tournaments with a small number of players may use the round-robin format, in which every player plays one game against every other player. For a large number of players, the Swiss system may be used, in which each player is paired against an opponent who has the same (or as similar as possible) score in each round. In either case, a player's score is usually calculated as 1 point for each game won and one-half point for each game drawn. Variations such as "football scoring" (3 points for a win, 1 point for a draw) may be used by tournament organizers, but ratings are always calculated on the basis of standard scoring. A player's score may be reported as total score out of games played (e.g. 5½/8), points for versus points against (e.g. 5½–2½), or by number of wins, losses and draws (e.g. +4−1=3).
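The scoring conventions above reduce to simple arithmetic; the short sketch below reproduces the example figures (+4−1=3, 5½/8, 5½–2½) in both standard and "football" scoring:

```python
# Standard scoring gives 1 / 0.5 / 0 points per win / draw / loss,
# while "football scoring" gives 3 / 1 / 0.

def summarise(wins: int, draws: int, losses: int) -> None:
    games = wins + draws + losses
    standard = wins + 0.5 * draws
    football = 3 * wins + 1 * draws
    print(f"+{wins}\u2212{losses}={draws}")           # +4-1=3
    print(f"standard score: {standard}/{games}")      # 5.5/8  (i.e. 5 1/2 out of 8)
    print(f"points against: {games - standard}")      # 2.5    (i.e. 5 1/2 - 2 1/2)
    print(f"football score: {football}")              # 15

summarise(wins=4, draws=3, losses=1)
```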
The term "match" refers not to an individual game, but to either a series of games between two players, or a team competition in which each player of one team plays one game against a player of the other team.
Governance
Chess's international governing body is usually known by its French acronym FIDE (pronounced FEE-day) (French: Fédération internationale des échecs), or International Chess Federation. FIDE's membership consists of the national chess organizations of over 180 countries; there are also several associate members, including various supra-national organizations, the International Braille Chess Association (IBCA), International Committee of Chess for the Deaf (ICCD), and the International Physically Disabled Chess Association (IPCA). FIDE is recognized as a sports governing body by the International Olympic Committee, but chess has never been part of the Olympic Games.
FIDE's most visible activity is organizing the World Chess Championship, a role it assumed in 1948. The current World Champion is Ding Liren of China. The reigning Women's World Champion is Ju Wenjun from China.
Other competitions for individuals include the World Junior Chess Championship, the European Individual Chess Championship, the tournaments for the World Championship qualification cycle, and the various national championships. Invitation-only tournaments regularly attract the world's strongest players. Examples include Spain's Linares event, Monte Carlo's Melody Amber tournament, the Dortmund Sparkassen meeting, Sofia's M-tel Masters, and Wijk aan Zee's Tata Steel tournament.
Regular team chess events include the Chess Olympiad and the European Team Chess Championship.
The World Chess Solving Championship and World Correspondence Chess Championships include both team and individual events; these are held independently of FIDE.
Titles and rankings
In order to rank players, FIDE, ICCF, and most national chess organizations use the Elo rating system developed by Arpad Elo. An average club player has a rating of about 1500; the highest FIDE rating of all time, 2882, was achieved by Magnus Carlsen on the March 2014 FIDE rating list.
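The Elo system itself is a small formula: an expected score derived from the rating difference, and an update proportional to the difference between actual and expected results. The sketch below uses an illustrative K-factor of 20 (actual K-factors vary by federation and rating level):

```python
# Elo expected score and post-game rating update.
# E is player A's expected score against B; S is the actual score
# (1 for a win, 0.5 for a draw, 0 for a loss).

def expected_score(rating_a: float, rating_b: float) -> float:
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def updated_rating(rating: float, expected: float, score: float, k: float = 20) -> float:
    return rating + k * (score - expected)

e = expected_score(1500, 1700)            # average club player vs stronger opponent
print(round(e, 2))                        # ~0.24: expected to score about a quarter point
print(round(updated_rating(1500, e, 1)))  # upset win: rating rises to ~1515
print(round(updated_rating(1500, e, 0)))  # loss: rating falls only slightly, to ~1495
```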
Players may be awarded lifetime titles by FIDE:
Grandmaster (GM; sometimes International Grandmaster or IGM is used) is awarded to world-class chess masters. Apart from World Champion, Grandmaster is the highest title a chess player can attain. Before FIDE will confer the title on a player, the player must have an Elo rating of at least 2500 at one time and three results of a prescribed standard (called norms) in tournaments involving other grandmasters, including some from countries other than the applicant's. There are other milestones a player can achieve to attain the title, such as winning the World Junior Championship.
International Master (IM). The conditions are similar to GM, but less demanding. The minimum rating for the IM title is 2400.
FIDE Master (FM). The usual way for a player to qualify for the FIDE Master title is by achieving a FIDE rating of 2300 or more.
Candidate Master (CM). Similar to FM, but with a FIDE rating of at least 2200.
The above titles are open to both men and women. There are also separate women-only titles: Woman Grandmaster (WGM), Woman International Master (WIM), Woman FIDE Master (WFM) and Woman Candidate Master (WCM). These require a performance level approximately 200 Elo rating points below the similarly named open titles, and their continued existence has sometimes been controversial. Beginning with Nona Gaprindashvili in 1978, a number of women have earned the open GM title, around 40 in total.
FIDE also awards titles for arbiters and trainers. International titles are also awarded to composers and solvers of chess problems and to correspondence chess players (by the International Correspondence Chess Federation). National chess organizations may also award titles.
Theory
Chess has an extensive literature. In 1913, the chess historian H.J.R. Murray estimated the total number of books, magazines, and chess columns in newspapers to be about 5,000. B.H. Wood estimated the number, as of 1949, to be about 20,000. David Hooper and Kenneth Whyld write that, "Since then there has been a steady increase year by year of the number of new chess publications. No one knows how many have been printed." Significant public chess libraries include the John G. White Chess and Checkers Collection at Cleveland Public Library, with over 32,000 chess books and over 6,000 bound volumes of chess periodicals; and the Chess & Draughts collection at the National Library of the Netherlands, with about 30,000 books.
Chess theory usually divides the game of chess into three phases with different sets of strategies: the opening, typically the first 10 to 20 moves, when players move their pieces to useful positions for the coming battle; the middlegame; and last the endgame, when most of the pieces are gone, kings typically take a more active part in the struggle, and pawn promotion is often decisive.
Opening theory is concerned with finding the best moves in the initial phase of the game. There are dozens of different openings, and hundreds of variants. The Oxford Companion to Chess lists 1,327 named openings and variants.
Middlegame theory is usually divided into chess tactics and chess strategy. Chess strategy concentrates on setting and achieving long-term positional advantages during the game – for example, where to place different pieces – while tactics concerns immediate maneuver. These two aspects of the gameplay cannot be completely separated, because strategic goals are mostly achieved through tactics, while the tactical opportunities are based on the previous strategy of play.
Endgame theory is concerned with positions where there are only a few pieces left. These positions are categorized according to the pieces, for example "King and pawn" endings or "Rook versus minor piece" endings.
Opening
A chess opening is the group of initial moves of a game (the "opening moves"). Recognized sequences of opening moves are referred to as openings and have been given names such as the Ruy Lopez or Sicilian Defense. They are catalogued in reference works such as the Encyclopaedia of Chess Openings. There are dozens of different openings, varying widely in character from quiet (for example, the Réti Opening) to very aggressive (the Latvian Gambit). In some opening lines, the exact sequence considered best for both sides has been worked out to more than 30 moves. Professional players spend years studying openings and continue doing so throughout their careers, as opening theory continues to evolve.
The fundamental strategic aims of most openings are similar:
Development: This is the technique of placing the pieces (particularly bishops and knights) on useful squares where they will have an optimal impact on the game.
Control of the center: Control of the central squares allows pieces to be moved to any part of the board relatively easily, and can also have a cramping effect on the opponent.
King safety: It is critical to keep the king safe from dangerous possibilities. A correctly timed castling can often enhance this.
Pawn structure: Players strive to avoid the creation of pawn weaknesses such as isolated, doubled, or backward pawns, and pawn islands – and to force such weaknesses in the opponent's position.
Most players and theoreticians consider that White, by virtue of the first move, begins the game with a small advantage. This initially gives White the initiative. Black usually strives to neutralize White's advantage and achieve equality, or to develop dynamic counterplay in an unbalanced position.
Middlegame
The middlegame is the part of the game that starts after the opening. There is no clear line between the opening and the middlegame, but typically the middlegame will start when most pieces have been developed. (Similarly, there is no clear transition from the middlegame to the endgame; see start of the endgame.) Because opening theory has ended, players have to form plans based on the features of the position, and at the same time take into account the tactical possibilities of the position. The middlegame is the phase in which most combinations occur. Combinations are a series of tactical moves executed to achieve some gain. Middlegame combinations are often connected with an attack against the opponent's king. Some typical patterns have their own names; for example, Boden's Mate or the Lasker–Bauer combination.
Specific plans or strategic themes will often arise from particular groups of openings that result in a specific type of pawn structure. An example is the minority attack, the attack of queenside pawns against an opponent who has more pawns on the queenside. The study of openings is therefore connected to the preparation of plans that are typical of the resulting middlegames.
Another important strategic question in the middlegame is whether and how to reduce material and transition into an endgame (i.e. simplification). Minor material advantages can generally be transformed into victory only in an endgame, and therefore the stronger side must choose an appropriate way to achieve an ending. Not every reduction of material is good for this purpose; for example, if one side keeps a light-squared bishop and the opponent has a dark-squared one, the transformation into a bishops and pawns ending is usually advantageous for the weaker side only, because an endgame with bishops on opposite colors is likely to be a draw, even with an advantage of a pawn, or sometimes even with a two-pawn advantage.
Tactics
In chess, tactics in general concentrate on short-term actions – so short-term that they can be calculated in advance by a human player or a computer. The possible depth of calculation depends on the player's ability. In positions with many possibilities on both sides, a deep calculation is more difficult and may not be practical, while in positions with a limited number of variations, strong players can calculate long sequences of moves.
Theoreticians describe many elementary tactical methods and typical maneuvers, for example: pins, forks, skewers, batteries, discovered attacks (especially discovered checks), zwischenzugs, deflections, decoys, sacrifices, underminings, overloadings, and interferences. Simple one-move or two-move tactical actions – threats, exchanges of material, and double attacks – can be combined into more complicated sequences of tactical maneuvers that are often forced from the point of view of one or both players. A forced variation that involves a sacrifice and usually results in a tangible gain is called a combination. Brilliant combinations – such as those in the Immortal Game – are considered beautiful and are admired by chess lovers. A common type of chess exercise, aimed at developing players' skills, is a position where a decisive combination is available and the challenge is to find it.
Strategy
Chess strategy is concerned with the evaluation of chess positions and with setting up goals and long-term plans for future play. During the evaluation, players must take into account numerous factors such as the value of the pieces on the board, control of the center and centralization, the pawn structure, king safety, and the control of key squares or groups of squares (for example, diagonals, open files, and dark or light squares).
The most basic step in evaluating a position is to count the total value of pieces of both sides. The point values used for this purpose are based on experience; usually, pawns are considered worth one point, knights and bishops about three points each, rooks about five points (the value difference between a rook and a bishop or knight being known as the exchange), and queens about nine points. The king is more valuable than all of the other pieces combined, since its checkmate loses the game. But in practical terms, in the endgame, the king as a fighting piece is generally more powerful than a bishop or knight but less powerful than a rook. These basic values are then modified by other factors like position of the piece (e.g. advanced pawns are usually more valuable than those on their initial squares), coordination between pieces (e.g. a pair of bishops usually coordinate better than a bishop and a knight), or the type of position (e.g. knights are generally better in closed positions with many pawns while bishops are more powerful in open positions).
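A material count based on these conventional point values is a one-line sum; the sketch below (with an invented example position) shows a side that is up "the exchange":

```python
# Rough material count using the conventional point values
# (pawn 1, knight/bishop 3, rook 5, queen 9; the king is not counted).

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material(pieces: dict) -> int:
    """pieces maps piece letters to how many of each a side has on the board."""
    return sum(PIECE_VALUES[p] * n for p, n in pieces.items())

white = {"P": 7, "N": 1, "B": 1, "R": 2, "Q": 1}   # hypothetical position
black = {"P": 7, "N": 2, "B": 1, "R": 1, "Q": 1}

print(material(white) - material(black))   # 2: White is up "the exchange" (rook for a minor piece)
```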
Another important factor in the evaluation of chess positions is pawn structure (sometimes known as the pawn skeleton): the configuration of pawns on the chessboard. Since pawns are the least mobile of the pieces, pawn structure is relatively static and largely determines the strategic nature of the position. Weaknesses in pawn structure include isolated, doubled, or backward pawns and holes; once created, they are often permanent. Care must therefore be taken to avoid these weaknesses unless they are compensated by another valuable asset (for example, by the possibility of developing an attack).
Endgame
The endgame (also end game or ending) is the stage of the game when there are few pieces left on the board. There are three main strategic differences between earlier stages of the game and the endgame:
Pawns become more important. Endgames often revolve around endeavors to promote a pawn by advancing it to the furthest rank.
The king, which requires safeguarding from attack during the middlegame, emerges as a strong piece in the endgame. It is often brought to the center of the board where it can protect its own pawns, attack enemy pawns, and hinder moves of the opponent's king.
Zugzwang, a situation in which the player who is to move is forced to incur a disadvantage, is often a factor in endgames but rarely in other stages of the game. In the example diagram, either side having the move is in zugzwang: Black to move must play 1...Kb7 allowing White to promote the pawn after 2.Kd7; White to move must permit a draw, either by 1.Kc6 stalemate or by losing the pawn after any other legal move.
Endgames can be classified according to the type of pieces remaining on the board. Basic checkmates are positions in which one side has only a king and the other side has one or two pieces and can checkmate the opposing king, with the pieces working together with their king. For example, king and pawn endgames involve only kings and pawns on one or both sides, and the task of the stronger side is to promote one of the pawns. Other more complicated endings are classified according to pieces on the board other than kings, such as "rook and pawn versus rook" endgames.
History
Origins
Texts referring to the origins of chess date from the beginning of the seventh century. Three are written in Pahlavi (Middle Persian) and one, the Harshacharita, is in Sanskrit. One of these texts, the Chatrang-namak, represents one of the earliest written accounts of chess. The narrator Bozorgmehr explains that Chatrang, "Chess" in Pahlavi, was introduced to Persia by 'Dewasarm, a great ruler of India' during the reign of Khosrow I.
The oldest known chess manual was in Arabic and dates to about 840, written by al-Adli ar-Rumi (800–870), a renowned Arab chess player, titled Kitab ash-shatranj (The Book of Chess). This is a lost manuscript, but is referenced in later works. Here also, al-Adli attributes the origins of Persian chess to India, along with the eighth-century collection of fables Kalīla wa-Dimna. By the 20th century, a substantial consensus developed regarding chess's origins in northwest India in the early seventh century. More recently, this consensus has been the subject of further scrutiny.
The early forms of chess in India were known as chaturanga, literally "four divisions" [of the military] – infantry, cavalry, elephants, and chariotry – represented by pieces that would later evolve into the modern pawn, knight, bishop, and rook, respectively. Chaturanga was played on an 8×8 uncheckered board, called ashtāpada. Thence it spread eastward and westward along the Silk Road. The earliest evidence of chess is found in nearby Sasanian Persia around 600 A.D., where the game came to be known by the name chatrang. Chatrang was taken up by the Muslim world after the Islamic conquest of Persia (633–51), where it was then named shatranj, with the pieces largely retaining their Persian names. In Spanish, "shatranj" was rendered as ajedrez ("al-shatranj"), in Portuguese as xadrez, and in Greek as ζατρίκιον (zatrikion, which comes directly from the Persian chatrang), but in the rest of Europe it was replaced by versions of the Persian shāh ("king"), from which the English words "check" and "chess" descend. The word "checkmate" is derived from the Persian shāh māt ("the king is dead").
Xiangqi is the form of chess best known in China. The eastern migration of chess, into China and Southeast Asia, has even less documentation than its migration west, making it largely conjectural. The word xiangqi was used in China to refer to a game from 569 A.D. at the latest, but it has not been proven whether this game was directly related to chess.
The first reference to Chinese chess appears in a book entitled Xuánguaì Lù (; "Record of the Mysterious and Strange"), dating to about 800. A minority view holds that Western chess arose from xiàngqí or one of its predecessors. Chess historians Jean-Louis Cazaux and Rick Knowlton contend that xiangqi's intrinsic characteristics make it easier to construct an evolutionary path from China to India/Persia than the opposite direction.
The oldest archaeological chess artifacts – ivory pieces – were excavated in ancient Afrasiab, today's Samarkand, in Uzbekistan, Central Asia, and date to about 760, with some of them possibly being older. Remarkably, almost all findings of the oldest pieces come from along the Silk Road, from the former regions of the Tarim Basin (today's Xinjiang in China), Transoxiana, Sogdiana, Bactria, Gandhara, to Iran on one end and to India through Kashmir on the other.
The game reached Western Europe and Russia via at least three routes, the earliest being in the ninth century. By the year 1000, it had spread throughout both the Muslim Iberia and Latin Europe. A Latin poem called Versus de scachis ("Verses on Chess") dated to the late 10th century, has been preserved at Einsiedeln Abbey in Switzerland.
1200–1700: Origins of the modern game
The game of chess was then played and known in all European countries. A famous 13th-century Spanish manuscript covering chess, backgammon, and dice is known as the Libro de los juegos, which is the earliest European treatise on chess as well as being the oldest document on European tables games. The rules were fundamentally similar to those of the Arabic shatranj. The differences were mostly in the use of a checkered board instead of the plain monochrome board used by Arabs and the habit of allowing some or all pawns to make an initial double step. In some regions, the queen, which had replaced the wazir, or the king could also make an initial two-square leap under some conditions.
Around 1200, the rules of shatranj started to be modified in Europe, culminating, several major changes later, in the emergence of modern chess practically as it is known today. A major change was the modern piece movement rules, which began to appear in intellectual circles in Valencia, Spain, around 1475, which established the foundations and brought it very close to current chess. These new rules then were quickly adopted in Italy and Southern France before diffusing into the rest of Europe. Pawns gained the ability to advance two squares on their first move, while bishops and queens acquired their modern movement powers. The queen replaced the earlier vizier chess piece toward the end of the 10th century and by the 15th century had become the most powerful piece; in light of that, modern chess was often referred to at the time as "Queen's Chess" or "Mad Queen Chess". Castling, derived from the "king's leap", usually in combination with a pawn or rook move to bring the king to safety, was introduced. These new rules quickly spread throughout Western Europe.
Writings about chess theory began to appear in the late 15th century. An anonymous treatise on chess of 1490 with the first part containing some openings and the second 30 endgames is deposited in the library of the University of Göttingen. The book El Libro dels jochs partitis dels schachs en nombre de 100 was written by Francesc Vicent in Segorbe in 1495, but no copy of this work has survived. The Repetición de Amores y Arte de Ajedrez (Repetition of Love and the Art of Playing Chess) by Spanish churchman Luis Ramírez de Lucena was published in Salamanca in 1497. Lucena and later masters like Portuguese Pedro Damiano, Italians Giovanni Leonardo Di Bona, Giulio Cesare Polerio and Gioachino Greco, and Spanish bishop Ruy López de Segura developed elements of opening theory and started to analyze simple endgames.
1700–1873: Romantic era
In the 18th century, the center of European chess life moved from Southern Europe to mainland France. The two most important French masters were François-André Danican Philidor, a musician by profession, who discovered the importance of pawns for chess strategy, and later Louis-Charles Mahé de La Bourdonnais, who won a famous series of matches against Irish master Alexander McDonnell in 1834. Centers of chess activity in this period were coffee houses in major European cities like Café de la Régence in Paris and Simpson's Divan in London.
At the same time, the intellectual movement of romanticism had had a far-reaching impact on chess, with aesthetics and tactical beauty being held in higher regard than objective soundness and strategic planning. As a result, virtually all games began with the Open Game, and it was considered unsportsmanlike to decline gambits that invited tactical play such as the King's Gambit and the Evans Gambit. This chess philosophy is known as Romantic chess, and a sharp, tactical style consistent with the principles of chess romanticism was predominant until the late 19th century.
The rules concerning stalemate were finalized in the early 19th century. Also in the 19th century, the convention that White moves first was established (formerly either White or Black could move first). Finally, the rules around castling and en passant captures were standardized – variations in these rules persisted in Italy until the late 19th century. The resulting standard game is sometimes referred to as international chess or Western chess, particularly in Asia where other games of the chess family such as xiangqi are prevalent. Since the 19th century, the only rule changes, such as the establishment of the correct procedure for claiming a draw by repetition, have been technical in nature.
As the 19th century progressed, chess organization developed quickly. Many chess clubs, chess books, and chess journals appeared. There were correspondence matches between cities; for example, the London Chess Club played against the Edinburgh Chess Club in 1824. Chess problems became a regular part of 19th-century newspapers; Bernhard Horwitz, Josef Kling, and Samuel Loyd composed some of the most influential problems. In 1843, von der Lasa published his and Bilguer's Handbuch des Schachspiels (Handbook of Chess), the first comprehensive manual of chess theory.
The first modern chess tournament was organized by Howard Staunton, a leading English chess player, and was held in London in 1851. It was won by the German Adolf Anderssen, who was hailed as the leading chess master. His brilliant, energetic attacking style was typical for the time. Sparkling games like Anderssen's Immortal Game and Evergreen Game or Morphy's "Opera Game" were regarded as the highest possible summit of the art of chess.
Deeper insight into the nature of chess came with the American Paul Morphy, an extraordinary chess prodigy. Morphy won against all important competitors (except Staunton, who refused to play), including Anderssen, during his short chess career between 1857 and 1863. Morphy's success stemmed from a combination of brilliant attacks and sound strategy; he intuitively knew how to prepare attacks.
1873–1945: Birth of a sport
Prague-born Wilhelm Steinitz laid the foundations for a scientific approach to the game, the art of breaking a position down into components and preparing correct plans. In addition to his theoretical achievements, Steinitz founded an important tradition: his triumph over the leading German master Johannes Zukertort in 1886 is regarded as the first official World Chess Championship. This win marked a stylistic transition at the highest levels of chess from an attacking, tactical style predominant in the Romantic era to a more positional, strategic style introduced to the chess world by Steinitz. Steinitz lost his crown in 1894 to a much younger player, the German mathematician Emanuel Lasker, who maintained this title for 27 years, the longest tenure of any world champion.
After the end of the 19th century, the number of master tournaments and matches held annually quickly grew. The first Olympiad was held in Paris in 1924, and FIDE was founded initially for the purpose of organizing that event. In 1927, the Women's World Chess Championship was established; the first to hold the title was Czech-English master Vera Menchik.
A prodigy from Cuba, José Raúl Capablanca, known for his skill in endgames, won the World Championship from Lasker in 1921. Capablanca was undefeated in tournament play for eight years, from 1916 to 1924. His successor (1927) was the Russian-French Alexander Alekhine, a strong attacking player who died as the world champion in 1946. Alekhine briefly lost the title to Dutch player Max Euwe in 1935 and regained it two years later.
In the interwar period, chess was revolutionized by the new theoretical school of so-called hypermodernists like Aron Nimzowitsch and Richard Réti. They advocated controlling the of the board with distant pieces rather than with pawns, thus inviting opponents to occupy the center with pawns, which become objects of attack.
1945–1990: Post-World War II era
After the death of Alekhine, a new World Champion was sought. FIDE, which has controlled the title since then, ran a tournament of elite players. The winner of the 1948 tournament was Russian Mikhail Botvinnik. In 1950, FIDE established a system of titles, conferring the titles of Grandmaster and International Master on 27 players. (Some sources state that, in 1914, the title of chess Grandmaster was first formally conferred by Tsar Nicholas II of Russia to Lasker, Capablanca, Alekhine, Tarrasch, and Marshall, but this is a disputed claim.)
Botvinnik started an era of Soviet dominance in the chess world, which, mainly through the Soviet government's politically inspired efforts to demonstrate intellectual superiority over the West, stood almost uninterrupted for more than half a century. Until the dissolution of the Soviet Union, there was only one non-Soviet champion, American Bobby Fischer (champion 1972–1975). Botvinnik also revolutionized opening theory. Previously, Black strove for equality, attempting to neutralize White's first-move advantage. As Black, Botvinnik strove for the initiative from the beginning.

In the previous informal system of World Championships, the current champion decided which challenger he would play for the title, and the challenger was forced to seek sponsors for the match. FIDE set up a new system of qualifying tournaments and matches. The world's strongest players were seeded into Interzonal tournaments, where they were joined by players who had qualified from Zonal tournaments. The leading finishers in these Interzonals would go through the "Candidates" stage, which was initially a tournament and later a series of knockout matches. The winner of the Candidates would then play the reigning champion for the title. A champion defeated in a match had a right to play a rematch a year later. This system operated on a three-year cycle.

Botvinnik participated in championship matches over a period of fifteen years. He won the world championship tournament in 1948 and retained the title in tied matches in 1951 and 1954. In 1957, he lost to Vasily Smyslov, but regained the title in a rematch in 1958. In 1960, he lost the title to the 23-year-old Latvian prodigy Mikhail Tal, an accomplished tactician and attacking player who is widely regarded as one of the most creative players ever, hence his nickname "the magician from Riga". Botvinnik again regained the title in a rematch in 1961.
Following the 1961 event, FIDE abolished the automatic right of a deposed champion to a rematch, and the next champion, Armenian Tigran Petrosian, a player renowned for his defensive and positional skills, held the title for two cycles, 1963–1969. His successor, Boris Spassky from Russia (champion 1969–1972), won games in both positional and sharp tactical style. The next championship, the so-called Match of the Century, saw the first non-Soviet challenger since World War II, American Bobby Fischer. Fischer defeated his opponents in the Candidates matches by unheard-of margins, and convincingly defeated Spassky for the world championship. The match was followed closely by news media of the day, leading to a surge in popularity for chess; it also held significant political importance at the height of the Cold War, with the match being seen by both sides as a microcosm of the conflict between East and West. In 1975, however, Fischer refused to defend his title against Soviet Anatoly Karpov when he was unable to reach agreement on conditions with FIDE, and Karpov obtained the title by default. Fischer modernized many aspects of chess, especially by extensively preparing openings.
Karpov defended his title twice against Viktor Korchnoi and dominated the 1970s and early 1980s with a string of tournament successes. In the 1984 World Chess Championship, Karpov faced his toughest challenge to date, the young Garry Kasparov from Baku, Soviet Azerbaijan. The match was aborted in controversial circumstances after 5 months and 48 games with Karpov leading by 5 wins to 3, but evidently exhausted; many commentators believed Kasparov, who had won the last two games, would have won the match had it continued. Kasparov won the 1985 rematch. Kasparov and Karpov contested three further closely fought matches in 1986, 1987 and 1990, Kasparov winning them all. Kasparov became the dominant figure of world chess from the mid-1980s until his retirement from competition in 2005.
Beginnings of chess technology
Chess-playing computer programs (later known as chess engines) began to appear in the 1960s. In 1970, the first major computer chess tournament, the North American Computer Chess Championship, was held, followed in 1974 by the first World Computer Chess Championship. In the late 1970s, dedicated home chess computers such as Fidelity Electronics' Chess Challenger became commercially available, as well as software to run on home computers. The overall standard of computer chess was low, however, until the 1990s.
The first endgame tablebases, which provided perfect play for relatively simple endgames such as king and rook versus king and bishop, appeared in the late 1970s. This set a precedent for the complete six- and seven-piece tablebases that became available in the 2000s and 2010s, respectively.
The first commercial chess database, a collection of chess games searchable by move and position, was introduced by the German company ChessBase in 1987. Databases containing millions of chess games have since had a profound effect on opening theory and other areas of chess research.
Digital chess clocks were invented in 1973, though they did not become commonplace until the 1990s. Digital clocks allow for time controls involving increments and delays.
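As a rough illustration of the distinction, the short Python sketch below models the two mechanisms; the function names and the five-second settings are illustrative assumptions, not taken from any particular clock. With an increment, time is added back after every move, so a quick move can gain time; with a simple delay, the first few seconds of each move are free, but unused delay is never banked.

def apply_increment(remaining, move_time, increment=5.0):
    # Fischer increment: spend the thinking time, then add the increment back.
    return remaining - move_time + increment

def apply_delay(remaining, move_time, delay=5.0):
    # Simple (US) delay: only thinking time beyond the delay is deducted.
    return remaining - max(0.0, move_time - delay)

print(apply_increment(300.0, 2.0))    # 303.0 (a quick move gains time)
print(apply_delay(300.0, 2.0))        # 300.0 (delay never adds time)
print(apply_increment(300.0, 12.0))   # 293.0
print(apply_delay(300.0, 12.0))       # 293.0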
1990–present: Rise of computers and online chess
Technology
The Internet enabled online chess as a new medium of playing, with chess servers allowing users to play other people from different parts of the world in real time. The first such server, known as Internet Chess Server or ICS, was developed at the University of Utah in 1992. ICS formed the basis for the first commercial chess server, the Internet Chess Club, which was launched in 1995, and for other early chess servers such as FICS (Free Internet Chess Server). Since then, many other platforms have appeared, and online chess began to rival over-the-board chess in popularity. During the 2020 COVID-19 pandemic, the isolation ensuing from quarantines imposed in many places around the world, combined with the success of the popular Netflix show The Queen's Gambit and other factors such as the popularity of online tournaments (notably PogChamps) and chess Twitch streamers, resulted in a surge of popularity not only for online chess, but for the game of chess in general; this phenomenon has been referred to in the media as the 2020 online chess boom.
Computer chess has also seen major advances. By the 1990s, chess engines could consistently defeat most amateurs, and in 1997 Deep Blue defeated World Champion Garry Kasparov in a six-game match, starting an era of computer dominance at the highest level of chess. In the 2010s, engines significantly stronger than even the best human players became accessible for free on a number of PC and mobile platforms, and free engine analysis became a commonplace feature on internet chess servers. An adverse effect of the easy availability of engine analysis on hand-held devices and personal computers has been the rise of computer cheating, which has grown to be a major concern in both over-the-board and online chess. In 2017, AlphaZero – a neural network also capable of playing shogi and Go – was introduced. Since then, many chess engines based on neural network evaluation have been written, the best of which have surpassed the traditional "brute-force" engines. AlphaZero also introduced many novel ideas and ways of playing the game, which affected the style of play at the top level.
As endgame tablebases developed, they began to provide perfect play in endgame positions in which the game-theoretical outcome was previously unknown, such as positions with king, queen and pawn against king and queen. In 1991, Lewis Stiller published a tablebase for select six-piece endgames, and by 2005, following the publication of Nalimov tablebases, all six-piece endgame positions were solved. In 2012, Lomonosov tablebases were published which solved all seven-piece endgame positions. Use of tablebases enhances the performance of chess engines by providing definitive results in some branches of analysis.
Technological progress made in the 1990s and the 21st century has influenced the way that chess is studied at all levels, as well as the state of chess as a spectator sport.
Previously, preparation at the professional level required an extensive chess library and several subscriptions to publications such as Chess Informant to keep up with opening developments and study opponents' games. Today, preparation at the professional level involves the use of databases containing millions of games, and engines to analyze different opening variations and prepare novelties. A number of online learning resources are also available for players of all levels, such as online courses, tactics trainers, and video lessons.
Since the late 1990s, it has been possible to follow major international chess events online, the players' moves being relayed in real time. Sensory boards have been developed to enable automatic transmission of moves. Chess players will frequently run engines while watching these games, allowing them to quickly identify mistakes by the players and spot tactical opportunities. While in the past moves were relayed live, today chess organizers will often impose a half-hour delay as an anti-cheating measure. In the mid-to-late 2010s – and especially following the 2020 online boom – it became commonplace for supergrandmasters, such as Hikaru Nakamura and Magnus Carlsen, to livestream chess content on platforms such as Twitch. Also following the boom, online chess started being viewed as an esport, with esport teams signing chess players for the first time in 2020.
Growth
Organized chess even for young children has become common. FIDE holds world championships for age levels down to 8 years old. The largest tournaments, in number of players, are those held for children.
The number of grandmasters and other chess professionals has also grown in the modern era. Kenneth Regan and Guy Haworth conducted research involving comparison of move choices by players of different levels and from different periods with the analysis of strong chess engines; they concluded that the increase in the number of grandmasters and higher Elo ratings of the top players reflect an actual increase in the average standard of play, rather than "rating inflation" or "title inflation".
Professional chess
In 1993, Garry Kasparov and Nigel Short broke ties with FIDE to organize their own match for the World Championship and formed a competing Professional Chess Association (PCA). From then until 2006, there were two simultaneous World Championships and respective World Champions: the PCA or "classical" champions extending the Steinitzian tradition in which the current champion plays a challenger in a series of games, and the other following FIDE's new format of many players competing in a large knockout tournament to determine the champion. Kasparov lost his PCA title in 2000 to Vladimir Kramnik of Russia. Due to the complicated state of world chess politics and difficulties obtaining commercial sponsorships, Kasparov was never able to challenge for the title again. Despite this, he continued to dominate in top level tournaments and remained the world's highest rated player until his retirement from competitive chess in 2005.
The World Chess Championship 2006, in which Kramnik beat the FIDE World Champion Veselin Topalov, reunified the titles and made Kramnik the undisputed World Chess Champion. In September 2007, he lost the title to Viswanathan Anand of India, who won the championship tournament in Mexico City. Anand defended his title in the 2008 rematch against Kramnik, and again in 2010 and 2012. Magnus Carlsen defeated Anand in the 2013 World Chess Championship, and defended his title in 2014, 2016, 2018, and 2021. After the 2021 match, he announced that he would not defend his title a fifth time, so the 2023 World Chess Championship was played between the winner and runner-up of the Candidates Tournament 2022: respectively, Ian Nepomniachtchi of Russia and Ding Liren of China. Ding beat Nepomniachtchi to become World Chess Champion.
Connections
Arts and humanities
In the Middle Ages and during the Renaissance, chess was a part of noble culture; it was used to teach war strategy and was dubbed the "King's Game". Gentlemen are "to be meanly seene in the play at Chestes", says the overview at the beginning of Baldassare Castiglione's The Book of the Courtier (1528, English 1561 by Sir Thomas Hoby), but chess should not be a gentleman's main passion. Castiglione explains it further:
And what say you to the game at chestes? It is an honest kynde of enterteynmente and wittie, quoth Syr Friderick. But me think it hath a fault, whiche is, that a man may be to couning at it, for who ever will be excellent in the playe of chestes, I beleave he must beestowe much tyme about it, and applie it with so much study, that a man may assoone learne some noble scyence, or compase any other matter of importaunce, and yet in the ende in beestowing all that laboure, he knoweth no more but a game. Therfore in this I beleave there happeneth a very rare thing, namely, that the meane is more commendable, then the excellency.
Some of the elaborate chess sets used by the aristocracy at least partially survive, such as the Lewis chessmen.
Chess was often used as a basis of sermons on morality. An example is Liber de moribus hominum et officiis nobilium sive super ludo scacchorum ('Book of the customs of men and the duties of nobles or the Book of Chess'), written by the Italian Dominican friar Jacobus de Cessolis. This book was one of the most popular of the Middle Ages. The work was translated into many other languages (the first printed edition was published at Utrecht in 1473) and was the basis for William Caxton's The Game and Playe of the Chesse (1474), one of the first books printed in English. Different chess pieces were used as metaphors for different classes of people, and human duties were derived from the rules of the game or from visual properties of the chess pieces:
The knyght ought to be made alle armed upon an hors in suche wyse that he haue an helme on his heed and a spere in his ryght hande/ and coueryd wyth his sheld/ a swerde and a mace on his lyft syde/ Cladd wyth an hawberk and plates to fore his breste/ legge harnoys on his legges/ Spores on his heelis on his handes his gauntelettes/ his hors well broken and taught and apte to bataylle and couerid with his armes/ whan the knyghtes ben maad they ben bayned or bathed/ that is the signe that they shold lede a newe lyf and newe maners/ also they wake alle the nyght in prayers and orysons vnto god that he wylle gyue hem grace that they may gete that thynge that they may not gete by nature/ The kynge or prynce gyrdeth a boute them a swerde in signe/ that they shold abyde and kepe hym of whom they take theyr dispenses and dignyte.
Known in the circles of clerics, students, and merchants, chess entered into the popular culture of the Middle Ages. An example is the 209th song of the Carmina Burana from the 13th century, which starts with the names of chess pieces, Roch, pedites, regina... The game of chess was at times discouraged by various religious authorities in the Middle Ages: Jewish, Catholic and Orthodox. Some Muslim authorities have prohibited it even in recent times, for example Ruhollah Khomeini in 1979 and, later, Abdul-Aziz ash-Sheikh.
During the Age of Enlightenment, chess was viewed as a means of self-improvement. Benjamin Franklin, in his article "The Morals of Chess" (1750), wrote:
The Game of Chess is not merely an idle amusement; several very valuable qualities of the mind, useful in the course of human life, are to be acquired and strengthened by it, so as to become habits ready on all occasions; for life is a kind of Chess, in which we have often points to gain, and competitors or adversaries to contend with, and in which there is a vast variety of good and ill events, that are, in some degree, the effect of prudence, or the want of it. By playing at Chess then, we may learn:
I. Foresight, which looks a little into futurity, and considers the consequences that may attend an action ...
II. Circumspection, which surveys the whole Chess-board, or scene of action: – the relation of the several Pieces, and their situations ...
III. Caution, not to make our moves too hastily ...
Chess was occasionally criticized in the 19th century as a waste of time.
Chess is taught to children in schools around the world today. Many schools host chess clubs, and there are many scholastic tournaments specifically for children. Tournaments are held regularly in many countries, hosted by organizations such as the United States Chess Federation and the National Scholastic Chess Foundation.
Chess has often been depicted in the arts; significant works in which chess plays a key role range from Thomas Middleton's A Game at Chess to Through the Looking-Glass by Lewis Carroll, to Vladimir Nabokov's The Defense, to The Royal Game by Stefan Zweig. Chess has also featured in film classics such as Ingmar Bergman's The Seventh Seal, Satyajit Ray's The Chess Players, and Powell and Pressburger's A Matter of Life and Death.
Chess is also present in contemporary popular culture. For example, the characters in Star Trek play a futuristic version of the game called "Federation Tri-Dimensional Chess" and "Wizard's Chess" is played in J.K. Rowling's Harry Potter.
Mathematics
The game structure and nature of chess are related to several branches of mathematics. Many combinatorial and topological problems connected to chess, such as the knight's tour and the eight queens puzzle, have been known for hundreds of years.
The number of legal positions in chess is estimated to be on the order of 10^44 (with a 95% confidence level), and the game-tree complexity is approximately 10^123. The game-tree complexity of chess was first calculated by Claude Shannon as 10^120, a number known as the Shannon number. An average position typically has thirty to forty possible moves, but there may be as few as zero (in the case of checkmate or stalemate) or (in a constructed position) as many as 218.
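Shannon's figure is a back-of-the-envelope estimate rather than an exact count: roughly 30 legal moves for each side gives about 10^3 continuations per full move, and a typical game of about 40 moves therefore yields on the order of (10^3)^40 = 10^120 possible games. A few lines of Python reproduce the arithmetic; rounding 30 × 30 up to 1,000 is part of the original estimate.

per_full_move = 10 ** 3        # about 30 choices for White times 30 for Black, rounded to a power of ten
typical_game_length = 40       # full moves in a "typical" game
shannon_number = per_full_move ** typical_game_length
print(shannon_number == 10 ** 120)   # True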
In 1913, Ernst Zermelo used chess as a basis for his theory of game strategies, which is considered one of the predecessors of game theory. Zermelo's theorem states that it is possible to solve chess, i.e. to determine with certainty the outcome of a perfectly played game (either White can force a win, or Black can force a win, or both sides can force at least a draw). With roughly 10^43 legal positions in chess, however, it will take an impossibly long time to compute a perfect strategy with any feasible technology.
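The argument behind Zermelo's theorem is backward induction: the value of any position is determined by the values of the positions reachable from it. The sketch below shows the idea in a few lines of Python on a hypothetical toy game tree (not chess), with leaf values given from White's point of view.

def game_value(node, white_to_move=True):
    # Leaves are integers: +1 White wins, 0 draw, -1 Black wins.
    if isinstance(node, int):
        return node
    child_values = [game_value(child, not white_to_move) for child in node]
    # The side to move picks the child best for itself.
    return max(child_values) if white_to_move else min(child_values)

# White can steer into a branch whose worst case is a draw.
toy_tree = [[0, 1], [-1, 1]]
print(game_value(toy_tree))   # 0: with best play the toy game is drawn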
Psychology
There is an extensive scientific literature on chess psychology. Alfred Binet and others showed that knowledge and verbal, rather than visuospatial, ability lies at the core of expertise. In his doctoral thesis, Adriaan de Groot showed that chess masters can rapidly perceive the key features of a position. According to de Groot, this perception, made possible by years of practice and study, is more important than the sheer ability to anticipate moves. De Groot showed that chess masters can memorize positions shown for a few seconds almost perfectly. The ability to memorize does not alone account for chess-playing skill, since masters and novices, when faced with random arrangements of chess pieces, had equivalent recall (about six pieces in each case). Rather, it is the ability to recognize patterns, which are then memorized, which distinguished the skilled players from the novices. When the positions of the pieces were taken from an actual game, the masters had almost total positional recall.
More recent research has focused on chess as mental training; the respective roles of knowledge and look-ahead search; brain imaging studies of chess masters and novices; blindfold chess; the role of personality and intelligence in chess skill; gender differences; and computational models of chess expertise. The role of practice and talent in the development of chess and other domains of expertise has led to much empirical investigation. Ericsson and colleagues have argued that deliberate practice is sufficient for reaching high levels of expertise in chess. Recent research, however, fails to replicate their results and indicates that factors other than practice are also important.
For example, Fernand Gobet and colleagues have shown that stronger players started playing chess at a young age and that experts born in the Northern Hemisphere are more likely to have been born in late winter and early spring. Compared to the general population, chess players are more likely to be non-right-handed, though they found no correlation between handedness and skill.
A relationship between chess skill and intelligence has long been discussed in the scientific literature as well as in popular culture. Academic studies investigating the relationship date back at least to 1927. Although one meta-analysis and most studies of children find a positive correlation between general cognitive ability and chess skill, studies of adults show mixed results.
Composition
Chess composition is the art of creating chess problems (also called chess compositions). The creator is known as a chess composer. There are many types of chess problems; the two most important are:
White to move first and checkmate Black within a specified number of moves, against any defense. These are often referred to as "mate in n" – for example "mate in three" (a three-mover); two- and three-move problems are the most common. These usually involve positions that would be highly unlikely to occur in an actual game, and are intended to illustrate a particular theme, usually requiring a surprising or counterintuitive move. Themes associated with chess problems occasionally appear in actual games, when they are referred to as "problem-like" moves.
Studies: orthodox problems in which the stipulation is that White to play must win or draw. The majority of studies are endgame positions.
Fairy chess is a branch of chess problem composition involving altered rules, such as the use of unconventional pieces or boards, or unusual stipulations such as reflexmates.
Tournaments for composition and solving of chess problems are organized by the World Federation for Chess Composition, which works cooperatively with, but independently of, FIDE. The WFCC awards titles for composing and solving chess problems.
Online chess
Online chess is chess that is played over the internet, allowing players to play against each other in real time. This is done through the use of Internet chess servers, which pair up individual players based on their rating using an Elo or similar rating system. Online chess saw a spike in growth during the quarantines of the COVID-19 pandemic. This can be attributed to both isolation and the popularity of Netflix miniseries The Queen's Gambit, which was released in October 2020. Chess app downloads on the App Store and Google Play Store rose by 63% after the show debuted. Chess.com saw more than twice as many account registrations in November as it had in previous months, and the number of games played monthly on Lichess doubled as well. There was also a demographic shift in players, with female registration on Chess.com shifting from 22% to 27% of new players. GM Maurice Ashley said "A boom is taking place in chess like we have never seen maybe since the Bobby Fischer days", attributing the growth to an increased desire to do something constructive during the pandemic. USCF Women's Program Director Jennifer Shahade stated that chess works well on the internet, since pieces do not need to be reset and matchmaking is virtually instant.
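As a rough illustration of the rating-based matchmaking mentioned above, the standard Elo formulas are shown below in Python. The K-factor of 20 is an illustrative assumption, and real servers typically use tuned variants (for example Glicko-style systems) rather than this textbook form.

def expected_score(rating_a, rating_b):
    # Expected score of player A against player B (a value between 0 and 1).
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def updated_rating(rating_a, rating_b, score_a, k=20):
    # score_a is 1 for a win, 0.5 for a draw, 0 for a loss.
    return rating_a + k * (score_a - expected_score(rating_a, rating_b))

print(round(expected_score(1800, 2000), 2))     # 0.24
print(round(updated_rating(1800, 2000, 1), 1))  # 1815.2: an upset win gains about 15 points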
Computer chess
The idea of creating a chess-playing machine dates to the 18th century; around 1769, the chess-playing automaton called The Turk became famous before being exposed as a hoax. Serious trials based on automata, such as El Ajedrecista, were too complex and limited to be useful.
Since the advent of the digital computer in the 1950s, chess enthusiasts, computer engineers, and computer scientists have built, with increasing degrees of seriousness and success, chess-playing machines and computer programs. The groundbreaking paper on computer chess, "Programming a Computer for Playing Chess", was published in 1950 by Claude Shannon. He wrote:
The chess machine is an ideal one to start with, since: (1) the problem is sharply defined both in allowed operations (the moves) and in the ultimate goal (checkmate); (2) it is neither so simple as to be trivial nor too difficult for satisfactory solution; (3) chess is generally considered to require "thinking" for skillful play; a solution of this problem will force us either to admit the possibility of a mechanized thinking or to further restrict our concept of "thinking"; (4) the discrete structure of chess fits well into the digital nature of modern computers.
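Shannon's paper went on to propose evaluating positions numerically, largely by material, and searching the game tree with minimax. The Python sketch below is only a loose illustration of that idea using the familiar 9/5/3/3/1 piece values; the piece-list encoding is an assumption made for brevity, not Shannon's representation.

PIECE_VALUES = {"Q": 9, "R": 5, "B": 3, "N": 3, "P": 1}

def material_score(white_pieces, black_pieces):
    # Positive values favour White; pieces are given as letters.
    white = sum(PIECE_VALUES.get(p, 0) for p in white_pieces)
    black = sum(PIECE_VALUES.get(p, 0) for p in black_pieces)
    return white - black

# White has won "the exchange" (rook for knight) and is up two points.
print(material_score(["R", "P", "P"], ["N", "P", "P"]))   # 2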
The Association for Computing Machinery (ACM) held the first major chess tournament for computers, the North American Computer Chess Championship, in September 1970. CHESS 3.0, a chess program from Northwestern University, won the championship. The first World Computer Chess Championship, held in 1974, was won by the Soviet program Kaissa. At first considered only a curiosity, the best chess playing programs have become extremely strong. In 1997, a computer won a chess match using classical time controls against a reigning World Champion for the first time: IBM's Deep Blue beat Garry Kasparov 3½–2½ (it scored two wins, one loss, and three draws). There was some controversy over the match, and human–computer matches were relatively close over the next few years, until convincing computer victories in 2005 and in 2006.
In 2009, a mobile phone won a category 6 tournament with a performance rating of 2898: chess engine Hiarcs 13 running on the mobile phone HTC Touch HD won the Copa Mercosur tournament with nine wins and one draw. The best chess programs are now able to consistently beat the strongest human players, to the extent that human–computer matches no longer attract interest from chess players or the media. While the World Computer Chess Championship still exists, the Top Chess Engine Championship (TCEC) is widely regarded as the unofficial world championship for chess engines. The current champion is Stockfish.
With huge databases of past games and high analytical ability, computers can help players to learn chess and prepare for matches. Internet Chess Servers allow people to find and play opponents worldwide. The presence of computers and modern communication tools has raised concerns regarding cheating during games.
Variants
There are more than two thousand published chess variants, games whose rules are similar to, but different from, those of chess. Most of them are of relatively recent origin. They include:
direct predecessors of chess, such as chaturanga and shatranj;
traditional national or regional games that share common ancestors with Western chess such as xiangqi (Chinese chess), shogi (Japanese chess), janggi (Korean chess), ouk chatrang (Cambodian chess), makruk (Thai chess), sittuyin (Burmese chess), and shatar (Mongolian chess);
modern variations employing different rules (e.g. Losing chess or Chess960), different forces (e.g. Dunsany's Chess), non-standard pieces (e.g. Grand Chess), and different board geometries (e.g. hexagonal chess or Infinite chess).
In the context of chess variants, chess is commonly referred to as Western chess, international chess, orthodox chess, orthochess, and classic chess.
See also
Glossary of chess
Glossary of chess problems
List of World Chess Championships
Women in chess
Notes
References
Bibliography
Further reading
External links
International organizations
FIDE – World Chess Federation
ICCF – International Correspondence Chess Federation
News
Chessbase news
The Week in Chess
History
Chesshistory.com
|
https://en.wikipedia.org/wiki/Combinatorics
|
Combinatorics is an area of mathematics primarily concerned with counting, both as a means and an end in obtaining results, and certain properties of finite structures. It is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics and from evolutionary biology to computer science.
Combinatorics is well known for the breadth of the problems it tackles. Combinatorial problems arise in many areas of pure mathematics, notably in algebra, probability theory, topology, and geometry, as well as in its many application areas. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right. One of the oldest and most accessible parts of combinatorics is graph theory, which by itself has numerous natural connections to other areas. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms.
A mathematician who studies combinatorics is called a combinatorialist.
Definition
The full scope of combinatorics is not universally agreed upon. According to H.J. Ryser, a definition of the subject is difficult because it crosses so many mathematical subdivisions. Insofar as an area can be described by the types of problems it addresses, combinatorics is involved with:
the enumeration (counting) of specified structures, sometimes referred to as arrangements or configurations in a very general sense, associated with finite systems,
the existence of such structures that satisfy certain given criteria,
the construction of these structures, perhaps in many ways, and
optimization: finding the "best" structure or solution among several possibilities, be it the "largest", "smallest" or satisfying some other optimality criterion.
Leon Mirsky has said: "combinatorics is a range of linked studies which have something in common and yet diverge widely in their objectives, their methods, and the degree of coherence they have attained." One way to define combinatorics is, perhaps, to describe its subdivisions with their problems and techniques. This is the approach that is used below. However, there are also purely historical reasons for including or not including some topics under the combinatorics umbrella. Although primarily concerned with finite systems, some combinatorial questions and techniques can be extended to an infinite (specifically, countable) but discrete setting.
History
Basic combinatorial concepts and enumerative results appeared throughout the ancient world. The Indian physician Sushruta asserts in the Sushruta Samhita that 63 combinations can be made out of 6 different tastes, taken one at a time, two at a time, etc., thus computing all 2^6 − 1 possibilities. The Greek historian Plutarch discusses an argument between Chrysippus (3rd century BCE) and Hipparchus (2nd century BCE) about a rather delicate enumerative problem, which was later shown to be related to Schröder–Hipparchus numbers. Earlier, in the Ostomachion, Archimedes (3rd century BCE) may have considered the number of configurations of a tiling puzzle, while combinatorial interests possibly were present in lost works by Apollonius.
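The count can be checked directly: summing the number of ways to choose one, two, and so on up to six tastes gives 63, i.e. 2^6 − 1. A short Python verification (the taste names follow the traditional Ayurvedic list):

from itertools import combinations

tastes = ["sweet", "sour", "salty", "bitter", "pungent", "astringent"]
total = sum(len(list(combinations(tastes, k))) for k in range(1, len(tastes) + 1))
print(total, 2 ** len(tastes) - 1)   # 63 63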
In the Middle Ages, combinatorics continued to be studied, largely outside of the European civilization. The Indian mathematician Mahāvīra provided formulae for the number of permutations and combinations, and these formulas may have been familiar to Indian mathematicians as early as the 6th century CE. The philosopher and astronomer Rabbi Abraham ibn Ezra established the symmetry of binomial coefficients, while a closed formula was obtained later by the talmudist and mathematician Levi ben Gerson (better known as Gersonides), in 1321.
The arithmetical triangle—a graphical diagram showing relationships among the binomial coefficients—was presented by mathematicians in treatises dating as far back as the 10th century, and would eventually become known as Pascal's triangle. Later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations.
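The additive rule the triangle encodes, in which each entry is the sum of the two entries above it, is easy to state in code. A minimal Python sketch:

def pascal_rows(count):
    # Generate the first `count` rows of the arithmetical (Pascal's) triangle.
    row = [1]
    for _ in range(count):
        yield row
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]

for r in pascal_rows(5):
    print(r)
# [1]
# [1, 1]
# [1, 2, 1]
# [1, 3, 3, 1]
# [1, 4, 6, 4, 1]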
During the Renaissance, together with the rest of mathematics and the sciences, combinatorics enjoyed a rebirth. Works of Pascal, Newton, Jacob Bernoulli and Euler became foundational in the emerging field. In modern times, the works of J.J. Sylvester (late 19th century) and Percy MacMahon (early 20th century) helped lay the foundation for enumerative and algebraic combinatorics. Graph theory also enjoyed an increase of interest at the same time, especially in connection with the four color problem.
In the second half of the 20th century, combinatorics enjoyed a rapid growth, which led to the establishment of dozens of new journals and conferences in the subject. In part, the growth was spurred by new connections and applications to other fields, ranging from algebra to probability, from functional analysis to number theory, etc. These connections blurred the boundaries between combinatorics and parts of mathematics and theoretical computer science, but at the same time led to a partial fragmentation of the field.
Approaches and subfields of combinatorics
Enumerative combinatorics
Enumerative combinatorics is the most classical area of combinatorics and concentrates on counting the number of certain combinatorial objects. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise in applications have a relatively simple combinatorial description. The Fibonacci numbers are a basic example of a problem in enumerative combinatorics. The twelvefold way provides a unified framework for counting permutations, combinations and partitions.
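As a small concrete illustration, the Fibonacci numbers arise, for instance, when counting binary strings with no two adjacent 1s; a brute-force count in Python confirms that each term is the sum of the previous two. The example is chosen here for illustration and is not singled out by the text.

from itertools import product

def count_no_adjacent_ones(n):
    # Count binary strings of length n that never contain "11".
    return sum(1 for bits in product("01", repeat=n) if "11" not in "".join(bits))

counts = [count_no_adjacent_ones(n) for n in range(1, 9)]
print(counts)   # [2, 3, 5, 8, 13, 21, 34, 55]
print(all(counts[i] == counts[i - 1] + counts[i - 2] for i in range(2, len(counts))))   # True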
Analytic combinatorics
Analytic combinatorics concerns the enumeration of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics, which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae.
Partition theory
Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, it is now considered a part of combinatorics or an independent field. It incorporates the bijective approach and various tools in analysis and analytic number theory and has connections with statistical mechanics. Partitions can be graphically visualized with Young diagrams or Ferrers diagrams. They occur in a number of branches of mathematics and physics, including the study of symmetric polynomials and of the symmetric group and in group representation theory in general.
Graph theory
Graphs are fundamental objects in combinatorics. Considerations of graph theory range from enumeration (e.g., the number of graphs on n vertices with k edges) to the existence of certain structures (e.g., Hamiltonian cycles) to algebraic representations (e.g., given a graph G and two numbers x and y, does the Tutte polynomial TG(x,y) have a combinatorial interpretation?). Although there are very strong connections between graph theory and combinatorics, they are sometimes thought of as separate subjects. While combinatorial methods apply to many graph theory problems, the two disciplines are generally used to seek solutions to different types of problems.
Design theory
Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties. Block designs are combinatorial designs of a special type. This area is one of the oldest parts of combinatorics, going back to Kirkman's schoolgirl problem proposed in 1850. The solution of the problem is a special case of a Steiner system; Steiner systems play an important role in the classification of finite simple groups. The area has further connections to coding theory and geometric combinatorics.
Combinatorial design theory can be applied to the area of design of experiments. Some of the basic theory of combinatorial designs originated in the statistician Ronald Fisher's work on the design of biological experiments. Modern applications are also found in a wide gamut of areas including finite geometry, tournament scheduling, lotteries, mathematical chemistry, mathematical biology, algorithm design and analysis, networking, group testing and cryptography.
Finite geometry
Finite geometry is the study of geometric systems having only a finite number of points. Structures analogous to those found in continuous geometries (Euclidean plane, real projective space, etc.) but defined combinatorially are the main items studied. This area provides a rich source of examples for design theory. It should not be confused with discrete geometry (combinatorial geometry).
Order theory
Order theory is the study of partially ordered sets, both finite and infinite. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". Various examples of partial orders appear in algebra, geometry, number theory and throughout combinatorics and graph theory. Notable classes and examples of partial orders include lattices and Boolean algebras.
Matroid theory
Matroid theory abstracts part of geometry. It studies the properties of sets (usually, finite sets) of vectors in a vector space that do not depend on the particular coefficients in a linear dependence relation. Not only the structure but also enumerative properties belong to matroid theory. Matroid theory was introduced by Hassler Whitney and studied as a part of order theory. It is now an independent field of study with a number of connections with other parts of combinatorics.
Extremal combinatorics
Extremal combinatorics studies how large or how small a collection of finite objects (numbers, graphs, vectors, sets, etc.) can be, if it has to satisfy certain restrictions. Much of extremal combinatorics concerns classes of set systems; this is called extremal set theory. For instance, in an n-element set, what is the largest number of k-element subsets that can pairwise intersect one another? What is the largest number of subsets of which none contains any other? The latter question is answered by Sperner's theorem, which gave rise to much of extremal set theory.
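Sperner's theorem can be checked by brute force for small cases. For a 4-element set, the largest antichain (a family of subsets none of which contains another) has C(4, 2) = 6 members; the short, admittedly exhaustive, Python search below confirms this.

from itertools import combinations

n = 4
subsets = [frozenset(c) for k in range(n + 1) for c in combinations(range(n), k)]

def is_antichain(family):
    # No member of the family may strictly contain another.
    return all(not (a < b or b < a) for a, b in combinations(family, 2))

best = 0
for mask in range(1 << len(subsets)):    # every family of subsets, encoded as a bitmask
    family = [s for i, s in enumerate(subsets) if mask >> i & 1]
    if is_antichain(family):
        best = max(best, len(family))

print(best)   # 6, which equals C(4, 2)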
Extremal graph theory addresses questions about the largest possible graph which satisfies certain properties. For example, the largest triangle-free graph on 2n vertices is a complete bipartite graph Kn,n. Often it is too hard even to find the extremal answer f(n) exactly, and one can only give an asymptotic estimate.
Ramsey theory is another part of extremal combinatorics. It states that any sufficiently large configuration will contain some sort of order. It is an advanced generalization of the pigeonhole principle.
Probabilistic combinatorics
In probabilistic combinatorics, the questions are of the following type: what is the probability of a certain property for a random discrete object, such as a random graph? For instance, what is the average number of triangles in a random graph? Probabilistic methods are also used to determine the existence of combinatorial objects with certain prescribed properties (for which explicit examples might be difficult to find) by observing that the probability of randomly selecting an object with those properties is greater than 0. This approach (often referred to as the probabilistic method) proved highly effective in applications to extremal combinatorics and graph theory. A closely related area is the study of finite Markov chains, especially on combinatorial objects. Here again probabilistic tools are used to estimate the mixing time.
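For the triangle question, linearity of expectation gives the answer directly: each of the C(n, 3) vertex triples forms a triangle with probability p^3, so the expected count is C(n, 3)p^3. A small Monte Carlo experiment in Python (parameters chosen arbitrarily) agrees closely with the formula.

import random
from itertools import combinations
from math import comb

def triangle_count(n, p, rng):
    # Sample a random graph G(n, p) and count its triangles.
    edge = {frozenset(e): rng.random() < p for e in combinations(range(n), 2)}
    return sum(1 for a, b, c in combinations(range(n), 3)
               if edge[frozenset((a, b))] and edge[frozenset((b, c))] and edge[frozenset((a, c))])

n, p, trials = 10, 0.3, 2000
rng = random.Random(0)
empirical = sum(triangle_count(n, p, rng) for _ in range(trials)) / trials
print(round(empirical, 2), round(comb(n, 3) * p ** 3, 2))   # both close to 3.24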
Often associated with Paul Erdős, who did the pioneering work on the subject, probabilistic combinatorics was traditionally viewed as a set of tools to study problems in other parts of combinatorics. The area recently grew to become an independent field of combinatorics.
Algebraic combinatorics
Algebraic combinatorics is an area of mathematics that employs methods of abstract algebra, notably group theory and representation theory, in various combinatorial contexts and, conversely, applies combinatorial techniques to problems in algebra. Algebraic combinatorics has come to be seen more expansively as an area of mathematics where the interaction of combinatorial and algebraic methods is particularly strong and significant. Thus the combinatorial topics may be enumerative in nature or involve matroids, polytopes, partially ordered sets, or finite geometries. On the algebraic side, besides group and representation theory, lattice theory and commutative algebra are common.
Combinatorics on words
Combinatorics on words deals with formal languages. It arose independently within several branches of mathematics, including number theory, group theory and probability. It has applications to enumerative combinatorics, fractal analysis, theoretical computer science, automata theory, and linguistics. While many applications are new, the classical Chomsky–Schützenberger hierarchy of classes of formal grammars is perhaps the best-known result in the field.
Geometric combinatorics
Geometric combinatorics is related to convex and discrete geometry. It asks, for example, how many faces of each dimension a convex polytope can have. Metric properties of polytopes play an important role as well, e.g. the Cauchy theorem on the rigidity of convex polytopes. Special polytopes are also considered, such as permutohedra, associahedra and Birkhoff polytopes. Combinatorial geometry is a historical name for discrete geometry.
It includes a number of subareas such as polyhedral combinatorics (the study of faces of convex polyhedra), convex geometry (the study of convex sets, in particular combinatorics of their intersections), and discrete geometry, which in turn has many applications to computational geometry. The study of regular polytopes, Archimedean solids, and kissing numbers is also a part of geometric combinatorics.
Topological combinatorics
Combinatorial analogs of concepts and methods in topology are used to study graph coloring, fair division, partitions, partially ordered sets, decision trees, necklace problems and discrete Morse theory. It should not be confused with combinatorial topology which is an older name for algebraic topology.
Arithmetic combinatorics
Arithmetic combinatorics arose out of the interplay between number theory, combinatorics, ergodic theory, and harmonic analysis. It is about combinatorial estimates associated with arithmetic operations (addition, subtraction, multiplication, and division). Additive number theory (sometimes also called additive combinatorics) refers to the special case when only the operations of addition and subtraction are involved. One important technique in arithmetic combinatorics is the ergodic theory of dynamical systems.
Infinitary combinatorics
Infinitary combinatorics, or combinatorial set theory, is an extension of ideas in combinatorics to infinite sets. It is a part of set theory, an area of mathematical logic, but uses tools and ideas from both set theory and extremal combinatorics. Some of the things studied include continuous graphs and trees, extensions of Ramsey's theorem, and Martin's axiom. Recent developments concern combinatorics of the continuum and combinatorics on successors of singular cardinals.
Gian-Carlo Rota used the name continuous combinatorics to describe geometric probability, since there are many analogies between counting and measure.
Related fields
Combinatorial optimization
Combinatorial optimization is the study of optimization on discrete and combinatorial objects. It started as a part of combinatorics and graph theory, but is now viewed as a branch of applied mathematics and computer science, related to operations research, algorithm theory and computational complexity theory.
Coding theory
Coding theory started as a part of design theory with early combinatorial constructions of error-correcting codes. The main idea of the subject is to design efficient and reliable methods of data transmission. It is now a large field of study, part of information theory.
Discrete and computational geometry
Discrete geometry (also called combinatorial geometry) also began as a part of combinatorics, with early results on convex polytopes and kissing numbers. With the emergence of applications of discrete geometry to computational geometry, these two fields partially merged and became a separate field of study. There remain many connections with geometric and topological combinatorics, which themselves can be viewed as outgrowths of the early discrete geometry.
Combinatorics and dynamical systems
Combinatorial aspects of dynamical systems is another emerging field. Here dynamical systems can be defined on combinatorial objects; see, for example, graph dynamical system.
Combinatorics and physics
There are increasing interactions between combinatorics and physics, particularly statistical physics. Examples include an exact solution of the Ising model, and a connection between the Potts model on one hand, and the chromatic and Tutte polynomials on the other hand.
See also
Combinatorial biology
Combinatorial chemistry
Combinatorial data analysis
Combinatorial game theory
Combinatorial group theory
Discrete mathematics
List of combinatorics topics
Phylogenetics
Polynomial method in combinatorics
Notes
References
Björner, Anders, and Richard P. Stanley (2010). A Combinatorial Miscellany.
Bóna, Miklós (2011). A Walk Through Combinatorics (3rd ed.).
Graham, Ronald L., Martin Groetschel, and László Lovász, eds. (1996). Handbook of Combinatorics, Volumes 1 and 2. Amsterdam and Cambridge, MA: Elsevier (North-Holland) and MIT Press.
Lindner, Charles C., and Christopher A. Rodger, eds. (1997). Design Theory. CRC Press.
Stanley, Richard P. (1997, 1999). Enumerative Combinatorics, Volumes 1 and 2. Cambridge University Press.
van Lint, Jacobus H., and Richard M. Wilson (2001). A Course in Combinatorics (2nd ed.). Cambridge University Press.
External links
Combinatorial Analysis – an article in Encyclopædia Britannica Eleventh Edition
Combinatorics, a MathWorld article with many references.
Combinatorics, from a MathPages.com portal.
The Hyperbook of Combinatorics, a collection of math articles links.
The Two Cultures of Mathematics by W.T. Gowers, article on problem solving vs theory building.
"Glossary of Terms in Combinatorics"
List of Combinatorics Software and Databases
|
https://en.wikipedia.org/wiki/Computing
|
Computing is any goal-oriented activity requiring, benefiting from, or creating computing machinery. It includes the study and experimentation of algorithmic processes, and development of both hardware and software. Computing has scientific, engineering, mathematical, technological and social aspects. Major computing disciplines include computer engineering, computer science, cybersecurity, data science, information systems, information technology, digital art and software engineering.
The term computing is also synonymous with counting and calculating. In earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers.
History
The history of computing is longer than the history of computing hardware and includes the history of methods intended for pen and paper (or for chalk and slate) with or without the aid of tables. Computing is intimately tied to the representation of numbers, though mathematical concepts necessary for computing existed before numeral systems. The earliest known tool for use in computation is the abacus, thought to have been invented in Babylon circa 2700–2300 BC. Abaci, of a more modern design, are still used as calculation tools today.
The first recorded proposal for using digital electronics in computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations.
The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947. In 1953, the University of Manchester built the first transistorized computer, the Transistor Computer. However, early junction transistors were relatively bulky devices that were difficult to mass-produce, which limited them to a number of specialised applications. The metal–oxide–silicon field-effect transistor (MOSFET, or MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. The MOSFET made it possible to build high-density integrated circuits, leading to what is known as the computer revolution or microcomputer revolution.
Computer
A computer is a machine that manipulates data according to a set of instructions called a computer program. The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form enables a programmer to study and develop a sequence of steps known as an algorithm. Because the instructions can be carried out on different types of computers, a single set of source instructions is converted to machine instructions according to the CPU type.
The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer. They trigger sequences of simple actions on the executing machine. Those actions produce effects according to the semantics of the instructions.
Computer hardware
Computer hardware includes the physical parts of a computer, including central processing unit, memory and input/output. Computational logic and computer architecture are key topics in the field of computer hardware.
Computer software
Computer software, or just software, is a collection of computer programs and related data, which provides instructions to a computer. Software refers to one or more computer programs and data held in the storage of the computer. It is a set of programs, procedures, algorithms, as well as its documentation concerned with the operation of a data processing system. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the old term hardware (meaning physical devices). In contrast to hardware, software is intangible.
Software is also sometimes used in a more narrow sense, meaning application software only.
System software
System software, or systems software, is computer software designed to operate and control computer hardware, and to provide a platform for running application software. System software includes operating systems, utility software, device drivers, window systems, and firmware. Frequently used development tools such as compilers, linkers, and debuggers are classified as system software. System software and middleware manage and integrate a computer's capabilities, but typically do not directly apply them in the performance of tasks that benefit the user, unlike application software.
Application software
Application software, also known as an application or an app, is computer software designed to help the user perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software and media players. Many application programs deal principally with documents. Apps may be bundled with the computer and its system software, or may be published separately. Some users are satisfied with the bundled apps and need never install additional applications. The system software manages the hardware and serves the application, which in turn serves the user.
Application software applies the power of a particular computing platform or system software to a particular purpose. Some apps, such as Microsoft Office, are developed in multiple versions for several different platforms; others have narrower requirements and are generally referred to by the platform they run on, for example a geography application for Windows, an Android application for education, or a Linux game. Applications that run on only one platform and increase the desirability of that platform due to their popularity are known as killer applications.
Computer network
A computer network, often simply referred to as a network, is a collection of hardware components and computers interconnected by communication channels that allow sharing of resources and information. When at least one process in one device is able to send or receive data to or from at least one process residing in a remote device, the two devices are said to be in a network. Networks may be classified according to a wide variety of characteristics such as the medium used to transport the data, communications protocol used, scale, topology, and organizational scope.
Communications protocols define the rules and data formats for exchanging information in a computer network, and provide the basis for network programming. One well-known communications protocol is Ethernet, a hardware and link layer standard that is ubiquitous in local area networks. Another common protocol is the Internet Protocol Suite, which defines a set of protocols for internetworking, i.e. for data communication between multiple networks, host-to-host data transfer, and application-specific data transmission formats.
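As a concrete, if minimal, example of host-to-host data transfer over the Internet Protocol Suite, the Python sketch below sends a few bytes over a local TCP connection and echoes them back. Binding to port 0 lets the operating system pick a free port, and the whole exchange stays on the loopback interface.

import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS choose a free port
server.listen(1)
host, port = server.getsockname()

def echo_once():
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # send the received bytes straight back

threading.Thread(target=echo_once, daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((host, port))
    client.sendall(b"hello over TCP/IP")
    print(client.recv(1024))           # b'hello over TCP/IP'

server.close()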
Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of these disciplines.
Internet
The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users. This includes millions of private, public, academic, business, and government networks, ranging in scope from local to global. These networks are linked by a broad array of electronic, wireless and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web and the infrastructure to support email.
Computer programming
Computer programming is the process of writing, testing, debugging, and maintaining the source code and documentation of computer programs. This source code is written in a programming language, which is an artificial language that is often more restrictive than natural languages, but easily translated by the computer. Programming is used to invoke some desired behavior (customization) from the machine.
Writing high-quality source code requires knowledge of both the computer science domain and the domain in which the application will be used. The highest-quality software is thus often developed by a team of domain experts, each a specialist in some area of development. However, the term programmer may apply to a range of program quality, from hacker to open source contributor to professional. It is also possible for a single programmer to do most or all of the computer programming needed to generate the proof of concept to launch a new killer application.
Computer programmer
A programmer, computer programmer, or coder is a person who writes computer software. The term computer programmer can refer to a specialist in one area of computer programming or to a generalist who writes code for many kinds of software. One who practices or professes a formal approach to programming may also be known as a programmer analyst. A programmer's primary computer language (C, C++, Java, Lisp, Python etc.) is often prefixed to the above titles, and those who work in a web environment often prefix their titles with Web. The term programmer can be used to refer to a software developer, software engineer, computer scientist, or software analyst. However, members of these professions typically possess other software engineering skills, beyond programming.
Computer industry
The computer industry is made up of businesses involved in developing computer software, designing computer hardware and computer networking infrastructures, manufacturing computer components and providing information technology services, including system administration and maintenance.
The software industry includes businesses engaged in development, maintenance and publication of software. The industry also includes software services, such as training, documentation, and consulting.
Sub-disciplines of computing
Computer engineering
Computer engineering is a discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware-software integration, rather than just software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering includes not only the design of hardware within its own domain, but also the interactions between hardware and the context in which it operates.
Software engineering
Software engineering (SE) is the application of a systematic, disciplined and quantifiable approach to the design, development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software. It is the act of using insights to conceive, model and scale a solution to a problem. The term was first used at the 1968 NATO Software Engineering Conference, where it was intended to provoke thought regarding the perceived software crisis of the time. Software development, a widely used and more generic term, does not necessarily subsume the engineering paradigm. The generally accepted concepts of software engineering as an engineering discipline are specified in the Guide to the Software Engineering Body of Knowledge (SWEBOK), which has become an internationally accepted standard, published as ISO/IEC TR 19759:2015.
Computer science
Computer science or computing science (abbreviated CS or Comp Sci) is the scientific and practical approach to computation and its applications. A computer scientist specializes in the theory of computation and the design of computational systems.
Its subfields can be divided into practical techniques for its implementation and application in computer systems, and purely theoretical areas. Some, such as computational complexity theory, which studies fundamental properties of computational problems, are highly abstract, while others, such as computer graphics, emphasize real-world applications. Others focus on the challenges in implementing computations. For example, programming language theory studies approaches to the description of computations, while the study of computer programming investigates the use of programming languages and complex systems. The field of human–computer interaction focuses on the challenges in making computers and computations useful, usable, and universally accessible to humans.
Cybersecurity
The field of cybersecurity pertains to the protection of computer systems and networks. This includes information and data privacy, preventing disruption of IT services and prevention of theft of and damage to hardware, software and data.
Data science
Data science is a field that uses scientific and computing tools to extract information and insights from data, driven by the increasing volume and availability of data. Data mining, big data, statistics and machine learning are all interwoven with data science.
Information systems
Information systems (IS) is the study of complementary networks of hardware and software (see information technology) that people and organizations use to collect, filter, process, create, and distribute data; the field is also profiled in the ACM's Computing Careers resource.
The study of IS bridges business and computer science, using the theoretical foundations of information and computation to study various business models and related algorithmic processes within a computer science discipline. The field of Computer Information Systems (CIS) studies computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society, while IS emphasizes functionality over design.
Information technology
Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit and manipulate data, often in the context of a business or other enterprise. The term is commonly used as a synonym for computers and computer networks, but also encompasses other information distribution technologies such as television and telephones. Several industries are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, e-commerce and computer services.
Research and emerging technologies
DNA-based computing and quantum computing are areas of active research for both computing hardware and software, such as the development of quantum algorithms. Potential infrastructure for future technologies includes DNA origami on photolithography and quantum antennae for transferring information between ion traps. By 2011, researchers had entangled 14 qubits. Fast digital circuits, including those based on Josephson junctions and rapid single flux quantum technology, are becoming more practical to realize with the discovery of nanoscale superconductors.
Fiber-optic and photonic (optical) devices, which already have been used to transport data over long distances, are starting to be used by data centers, along with CPU and semiconductor memory components. This allows the separation of RAM from CPU by optical interconnects. IBM has created an integrated circuit with both electronic and optical information processing in one chip. This is denoted CMOS-integrated nanophotonics (CINP). One benefit of optical interconnects is that motherboards, which formerly required a certain kind of system on a chip (SoC), can now move formerly dedicated memory and network controllers off the motherboards, spreading the controllers out onto the rack. This allows standardization of backplane interconnects and motherboards for multiple types of SoCs, which allows more timely upgrades of CPUs.
Another field of research is spintronics. Spintronics can provide computing power and storage, without heat buildup. Some research is being done on hybrid chips, which combine photonics and spintronics. There is also research ongoing on combining plasmonics, photonics, and electronics.
Cloud computing
Cloud computing is a model that allows for the use of computing resources, such as servers or applications, without the need for interaction between the owner of these resources and the end user. It is typically offered as a service, making it an example of Software as a Service, Platform as a Service, or Infrastructure as a Service, depending on the functionality offered. Key characteristics include on-demand access, broad network access, and the capability of rapid scaling. It allows individual users or small businesses to benefit from economies of scale.
One area of interest in this field is its potential to support energy efficiency: allowing thousands of instances of computation to occur on a single machine instead of thousands of individual machines could help save energy. It could also ease the transition to renewable energy sources, since it would suffice to power one server farm with renewable energy, rather than millions of homes and offices.
However, this centralized computing model poses several challenges, especially in security and privacy. Current legislation does not sufficiently protect users from companies mishandling their data on company servers. This suggests potential for further legislative regulations on cloud computing and tech companies.
Quantum computing
Quantum computing is an area of research that brings together the disciplines of computer science, information theory, and quantum physics. While the idea of information as part of physics is relatively new, there appears to be a strong tie between information theory and quantum mechanics. Whereas traditional computing operates on a binary system of ones and zeros, quantum computing uses qubits. Qubits are capable of being in a superposition, i.e. in both the one and zero states simultaneously; the value read from a qubit therefore depends on the measurement. This superposition, together with the entanglement of multiple qubits, is the core idea of quantum computing that allows quantum computers to perform large-scale computations. Quantum computing is often used for scientific research in cases where traditional computers do not have the computing power to do the necessary calculations, such as in molecular modeling. Large molecules and their reactions are far too complex for traditional computers to calculate, but the computational power of quantum computers could provide a tool to perform such calculations.
See also
Artificial intelligence
Computational thinking
Creative computing
Electronic data processing
Enthusiast computing
Index of history of computing articles
Instruction set architecture
Lehmer sieve
List of computer term etymologies
Mobile computing
Scientific computing
References
External links
FOLDOC: the Free On-Line Dictionary Of Computing
|
https://en.wikipedia.org/wiki/Code
|
In communications and information processing, code is a system of rules to convert information—such as a letter, word, sound, image, or gesture—into another form, sometimes shortened or secret, for communication through a communication channel or storage in a storage medium. An early example is the invention of language, which enabled a person, through speech, to communicate what they thought, saw, heard, or felt to others. But speech limits the range of communication to the distance a voice can carry and limits the audience to those present when the speech is uttered. The invention of writing, which converted spoken language into visual symbols, extended the range of communication across space and time.
The process of encoding converts information from a source into symbols for communication or storage. Decoding is the reverse process, converting code symbols back into a form that the recipient understands, such as English or Spanish.
One reason for coding is to enable communication in places where ordinary plain language, spoken or written, is difficult or impossible. For example, semaphore, where the configuration of flags held by a signaler or the arms of a semaphore tower encodes parts of the message, typically individual letters, and numbers. Another person standing a great distance away can interpret the flags and reproduce the words sent.
Theory
In information theory and computer science, a code is usually considered as an algorithm that uniquely represents symbols from some source alphabet, by encoded strings, which may be in some other target alphabet. An extension of the code for representing sequences of symbols over the source alphabet is obtained by concatenating the encoded strings.
Before giving a mathematically precise definition, here is a brief example. The mapping
C = {a ↦ 0, b ↦ 01, c ↦ 011}
is a code, whose source alphabet is the set {a, b, c} and whose target alphabet is the set {0, 1}. Using the extension of the code, the encoded string 0011001 can be grouped into codewords as 0 011 0 01, and these in turn can be decoded to the sequence of source symbols acab.
Using terms from formal language theory, the precise mathematical definition of this concept is as follows: let S and T be two finite sets, called the source and target alphabets, respectively. A code C : S → T* is a total function mapping each symbol from S to a sequence of symbols over T. The extension C′ of C is a homomorphism of S* into T*, which naturally maps each sequence of source symbols to a sequence of target symbols.
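This example can be made concrete in a short Python sketch (illustrative only; it assumes the mapping a ↦ 0, b ↦ 01, c ↦ 011 implied by the decoding example above). The extension of the code is plain concatenation, and the decoder relies on the fact that every codeword begins with a 0:

    # The example code C = {a -> 0, b -> 01, c -> 011} and its extension.
    CODE = {"a": "0", "b": "01", "c": "011"}
    DECODE = {v: k for k, v in CODE.items()}

    def encode(source):
        # The extension of C: concatenate the codeword of each source symbol.
        return "".join(CODE[s] for s in source)

    def decode(target):
        # Every codeword starts with '0' and is followed only by '1's,
        # so each '0' in the encoded string marks the start of a new codeword.
        symbols, word = [], ""
        for bit in target:
            if bit == "0" and word:
                symbols.append(DECODE[word])
                word = ""
            word += bit
        if word:
            symbols.append(DECODE[word])
        return "".join(symbols)

    assert encode("acab") == "0011001"
    assert decode("0011001") == "acab"

Note that this particular code is uniquely decodable even though it is not a prefix code; prefix codes are discussed in the next section.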
Variable-length codes
In this section, we consider codes that encode each source (clear text) character by a code word from some dictionary, and concatenation of such code words give us an encoded string. Variable-length codes are especially useful when clear text characters have different probabilities; see also entropy encoding.
A prefix code is a code with the "prefix property": there is no valid code word in the system that is a prefix (start) of any other valid code word in the set. Huffman coding is the best-known algorithm for deriving prefix codes, and prefix codes are widely referred to as "Huffman codes" even when the code was not produced by a Huffman algorithm. Other examples of prefix codes are country calling codes, the country and publisher parts of ISBNs, and the Secondary Synchronization Codes used in the UMTS WCDMA 3G Wireless Standard.
Kraft's inequality characterizes the sets of codeword lengths that are possible in a prefix code. By the Kraft–McMillan theorem, any uniquely decodable code, not necessarily a prefix one, must also satisfy Kraft's inequality.
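Huffman's construction can be sketched in a few lines of Python (an illustrative implementation, not taken from any source cited here); the final assertion checks that the resulting codeword lengths satisfy Kraft's inequality:

    import heapq
    from collections import Counter

    def huffman_code(text):
        # Repeatedly merge the two least frequent subtrees, prefixing '0'
        # to the codewords on one side and '1' to those on the other.
        heap = [[freq, [sym, ""]] for sym, freq in Counter(text).items()]
        heapq.heapify(heap)
        if len(heap) == 1:                 # degenerate one-symbol input
            heap[0][1][1] = "0"
        while len(heap) > 1:
            lo = heapq.heappop(heap)
            hi = heapq.heappop(heap)
            for pair in lo[1:]:
                pair[1] = "0" + pair[1]
            for pair in hi[1:]:
                pair[1] = "1" + pair[1]
            heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
        return dict(heap[0][1:])

    codes = huffman_code("this is an example of a huffman tree")
    # Kraft's inequality: the sum of 2^(-length) over all codewords of a
    # prefix code is at most 1 (exactly 1 for a complete code such as this).
    assert sum(2 ** -len(c) for c in codes.values()) <= 1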
Error-correcting codes
Codes may also be used to represent data in a way more resistant to errors in transmission or storage. This so-called error-correcting code works by including carefully crafted redundancy with the stored (or transmitted) data. Examples include Hamming codes, Reed–Solomon, Reed–Muller, Walsh–Hadamard, Bose–Chaudhuri–Hocquenghem, Turbo, Golay, algebraic geometry codes, low-density parity-check codes, and space–time codes.
Error detecting codes can be optimised to detect burst errors, or random errors.
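As a concrete illustration of how added redundancy enables correction, the following sketch implements the Hamming(7,4) code, the simplest of the Hamming codes listed above (an illustrative example rather than a reference implementation):

    def hamming74_encode(d):
        # d holds four data bits [d1, d2, d3, d4]; the output layout
        # p1 p2 d1 p3 d2 d3 d4 is the textbook Hamming(7,4) arrangement.
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_correct(r):
        # Recompute the parity checks; the syndrome is the 1-based position
        # of a single flipped bit (0 means no error was detected).
        s1 = r[0] ^ r[2] ^ r[4] ^ r[6]
        s2 = r[1] ^ r[2] ^ r[5] ^ r[6]
        s3 = r[3] ^ r[4] ^ r[5] ^ r[6]
        syndrome = s1 + 2 * s2 + 4 * s3
        if syndrome:
            r = r.copy()
            r[syndrome - 1] ^= 1          # flip the erroneous bit back
        return [r[2], r[4], r[5], r[6]]   # recover d1..d4

    word = hamming74_encode([1, 0, 1, 1])
    word[5] ^= 1                          # simulate a single-bit error
    assert hamming74_correct(word) == [1, 0, 1, 1]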
Examples
Codes in communication used for brevity
A cable code replaces words (e.g. ship or invoice) with shorter words, allowing the same information to be sent with fewer characters, more quickly, and less expensively.
Codes can be used for brevity. When telegraph messages were the state of the art in rapid long-distance communication, elaborate systems of commercial codes that encoded complete phrases into single words (commonly five-letter groups) were developed, so that telegraphers became conversant with such "words" as BYOXO ("Are you trying to weasel out of our deal?"), LIOUY ("Why do you not answer my question?"), BMULD ("You're a skunk!"), or AYYLU ("Not clearly coded, repeat more clearly."). Code words were chosen for various reasons: length, pronounceability, etc. Meanings were chosen to fit perceived needs: commercial negotiations, military terms for military codes, diplomatic terms for diplomatic codes, any and all of the preceding for espionage codes. Codebooks and codebook publishers proliferated, including one run as a front for the American Black Chamber run by Herbert Yardley between the First and Second World Wars. The purpose of most of these codes was to save on cable costs. The use of data coding for data compression predates the computer era; an early example is the telegraph Morse code where more-frequently used characters have shorter representations. Techniques such as Huffman coding are now used by computer-based algorithms to compress large data files into a more compact form for storage or transmission.
Character encodings
Character encodings are representations of textual data. A given character encoding may be associated with a specific character set (the collection of characters which it can represent), though some character sets have multiple character encodings and vice versa. Character encodings may be broadly grouped according to the number of bytes required to represent a single character: there are single-byte encodings, multibyte (also called wide) encodings, and variable-width (also called variable-length) encodings. The earliest character encodings were single-byte, the best-known example of which is ASCII. ASCII remains in use today, for example in HTTP headers. However, single-byte encodings cannot model character sets with more than 256 characters. Scripts that require large character sets such as Chinese, Japanese and Korean must be represented with multibyte encodings. Early multibyte encodings were fixed-length, meaning that although each character was represented by more than one byte, all characters used the same number of bytes ("word length"), making them suitable for decoding with a lookup table. The final group, variable-width encodings, is a subset of multibyte encodings. These use more complex encoding and decoding logic to efficiently represent large character sets while keeping the representations of more commonly used characters shorter or maintaining backward compatibility properties. This group includes UTF-8, an encoding of the Unicode character set; UTF-8 is the most common encoding of text media on the Internet.
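The practical difference between single-byte and variable-width encodings is easy to demonstrate in Python (a small illustrative snippet; the sample string is arbitrary):

    text = "Código"                    # six characters, one of them non-ASCII
    utf8_bytes = text.encode("utf-8")  # variable-width: 'ó' takes two bytes
    assert len(text) == 6 and len(utf8_bytes) == 7

    try:
        text.encode("ascii")           # a single-byte encoding cannot represent 'ó'
    except UnicodeEncodeError:
        pass

    assert utf8_bytes.decode("utf-8") == text   # decoding restores the original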
Genetic code
Biological organisms contain genetic material that is used to control their function and development. This is DNA, which contains units named genes from which messenger RNA is derived. This in turn produces proteins through a genetic code in which a series of triplets (codons) of four possible nucleotides can be translated into one of twenty possible amino acids. A sequence of codons results in a corresponding sequence of amino acids that form a protein molecule; a type of codon called a stop codon signals the end of the sequence.
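As a toy illustration of this mapping, the following sketch translates an mRNA string three bases at a time, using a deliberately tiny slice of the standard codon table (the full table has 64 entries; only four appear here):

    # Partial RNA codon table, for illustration only.
    CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

    def translate(mrna):
        protein = []
        for i in range(0, len(mrna) - 2, 3):   # read successive triplets (codons)
            amino_acid = CODON_TABLE[mrna[i:i + 3]]
            if amino_acid == "STOP":           # a stop codon ends the sequence
                break
            protein.append(amino_acid)
        return protein

    assert translate("AUGUUUGGCUAA") == ["Met", "Phe", "Gly"]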
Gödel code
In mathematics, a Gödel code was the basis for the proof of Gödel's incompleteness theorem. Here, the idea was to map mathematical notation to a natural number (using a Gödel numbering).
Other
There are codes using colors, like traffic lights, the color code used to mark the nominal values of electrical resistors, or the color-coding of trash cans devoted to specific types of garbage (paper, glass, organic, etc.).
In marketing, coupon codes can be used for a financial discount or rebate when purchasing a product from a retailer (usually an internet retailer).
In military environments, specific sounds played on the cornet are used for different purposes: to mark some moments of the day, to command the infantry on the battlefield, etc.
Communication systems for sensory impairments, such as sign language for deaf people and braille for blind people, are based on movement or tactile codes.
Musical scores are the most common way to encode music.
Specific games have their own code systems to record the matches, e.g. chess notation.
Cryptography
In the history of cryptography, codes were once common for ensuring the confidentiality of communications, although ciphers are now used instead.
Secret codes intended to obscure the real messages, ranging from serious (mainly espionage in military, diplomacy, business, etc.) to trivial (romance, games) can be any kind of imaginative encoding: flowers, game cards, clothes, fans, hats, melodies, birds, etc., in which the sole requirement is the pre-agreement on the meaning by both the sender and the receiver.
Other examples
Other examples of encoding include:
Encoding (in cognition) - a basic perceptual process of interpreting incoming stimuli; technically speaking, it is a complex, multi-stage process of converting relatively objective sensory input (e.g., light, sound) into a subjectively meaningful experience.
A content format - a specific encoding format for converting a specific type of data to information.
Text encoding uses a markup language to tag the structure and other features of a text to facilitate processing by computers. (See also Text Encoding Initiative.)
Semantics encoding of formal language A in formal language B is a method of representing all terms (e.g. programs or descriptions) of language A using language B.
Data compression transforms a signal into a code optimized for transmission or storage, generally done with a codec.
Neural encoding - the way in which information is represented in neurons.
Memory encoding - the process of converting sensations into memories.
Television encoding: NTSC, PAL and SECAM
Other examples of decoding include:
Decoding (computer science)
Decoding methods, methods in communication theory for decoding codewords sent over a noisy channel
Digital signal processing, the study of signals in a digital representation and the processing methods of these signals
Digital-to-analog converter, the use of analog circuits for decoding operations
Word decoding, the use of phonics to decipher print patterns and translate them into the sounds of language
Codes and acronyms
Acronyms and abbreviations can be considered codes, and in a sense, all languages and writing systems are codes for human thought.
International Air Transport Association airport codes are three-letter codes used to designate airports and used for bag tags. Station codes are similarly used on railways but are usually national, so the same code can be used for different stations if they are in different countries.
Occasionally, a code word achieves an independent existence (and meaning) while the original equivalent phrase is forgotten or at least no longer has the precise meaning attributed to the code word. For example, '30' was widely used in journalism to mean "end of story", and has been used in other contexts to signify "the end".
See also
Asemic writing
Cipher
Code (semiotics)
Equipment codes
Quantum error correction
Semiotics
Universal language
References
Further reading
Signal processing
|
https://en.wikipedia.org/wiki/Coast
|
The coast, also known as the coastline or seashore, is defined as the area where land meets the ocean, or as a line that forms the boundary between the land and the ocean. Shores are influenced by the topography of the surrounding landscape, as well as by water-induced erosion, such as waves. The geological composition of rock and soil dictates the type of shore that is created. The Earth has hundreds of thousands of kilometres of coastline. Coasts are important zones in natural ecosystems, often home to a wide range of biodiversity. On land, they harbor important ecosystems such as freshwater or estuarine wetlands, which are important for bird populations and other terrestrial animals. In wave-protected areas they harbor saltmarshes, mangroves or seagrasses, all of which can provide nursery habitat for finfish, shellfish, and other aquatic species. Rocky shores are usually found along exposed coasts and provide habitat for a wide range of sessile animals (e.g. mussels, starfish, barnacles) and various kinds of seaweeds. In physical oceanography, a shore is the wider fringe that is geologically modified by the action of the body of water past and present, while the beach is at the edge of the shore, representing the intertidal zone where there is one. Along tropical coasts with clear, nutrient-poor water, coral reefs can often be found at shallow depths.
According to an atlas prepared by the United Nations, 44% of all humans live within 150 km (93 mi) of the sea. Due to its importance in society and its high population concentrations, the coast is important for major parts of the global food and economic system, and coasts provide many ecosystem services to humankind. For example, important human activities happen in port cities. Coastal fisheries (commercial, recreational, and subsistence) and aquaculture are major economic activities and create jobs, livelihoods, and protein for the majority of coastal human populations. Other coastal spaces like beaches and seaside resorts generate large revenues through tourism. Marine coastal ecosystems can also provide protection against sea level rise and tsunamis. In many countries, mangroves are the primary source of wood for fuel (e.g. charcoal) and building material. Coastal ecosystems like mangroves and seagrasses have a much higher capacity for carbon sequestration than many terrestrial ecosystems, and as such can play a critical role in the near-future to help mitigate climate change effects by uptake of atmospheric anthropogenic carbon dioxide.
However, the economic importance of coasts makes many of these communities vulnerable to climate change, which causes increases in extreme weather and sea level rise, and related issues such as coastal erosion, saltwater intrusion and coastal flooding. Other coastal issues, such as marine pollution, marine debris, coastal development, and marine ecosystem destruction, further complicate the human uses of the coast and threaten coastal ecosystems. The interactive effects of climate change, habitat destruction, overfishing and water pollution (especially eutrophication) have led to the demise of coastal ecosystems around the globe. This has resulted in population collapse of fisheries stocks, loss of biodiversity, increased invasion of alien species, and loss of healthy habitats. International attention to these issues has been captured in Sustainable Development Goal 14 "Life Below Water" which sets goals for international policy focused on preserving marine coastal ecosystems and supporting more sustainable economic practices for coastal communities. Likewise, the United Nations has declared 2021-2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems has received insufficient attention.
Because coasts are constantly changing, a coastline's exact perimeter cannot be determined; this measurement challenge is called the coastline paradox. The term coastal zone is used to refer to a region where interactions of sea and land processes occur. Both the terms coast and coastal are often used to describe a geographic location or region located on a coastline (e.g., New Zealand's West Coast, or the East, West, and Gulf Coast of the United States.) Coasts with a narrow continental shelf that are close to the open ocean are called pelagic coast, while other coasts are more sheltered coast in a gulf or bay. A shore, on the other hand, may refer to parts of land adjoining any large body of water, including oceans (sea shore) and lakes (lake shore).
Size
The Earth has hundreds of thousands of kilometres of coastline. Coastal habitats, which extend to the margins of the continental shelves, make up about 7 percent of the Earth's oceans, but at least 85% of commercially harvested fish depend on coastal environments during at least part of their life cycle. About 2.86% of exclusive economic zones are part of marine protected areas.
The definition of coasts varies. Marine scientists think of the "wet" (aquatic or intertidal) vegetated habitats as being coastal ecosystems (including seagrass, salt marsh, etc.), whilst some terrestrial scientists might only think of coastal ecosystems as purely terrestrial plants that live close to the seashore (see also estuaries and coastal ecosystems).
While there is general agreement in the scientific community regarding the definition of coast, in the political sphere, the delineation of the extents of a coast differ according to jurisdiction. Government authorities in various countries may define coast differently for economic and social policy reasons.
Exact length of coastline
Formation
Tides often determine the range over which sediment is deposited or eroded. Areas with high tidal ranges allow waves to reach farther up the shore, and areas with lower tidal ranges produce deposition at a smaller elevation interval. The tidal range is influenced by the size and shape of the coastline. Tides do not typically cause erosion by themselves; however, tidal bores can erode as the waves surge up the river estuaries from the ocean.
Geologists classify coasts on the basis of tidal range into macrotidal coasts, with a tidal range greater than about 4 m; mesotidal coasts, with a tidal range of about 2 to 4 m; and microtidal coasts, with a tidal range of less than about 2 m. The distinction between macrotidal and mesotidal coasts is the more important one. Macrotidal coasts lack barrier islands and lagoons, and are characterized by funnel-shaped estuaries containing sand ridges aligned with tidal currents. Wave action is much more important for determining bedforms of sediments deposited along mesotidal and microtidal coasts than along macrotidal coasts.
Waves erode coastline as they break on shore releasing their energy; the larger the wave the more energy it releases and the more sediment it moves. Coastlines with longer shores have more room for the waves to disperse their energy, while coasts with cliffs and short shore faces give little room for the wave energy to be dispersed. In these areas, the wave energy breaking against the cliffs is higher, and air and water are compressed into cracks in the rock, forcing the rock apart, breaking it down. Sediment deposited by waves comes from eroded cliff faces and is moved along the coastline by the waves. This forms an abrasion or cliffed coast.
Sediment deposited by rivers is the dominant influence on the amount of sediment along coastlines that have estuaries. Today, riverine deposition at the coast is often blocked by dams and other human regulatory devices, which remove the sediment from the stream by causing it to be deposited inland. Coral reefs are a provider of sediment for the coastlines of tropical islands.
Like the ocean which shapes them, coasts are a dynamic environment with constant change. The Earth's natural processes, particularly sea level rises, waves and various weather phenomena, have resulted in the erosion, accretion and reshaping of coasts as well as flooding and creation of continental shelves and drowned river valleys (rias).
Importance for humans and ecosystems
Human settlements
More and more of the world's people live in coastal regions. According to a United Nations atlas, 44% of all people live within 150 km (93 mi) of the sea. Many major cities are on or near good harbors and have port facilities. Some landlocked places have achieved port status by building canals.
Nations defend their coasts against military invaders, smugglers and illegal migrants. Fixed coastal defenses have long been erected in many nations, and coastal countries typically have a navy and some form of coast guard.
Tourism
Coasts, especially those with beaches and warm water, attract tourists often leading to the development of seaside resort communities. In many island nations such as those of the Mediterranean, South Pacific Ocean and Caribbean, tourism is central to the economy. Coasts offer recreational activities such as swimming, fishing, surfing, boating, and sunbathing.
Growth management and coastal management can be a challenge for coastal local authorities, who often struggle to provide the infrastructure required by new residents, and poor construction practices often leave these communities and infrastructure vulnerable to processes like coastal erosion and sea level rise. In many of these communities, management practices include beach nourishment or, when the coastal infrastructure is no longer financially sustainable, managed retreat to move communities away from the coast.
Ecosystem services
Types
Emergent coastline
According to one principle of classification, an emergent coastline is a coastline that has experienced a fall in sea level, because of either a global sea-level change or local uplift. Emergent coastlines are identifiable by the coastal landforms, which are above the high tide mark, such as raised beaches. In contrast, a submergent coastline is one where the sea level has risen, due to a global sea-level change, local subsidence, or isostatic rebound. Submergent coastlines are identifiable by their submerged, or "drowned" landforms, such as rias (drowned valleys) and fjords.
Concordant coastline
According to the second principle of classification, a concordant coastline is a coastline where bands of different rock types run parallel to the shore. These rock types are usually of varying resistance, so the coastline forms distinctive landforms, such as coves. Discordant coastlines feature distinctive landforms because the rocks are eroded by the ocean waves. The less resistant rocks erode faster, creating inlets or bays; the more resistant rocks erode more slowly, remaining as headlands or outcroppings.
Rivieras
Riviera is an Italian word for "shoreline", ultimately derived from Latin ripa ("riverbank"). It came to be applied as a proper name to the coast of the Ligurian Sea, in the form riviera ligure, then shortened to riviera. Historically, the Ligurian Riviera extended from Capo Corvo (Punta Bianca) south of Genoa, north and west into what is now French territory past Monaco and sometimes as far as Marseilles. Today, this coast is divided into the Italian Riviera and the French Riviera, although the French use the term "Riviera" to refer to the Italian Riviera and call the French portion the "Côte d'Azur".
As a result of the fame of the Ligurian rivieras, the term came into English to refer to any shoreline, especially one that is sunny, topographically diverse and popular with tourists. Such places using the term include the Australian Riviera in Queensland and the Turkish Riviera along the Aegean Sea.
Other coastal categories
A cliffed coast or abrasion coast is one where marine action has produced steep declivities known as cliffs.
A flat coast is one where the land gradually descends into the sea.
A graded shoreline is one where wind and water action has produced a flat and straight coastline.
Landforms
The following articles describe some coastal landforms:
Barrier island
Bay
Headland
Cove
Peninsula
Cliff erosion
Much of the sediment deposited along a coast is the result of erosion of a surrounding cliff, or bluff. Sea cliffs retreat landward because of the constant undercutting of slopes by waves. If the slope/cliff being undercut is made of unconsolidated sediment it will erode at a much faster rate than a cliff made of bedrock.
A natural arch is formed when a headland is eroded through by waves.
Sea caves are made when certain rock beds are more susceptible to erosion than the surrounding rock beds because of different areas of weakness. These areas are eroded at a faster pace creating a hole or crevice that, through time, by means of wave action and erosion, becomes a cave.
A stack is formed when a headland is eroded away by wave and wind action.
A stump is a shortened sea stack that has been eroded away or fallen because of instability.
Wave-cut notches are caused by the undercutting of overhanging slopes which leads to increased stress on cliff material and a greater probability that the slope material will fall. The fallen debris accumulates at the bottom of the cliff and is eventually removed by waves.
A wave-cut platform forms after erosion and retreat of a sea cliff has been occurring for a long time. Gently sloping wave-cut platforms develop early on in the first stages of cliff retreat. Later, the length of the platform decreases because the waves lose their energy as they break further offshore.
Coastal features formed by sediment
Beach
Beach cusps
Cuspate foreland
Dune system
Mudflat
Raised beach
Ria
Shoal
Spit
Strand plain
Surge channel
Tombolo
Coastal features formed by another feature
Estuary
Lagoon
Salt marsh
Mangrove forests
Kelp forests
Coral reefs
Oyster reefs
Other features on the coast
Concordant coastline
Discordant coastline
Fjord
Island
Island arc
Machair
Coastal waters
"Coastal waters" (or "coastal seas") is a rather general term used differently in different contexts, ranging geographically from the waters within a few kilometers of the coast, through to the entire continental shelf which may stretch for more than a hundred kilometers from land. Thus the term coastal waters is used in a slightly different way in discussions of legal and economic boundaries (see territorial waters and international waters) or when considering the geography of coastal landforms or the ecological systems operating through the continental shelf (marine coastal ecosystems). The research on coastal waters often divides into these separate areas too.
The dynamic fluid nature of the ocean means that all components of the whole ocean system are ultimately connected, although certain regional classifications are useful and relevant. The waters of the continental shelves represent such a region. The term "coastal waters" has been used in a wide variety of different ways in different contexts. In European Union environmental management it extends from the coast to just a few nautical miles while in the United States the US EPA considers this region to extend much further offshore.
"Coastal waters" has specific meanings in the context of commercial coastal shipping, and somewhat different meanings in the context of naval littoral warfare. Oceanographers and marine biologists have yet other takes. Coastal waters have a wide range of marine habitats from enclosed estuaries to the open waters of the continental shelf.
Similarly, the term littoral zone has no single definition. It is the part of a sea, lake, or river that is close to the shore. In coastal environments, the littoral zone extends from the high water mark, which is rarely inundated, to shoreline areas that are permanently submerged.
Coastal waters can be threatened by coastal eutrophication and harmful algal blooms.
In geology
The identification of bodies of rock formed from sediments deposited in shoreline and nearshore environments (shoreline and nearshore facies) is extremely important to geologists. These provide vital clues for reconstructing the geography of ancient continents (paleogeography). The locations of these beds show the extent of ancient seas at particular points in geological time, and provide clues to the magnitudes of tides in the distant past.
Sediments deposited in the shoreface are preserved as lenses of sandstone in which the upper part of the sandstone is coarser than the lower part (a coarsening upwards sequence). Geologists refer to these as parasequences. Each records an episode of retreat of the ocean from the shoreline over a period of 10,000 to 1,000,000 years. These often show laminations reflecting various kinds of tidal cycles.
Some of the best-studied shoreline deposits in the world are found along the former western shore of the Western Interior Seaway, a shallow sea that flooded central North America during the late Cretaceous Period (about 100 to 66 million years ago). These are beautifully exposed along the Book Cliffs of Utah and Colorado.
Geologic processes
The following articles describe the various geologic processes that affect a coastal zone:
Attrition
Currents
Denudation
Deposition
Erosion
Flooding
Longshore drift
Marine sediments
Saltation
Sea level change
eustatic
isostatic
Sedimentation
Coastal sediment supply
sediment transport
solution
subaerial processes
suspension
Tides
Water waves
diffraction
refraction
wave breaking
wave shoaling
Weathering
Wildlife
Animals
Larger animals that live in coastal areas include puffins, sea turtles and rockhopper penguins, among many others. Sea snails and various kinds of barnacles live on rocky coasts and scavenge on food deposited by the sea. Some coastal animals are accustomed to humans in developed areas, such as dolphins and seagulls that eat food thrown to them by tourists. Since the coastal areas are all part of the littoral zone, there is a profusion of marine life found just off-coast, including sessile animals such as corals, sponges, starfish, mussels, seaweeds, fishes, and sea anemones.
There are many kinds of seabirds on various coasts. These include pelicans and cormorants, who join up with terns and oystercatchers to forage for fish and shellfish. There are sea lions on the coast of Wales and other countries.
Coastal fish
Plants
Many coastal areas are famous for their kelp beds. Kelp is a fast-growing seaweed that can grow up to half a meter a day in ideal conditions. Mangroves, seagrasses, macroalgal beds, and salt marshes are important types of coastal vegetation; mangroves are largely restricted to tropical and subtropical environments, while salt marshes are more typical of temperate regions. Restinga is another type of coastal vegetation.
Threats
Coasts also face many human-induced environmental impacts and coastal development hazards. The most important ones are:
Pollution which can be in the form of water pollution, nutrient pollution (leading to coastal eutrophication and harmful algal blooms), oil spills or marine debris that is contaminating coasts with plastic and other trash.
Sea level rise, and associated issues like coastal erosion and saltwater intrusion.
Pollution
The pollution of coastlines is connected to marine pollution which can occur from a number of sources: Marine debris (garbage and industrial debris); the transportation of petroleum in tankers, increasing the probability of large oil spills; small oil spills created by large and small vessels, which flush bilge water into the ocean.
Marine pollution
Marine debris
Microplastics
Sea level rise due to climate change
Global goals
International attention to address the threats of coasts has been captured in Sustainable Development Goal 14 "Life Below Water" which sets goals for international policy focused on preserving marine coastal ecosystems and supporting more sustainable economic practices for coastal communities. Likewise, the United Nations has declared 2021-2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems has received insufficient attention.
See also
Bank (geography)
Beach cleaning
Coastal and Estuarine Research Federation
European Atlas of the Seas
Intertidal zone
Land reclamation
List of countries by length of coastline
List of U.S. states by coastline
Offshore or Intertidal zone
Ballantine Scale
Coastal path
Shorezone
References
External links
Woods Hole Oceanographic Institution - organization dedicated to ocean research, exploration, and education
Coastal and oceanic landforms
Coastal geography
Oceanographical terminology
Articles containing video clips
|
https://en.wikipedia.org/wiki/Cipher
|
In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography.
Codes generally substitute different-length strings of characters in the output, while ciphers generally substitute the same number of characters as are input. A code maps one meaning to another. Words and phrases can be coded as letters or numbers, and codes typically map directly from input to meaning; historically, they primarily functioned to save time. Ciphers, by contrast, are algorithmic: the given input must follow the cipher's process to be solved. Ciphers are commonly used to encrypt written information.
Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates." When using a cipher the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it.
The operation of a cipher usually depends on a piece of auxiliary information, called a key (or, in traditional NSA parlance, a cryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message. Without knowledge of the key, it should be extremely difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext.
Most modern ciphers can be categorized in several ways
By whether they work on blocks of symbols usually of a fixed size (block ciphers), or on a continuous stream of symbols (stream ciphers).
By whether the same key is used for both encryption and decryption (symmetric key algorithms), or if a different key is used for each (asymmetric key algorithms). If the algorithm is symmetric, the key must be known to the recipient and sender and to no one else. If the algorithm is an asymmetric one, the enciphering key is different from, but closely related to, the deciphering key. If one key cannot be deduced from the other, the asymmetric key algorithm has the public/private key property and one of the keys may be made public without loss of confidentiality.
Etymology
Originating from the Arabic word for zero صفر (sifr), the word "cipher" spread to Europe as part of the Arabic numeral system during the Middle Ages. The Roman numeral system lacked the concept of zero, and this limited advances in mathematics. In this transition, the word was adopted into Medieval Latin as cifra, and then into Middle French as cifre. This eventually led to the English word cipher (minority spelling cypher). One theory for how the term came to refer to encoding is that the concept of zero was confusing to Europeans, and so the term came to refer to a message or communication that was not easily understood.
The term cipher was later also used to refer to any Arabic digit, or to calculation using them, so encoding text in the form of Arabic numerals is literally converting the text to "ciphers".
Versus codes
In casual contexts, "code" and "cipher" can typically be used interchangeably, however, the technical usages of the words refer to different concepts. Codes contain meaning; words and phrases are assigned to numbers or symbols, creating a shorter message.
An example of this is the commercial telegraph code which was used to shorten long telegraph messages which resulted from entering into commercial contracts using exchanges of telegrams.
Another example is given by whole word ciphers, which allow the user to replace an entire word with a symbol or character, much like the way written Japanese utilizes Kanji (meaning Chinese characters in Japanese) characters to supplement the native Japanese characters representing syllables. An example using English language with Kanji could be to replace "The quick brown fox jumps over the lazy dog" by "The quick brown 狐 jumps 上 the lazy 犬". Stenographers sometimes use specific symbols to abbreviate whole words.
Ciphers, on the other hand, work at a lower level: the level of individual letters, small groups of letters, or, in modern schemes, individual bits and blocks of bits. Some systems used both codes and ciphers in one system, using superencipherment to increase the security. In some cases the terms codes and ciphers are used synonymously with substitution and transposition, respectively.
Historically, cryptography was split into a dichotomy of codes and ciphers, while coding had its own terminology analogous to that of ciphers: "encoding, codetext, decoding" and so on.
However, codes have a variety of drawbacks, including susceptibility to cryptanalysis and the difficulty of managing a cumbersome codebook. Because of this, codes have fallen into disuse in modern cryptography, and ciphers are the dominant technique.
Types
There are a variety of different types of encryption. Algorithms used earlier in the history of cryptography are substantially different from modern methods, and modern ciphers can be classified according to how they operate and whether they use one or two keys.
Historical
The Caesar Cipher is one of the earliest known cryptographic systems. Julius Caesar used a cipher that shifts each letter of the alphabet three places, wrapping the last letters around to the front, to write to Marcus Tullius Cicero in approximately 50 BC.[11]
Historical pen and paper ciphers used in the past are sometimes known as classical ciphers. They include simple substitution ciphers (such as ROT13) and transposition ciphers (such as a Rail Fence Cipher). For example, "GOOD DOG" can be encrypted as "PLLX XLP" where "L" substitutes for "O", "P" for "G", and "X" for "D" in the message. Transposition of the letters "GOOD DOG" can result in "DGOGDOO". These simple ciphers and examples are easy to crack, even without plaintext-ciphertext pairs.
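These classical schemes are short enough to express directly in code. The following Python sketch reproduces the substitution example above and, for comparison, a three-place Caesar shift over the usual 26-letter alphabet (an illustrative sketch, not a historical reconstruction):

    # The substitution from the example above: O -> L, G -> P, D -> X.
    SUBSTITUTION = str.maketrans("ODG", "LXP")
    assert "GOOD DOG".translate(SUBSTITUTION) == "PLLX XLP"

    # A Caesar cipher is the special case in which the substitution alphabet
    # is the ordinary alphabet shifted by a fixed amount (three, for Caesar).
    def caesar(text, shift=3):
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord("A") if ch.isupper() else ord("a")
                out.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                out.append(ch)
        return "".join(out)

    assert caesar("GOOD DOG") == "JRRG GRJ"
    assert caesar(caesar("GOOD DOG"), shift=23) == "GOOD DOG"   # shifting by 26 - 3 undoes it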
William Shakespeare often used the concept of ciphers in his writing to symbolize nothingness. In Shakespeare's Henry V, he relates one of the accounting methods that brought the Arabic Numeral system and zero to Europe, to the human imagination. The actors who perform this play were not at the battles of Henry V's reign, so they represent absence. In another sense, ciphers are important to people who work with numbers, but they do not hold value. Shakespeare used this concept to outline how those who counted and identified the dead from the battles used that information as a political weapon, furthering class biases and xenophobia.
In the 1640s, the Parliamentarian commander, Edward Montagu, 2nd Earl of Manchester, developed ciphers to send coded messages to his allies during the English Civil War.
Simple ciphers were replaced by polyalphabetic substitution ciphers (such as the Vigenère) which changed the substitution alphabet for every letter. For example, "GOOD DOG" can be encrypted as "PLSX TWF" where "L", "S", and "W" substitute for "O". With even a small amount of known or estimated plaintext, simple polyalphabetic substitution ciphers and letter transposition ciphers designed for pen and paper encryption are easy to crack. It is possible to create a secure pen and paper cipher based on a one-time pad though, but the usual disadvantages of one-time pads apply.
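A polyalphabetic substitution such as the Vigenère can likewise be sketched in a few lines of Python. The key "CIPHER" below is an arbitrary choice for illustration and is not the key behind the "PLSX TWF" example above:

    def vigenere(text, key, decrypt=False):
        # Each letter is shifted by an amount taken from the key, which is
        # repeated as needed; the key only advances on alphabetic characters.
        out, k = [], 0
        for ch in text:
            if ch.isalpha():
                shift = ord(key[k % len(key)].upper()) - ord("A")
                if decrypt:
                    shift = -shift
                base = ord("A") if ch.isupper() else ord("a")
                out.append(chr((ord(ch) - base + shift) % 26 + base))
                k += 1
            else:
                out.append(ch)
        return "".join(out)

    ciphertext = vigenere("GOOD DOG", "CIPHER")
    assert ciphertext == "IWDK HFI"
    assert vigenere(ciphertext, "CIPHER", decrypt=True) == "GOOD DOG"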
During the early twentieth century, electro-mechanical machines were invented to do encryption and decryption using transposition, polyalphabetic substitution, and a kind of "additive" substitution. In rotor machines, several rotor disks provided polyalphabetic substitution, while plug boards provided another substitution. Keys were easily changed by changing the rotor disks and the plugboard wires. Although these encryption methods were more complex than previous schemes and required machines to encrypt and decrypt, other machines such as the British Bombe were invented to crack these encryption methods.
Modern
Modern encryption methods can be divided by two criteria: by type of key used, and by type of input data.
By type of key used ciphers are divided into:
symmetric key algorithms (Private-key cryptography), where one same key is used for encryption and decryption, and
asymmetric key algorithms (Public-key cryptography), where two different keys are used for encryption and decryption.
In a symmetric key algorithm (e.g., DES and AES), the sender and receiver must have a shared key set up in advance and kept secret from all other parties; the sender uses this key for encryption, and the receiver uses the same key for decryption. The design of AES (the Advanced Encryption Standard) was beneficial because it aimed to overcome the flaws in the design of DES (the Data Encryption Standard). AES's designers claim that the common means of modern cipher cryptanalytic attacks are ineffective against AES due to its design structure.[12]
Ciphers can be distinguished into two types by the type of input data:
block ciphers, which encrypt blocks of data of fixed size, and
stream ciphers, which encrypt continuous streams of data (a toy example follows this list).
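The symmetric, stream-oriented case can be illustrated with a deliberately simplified construction (a toy for exposition only; it is not a secure design and not any standardized algorithm). Because XOR is its own inverse, the same key and the same function both encrypt and decrypt:

    import hashlib

    def toy_stream_cipher(data: bytes, key: bytes) -> bytes:
        # Derive a keystream from the key with a hash-based counter and
        # XOR it with the data; applying the function twice with the same
        # key returns the original bytes.
        keystream = bytearray()
        counter = 0
        while len(keystream) < len(data):
            block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            keystream.extend(block)
            counter += 1
        return bytes(b ^ k for b, k in zip(data, keystream))

    key = b"shared secret key"
    ciphertext = toy_stream_cipher(b"attack at dawn", key)
    assert toy_stream_cipher(ciphertext, key) == b"attack at dawn"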
Key size and vulnerability
In a pure mathematical attack (i.e., lacking any other information to help break a cipher), two factors above all count:
Computational power available, i.e., the computing power which can be brought to bear on the problem. It is important to note that average performance/capacity of a single computer is not the only factor to consider. An adversary can use multiple computers at once, for instance, to increase the speed of exhaustive search for a key (i.e., "brute force" attack) substantially.
Key size, i.e., the size of key used to encrypt a message. As the key size increases, so does the complexity of exhaustive search to the point where it becomes impractical to crack encryption directly.
Since the desired effect is computational difficulty, in theory one would choose an algorithm and desired difficulty level, and then decide the key length accordingly.
An example of this process can be found at Key Length, which uses multiple reports to suggest that a symmetric cipher with 128-bit keys, an asymmetric cipher with 3072-bit keys, and an elliptic-curve cipher with 256-bit keys all have similar difficulty at present.
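The effect of key size on exhaustive search can be made concrete with a back-of-the-envelope calculation (the search rate below is a purely hypothetical figure chosen for illustration):

    keys = 2 ** 128                        # keyspace of a 128-bit symmetric cipher
    rate = 10 ** 12                        # hypothetical: one trillion keys per second
    seconds_per_year = 60 * 60 * 24 * 365
    years = keys / (rate * seconds_per_year)
    print(f"{years:.2e} years")            # on the order of 10**19 years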
Claude Shannon proved, using information theory considerations, that any theoretically unbreakable cipher must have keys which are at least as long as the plaintext, and used only once: one-time pad.
See also
Autokey cipher
Cover-coding
Encryption software
List of ciphertexts
Steganography
Telegraph code
Notes
References
Richard J. Aldrich, GCHQ: The Uncensored Story of Britain's Most Secret Intelligence Agency, HarperCollins July 2010.
Helen Fouché Gaines, "Cryptanalysis", 1939, Dover.
Ibrahim A. Al-Kadi, "The origins of cryptology: The Arab contributions", Cryptologia, 16(2) (April 1992) pp. 97–126.
David Kahn, The Codebreakers - The Story of Secret Writing (1967)
David A. King, The ciphers of the monks - A forgotten number notation of the Middle Ages, Stuttgart: Franz Steiner, 2001
Abraham Sinkov, Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America, 1966.
William Stallings, Cryptography and Network Security: Principles and Practices, 4th Edition
"Ciphers vs. Codes (Article) | Cryptography." Khan Academy, Khan Academy, https://www.khanacademy.org/computing/computer-science/cryptography/ciphers/a/ciphers-vs-codes.
Caldwell, William Casey. "Shakespeare's Henry V and the Ciphers of History." SEL Studies in English Literature, 1500-1900, vol. 61, no. 2, 2021, pp. 241–68. EBSCOhost.
Luciano, Dennis, and Gordon Prichett. "Cryptology: From Caesar Ciphers to Public-Key Cryptosystems." The College Mathematics Journal, vol. 18, no. 1, 1987, pp. 2–17. JSTOR, https://doi.org/10.2307/2686311. Accessed 19 Feb. 2023.
Ho Yean Li, et al. "Heuristic Cryptanalysis of Classical and Modern Ciphers." 2005 13th IEEE International Conference on Networks, jointly held with the 2005 IEEE 7th Malaysia International Conference on Communications, vol. 2, Jan. 2005. EBSCOhost.
External links
Kish cypher
Cryptography
|
https://en.wikipedia.org/wiki/Constellation
|
A constellation is an area on the celestial sphere in which a group of visible stars forms a perceived pattern or outline, typically representing an animal, mythological subject, or inanimate object.
The origins of the earliest constellations likely go back to prehistory. People used them to relate stories of their beliefs, experiences, creation, or mythology. Different cultures and countries invented their own constellations, some of which lasted into the early 20th century before today's constellations were internationally recognized. The recognition of constellations has changed significantly over time. Many changed in size or shape. Some became popular, only to drop into obscurity. Some were limited to a single culture or nation. Naming constellations also helped astronomers and navigators identify stars more easily.
Twelve (or thirteen) ancient constellations belong to the zodiac (straddling the ecliptic, which the Sun, Moon, and planets all traverse). The origins of the zodiac remain historically uncertain; its astrological divisions became prominent around 400 BC in Babylonian or Chaldean astronomy. Constellations appear in Western culture via Greece and are mentioned in the works of Hesiod, Eudoxus and Aratus. The traditional 48 constellations, consisting of the Zodiac and 36 more (now 38, following the division of Argo Navis into three constellations), are listed by Ptolemy, a Greco-Roman astronomer from Alexandria, Egypt, in his Almagest. The formation of constellations was the subject of extensive mythology, most notably in the Metamorphoses of the Latin poet Ovid. Constellations in the far southern sky were added from the 15th century until the mid-18th century, when European explorers began traveling to the Southern Hemisphere. Due to Roman and European transmission, each constellation has a Latin name.
In 1922, the International Astronomical Union (IAU) formally accepted the modern list of 88 constellations, and in 1928 adopted official constellation boundaries that together cover the entire celestial sphere. Any given point in a celestial coordinate system lies in one of the modern constellations. Some astronomical naming systems include the constellation where a given celestial object is found to convey its approximate location in the sky. The Flamsteed designation of a star, for example, consists of a number and the genitive form of the constellation's name.
Other star patterns or groups called asterisms are not constellations under the formal definition, but are also used by observers to navigate the night sky. Asterisms may be several stars within a constellation, or they may share stars with more than one constellation. Examples of asterisms include the Teapot within the constellation Sagittarius and the Big Dipper in the constellation Ursa Major.
Terminology
The word constellation comes from the Late Latin term cōnstellātiō, which can be translated as "set of stars"; it came into use in Middle English during the 14th century. The Ancient Greek word for constellation is ἄστρον (astron). These terms historically referred to any recognisable pattern of stars whose appearance was associated with mythological characters or creatures, earthbound animals, or objects. Over time, among European astronomers, the constellations became clearly defined and widely recognised. Today, there are 88 IAU designated constellations.
A constellation or star that never sets below the horizon when viewed from a particular latitude on Earth is termed circumpolar. From the North Pole or South Pole, all constellations north or south of the celestial equator, respectively, are circumpolar. Depending on the definition, equatorial constellations may include those that lie between declinations 45° north and 45° south, or those that pass through the declination range of the ecliptic or zodiac, ranging between 23½° north, the celestial equator, and 23½° south.
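Ignoring atmospheric refraction, the circumpolar rule reduces to simple arithmetic: for an observer at geographic latitude φ, a star never sets when its declination lies at least 90° minus |φ| on the observer's side of the celestial equator, and never rises when it lies at least that far on the opposite side. The hypothetical helper below is a minimal sketch of that geometry.

    # A minimal sketch of the circumpolar test, ignoring atmospheric
    # refraction and assuming an ideal, unobstructed horizon.
    def sky_visibility(declination_deg: float, latitude_deg: float) -> str:
        """Classify a star as circumpolar, never visible, or rising and
        setting, for an observer at the given latitude (positive north)."""
        limit = 90.0 - abs(latitude_deg)
        # Flip the sign convention for observers in the southern hemisphere.
        dec = declination_deg if latitude_deg >= 0 else -declination_deg
        if dec >= limit:
            return "circumpolar (never sets)"
        if dec <= -limit:
            return "never rises"
        return "rises and sets"

    # Polaris (declination about +89 degrees) from London (latitude about 51.5 N):
    print(sky_visibility(89.3, 51.5))   # circumpolar (never sets)
    # The same star from Sydney (latitude about 33.9 S):
    print(sky_visibility(89.3, -33.9))  # never rises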
Stars in constellations can appear near each other in the sky, but they usually lie at a variety of distances away from the Earth. Since each star has its own independent motion, all constellations will change slowly over time. After tens to hundreds of thousands of years, familiar outlines will become unrecognizable. Astronomers can predict the past or future constellation outlines by measuring individual stars' common proper motions (cpm) with accurate astrometry and their radial velocities by astronomical spectroscopy.
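As a first-order illustration of how proper motion reshapes constellations, a star's coordinates can be extrapolated linearly by multiplying its proper-motion rate by the elapsed time. The sketch below treats the proper-motion components as direct rates of change of the coordinates and ignores radial velocity, perspective effects, and precession; the values for Arcturus, one of the fastest-moving bright stars, are approximate.

    # A first-order sketch of proper-motion extrapolation. The components
    # are treated as direct rates of change of the coordinates, and radial
    # velocity, perspective acceleration, and precession are ignored.
    MAS_PER_DEGREE = 3_600_000  # milliarcseconds in one degree

    def extrapolate_position(ra_deg, dec_deg, pm_ra_mas_yr, pm_dec_mas_yr, years):
        """Return the approximate (RA, Dec) in degrees after `years` years."""
        new_ra = ra_deg + (pm_ra_mas_yr * years) / MAS_PER_DEGREE
        new_dec = dec_deg + (pm_dec_mas_yr * years) / MAS_PER_DEGREE
        return new_ra, new_dec

    # Arcturus moves by roughly -1100 mas/yr in RA and -2000 mas/yr in Dec,
    # so over 50,000 years its position shifts by tens of degrees.
    print(extrapolate_position(213.9, 19.2, -1100, -2000, 50_000))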
Identification
The 88 constellations recognized by the International Astronomical Union, as well as those that cultures have recognized throughout history, are imagined figures and shapes derived from the patterns of stars in the observable sky. Many officially recognized constellations are based on the imaginations of ancient Near Eastern and Mediterranean mythologies. H. A. Rey, who wrote popular books on astronomy, pointed out the imaginative nature of the constellations, their mythological and artistic basis, and the practical use of identifying them through definite images according to the classical names they were given.
History of the early constellations
Lascaux Caves, southern France
It has been suggested that the 17,000-year-old cave paintings in Lascaux, southern France, depict star constellations such as Taurus, Orion's Belt, and the Pleiades. However, this view is not generally accepted among scientists.
Mesopotamia
Inscribed stones and clay writing tablets from Mesopotamia (in modern Iraq) dating to 3000 BC provide the earliest generally accepted evidence for humankind's identification of constellations. It seems that the bulk of the Mesopotamian constellations were created within a relatively short interval from around 1300 to 1000 BC. Many of the Mesopotamian constellations later reappeared among the classical Greek constellations.
Ancient Near East
The oldest Babylonian catalogues of stars and constellations date back to the beginning of the Middle Bronze Age, most notably the Three Stars Each texts and the MUL.APIN, an expanded and revised version based on more accurate observation from around 1000 BC. However, the numerous Sumerian names in these catalogues suggest that they built on older, but otherwise unattested, Sumerian traditions of the Early Bronze Age.
The classical Zodiac is a revision of Neo-Babylonian constellations from the 6th century BC. The Greeks adopted the Babylonian constellations in the 4th century BC. Twenty Ptolemaic constellations are from the Ancient Near East. Another ten have the same stars but different names.
Biblical scholar E. W. Bullinger interpreted some of the creatures mentioned in the books of Ezekiel and Revelation as the middle signs of the four-quarters of the Zodiac, with the Lion as Leo, the Bull as Taurus, the Man representing Aquarius, and the Eagle standing in for Scorpio. The biblical Book of Job also makes reference to a number of constellations, including "bier", "fool" and "heap" (Job 9:9, 38:31–32), rendered as "Arcturus, Orion and Pleiades" by the KJV, though ‘Ayish "the bier" actually corresponds to Ursa Major. The term Mazzaroth, translated as a garland of crowns, is a hapax legomenon in Job 38:32 and might refer to the zodiacal constellations.
Classical antiquity
There is only limited information on ancient Greek constellations, with some fragmentary evidence being found in the Works and Days of the Greek poet Hesiod, who mentioned the "heavenly bodies". Greek astronomy essentially adopted the older Babylonian system in the Hellenistic era, first introduced to Greece by Eudoxus of Cnidus in the 4th century BC. The original work of Eudoxus is lost, but it survives as a versification by Aratus, dating to the 3rd century BC. The most complete existing works dealing with the mythical origins of the constellations are by the Hellenistic writer termed pseudo-Eratosthenes and an early Roman writer styled pseudo-Hyginus. The basis of Western astronomy as taught during Late Antiquity and until the Early Modern period is the Almagest by Ptolemy, written in the 2nd century.
In the Ptolemaic Kingdom, a native Egyptian tradition of anthropomorphic figures represented the planets, stars, and various constellations. Some of these were combined with Greek and Babylonian astronomical systems, culminating in the Zodiac of Dendera; it remains unclear when this occurred, but most were placed during the Roman period between the 2nd and 4th centuries AD. It is the oldest known depiction of the zodiac showing all the now-familiar constellations, along with some original Egyptian constellations, decans, and planets. Ptolemy's Almagest remained the standard definition of constellations in the medieval period both in Europe and in Islamic astronomy.
Ancient China
Ancient China had a long tradition of observing celestial phenomena. Nonspecific Chinese star names, later categorized in the twenty-eight mansions, have been found on oracle bones from Anyang, dating back to the middle Shang dynasty. These constellations are some of the most important observations of the Chinese sky, attested from the 5th century BC. Parallels to the earliest Babylonian (Sumerian) star catalogues suggest that the ancient Chinese system did not arise independently.
Three schools of classical Chinese astronomy in the Han period are attributed to astronomers of the earlier Warring States period. The constellations of the three schools were conflated into a single system by Chen Zhuo, an astronomer of the 3rd century (Three Kingdoms period). Chen Zhuo's work has been lost, but information on his system of constellations survives in Tang period records, notably by Qutan Xida. The oldest extant Chinese star chart dates to that period and was preserved as part of the Dunhuang Manuscripts. Native Chinese astronomy flourished during the Song dynasty, and during the Yuan dynasty became increasingly influenced by medieval Islamic astronomy (see Treatise on Astrology of the Kaiyuan Era). As maps were prepared during this period on more scientific lines, they were considered more reliable.
A well-known map from the Song period is the Suzhou Astronomical Chart, which was prepared with carvings of stars on the planisphere of the Chinese sky on a stone plate; it was drawn accurately from observations and shows the supernova of 1054 in Taurus.
Influenced by European astronomy during the late Ming dynasty, charts depicted more stars but retained the traditional constellations. Newly observed stars were incorporated as supplements to the old constellations, including stars in the southern sky that had not been recorded by ancient Chinese astronomers. Further improvements were made during the later part of the Ming dynasty by Xu Guangqi and the German Jesuit Johann Adam Schall von Bell, and were recorded in the Chongzhen Lishu (Calendrical Treatise of the Chongzhen period, 1628). Traditional Chinese star maps incorporated 23 new constellations with 125 stars of the southern hemisphere of the sky based on the knowledge of Western star charts; with this improvement, the Chinese sky was integrated with world astronomy.
Ancient Greece
Many well-known constellations also have histories that connect to ancient Greece.
Early modern astronomy
Historically, the origins of the constellations of the northern and southern skies are distinctly different. Most northern constellations date to antiquity, with names based mostly on Classical Greek legends. Evidence of these constellations has survived in the form of star charts, whose oldest representation appears on the statue known as the Farnese Atlas, based perhaps on the star catalogue of the Greek astronomer Hipparchus. Southern constellations are more modern inventions, sometimes as substitutes for ancient constellations (e.g. Argo Navis). Some southern constellations had long names that were shortened to more usable forms; e.g. Musca Australis became simply Musca.
Some of the early constellations were never universally adopted. Stars were often grouped into constellations differently by different observers, and the arbitrary constellation boundaries often led to confusion as to which constellation a celestial object belonged to. Before astronomers delineated precise boundaries (starting in the 19th century), constellations generally appeared as ill-defined regions of the sky. They now follow officially accepted, designated lines of right ascension and declination based on those defined by Benjamin Gould for epoch 1875.0 in his star catalogue Uranometria Argentina.
The 1603 star atlas "Uranometria" of Johann Bayer assigned stars to individual constellations and formalized the division by assigning a series of Greek and Latin letters to the stars within each constellation. These are known today as Bayer designations. Subsequent star atlases led to the development of today's accepted modern constellations.
Origin of the southern constellations
The southern sky, below about −65° declination, was only partially catalogued by ancient Babylonians, Egyptians, Greeks, Chinese, and Persian astronomers of the north. The knowledge that northern and southern star patterns differed goes back to Classical writers, who describe, for example, the African circumnavigation expedition commissioned by Egyptian Pharaoh Necho II in c. 600 BC and those of Hanno the Navigator in c. 500 BC.
The history of southern constellations is not straightforward. Different groupings and different names were proposed by various observers, some reflecting national traditions or designed to promote various sponsors. Southern constellations were important from the 14th to 16th centuries, when sailors used the stars for celestial navigation. Italian explorers who recorded new southern constellations include Andrea Corsali, Antonio Pigafetta, and Amerigo Vespucci.
Many of the 88 IAU-recognized constellations in this region first appeared on celestial globes developed in the late 16th century by Petrus Plancius, based mainly on observations by the Dutch navigators Pieter Dirkszoon Keyser and Frederick de Houtman. These became widely known through Johann Bayer's star atlas Uranometria of 1603. Fourteen more were created by the French astronomer Nicolas Louis de Lacaille, who also split the ancient constellation Argo Navis into three; these new figures first appeared in his star catalogue, published in 1756, and were given their definitive names in 1763.
Several modern proposals have not survived. The French astronomers Pierre Lemonnier and Joseph Lalande, for example, proposed constellations that were once popular but have since been dropped. The northern constellation Quadrans Muralis survived into the 19th century (when its name was attached to the Quadrantid meteor shower), but is now divided between Boötes and Draco.
88 modern constellations
A list of 88 constellations was produced for the International Astronomical Union in 1922. It is roughly based on the traditional Greek constellations listed by Ptolemy in his Almagest in the 2nd century and Aratus' work Phenomena, with early modern modifications and additions (most importantly introducing constellations covering the parts of the southern sky unknown to Ptolemy) by Petrus Plancius (1592, 1597/98 and 1613), Johannes Hevelius (1690) and Nicolas Louis de Lacaille (1763), who introduced fourteen new constellations. Lacaille studied the stars of the southern hemisphere from 1751 until 1752 from the Cape of Good Hope, when he was said to have observed more than 10,000 stars using a small refracting telescope.
In 1922, Henry Norris Russell produced a list of 88 constellations with three-letter abbreviations for them. However, these constellations did not have clear borders between them. In 1928, the International Astronomical Union (IAU) formally accepted 88 modern constellations, with contiguous boundaries along vertical and horizontal lines of right ascension and declination developed by Eugene Delporte that, together, cover the entire celestial sphere; this list was finally published in 1930. Where possible, these modern constellations usually share the names of their Graeco-Roman predecessors, such as Orion, Leo or Scorpius. The aim of this system is area-mapping, i.e. the division of the celestial sphere into contiguous fields. Out of the 88 modern constellations, 36 lie predominantly in the northern sky, and the other 52 predominantly in the southern.
The boundaries developed by Delporte used data that originated back to epoch B1875.0, which was when Benjamin A. Gould first made his proposal to designate boundaries for the celestial sphere, a suggestion on which Delporte based his work. The consequence of this early date is that because of the precession of the equinoxes, the borders on a modern star map, such as epoch J2000, are already somewhat skewed and no longer perfectly vertical or horizontal. This effect will increase over the years and centuries to come.
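A back-of-the-envelope sketch of the size of this effect: the equinox drifts at a general precession rate of roughly 50.3 arcseconds per year, so between the boundary epoch B1875.0 and the modern epoch J2000.0 the reference frame has rotated by nearly 1.75 degrees, which is why Delporte's once-straight borders no longer align with a modern coordinate grid.

    # A rough estimate of the accumulated precession between the boundary
    # epoch B1875.0 and J2000.0, assuming the commonly quoted general
    # precession rate of about 50.3 arcseconds per year.
    GENERAL_PRECESSION_ARCSEC_PER_YEAR = 50.3

    def accumulated_precession_deg(epoch_from: float, epoch_to: float) -> float:
        """Approximate drift of the equinox, in degrees, between two epochs."""
        return GENERAL_PRECESSION_ARCSEC_PER_YEAR * (epoch_to - epoch_from) / 3600.0

    print(accumulated_precession_deg(1875.0, 2000.0))  # about 1.75 degrees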
Symbols
The constellations have no official symbols, though those of the ecliptic may take the signs of the zodiac. Symbols for the other modern constellations, as well as older ones that still occur in modern nomenclature, have occasionally been published.
Dark cloud constellations
The Great Rift, a series of dark patches in the Milky Way, is more visible and striking in the southern hemisphere than in the northern. It vividly stands out when conditions are otherwise so dark that the Milky Way's central region casts shadows on the ground. Some cultures have discerned shapes in these patches and have given names to these "dark cloud constellations". Members of the Inca civilization identified various dark areas or dark nebulae in the Milky Way as animals and associated their appearance with the seasonal rains. Australian Aboriginal astronomy also describes dark cloud constellations, the most famous being the "emu in the sky" whose head is formed by the Coalsack, a dark nebula, instead of the stars.
List of dark cloud constellations
Great Rift (astronomy)
Emu in the sky
Cygnus Rift
Serpens–Aquila Rift
Dark Horse (astronomy)
Rho Ophiuchi cloud complex
See also
Celestial cartography
Constellation family
Former constellations
IAU designated constellations
Lists of stars by constellation
Constellations listed by Johannes Hevelius
Constellations listed by Lacaille
Constellations listed by Petrus Plancius
Constellations listed by Ptolemy
References
Further reading
Mythology, lore, history, and archaeoastronomy
Allen, Richard Hinckley. (1899) Star-Names And Their Meanings, G. E. Stechert, New York, hardcover; reprint 1963 as Star Names: Their Lore and Meaning, Dover Publications, Inc., Mineola, NY, softcover.
Olcott, William Tyler. (1911) Star Lore of All Ages, G. P. Putnam's Sons, New York, hardcover; reprint 2004 as Star Lore: Myths, Legends, and Facts, Dover Publications, Inc., Mineola, NY, softcover.
Kelley, David H. and Milone, Eugene F. (2004) Exploring Ancient Skies: An Encyclopedic Survey of Archaeoastronomy, Springer, hardcover.
Ridpath, Ian. (2018) Star Tales 2nd ed., Lutterworth Press, softcover.
Staal, Julius D. W. (1988) The New Patterns in the Sky: Myths and Legends of the Stars, McDonald & Woodward Publishing Co., hardcover, softcover.
Atlases and celestial maps
General and nonspecialized – entire celestial heavens
Becvar, Antonin. Atlas Coeli. Published as Atlas of the Heavens, Sky Publishing Corporation, Cambridge, MA, with coordinate grid transparency overlay.
Norton, Arthur Philip. (1910) Norton's Star Atlas, 20th Edition 2003 as Norton's Star Atlas and Reference Handbook, edited by Ridpath, Ian, Pi Press, hardcover.
National Geographic Society. (1957, 1970, 2001, 2007) The Heavens (1970), Cartographic Division of the National Geographic Society (NGS), Washington, DC, two-sided large map chart depicting the constellations of the heavens; as a special supplement to the August 1970 issue of National Geographic. Forerunner map as A Map of The Heavens, as a special supplement to the December 1957 issue. Current version 2001 (Tirion), with 2007 reprint.
Sinnott, Roger W. and Perryman, Michael A.C. (1997) Millennium Star Atlas, Epoch 2000.0, Sky Publishing Corporation, Cambridge, MA, and European Space Agency (ESA), ESTEC, Noordwijk, The Netherlands. Subtitle: "An All-Sky Atlas Comprising One Million Stars to Visual Magnitude Eleven from the Hipparcos and Tycho Catalogues and Ten Thousand Nonstellar Objects". 3 volumes, hardcover. Vol. 1, 0–8 Hours (Right Ascension), hardcover; Vol. 2, 8–16 Hours, hardcover; Vol. 3, 16–24 Hours, hardcover. Softcover version available. Supplemental separate purchasable coordinate grid transparent overlays.
Tirion, Wil; et al. (1987) Uranometria 2000.0, Willmann-Bell, Inc., Richmond, VA, 3 volumes, hardcover. Vol. 1 (1987): "The Northern Hemisphere to −6°", by Wil Tirion, Barry Rappaport, and George Lovi, hardcover, printed boards. Vol. 2 (1988): "The Southern Hemisphere to +6°", by Wil Tirion, Barry Rappaport and George Lovi, hardcover, printed boards. Vol. 3 (1993) as a separate added work: The Deep Sky Field Guide to Uranometria 2000.0, by Murray Cragin, James Lucyk, and Barry Rappaport, hardcover, printed boards. 2nd Edition 2001 as collective set of 3 volumes – Vol. 1: Uranometria 2000.0 Deep Sky Atlas, by Wil Tirion, Barry Rappaport, and Will Remaklus, hardcover, printed boards; Vol. 2: Uranometria 2000.0 Deep Sky Atlas, by Wil Tirion, Barry Rappaport, and Will Remaklus, hardcover, printed boards; Vol. 3: Uranometria 2000.0 Deep Sky Field Guide by Murray Cragin and Emil Bonanno, hardcover, printed boards.
Tirion, Wil and Sinnott, Roger W. (1998) Sky Atlas 2000.0, various editions. 2nd Deluxe Edition, Cambridge University Press, Cambridge, England.
Northern celestial hemisphere and north circumpolar region
Becvar, Antonin. (1962) Atlas Borealis 1950.0, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Praha, Czechoslovakia, 1st Edition, elephant folio hardcover, with small transparency overlay coordinate grid square and separate paper magnitude legend ruler. 2nd Edition 1972 and 1978 reprint, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Prague, Czechoslovakia, and Sky Publishing Corporation, Cambridge, MA, oversize folio softcover spiral-bound, with transparency overlay coordinate grid ruler.
Equatorial, ecliptic, and zodiacal celestial sky
Becvar, Antonin. (1958) Atlas Eclipticalis 1950.0, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Praha, Czechoslovakia, 1st Edition, elephant folio hardcover, with small transparency overlay coordinate grid square and separate paper magnitude legend ruler. 2nd Edition 1974, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Prague, Czechoslovakia, and Sky Publishing Corporation, Cambridge, MA, oversize folio softcover spiral-bound, with transparency overlay coordinate grid ruler.
Southern celestial hemisphere and south circumpolar region
Becvar, Antonin. Atlas Australis 1950.0, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Praha, Czechoslovakia, 1st Edition, hardcover, with small transparency overlay coordinate grid square and separate paper magnitude legend ruler. 2nd Edition, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Prague, Czechoslovakia, and Sky Publishing Corporation, Cambridge, MA, oversize folio softcover spiral-bound, with transparency overlay coordinate grid ruler.
Catalogs
Becvar, Antonin. (1959) Atlas Coeli II Katalog 1950.0, Praha, 1960. Published 1964 as Atlas of the Heavens – II Catalogue 1950.0, Sky Publishing Corporation, Cambridge, MA
Hirshfeld, Alan and Sinnott, Roger W. (1982) Sky Catalogue 2000.0, Cambridge University Press and Sky Publishing Corporation, 1st Edition, 2 volumes. Vol. 1: "Stars to Magnitude 8.0" (Cambridge), hardcover and softcover. Vol. 2 (1985): "Double Stars, Variable Stars, and Nonstellar Objects" (Cambridge), hardcover and softcover. 2nd Edition (1991), with additional third author François Ochsenbein, 2 volumes. Vol. 1 (Cambridge): hardcover and softcover. Vol. 2 (1999, Cambridge): softcover, ISBN 0-933346-38-7, reprint of 1985 edition.
Yale University Observatory. (1908, et al.) Catalogue of Bright Stars, New Haven, CT. Referred to commonly as "Bright Star Catalogue". Various editions with various authors historically, the longest term revising author as (Ellen) Dorrit Hoffleit. 1st Edition 1908. 2nd Edition 1940 by Frank Schlesinger and Louise F. Jenkins. 3rd Edition (1964), 4th Edition, 5th Edition (1991), and 6th Edition (pending posthumous) by Hoffleit.
External links
IAU: The Constellations, including high quality maps.
Atlascoelestis, by Felice Stoppa.
Celestia free 3D realtime space-simulation (OpenGL)
Stellarium realtime sky rendering program (OpenGL)
Strasbourg Astronomical Data Center Files on official IAU constellation boundaries
Studies of Occidental Constellations and Star Names to the Classical Period: An Annotated Bibliography
Table of Constellations
Online Text: Hyginus, Astronomica, translated by Mary Grant (Greco-Roman constellation myths)
Neave Planetarium Adobe Flash interactive web browser planetarium and stardome with realistic movement of stars and the planets.
Audio – Cain/Gay (2009) Astronomy Cast Constellations
The Greek Star-Map short essay by Gavin White
Bucur D. The network signature of constellation line figures. PLOS ONE 17(7): e0272270 (2022). A comparative analysis on the structure of constellation line figures across 56 sky cultures.
Celestial cartography
Constellations
Concepts in astronomy
|
https://en.wikipedia.org/wiki/Copyright
|
A copyright is a type of intellectual property that gives its owner the exclusive right to copy, distribute, adapt, display, and perform a creative work, usually for a limited time. The creative work may be in a literary, artistic, educational, or musical form. Copyright is intended to protect the original expression of an idea in the form of a creative work, but not the idea itself. A copyright is subject to limitations based on public interest considerations, such as the fair use doctrine in the United States.
Some jurisdictions require "fixing" copyrighted works in a tangible form. Copyright is often shared among multiple authors, each of whom holds a set of rights to use or license the work, and who are commonly referred to as rights holders. These rights frequently include reproduction, control over derivative works, distribution, public performance, and moral rights such as attribution.
Copyrights can be granted by public law and are in that case considered "territorial rights". This means that copyrights granted by the law of a certain state do not extend beyond the territory of that specific jurisdiction. Copyrights of this type vary by country; many countries, and sometimes a large group of countries, have made agreements with other countries on procedures applicable when works "cross" national borders or national rights are inconsistent.
Typically, the public law duration of a copyright expires 50 to 100 years after the creator dies, depending on the jurisdiction. Some countries require certain copyright formalities to establish copyright; others recognize copyright in any completed work, without a formal registration. When the copyright of a work expires, it enters the public domain.
History
Background
The concept of copyright developed after the printing press came into use in Europe in the 15th and 16th centuries. The printing press made it much cheaper to produce works, but as there was initially no copyright law, anyone could buy or rent a press and print any text. Popular new works were immediately re-set and re-published by competitors, so printers needed a constant stream of new material. Fees paid to authors for new works were high, and significantly supplemented the incomes of many academics.
Printing brought profound social changes. The rise in literacy across Europe led to a dramatic increase in the demand for reading matter. Prices of reprints were low, so publications could be bought by poorer people, creating a mass audience. In German language markets before the advent of copyright, technical materials, like popular fiction, were inexpensive and widely available; it has been suggested this contributed to Germany's industrial and economic success. After copyright law became established (in 1710 in England and Scotland, and in the 1840s in German-speaking areas) the low-price mass market vanished, and fewer, more expensive editions were published; distribution of scientific and technical information was greatly reduced.
Conception
The concept of copyright first developed in England. In reaction to the printing of "scandalous books and pamphlets", the English Parliament passed the Licensing of the Press Act 1662, which required all intended publications to be registered with the government-approved Stationers' Company, giving the Stationers the right to regulate what material could be printed.
The Statute of Anne, enacted in 1710 in England and Scotland, provided the first legislation to protect copyrights (but not authors' rights). The Copyright Act of 1814 extended more rights to authors but did not protect British works from being reprinted in the US. The Berne International Copyright Convention of 1886 finally provided protection for authors among the countries who signed the agreement, although the US did not join the Berne Convention until 1989.
In the US, the Constitution grants Congress the right to establish copyright and patent laws. Shortly after the Constitution was passed, Congress enacted the Copyright Act of 1790, modeling it after the Statute of Anne. While the national law protected authors’ published works, authority was granted to the states to protect authors’ unpublished works. The most recent major overhaul of copyright in the US, the 1976 Copyright Act, extended federal copyright to works as soon as they are created and "fixed", without requiring publication or registration. State law continues to apply to unpublished works that are not otherwise copyrighted by federal law. This act also changed the calculation of copyright term from a fixed term (then a maximum of fifty-six years) to "life of the author plus 50 years". These changes brought the US closer to conformity with the Berne Convention, and in 1989 the United States further revised its copyright law and joined the Berne Convention officially.
Copyright laws allow products of creative human activities, such as literary and artistic production, to be preferentially exploited and thus incentivized. Different cultural attitudes, social organizations, economic models and legal frameworks are seen to account for why copyright emerged in Europe and not, for example, in Asia. In the Middle Ages in Europe, there was generally a lack of any concept of literary property due to the general relations of production, the specific organization of literary production and the role of culture in society. The latter refers to the tendency of oral societies, such as that of Europe in the medieval period, to view knowledge as the product and expression of the collective, rather than to see it as individual property. However, with copyright laws, intellectual production comes to be seen as a product of an individual, with attendant rights. The most significant point is that patent and copyright laws support the expansion of the range of creative human activities that can be commodified. This parallels the ways in which capitalism led to the commodification of many aspects of social life that earlier had no monetary or economic value per se.
Copyright has developed into a concept that has a significant effect on nearly every modern industry, including not just literary work, but also forms of creative work such as sound recordings, films, photographs, software, and architecture.
National copyrights
Often seen as the first real copyright law, the 1709 British Statute of Anne gave the publishers rights for a fixed period, after which the copyright expired.
The act also alluded to individual rights of the artist. It began, "Whereas Printers, Booksellers, and other Persons, have of late frequently taken the Liberty of Printing ... Books, and other Writings, without the Consent of the Authors ... to their very great Detriment, and too often to the Ruin of them and their Families:". A right to benefit financially from the work is articulated, and court rulings and legislation have recognized a right to control the work, such as ensuring that the integrity of it is preserved. An irrevocable right to be recognized as the work's creator appears in some countries' copyright laws.
The Copyright Clause of the United States Constitution (1787) authorized copyright legislation: "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries." That is, by guaranteeing them a period of time in which they alone could profit from their works, they would be enabled and encouraged to invest the time required to create them, and this would be good for society as a whole. A right to profit from the work has been the philosophical underpinning for much legislation extending the duration of copyright, to the life of the creator and beyond, to their heirs.
The original length of copyright in the United States was 14 years, and it had to be explicitly applied for. If the author wished, they could apply for a second 14‑year monopoly grant, but after that the work entered the public domain, so it could be used and built upon by others.
Copyright law was enacted rather late in German states, and the historian Eckhard Höffner argues that the absence of copyright laws in the early 19th century encouraged publishing, was profitable for authors, led to a proliferation of books, enhanced knowledge, and was ultimately an important factor in the ascendency of Germany as a power during that century. However, empirical evidence derived from the exogenous differential introduction of copyright in Napoleonic Italy shows that "basic copyrights increased both the number and the quality of operas, measured by their popularity and durability".
International copyright treaties
The 1886 Berne Convention first established recognition of copyrights among sovereign nations, rather than merely bilaterally. Under the Berne Convention, copyrights for creative works do not have to be asserted or declared, as they are automatically in force at creation: an author need not "register" or "apply for" a copyright in countries adhering to the Berne Convention. As soon as a work is "fixed", that is, written or recorded on some physical medium, its author is automatically entitled to all copyrights in the work, and to any derivative works unless and until the author explicitly disclaims them, or until the copyright expires. The Berne Convention also resulted in foreign authors being treated equivalently to domestic authors, in any country signed onto the Convention. The UK signed the Berne Convention in 1887 but did not implement large parts of it until 100 years later with the passage of the Copyright, Designs and Patents Act 1988. Specifically, for educational and scientific research purposes, the Berne Convention allows developing countries to issue compulsory licenses for the translation or reproduction of copyrighted works within the limits prescribed by the Convention. This was a special provision added at the time of the 1971 revision of the Convention because of the strong demands of the developing countries. The United States did not sign the Berne Convention until 1989.
The United States and most Latin American countries instead entered into the Buenos Aires Convention in 1910, which required a copyright notice on the work (such as all rights reserved), and permitted signatory nations to limit the duration of copyrights to shorter and renewable terms. The Universal Copyright Convention was drafted in 1952 as another less demanding alternative to the Berne Convention, and ratified by nations such as the Soviet Union and developing nations.
The regulations of the Berne Convention are incorporated into the World Trade Organization's TRIPS agreement (1995), thus giving the Berne Convention effectively near-global application.
In 1961, the United International Bureaux for the Protection of Intellectual Property signed the Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organizations. In 1996, this organization was succeeded by the founding of the World Intellectual Property Organization, which launched the 1996 WIPO Performances and Phonograms Treaty and the 2002 WIPO Copyright Treaty, which enacted greater restrictions on the use of technology to copy works in the nations that ratified it. The Trans-Pacific Partnership includes intellectual property provisions relating to copyright.
Copyright laws are standardized somewhat through these international conventions such as the Berne Convention and Universal Copyright Convention. These multilateral treaties have been ratified by nearly all countries, and international organizations such as the European Union or World Trade Organization require their member states to comply with them.
Obtaining protection
Ownership
The original holder of the copyright may be the employer of the author rather than the author themself if the work is a "work for hire". For example, in English law the Copyright, Designs and Patents Act 1988 provides that if a copyrighted work is made by an employee in the course of that employment, the copyright is automatically owned by the employer, making it a "work for hire". Typically, the first owner of a copyright is the person who created the work, i.e. the author. But when more than one person creates the work, a case of joint authorship can be made provided some criteria are met.
Eligible works
Copyright may apply to a wide range of creative, intellectual, or artistic forms, or "works". Specifics vary by jurisdiction, but these can include poems, theses, fictional characters, plays and other literary works, motion pictures, choreography, musical compositions, sound recordings, paintings, drawings, sculptures, photographs, computer software, radio and television broadcasts, and industrial designs. Graphic designs and industrial designs may have separate or overlapping laws applied to them in some jurisdictions.
Copyright does not cover ideas and information themselves, only the form or manner in which they are expressed. For example, the copyright to a Mickey Mouse cartoon restricts others from making copies of the cartoon or creating derivative works based on Disney's particular anthropomorphic mouse, but does not prohibit the creation of other works about anthropomorphic mice in general, so long as they are different enough to not be judged copies of Disney's. Note additionally that Mickey Mouse is not copyrighted because characters cannot be copyrighted; rather, Steamboat Willie is copyrighted and Mickey Mouse, as a character in that copyrighted work, is afforded protection.
Originality
Typically, a work must meet minimal standards of originality in order to qualify for copyright, and the copyright expires after a set period of time (some jurisdictions may allow this to be extended). Different countries impose different tests, although generally the requirements are low; in the United Kingdom there has to be some "skill, labour, and judgment" that has gone into it. In Australia and the United Kingdom it has been held that a single word is insufficient to comprise a copyright work. However, single words or a short string of words can sometimes be registered as a trademark instead.
Copyright law recognizes the right of an author based on whether the work actually is an original creation, rather than based on whether it is unique; two authors may own copyright on two substantially identical works, if it is determined that the duplication was coincidental, and neither was copied from the other.
Registration
In all countries where the Berne Convention standards apply, copyright is automatic, and need not be obtained through official registration with any government office. Once an idea has been reduced to tangible form, for example by securing it in a fixed medium (such as a drawing, sheet music, photograph, a videotape, or a computer file), the copyright holder is entitled to enforce their exclusive rights. However, while registration is not needed to exercise copyright, in jurisdictions where the laws provide for registration, it serves as prima facie evidence of a valid copyright and enables the copyright holder to seek statutory damages and attorney's fees. (In the US, registering after an infringement only enables one to receive actual damages and lost profits.)
A widely circulated strategy to avoid the cost of copyright registration is referred to as the poor man's copyright. It proposes that the creator send the work to themself in a sealed envelope by registered mail, using the postmark to establish the date. This technique has not been recognized in any published opinions of the United States courts. The United States Copyright Office says the technique is not a substitute for actual registration. The United Kingdom Intellectual Property Office discusses the technique and notes that the technique (as well as commercial registries) does not constitute dispositive proof that the work is original or establish who created the work.
Fixing
The Berne Convention allows member countries to decide whether creative works must be "fixed" to enjoy copyright. Article 2, Section 2 of the Berne Convention states: "It shall be a matter for legislation in the countries of the Union to prescribe that works in general or any specified categories of works shall not be protected unless they have been fixed in some material form." Some countries do not require that a work be produced in a particular form to obtain copyright protection. For instance, Spain, France, and Australia do not require fixation for copyright protection. The United States and Canada, on the other hand, require that most works must be "fixed in a tangible medium of expression" to obtain copyright protection. US law requires that the fixation be stable and permanent enough to be "perceived, reproduced or communicated for a period of more than transitory duration". Similarly, Canadian courts consider fixation to require that the work be "expressed to some extent at least in some material form, capable of identification and having a more or less permanent endurance".
Note this provision of US law: c) Effect of Berne Convention.—No right or interest in a work eligible for protection under this title may be claimed by virtue of, or in reliance upon, the provisions of the Berne Convention, or the adherence of the United States thereto. Any rights in a work eligible for protection under this title that derive from this title, other Federal or State statutes, or the common law, shall not be expanded or reduced by virtue of, or in reliance upon, the provisions of the Berne Convention, or the adherence of the United States thereto.
Copyright notice
Before 1989, United States law required the use of a copyright notice, consisting of the copyright symbol (©, the letter C inside a circle), the abbreviation "Copr.", or the word "Copyright", followed by the year of the first publication of the work and the name of the copyright holder. Several years may be noted if the work has gone through substantial revisions. The proper copyright notice for sound recordings of musical or other audio works is a sound recording copyright symbol (℗, the letter P inside a circle), which indicates a sound recording copyright, with the letter P indicating a "phonorecord". In addition, the phrase All rights reserved, indicating that the copyright holder reserves all rights provided by law, was once required to assert copyright, but that phrase is now legally obsolete. Almost everything on the Internet has some sort of copyright attached to it, though whether such works are watermarked, signed, or otherwise marked with an indication of the copyright is another matter.
In 1989 the United States enacted the Berne Convention Implementation Act, amending the 1976 Copyright Act to conform to most of the provisions of the Berne Convention. As a result, the use of copyright notices has become optional to claim copyright, because the Berne Convention makes copyright automatic. However, the lack of notice of copyright using these marks may have consequences in terms of reduced damages in an infringement lawsuit – using notices of this form may reduce the likelihood of a defense of "innocent infringement" being successful.
Enforcement
Copyrights are generally enforced by the holder in a civil law court, but there are also criminal infringement statutes in some jurisdictions. While central registries are kept in some countries which aid in proving claims of ownership, registering does not necessarily prove ownership, nor does the fact of copying (even without permission) necessarily prove that copyright was infringed. Criminal sanctions are generally aimed at serious counterfeiting activity, but are now becoming more commonplace as copyright collectives such as the RIAA are increasingly targeting the file sharing home Internet user. Thus far, however, most such cases against file sharers have been settled out of court. (See Legal aspects of file sharing)
In most jurisdictions the copyright holder must bear the cost of enforcing copyright. This will usually involve engaging legal representation, administrative or court costs. In light of this, many copyright disputes are settled by a direct approach to the infringing party in order to settle the dispute out of court.
"... by 1978, the scope was expanded to apply to any 'expression' that has been 'fixed' in any medium, this protection granted automatically whether the maker wants it or not, no registration required."
Copyright infringement
For a work to be considered to infringe upon copyright, its use must have occurred in a nation that has domestic copyright laws or adheres to a bilateral treaty or established international convention such as the Berne Convention or WIPO Copyright Treaty. Improper use of materials outside of legislation is deemed "unauthorized edition", not copyright infringement.
Statistics regarding the effects of copyright infringement are difficult to determine. Studies have attempted to determine whether there is a monetary loss for industries affected by copyright infringement by predicting what portion of pirated works would have been formally purchased if they had not been freely available. Other reports indicate that copyright infringement does not have an adverse effect on the entertainment industry, and can have a positive effect. In particular, a 2014 university study concluded that free music content, accessed on YouTube, does not necessarily hurt sales, instead has the potential to increase sales.
According to the IP Commission Report the annual cost of intellectual property theft to the US economy "continues to exceed $225 billion in counterfeit goods, pirated software, and theft of trade secrets and could be as high as $600 billion." A 2019 study sponsored by the US Chamber of Commerce Global Innovation Policy Center (GIPC), in partnership with NERA Economic Consulting "estimates that global online piracy costs the U.S. economy at least $29.2 billion in lost revenue each year." An August 2021 report by the Digital Citizens Alliance states that "online criminals who offer stolen movies, TV shows, games, and live events through websites and apps are reaping $1.34 billion in annual advertising revenues." This comes as a result of users visiting pirate websites who are then subjected to pirated content, malware, and fraud.
Rights granted
According to the World Intellectual Property Organisation, copyright protects two types of rights. Economic rights allow right owners to derive financial reward from the use of their works by others. Moral rights allow authors and creators to take certain actions to preserve and protect their link with their work. The author or creator may be the owner of the economic rights or those rights may be transferred to one or more copyright owners. Many countries do not allow the transfer of moral rights.
Economic rights
With any kind of property, its owner may decide how it is to be used, and others can use it lawfully only if they have the owner's permission, often through a license. The owner's use of the property must, however, respect the legally recognised rights and interests of other members of society. So the owner of a copyright-protected work may decide how to use the work, and may prevent others from using it without permission. National laws usually grant copyright owners exclusive rights to allow third parties to use their works, subject to the legally recognised rights and interests of others. Most copyright laws state that authors or other right owners have the right to authorise or prevent certain acts in relation to a work. Right owners can authorise or prohibit:
reproduction of the work in various forms, such as printed publications or sound recordings;
distribution of copies of the work;
public performance of the work;
broadcasting or other communication of the work to the public;
translation of the work into other languages; and
adaptation of the work, such as turning a novel into a screenplay.
Moral rights
Moral rights are concerned with the non-economic rights of a creator. They protect the creator's connection with a work as well as the integrity of the work. Moral rights are only accorded to individual authors and in many national laws they remain with the authors even after the authors have transferred their economic rights. In some EU countries, such as France, moral rights last indefinitely. In the UK, however, moral rights are finite. That is, the right of attribution and the right of integrity last only as long as the work is in copyright. When the copyright term comes to an end, so too do the moral rights in that work. This is just one reason why the moral rights regime within the UK is often regarded as weaker or inferior to the protection of moral rights in continental Europe and elsewhere in the world. The Berne Convention, in Article 6bis, requires its members to grant authors the following rights:
the right to claim authorship of a work (sometimes called the right of paternity or the right of attribution); and
the right to object to any distortion or modification of a work, or other derogatory action in relation to a work, which would be prejudicial to the author's honour or reputation (sometimes called the right of integrity).
These and other similar rights granted in national laws are generally known as the moral rights of authors. The Berne Convention requires these rights to be independent of authors' economic rights. This means that even where, for example, a film producer or publisher owns the economic rights in a work, in many jurisdictions the individual author continues to have moral rights. Recently, as part of the debates held at the US Copyright Office on whether to include moral rights in the framework of US copyright law, the Copyright Office concluded that many diverse aspects of the current moral rights patchwork – including copyright law's derivative work right, state moral rights statutes, and contract law – are generally working well and should not be changed. Further, the Office concluded that there is no need for the creation of a blanket moral rights statute at this time. However, there are aspects of the US moral rights patchwork that could be improved to the benefit of individual authors and the copyright system as a whole.
Under the copyright law of the United States, several exclusive rights are granted to the holder of a copyright, as listed below:
protection of the work;
to determine and decide how, and under what conditions, the work may be marketed, publicly displayed, reproduced, distributed, etc.
to produce copies or reproductions of the work and to sell those copies; (including, typically, electronic copies)
to import or export the work;
to create derivative works; (works that adapt the original work)
to perform or display the work publicly;
to sell or cede these rights to others;
to transmit or display by radio, video or internet.
The basic right when a work is protected by copyright is that the holder may determine and decide how and under what conditions the protected work may be used by others. This includes the right to decide to distribute the work for free. This part of copyright is often overlooked. The phrase "exclusive right" means that only the copyright holder is free to exercise those rights, and others are prohibited from using the work without the holder's permission. Copyright is sometimes called a "negative right", as it serves to prohibit certain people (e.g., readers, viewers, or listeners, and primarily publishers and would-be publishers) from doing something they would otherwise be able to do, rather than permitting people (e.g., authors) to do something they would otherwise be unable to do. In this way it is similar to the unregistered design right in English law and European law. The rights of the copyright holder also permit the holder not to use or exploit the copyright, for some or all of the term. There is, however, a critique which rejects this assertion as being based on a philosophical interpretation of copyright law that is not universally shared. There is also debate on whether copyright should be considered a property right or a moral right.
UK copyright law gives creators both economic rights and moral rights. ‘Copying’ someone else's work without permission may constitute an infringement of their economic rights, that is, the reproduction right or the right of communication to the public, whereas ‘mutilating’ it might infringe the creator's moral rights. In the UK, moral rights include the right to be identified as the author of the work, which is generally identified as the right of attribution, and the right not to have one's work subjected to ‘derogatory treatment’, that is, the right of integrity.
Indian copyright law is at parity with the international standards as contained in TRIPS. The Indian Copyright Act, 1957, pursuant to the amendments in 1999, 2002 and 2012, fully reflects the Berne Convention and the Universal Copyrights Convention, to which India is a party. India is also a party to the Geneva Convention for the Protection of Rights of Producers of Phonograms and is an active member of the World Intellectual Property Organization (WIPO) and United Nations Educational, Scientific and Cultural Organization (UNESCO). The Indian system provides both the economic and moral rights under different provisions of its Indian Copyright Act of 1957.
Duration
Copyright subsists for a variety of lengths in different jurisdictions. The length of the term can depend on several factors, including the type of work (e.g. musical composition, novel), whether the work has been published, and whether the work was created by an individual or a corporation. In most of the world, the default length of copyright is the life of the author plus either 50 or 70 years. In the United States, the term for most existing works is a fixed number of years after the date of creation or publication. Under most countries' laws (for example, the United States and the United Kingdom), copyrights expire at the end of the calendar year in which they would otherwise expire.
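As a minimal illustration of the default rule described above, and only that rule, the hypothetical helper below computes the last calendar year of protection under a "life of the author plus N years" term that runs to the end of the calendar year. Actual determinations depend on jurisdiction, work type, publication status, and transitional provisions, so this is an illustration rather than legal guidance.

    # A minimal sketch of the default "life of the author plus N years" rule,
    # with protection running to the end of the calendar year. It ignores the
    # many jurisdiction-specific exceptions discussed in the surrounding text.
    from datetime import date
    from typing import Optional

    def last_year_of_protection(author_death_year: int, term_years: int = 70) -> int:
        """Last calendar year in which the work remains under copyright."""
        return author_death_year + term_years

    def in_public_domain(author_death_year: int, term_years: int = 70,
                         current_year: Optional[int] = None) -> bool:
        year = current_year if current_year is not None else date.today().year
        return year > last_year_of_protection(author_death_year, term_years)

    # An author who died in 1950, under a life-plus-70 term, is protected
    # through the end of 2020; the work enters the public domain in 2021.
    print(in_public_domain(1950, 70, current_year=2021))  # True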
The length and requirements for copyright duration are subject to change by legislation, and since the early 20th century there have been a number of adjustments made in various countries, which can make determining the duration of a given copyright somewhat difficult. For example, the United States used to require copyrights to be renewed after 28 years to stay in force, and formerly required a copyright notice upon first publication to gain coverage. In Italy and France, there were post-wartime extensions that could increase the term by approximately 6 years in Italy and up to about 14 in France. Many countries have extended the length of their copyright terms (sometimes retroactively). International treaties establish minimum terms for copyrights, but individual countries may enforce longer terms than those.
In the United States, all books and other works, except for sound recordings, published before 1928 have expired copyrights and are in the public domain. The applicable date for sound recordings in the United States is before 1923. In addition, works published before 1964 that did not have their copyrights renewed 28 years after first publication year also are in the public domain. Hirtle points out that the great majority of these works (including 93% of the books) were not renewed after 28 years and are in the public domain. Books originally published outside the US by non-Americans are exempt from this renewal requirement, if they are still under copyright in their home country.
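Restating only the US cutoffs just described, publication before 1928 for most works, before 1923 for sound recordings, and the renewal requirement for works published before 1964, the hypothetical helper below encodes those rules and nothing else; it omits the foreign-work exemption, unpublished works, and every other qualification, and is not legal advice.

    # A rough heuristic restating only the US public-domain cutoffs described
    # above. It omits the exemption for foreign works, unpublished works, and
    # all other qualifications, so it is illustrative only.
    def likely_us_public_domain(publication_year: int,
                                is_sound_recording: bool = False,
                                renewed_after_28_years: bool = True) -> bool:
        cutoff = 1923 if is_sound_recording else 1928
        if publication_year < cutoff:
            return True
        # Works published before 1964 whose copyrights were not renewed
        # 28 years after first publication are also in the public domain.
        if publication_year < 1964 and not renewed_after_28_years:
            return True
        return False

    print(likely_us_public_domain(1925))                                # True
    print(likely_us_public_domain(1950, renewed_after_28_years=False))  # True
    print(likely_us_public_domain(1950, renewed_after_28_years=True))   # False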
But if the intended exploitation of the work includes publication (or distribution of derivative work, such as a film based on a book protected by copyright) outside the US, the terms of copyright around the world must be considered. If the author has been dead more than 70 years, the work is in the public domain in most, but not all, countries.
In 1998, the length of a copyright in the United States was increased by 20 years under the Copyright Term Extension Act. This legislation was strongly promoted by corporations which had valuable copyrights which otherwise would have expired, and has been the subject of substantial criticism on this point.
Limitations and exceptions
In many jurisdictions, copyright law makes exceptions to these restrictions when the work is copied for the purpose of commentary or other related uses. United States copyright law does not cover names, titles, short phrases or listings (such as ingredients, recipes, labels, or formulas). However, there are protections available for those areas copyright does not cover, such as trademarks and patents.
Idea–expression dichotomy and the merger doctrine
The idea–expression divide differentiates between ideas and expression, and states that copyright protects only the original expression of ideas, and not the ideas themselves. This principle, first clarified in the 1879 case of Baker v. Selden, has since been codified by the Copyright Act of 1976 at 17 U.S.C. § 102(b).
The first-sale doctrine and exhaustion of rights
Copyright law does not restrict the owner of a copy from reselling legitimately obtained copies of copyrighted works, provided that those copies were originally produced by or with the permission of the copyright holder. It is therefore legal, for example, to resell a copyrighted book or CD. In the United States this is known as the first-sale doctrine, and was established by the courts to clarify the legality of reselling books in second-hand bookstores.
Some countries may have parallel importation restrictions that allow the copyright holder to control the aftermarket. This may mean for example that a copy of a book that does not infringe copyright in the country where it was printed does infringe copyright in a country into which it is imported for retailing. The first-sale doctrine is known as exhaustion of rights in other countries and is a principle which also applies, though somewhat differently, to patent and trademark rights. It is important to note that the first-sale doctrine permits the transfer of the particular legitimate copy involved. It does not permit making or distributing additional copies.
In Kirtsaeng v. John Wiley & Sons, Inc., in 2013, the United States Supreme Court held in a 6–3 decision that the first-sale doctrine applies to goods manufactured abroad with the copyright owner's permission and then imported into the US without such permission. The case involved a defendant who imported Asian editions of textbooks that had been manufactured abroad with the publisher-plaintiff's permission and, without permission from the publisher, resold them on eBay. The Supreme Court's holding severely limits the ability of copyright holders to prevent such importation.
In addition, copyright, in most cases, does not prohibit one from acts such as modifying, defacing, or destroying one's own legitimately obtained copy of a copyrighted work, so long as duplication is not involved. However, in countries that implement moral rights, a copyright holder can in some cases successfully prevent the mutilation or destruction of a work that is publicly visible.
Fair use and fair dealing
Copyright does not prohibit all copying or replication. In the United States, the fair use doctrine, codified by the Copyright Act of 1976 as 17 U.S.C. Section 107, permits some copying and distribution without permission of the copyright holder or payment to same. The statute does not clearly define fair use, but instead gives four non-exclusive factors to consider in a fair use analysis. Those factors are:
the purpose and character of one's use;
the nature of the copyrighted work;
what amount and proportion of the whole work was taken;
the effect of the use upon the potential market for or value of the copyrighted work.
In the United Kingdom and many other Commonwealth countries, a similar notion of fair dealing was established by the courts or through legislation. The concept is sometimes not well defined; however in Canada, private copying for personal use has been expressly permitted by statute since 1999. In Alberta (Education) v. Canadian Copyright Licensing Agency (Access Copyright), 2012 SCC 37, the Supreme Court of Canada concluded that limited copying for educational purposes could also be justified under the fair dealing exemption. In Australia, the fair dealing exceptions under the Copyright Act 1968 (Cth) are a limited set of circumstances under which copyrighted material can be legally copied or adapted without the copyright holder's consent. Fair dealing uses are research and study; review and critique; news reportage and the giving of professional advice (i.e. legal advice). Under current Australian law, although it is still a breach of copyright to copy, reproduce or adapt copyright material for personal or private use without permission from the copyright owner, owners of a legitimate copy are permitted to "format shift" that work from one medium to another for personal, private use, or to "time shift" a broadcast work for later, once and only once, viewing or listening. Other technical exemptions from infringement may also apply, such as the temporary reproduction of a work in machine readable form for a computer.
In the United States, the AHRA (Audio Home Recording Act, codified in Section 10, 1992) prohibits action against consumers making noncommercial recordings of music, in return for royalties on both media and devices plus mandatory copy-control mechanisms on recorders.
Later acts amended US Copyright law so that for certain purposes making 10 copies or more is construed to be commercial, but there is no general rule permitting such copying. Indeed, making one complete copy of a work, or in many cases using a portion of it, for commercial purposes will not be considered fair use. The Digital Millennium Copyright Act prohibits the manufacture, importation, or distribution of devices whose intended use, or only significant commercial use, is to bypass an access or copy control put in place by a copyright owner. An appellate court has held that fair use is not a defense to engaging in such distribution.
EU copyright laws recognise the right of EU member states to implement some national exceptions to copyright. Examples of those exceptions are:
photographic reproductions on paper or any similar medium of works (excluding sheet music), provided that the rightholders receive fair compensation;
reproduction made by libraries, educational establishments, museums or archives, which are non-commercial;
archival reproductions of broadcasts;
uses for the benefit of people with a disability;
for demonstration or repair of equipment;
for non-commercial research or private study;
when used in parody.
Accessible copies
It is legal in several countries including the United Kingdom and the United States to produce alternative versions (for example, in large print or braille) of a copyrighted work to provide improved access to a work for blind and visually impaired people without permission from the copyright holder.
Religious Service Exemption
In the US there is a Religious Service Exemption (1976 law, section 110[3]), namely "performance of a non-dramatic literary or musical work or of a dramatico-musical work of a religious nature or display of a work, in the course of services at a place of worship or other religious assembly" shall not constitute infringement of copyright.
Useful articles
In Canada, items deemed useful articles such as clothing designs are exempted from copyright protection under the Copyright Act if reproduced more than 50 times. Fast fashion brands may reproduce clothing designs from smaller companies without violating copyright protections.
Transfer, assignment and licensing
A copyright, or aspects of it (e.g. reproduction alone, all but moral rights), may be assigned or transferred from one party to another. For example, a musician who records an album will often sign an agreement with a record company in which the musician agrees to transfer all copyright in the recordings in exchange for royalties and other considerations. The creator (and original copyright holder) benefits, or expects to, from production and marketing capabilities far beyond their own. In the digital age of music, music may be copied and distributed at minimal cost through the Internet; however, the record industry attempts to provide promotion and marketing for the artist and their work so it can reach a much larger audience. A copyright holder need not transfer all rights completely, though many publishers will insist. Some of the rights may be transferred, or else the copyright holder may grant another party a non-exclusive license to copy or distribute the work in a particular region or for a specified period of time.
A transfer or licence may have to meet particular formal requirements in order to be effective, for example under the Australian Copyright Act 1968 the copyright itself must be expressly transferred in writing. Under the US Copyright Act, a transfer of ownership in copyright must be memorialized in a writing signed by the transferor. For that purpose, ownership in copyright includes exclusive licenses of rights. Thus exclusive licenses, to be effective, must be granted in a written instrument signed by the grantor. No special form of transfer or grant is required. A simple document that identifies the work involved and the rights being granted is sufficient. Non-exclusive grants (often called non-exclusive licenses) need not be in writing under US law. They can be oral or even implied by the behavior of the parties. Transfers of copyright ownership, including exclusive licenses, may and should be recorded in the U.S. Copyright Office. (Information on recording transfers is available on the Office's web site.) While recording is not required to make the grant effective, it offers important benefits, much like those obtained by recording a deed in a real estate transaction.
Copyright may also be licensed. Some jurisdictions may provide that certain classes of copyrighted works be made available under a prescribed statutory license (e.g. musical works in the United States used for radio broadcast or performance). This is also called a compulsory license, because under this scheme, anyone who wishes to copy a covered work does not need the permission of the copyright holder, but instead merely files the proper notice and pays a set fee established by statute (or by an agency decision under statutory guidance) for every copy made. Failure to follow the proper procedures would place the copier at risk of an infringement suit. Because of the difficulty of following every individual work, copyright collectives or collecting societies and performing rights organizations (such as ASCAP, BMI, and SESAC) have been formed to collect royalties for hundreds (thousands and more) works at once. Though this market solution bypasses the statutory license, the availability of the statutory fee still helps dictate the price per work collective rights organizations charge, driving it down to what avoidance of procedural hassle would justify.
Free licenses
Copyright licenses known as open or free licenses seek to grant several rights to licensees, either for a fee or not. Free in this context is not as much of a reference to price as it is to freedom. What constitutes free licensing has been characterised in a number of similar definitions, including by order of longevity the Free Software Definition, the Debian Free Software Guidelines, the Open Source Definition and the Definition of Free Cultural Works. Further refinements to these definitions have resulted in categories such as copyleft and permissive. Common examples of free licences are the GNU General Public License, BSD licenses and some Creative Commons licenses.
Founded in 2001 by James Boyle, Lawrence Lessig, and Hal Abelson, the Creative Commons (CC) is a non-profit organization which aims to facilitate the legal sharing of creative works. To this end, the organization provides a number of generic copyright license options to the public, gratis. These licenses allow copyright holders to define conditions under which others may use a work and to specify what types of use are acceptable.
Terms of use have traditionally been negotiated on an individual basis between copyright holder and potential licensee. Therefore, a general CC license outlining which rights the copyright holder is willing to waive enables the general public to use such works more freely. Six general types of CC licenses are available (although some of them are not properly free per the above definitions and per Creative Commons' own advice). These are based upon copyright-holder stipulations such as whether they are willing to allow modifications to the work, whether they permit the creation of derivative works and whether they are willing to permit commercial use of the work. By one estimate, approximately 130 million such licenses had been granted.
Criticism
Some sources are critical of particular aspects of the copyright system; this is known as a debate over copynorms. Particularly against the background of uploading content to internet platforms and the digital exchange of original work, there is discussion about the copyright aspects of downloading and streaming, as well as of hyperlinking and framing.
Concerns are often couched in the language of digital rights, digital freedom, database rights, open data or censorship. Discussions include Free Culture, a 2004 book by Lawrence Lessig, who coined the term permission culture to describe a worst-case system. The documentaries Good Copy Bad Copy and RiP!: A Remix Manifesto also discuss copyright. Some suggest an alternative compensation system. In Europe, consumers have pushed back against the rising costs of music, film and books, and as a result Pirate Parties have been created. Some groups reject copyright altogether, taking an anti-copyright stance. The perceived inability to enforce copyright online leads some to advocate ignoring legal statutes when on the web.
Public domain
Copyright, like other intellectual property rights, is subject to a statutorily determined term. Once the term of a copyright has expired, the formerly copyrighted work enters the public domain and may be used or exploited by anyone without obtaining permission, and normally without payment. However, in paying-public-domain regimes the user may still have to pay royalties to the state or to an authors' association. Courts in common law countries, such as the United States and the United Kingdom, have rejected the doctrine of a common law copyright. Public domain works should not be confused with works that are publicly available. Works posted on the internet, for example, are publicly available, but are not generally in the public domain. Copying such works may therefore violate the author's copyright.
See also
Adelphi Charter
Artificial scarcity
Authors' rights and related rights, roughly equivalent concepts in civil law countries
Conflict of laws
Copyfraud
Copyleft
Copyright abolition
Copyright Alliance
Copyright alternatives
Copyright for Creativity
Copyright in architecture in the United States
Copyright on the content of patents and in the context of patent prosecution
Criticism of copyright
Criticism of intellectual property
Directive on Copyright in the Digital Single Market (European Union)
Copyright infringement
Copyright Remedy Clarification Act (CRCA)
Digital rights management
Digital watermarking
Entertainment law
Freedom of panorama
Information literacies
Intellectual property protection of typefaces
List of Copyright Acts
List of copyright case law
Literary property
Model release
Paracopyright
Philosophy of copyright
Photography and the law
Pirate Party
Printing patent, a precursor to copyright
Private copying levy
Production music
Rent-seeking
Reproduction fees
Samizdat
Software copyright
Threshold pledge system
World Book and Copyright Day
References
Further reading
Ellis, Sara R. Copyrighting Couture: An Examination of Fashion Design Protection and Why the DPPA and IDPPPA are a Step Towards the Solution to Counterfeit Chic, 78 Tenn. L. Rev. 163 (2010).
Ghosemajumder, Shuman. Advanced Peer-Based Technology Business Models. MIT Sloan School of Management, 2002.
Lehman, Bruce: Intellectual Property and the National Information Infrastructure (Report of the Working Group on Intellectual Property Rights, 1995)
Lindsey, Marc: Copyright Law on Campus. Washington State University Press, 2003.
Mazzone, Jason. Copyfraud. SSRN
McDonagh, Luke. Is Creative use of Musical Works without a licence acceptable under Copyright? International Review of Intellectual Property and Competition Law (IIC) 4 (2012) 401–426, available at SSRN
Rife, Martine Courant. Convention, Copyright, and Digital Writing (Southern Illinois University Press, 2013). 222 pages; examines legal, pedagogical, and other aspects of online authorship.
Shipley, David E. "Thin But Not Anorexic: Copyright Protection for Compilations and Other Fact Works" UGA Legal Studies Research Paper No. 08-001; Journal of Intellectual Property Law, Vol. 15, No. 1, 2007.
Silverthorne, Sean. Music Downloads: Pirates or Customers? Harvard Business School Working Knowledge, 2004.
Sorce Keller, Marcello. "Originality, Authenticity and Copyright", Sonus, VII(2007), no. 2, pp. 77–85.
Rose, M. (1993), Authors and Owners: The Invention of Copyright, London: Harvard University Press
Loewenstein, J. (2002), The Author's Due: Printing and the Prehistory of Copyright, London: University of Chicago Press.
External links
A simplified guide.
WIPOLex from WIPO; global database of treaties and statutes relating to intellectual property
Copyright Berne Convention: Country List List of the 164 members of the Berne Convention for the protection of literary and artistic works
Copyright and State Sovereign Immunity, U.S. Copyright Office
The Multi-Billion-Dollar Piracy Industry with Tom Galvin of Digital Citizens Alliance, The Illusion of More Podcast
Education
Copyright Cortex
A Bibliography on the Origins of Copyright and Droit d'Auteur
MIT OpenCourseWare 6.912 Introduction to Copyright Law Free self-study course with video lectures as offered during the January 2006, Independent Activities Period (IAP)
US
Copyright Law of the United States Documents, US Government
Compendium of Copyright Practices (3rd ed.) United States Copyright Office
Copyright from UCB Libraries GovPubs
Early Copyright Records From the Rare Book and Special Collections Division at the Library of Congress
UK
Copyright: Detailed information at the UK Intellectual Property Office
Fact sheet P-01: UK copyright law (Issued April 2000, amended 25 November 2020) at the UK Copyright Service
Data management
Intellectual property law
Monopoly (economics)
Product management
Public records
Intangible assets
|
https://en.wikipedia.org/wiki/STS-51-F
|
STS-51-F (also known as Spacelab 2) was the 19th flight of NASA's Space Shuttle program and the eighth flight of Space Shuttle Challenger. It launched from Kennedy Space Center, Florida, on July 29, 1985, and landed eight days later on August 6, 1985.
While STS-51-F's primary payload was the Spacelab 2 laboratory module, the payload that received the most publicity was the Carbonated Beverage Dispenser Evaluation, which was an experiment in which both Coca-Cola and Pepsi tried to make their carbonated drinks available to astronauts. A helium-cooled infrared telescope (IRT) was also flown on this mission, and while it did have some problems, it observed 60% of the galactic plane in infrared light.
During launch, Challenger experienced multiple sensor failures on its center SSME (engine number 1), which led to that engine shutting down and forced the shuttle to perform an "Abort to Orbit" (ATO) emergency procedure. It is the only Shuttle mission to have carried out an abort after launching. As a result of the ATO, the mission was carried out at a slightly lower orbital altitude than planned.
Crew
Backup crew
Crew seating arrangements
Crew notes
As with previous Spacelab missions, the crew was divided between two 12-hour shifts. Acton, Bridges and Henize made up the "Red Team" while Bartoe, England and Musgrave comprised the "Blue Team"; commander Fullerton could take either shift when needed. Challenger carried two Extravehicular Mobility Units (EMU) in the event of an emergency spacewalk, which would have been performed by England and Musgrave.
Launch
STS-51-F's first launch attempt on July 12, 1985, was halted with the countdown at T−3 seconds after main engine ignition, when a malfunction of the number two RS-25 coolant valve caused an automatic launch abort. Challenger launched successfully on its second attempt on July 29, 1985, at 17:00 EDT (5:00 p.m.), after a delay of 1 hour 37 minutes due to a problem with the table maintenance block update uplink.
At 3 minutes 31 seconds into the ascent, one of the center engine's two high-pressure fuel turbopump turbine discharge temperature sensors failed. Two minutes and twelve seconds later, the second sensor failed, causing the shutdown of the center engine. This was the only in-flight RS-25 failure of the Space Shuttle program. Approximately 8 minutes into the flight, one of the same temperature sensors in the right engine failed, and the remaining right-engine temperature sensor displayed readings near the redline for engine shutdown. Booster Systems Engineer Jenny M. Howard acted quickly to recommend that the crew inhibit any further automatic RS-25 shutdowns based on readings from the remaining sensors, preventing the potential shutdown of a second engine and a possible abort mode that may have resulted in the loss of crew and vehicle (LOCV).
The failed RS-25 resulted in an Abort to Orbit (ATO) trajectory, whereby the shuttle achieved a lower orbital altitude than had been planned.
Mission summary
STS-51-F's primary payload was the laboratory module Spacelab 2. A special part of the modular Spacelab system, the "igloo", which was located at the head of a three-pallet train, provided on-site support to instruments mounted on pallets. The main mission objective was to verify performance of Spacelab systems, determine the interface capability of the orbiter, and measure the environment created by the spacecraft. Experiments covered life sciences, plasma physics, astronomy, high-energy astrophysics, solar physics, atmospheric physics and technology research. Despite mission replanning necessitated by Challenger's abort-to-orbit trajectory, the Spacelab mission was declared a success.
The flight marked the first time the European Space Agency (ESA) Instrument Pointing System (IPS) was tested in orbit. This unique pointing instrument was designed with an accuracy of one arcsecond. Initially, some problems were experienced when it was commanded to track the Sun, but a series of software fixes were made and the problem was corrected. In addition, Anthony W. England became the second amateur radio operator to transmit from space during the mission.
Spacelab Infrared Telescope
The Spacelab Infrared Telescope (IRT) was also flown on the mission. The IRT was a helium-cooled infrared telescope observing light at wavelengths between 1.7 and 118 μm. Heat emissions from the Shuttle were thought to have corrupted the long-wavelength data, but the instrument still returned useful astronomical observations. Another problem was that a piece of mylar insulation broke loose and floated into the line-of-sight of the telescope. The IRT collected infrared data on 60% of the galactic plane (see also List of largest infrared telescopes). A later space mission that experienced a stray-light problem from debris was ESA's Gaia astrometry spacecraft, launched in 2013; the source of the stray light was later identified as fibers of the sunshield protruding beyond the edges of the shield.
Other payloads
The Plasma Diagnostics Package (PDP), which had been previously flown on STS-3, made its return on the mission, and was part of a set of plasma physics experiments designed to study the Earth's ionosphere. During the third day of the mission, it was grappled out of the payload bay by the Remote Manipulator System (Canadarm) and released for six hours. During this time, Challenger maneuvered around the PDP as part of a targeted proximity operations exercise. The PDP was successfully grappled by the Canadarm and returned to the payload bay at the beginning of the fourth day of the mission.
In a heavily publicized marketing experiment, astronauts aboard STS-51-F drank carbonated beverages from specially designed cans from Cola Wars competitors Coca-Cola and Pepsi. According to Acton, after Coke developed its experimental dispenser for an earlier shuttle flight, Pepsi insisted to American president Ronald Reagan that Coke should not be the first cola in space. The experiment was delayed until Pepsi could develop its own system, and the two companies' products were assigned to STS-51-F.
Blue Team tested Coke, and Red Team tested Pepsi. As part of the experiment, each team was photographed with the cola logo. Acton said that while the sophisticated Coke system "dispensed soda kind of like what we're used to drinking on Earth", the Pepsi can was a shaving cream can with the Pepsi logo on a paper wrapper, which "dispensed soda filled with bubbles" that was "not very drinkable". Acton said that when he gives speeches in schools, audiences are much more interested in hearing about the cola experiment than in solar physics. Post-flight, the astronauts revealed that they preferred Tang, in part because it could be mixed on-orbit with existing chilled-water supplies, whereas there was no dedicated refrigeration equipment on board to chill the cans, which also fizzed excessively in microgravity.
In an experiment during the mission, thruster rockets were fired at a point over Tasmania and also above Boston to create two "holes" – plasma depletion regions – in the ionosphere. A worldwide group of geophysicists collaborated with the observations made from Spacelab 2.
Landing
Challenger landed at Edwards Air Force Base, California, on August 6, 1985, at 12:45:26 p.m. PDT. Its rollout distance was . The mission had been extended by 17 orbits for additional payload activities due to the Abort to Orbit. The orbiter arrived back at Kennedy Space Center on August 11, 1985.
Mission insignia
The mission insignia was designed by Houston, Texas artist Skip Bradley. Challenger is depicted ascending toward the heavens in search of new knowledge in the field of solar and stellar astronomy, with its Spacelab 2 payload. The constellations Leo and Orion are shown in the positions they were in relative to the Sun during the flight. The nineteen stars indicate that the mission is the 19th shuttle flight.
Legacy
One of the purposes of the mission was to test how suitable the Shuttle was for conducting infrared observations, and the IRT was operated on this mission. However, the orbiter was found to have some drawbacks for infrared astronomy, and this led to later infrared telescopes being free-flying spacecraft rather than Shuttle-attached instruments.
See also
List of human spaceflights
List of Space Shuttle missions
Salyut 7 (a space station of the Soviet Union also in orbit at this time)
Soyuz T-13 (a mission to salvage that space station in the summer of 1985)
References
External links
NASA mission summary
Press Kit
STS-51F Video Highlights
Space Coke can
Carbonated Drinks in Space
YouTube: STS-51F launch, abort and landing
July 12 launch attempt
Space Shuttle Missions Summary
Space Shuttle missions
Edwards Air Force Base
1985 in spaceflight
1985 in the United States
Crewed space observatories
Spacecraft launched in 1985
Spacecraft which reentered in 1985
|
https://en.wikipedia.org/wiki/Carbon
|
Carbon is a chemical element with the symbol C and atomic number 6. It is nonmetallic and tetravalent: its atoms make four electrons available to form covalent chemical bonds. It belongs to group 14 of the periodic table. Carbon makes up about 0.025 percent of Earth's crust. Three isotopes occur naturally, carbon-12 and carbon-13 being stable, while carbon-14 is a radionuclide, decaying with a half-life of about 5,730 years. Carbon is one of the few elements known since antiquity.
Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass after hydrogen, helium, and oxygen. Carbon's abundance, its unique diversity of organic compounds, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enable this element to serve as a common element of all known life. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen.
The atoms of carbon can bond together in diverse ways, resulting in various allotropes of carbon. Well-known allotropes include graphite, diamond, amorphous carbon, and fullerenes. The physical properties of carbon vary widely with the allotropic form. For example, graphite is opaque and black, while diamond is highly transparent. Graphite is soft enough to form a streak on paper (hence its name, from the Greek verb "γράφειν" which means "to write"), while diamond is the hardest naturally occurring material known. Graphite is a good electrical conductor while diamond has a low electrical conductivity. Under normal conditions, diamond, carbon nanotubes, and graphene have the highest thermal conductivities of all known materials. All carbon allotropes are solids under normal conditions, with graphite being the most thermodynamically stable form at standard temperature and pressure. They are chemically resistant and require high temperature to react even with oxygen.
The most common oxidation state of carbon in inorganic compounds is +4, while +2 is found in carbon monoxide and transition metal carbonyl complexes. The largest sources of inorganic carbon are limestones, dolomites and carbon dioxide, but significant quantities occur in organic deposits of coal, peat, oil, and methane clathrates. Carbon forms a vast number of compounds, with about two hundred million having been described and indexed; and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions.
Characteristics
The allotropes of carbon include graphite, one of the softest known substances, and diamond, the hardest naturally occurring substance. It bonds readily with other small atoms, including other carbon atoms, and is capable of forming multiple stable covalent bonds with suitable multivalent atoms. Carbon is a component element in the large majority of all chemical compounds, with about two hundred million examples having been described in the published chemical literature. Carbon also has the highest sublimation point of all elements. At atmospheric pressure it has no melting point, as its triple point lies at a pressure well above atmospheric; when heated, it sublimes rather than melting. Graphite is much more reactive than diamond at standard conditions, despite being more thermodynamically stable, as its delocalised pi system is much more vulnerable to attack. For example, graphite can be oxidised by hot concentrated nitric acid at standard conditions to mellitic acid, C6(CO2H)6, which preserves the hexagonal units of graphite while breaking up the larger structure.
Carbon sublimes in a carbon arc, which has a temperature of about 5800 K (5,530 °C or 9,980 °F). Thus, irrespective of its allotropic form, carbon remains solid at higher temperatures than the highest-melting-point metals such as tungsten or rhenium. Although thermodynamically prone to oxidation, carbon resists oxidation more effectively than elements such as iron and copper, which are weaker reducing agents at room temperature.
Carbon is the sixth element, with a ground-state electron configuration of 1s²2s²2p², of which the four outer electrons are valence electrons. Its first four ionisation energies, 1086.5, 2352.6, 4620.5 and 6222.7 kJ/mol, are much higher than those of the heavier group-14 elements. The electronegativity of carbon is 2.5, significantly higher than the heavier group-14 elements (1.8–1.9), but close to most of the nearby nonmetals, as well as some of the second- and third-row transition metals. Carbon's covalent radii are normally taken as 77.2 pm (C−C), 66.7 pm (C=C) and 60.3 pm (C≡C), although these may vary depending on coordination number and what the carbon is bonded to. In general, covalent radius decreases with lower coordination number and higher bond order.
Carbon-based compounds form the basis of all known life on Earth, and the carbon-nitrogen-oxygen cycle provides a small portion of the energy produced by the Sun, and most of the energy in larger stars (e.g. Sirius). Although it forms an extraordinary variety of compounds, most forms of carbon are comparatively unreactive under normal conditions. At standard temperature and pressure, it resists all but the strongest oxidizers. It does not react with sulfuric acid, hydrochloric acid, chlorine or any alkalis. At elevated temperatures, carbon reacts with oxygen to form carbon oxides and will rob oxygen from metal oxides to leave the elemental metal. This exothermic reaction is used in the iron and steel industry to smelt iron and to control the carbon content of steel:
Fe3O4 + 4 C + 2 O2 → 3 Fe + 4 CO2.
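As a quick sanity check on the reconstructed equation above, the short script below (a sketch; the atomic masses are standard reference values, not taken from this article) verifies the mass balance and estimates the idealized carbon demand per tonne of iron.

```python
# Mass-balance check for Fe3O4 + 4 C + 2 O2 -> 3 Fe + 4 CO2, using standard
# atomic masses in g/mol (these values are not taken from the article).
Fe, O, C = 55.845, 15.999, 12.011

lhs = (3 * Fe + 4 * O) + 4 * C + 2 * (2 * O)        # Fe3O4 + 4 C + 2 O2
rhs = 3 * Fe + 4 * (C + 2 * O)                      # 3 Fe + 4 CO2
print(f"mass balance: {lhs:.2f} g vs {rhs:.2f} g")  # both ≈ 343.57 g

# Idealized carbon demand of the reaction: 4 mol C per 3 mol Fe.
carbon_per_tonne_iron = (4 * C) / (3 * Fe)
print(f"≈ {carbon_per_tonne_iron:.2f} t of carbon per tonne of iron")  # ≈ 0.29
```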
Carbon reacts with sulfur to form carbon disulfide, and it reacts with steam in the coal-gas reaction used in coal gasification:
C + H2O → CO + H2.
Carbon combines with some metals at high temperatures to form metallic carbides, such as the iron carbide cementite in steel and tungsten carbide, widely used as an abrasive and for making hard tips for cutting tools.
The system of carbon allotropes spans a range of extremes.
Allotropes
Atomic carbon is a very short-lived species and, therefore, carbon is stabilized in various multi-atomic structures with diverse molecular configurations called allotropes. The three relatively well-known allotropes of carbon are amorphous carbon, graphite, and diamond. Once considered exotic, fullerenes are nowadays commonly synthesized and used in research; they include buckyballs, carbon nanotubes, carbon nanobuds and nanofibers. Several other exotic allotropes have also been discovered, such as lonsdaleite, glassy carbon, carbon nanofoam and linear acetylenic carbon (carbyne).
Graphene is a two-dimensional sheet of carbon with the atoms arranged in a hexagonal lattice. As of 2009, graphene appears to be the strongest material ever tested. The process of separating it from graphite will require some further technological development before it is economical for industrial processes. If successful, graphene could be used in the construction of a space elevator. It could also be used to safely store hydrogen for use in a hydrogen based engine in cars.
The amorphous form is an assortment of carbon atoms in a non-crystalline, irregular, glassy state, not held in a crystalline macrostructure. It is present as a powder, and is the main constituent of substances such as charcoal, lampblack (soot), and activated carbon. At normal pressures, carbon takes the form of graphite, in which each atom is bonded trigonally to three others in a plane composed of fused hexagonal rings, just like those in aromatic hydrocarbons. The resulting network is 2-dimensional, and the resulting flat sheets are stacked and loosely bonded through weak van der Waals forces. This gives graphite its softness and its cleaving properties (the sheets slip easily past one another). Because of the delocalization of one of the outer electrons of each atom to form a π-cloud, graphite conducts electricity, but only in the plane of each covalently bonded sheet. This results in a lower bulk electrical conductivity for carbon than for most metals. The delocalization also accounts for the energetic stability of graphite over diamond at room temperature.
At very high pressures, carbon forms the more compact allotrope, diamond, having nearly twice the density of graphite. Here, each atom is bonded tetrahedrally to four others, forming a 3-dimensional network of puckered six-membered rings of atoms. Diamond has the same cubic structure as silicon and germanium, and because of the strength of the carbon-carbon bonds, it is the hardest naturally occurring substance measured by resistance to scratching. Contrary to the popular belief that "diamonds are forever", they are thermodynamically unstable (ΔfG°(diamond, 298 K) = 2.9 kJ/mol) under normal conditions (298 K, 10⁵ Pa) and should theoretically transform into graphite. But due to a high activation energy barrier, the transition into graphite is so slow at normal temperature that it is unnoticeable. However, at very high temperatures diamond will turn into graphite, and diamonds can burn up in a house fire. The bottom left corner of the phase diagram for carbon has not been scrutinized experimentally. Although a computational study employing density functional theory methods reached the conclusion that as T → 0 K and p → 0, diamond becomes more stable than graphite by approximately 1.1 kJ/mol, more recent and definitive experimental and computational studies show that graphite is more stable than diamond at low temperature and without applied pressure, by 2.7 kJ/mol at T = 0 K and 3.2 kJ/mol at T = 298.15 K. Under some conditions, carbon crystallizes as lonsdaleite, a hexagonal crystal lattice with all atoms covalently bonded and properties similar to those of diamond.
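To put the 2.9 kJ/mol figure above in perspective, a back-of-the-envelope calculation (not from the source) gives the equilibrium constant implied by that free-energy difference at room temperature; it is the high activation barrier, not the thermodynamics, that keeps diamonds from converting.

```python
import math

# Rough illustration (not from the source): the equilibrium constant for
# diamond -> graphite at 298 K implied by ΔG ≈ -2.9 kJ/mol, K = exp(-ΔG/(R*T)).
# A K modestly above 1 means graphite is favored, yet the conversion is
# unobservably slow because of the activation barrier described above.
R = 8.314          # gas constant, J/(mol·K)
T = 298.0          # temperature, K
delta_g = -2.9e3   # J/mol for diamond -> graphite (graphite lies lower)

K = math.exp(-delta_g / (R * T))
print(f"Equilibrium constant K ≈ {K:.2f}")  # ≈ 3.2
```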
Fullerenes are a synthetic crystalline formation with a graphite-like structure, but in place of flat hexagonal cells only, some of the cells of which fullerenes are formed may be pentagons, nonplanar hexagons, or even heptagons of carbon atoms. The sheets are thus warped into spheres, ellipses, or cylinders. The properties of fullerenes (split into buckyballs, buckytubes, and nanobuds) have not yet been fully analyzed and represent an intense area of research in nanomaterials. The names fullerene and buckyball are given after Richard Buckminster Fuller, popularizer of geodesic domes, which resemble the structure of fullerenes. The buckyballs are fairly large molecules formed completely of carbon bonded trigonally, forming spheroids (the best-known and simplest is the soccerball-shaped C60 buckminsterfullerene). Carbon nanotubes (buckytubes) are structurally similar to buckyballs, except that each atom is bonded trigonally in a curved sheet that forms a hollow cylinder. Nanobuds were first reported in 2007 and are hybrid buckytube/buckyball materials (buckyballs are covalently bonded to the outer wall of a nanotube) that combine the properties of both in a single structure.
Of the other discovered allotropes, carbon nanofoam is a ferromagnetic allotrope discovered in 1997. It consists of a low-density cluster-assembly of carbon atoms strung together in a loose three-dimensional web, in which the atoms are bonded trigonally in six- and seven-membered rings. It is among the lightest known solids, with a density of about 2 kg/m³. Similarly, glassy carbon contains a high proportion of closed porosity, but contrary to normal graphite, the graphitic layers are not stacked like pages in a book, but have a more random arrangement. Linear acetylenic carbon has the chemical structure (−C≡C−)n. Carbon in this modification is linear with sp orbital hybridization, and is a polymer with alternating single and triple bonds. This carbyne is of considerable interest to nanotechnology as its Young's modulus is 40 times that of the hardest known material – diamond.
In 2015, a team at the North Carolina State University announced the development of another allotrope they have dubbed Q-carbon, created by a high-energy low-duration laser pulse on amorphous carbon dust. Q-carbon is reported to exhibit ferromagnetism, fluorescence, and a hardness superior to diamonds.
In the vapor phase, some of the carbon is in the form of dicarbon (C2), a highly reactive diatomic species. When excited, this gas glows green.
Occurrence
Carbon is the fourth most abundant chemical element in the observable universe by mass after hydrogen, helium, and oxygen. Carbon is abundant in the Sun, stars, comets, and in the atmospheres of most planets. Some meteorites contain microscopic diamonds that were formed when the Solar System was still a protoplanetary disk. Microscopic diamonds may also be formed by the intense pressure and high temperature at the sites of meteorite impacts.
In 2014 NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. More than 20% of the carbon in the universe may be associated with PAHs, complex compounds of carbon and hydrogen without oxygen. These compounds figure in the PAH world hypothesis where they are hypothesized to have a role in abiogenesis and formation of life. PAHs seem to have been formed "a couple of billion years" after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
It has been estimated that the solid earth as a whole contains 730 ppm of carbon, with 2000 ppm in the core and 120 ppm in the combined mantle and crust. Since the mass of the Earth is about 5.97 × 10²⁴ kg, this would imply 4360 million gigatonnes of carbon. This is much more than the amount of carbon in the oceans or atmosphere (below).
In combination with oxygen in carbon dioxide, carbon is found in the Earth's atmosphere (approximately 900 gigatonnes of carbon — each ppm corresponds to 2.13 Gt) and dissolved in all water bodies (approximately 36,000 gigatonnes of carbon). Carbon in the biosphere has been estimated at 550 gigatonnes but with a large uncertainty, due mostly to a huge uncertainty in the amount of terrestrial deep subsurface bacteria. Hydrocarbons (such as coal, petroleum, and natural gas) contain carbon as well. Coal "reserves" (not "resources") amount to around 900 gigatonnes with perhaps 18,000 Gt of resources. Oil reserves are around 150 gigatonnes. Proven sources of natural gas contain about 105 gigatonnes of carbon, but studies estimate additional "unconventional" deposits such as shale gas, representing about another 540 gigatonnes of carbon.
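The inventory figures above are easy to cross-check. The short script below is a sketch that reuses the numbers quoted in this section (Earth's mass of about 5.97 × 10²⁴ kg, 730 ppm carbon by mass, 900 Gt of atmospheric carbon, and 2.13 Gt per ppm of CO2) to reproduce the whole-Earth estimate and the implied atmospheric CO2 concentration.

```python
# Cross-checking the carbon inventory figures quoted above (a sketch; the
# Earth-mass value is a standard constant).
earth_mass_kg = 5.97e24
carbon_ppm_whole_earth = 730                     # ppm by mass, as quoted

carbon_kg = earth_mass_kg * carbon_ppm_whole_earth * 1e-6
carbon_gt = carbon_kg / 1e12                     # 1 Gt = 1e12 kg
print(f"Whole-Earth carbon ≈ {carbon_gt / 1e6:,.0f} million Gt")  # ≈ 4,358, matching ~4,360 above

# Atmospheric carbon: ~900 Gt, with 2.13 Gt of carbon per ppm of CO2.
atmospheric_carbon_gt = 900
gt_per_ppm = 2.13
print(f"Implied CO2 concentration ≈ {atmospheric_carbon_gt / gt_per_ppm:.0f} ppm")  # ≈ 423
```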
Carbon is also found in methane hydrates in polar regions and under the seas. Various estimates put this carbon at 500, 2,500, or as much as 3,000 Gt.
According to one source, in the period from 1751 to 2008 about 347 gigatonnes of carbon were released as carbon dioxide to the atmosphere from burning of fossil fuels. Another source puts the amount added to the atmosphere for the period since 1750 at 879 Gt, and the total going to the atmosphere, sea, and land (such as peat bogs) at almost 2,000 Gt.
Carbon is a constituent (about 12% by mass) of the very large masses of carbonate rock (limestone, dolomite, marble, and others). Coal is very rich in carbon (anthracite contains 92–98%) and is the largest commercial source of mineral carbon, accounting for 4,000 gigatonnes or 80% of fossil fuel.
As for individual carbon allotropes, graphite is found in large quantities in the United States (mostly in New York and Texas), Russia, Mexico, Greenland, and India. Natural diamonds occur in the rock kimberlite, found in ancient volcanic "necks", or "pipes". Most diamond deposits are in Africa, notably in South Africa, Namibia, Botswana, the Republic of the Congo, and Sierra Leone. Diamond deposits have also been found in Arkansas, Canada, the Russian Arctic, Brazil, and in Northern and Western Australia. Diamonds are now also being recovered from the ocean floor off the Cape of Good Hope. Diamonds are found naturally, but about 30% of all industrial diamonds used in the U.S. are now manufactured.
Carbon-14 is formed in upper layers of the troposphere and the stratosphere at altitudes of 9–15 km by a reaction that is precipitated by cosmic rays. Thermal neutrons are produced that collide with the nuclei of nitrogen-14, forming carbon-14 and a proton. As such, only a tiny fraction (on the order of one part per trillion) of atmospheric carbon dioxide contains carbon-14.
Carbon-rich asteroids are relatively preponderant in the outer parts of the asteroid belt in the Solar System. These asteroids have not yet been directly sampled by scientists. The asteroids can be used in hypothetical space-based carbon mining, which may be possible in the future, but is currently technologically impossible.
Isotopes
Isotopes of carbon are atomic nuclei that contain six protons plus a number of neutrons (varying from 2 to 16). Carbon has two stable, naturally occurring isotopes. The isotope carbon-12 (¹²C) forms 98.93% of the carbon on Earth, while carbon-13 (¹³C) forms the remaining 1.07%. The concentration of ¹²C is further increased in biological materials because biochemical reactions discriminate against ¹³C. In 1961, the International Union of Pure and Applied Chemistry (IUPAC) adopted the isotope carbon-12 as the basis for atomic weights. Identification of carbon in nuclear magnetic resonance (NMR) experiments is done with the isotope ¹³C.
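The abundances just quoted are what give carbon its familiar standard atomic weight of about 12.011. The snippet below is a sketch of that weighted average; the isotopic masses (exactly 12 u for ¹²C by definition, about 13.00335 u for ¹³C) are standard values rather than figures from this article.

```python
# Weighted-average atomic mass of carbon from the isotopic abundances above.
# The isotopic masses are standard values (12 u exactly for 12C by definition,
# about 13.00335 u for 13C) and are not taken from this article.
abundances = {"C-12": 0.9893, "C-13": 0.0107}
masses_u   = {"C-12": 12.0,   "C-13": 13.00335}

atomic_weight = sum(abundances[iso] * masses_u[iso] for iso in abundances)
print(f"Standard atomic weight of carbon ≈ {atomic_weight:.3f} u")  # ≈ 12.011
```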
Carbon-14 (¹⁴C) is a naturally occurring radioisotope, created in the upper atmosphere (lower stratosphere and upper troposphere) by interaction of nitrogen with cosmic rays. It is found in trace amounts on Earth of 1 part per trillion (0.0000000001%) or more, mostly confined to the atmosphere and superficial deposits, particularly of peat and other organic materials. This isotope decays by 0.158 MeV β⁻ emission. Because of its relatively short half-life of 5,730 years, ¹⁴C is virtually absent in ancient rocks. The amount of ¹⁴C in the atmosphere and in living organisms is almost constant, but decreases predictably in their bodies after death. This principle is used in radiocarbon dating, invented in 1949, which has been used extensively to determine the age of carbonaceous materials with ages up to about 40,000 years.
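The decay law behind radiocarbon dating follows directly from the 5,730-year half-life: the age is t = t½ × log₂(N₀/N), where N/N₀ is the fraction of the original ¹⁴C remaining. The snippet below is a minimal sketch of that relationship and ignores the calibration corrections used in practice.

```python
import math

# Minimal radiocarbon-age sketch: invert N(t) = N0 * (1/2) ** (t / t_half),
# given the fraction of the original 14C remaining in a sample. Real dating
# applies calibration curves; this is only the textbook formula.
HALF_LIFE_YEARS = 5730

def radiocarbon_age(fraction_remaining):
    return HALF_LIFE_YEARS * math.log2(1.0 / fraction_remaining)

print(f"{radiocarbon_age(0.5):.0f} years")    # 5730, one half-life
print(f"{radiocarbon_age(0.25):.0f} years")   # 11460, two half-lives
print(f"{radiocarbon_age(0.01):.0f} years")   # ≈ 38069, near the ~40,000-year limit
```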
There are 15 known isotopes of carbon and the shortest-lived of these is ⁸C, which decays through proton emission and alpha decay and has a half-life of 1.98739 × 10⁻²¹ s. The exotic ¹⁹C exhibits a nuclear halo, which means its radius is appreciably larger than would be expected if the nucleus were a sphere of constant density.
Formation in stars
Formation of the carbon atomic nucleus occurs within a giant or supergiant star through the triple-alpha process. This requires a nearly simultaneous collision of three alpha particles (helium nuclei), as the products of further nuclear fusion reactions of helium with hydrogen or another helium nucleus produce lithium-5 and beryllium-8 respectively, both of which are highly unstable and decay almost instantly back into smaller nuclei. The triple-alpha process happens in conditions of temperatures over 100 megakelvins and helium concentration that the rapid expansion and cooling of the early universe prohibited, and therefore no significant carbon was created during the Big Bang.
According to current physical cosmology theory, carbon is formed in the interiors of stars on the horizontal branch. When massive stars die as supernovae, the carbon is scattered into space as dust. This dust becomes component material for the formation of the next-generation star systems with accreted planets. The Solar System is one such star system with an abundance of carbon, enabling the existence of life as we know it. It is the opinion of most scholars that all the carbon in the Solar System and the Milky Way comes from dying stars.
The CNO cycle is an additional hydrogen fusion mechanism that powers stars, wherein carbon operates as a catalyst.
Rotational transitions of various isotopic forms of carbon monoxide (for example, ¹²CO, ¹³CO, and C¹⁸O) are detectable in the submillimeter wavelength range, and are used in the study of newly forming stars in molecular clouds.
Carbon cycle
Under terrestrial conditions, conversion of one element to another is very rare. Therefore, the amount of carbon on Earth is effectively constant. Thus, processes that use carbon must obtain it from somewhere and dispose of it somewhere else. The paths of carbon in the environment form the carbon cycle. For example, photosynthetic plants draw carbon dioxide from the atmosphere (or seawater) and build it into biomass, as in the Calvin cycle, a process of carbon fixation. Some of this biomass is eaten by animals, while some carbon is exhaled by animals as carbon dioxide. The carbon cycle is considerably more complicated than this short loop; for example, some carbon dioxide is dissolved in the oceans; if bacteria do not consume it, dead plant or animal matter may become petroleum or coal, which releases carbon when burned.
Compounds
Organic compounds
Carbon can form very long chains of interconnecting carbon–carbon bonds, a property that is called catenation. Carbon-carbon bonds are strong and stable. Through catenation, carbon forms a countless number of compounds. A tally of unique compounds shows that more contain carbon than do not. A similar claim can be made for hydrogen because most organic compounds contain hydrogen chemically bonded to carbon or another common element like oxygen or nitrogen.
The simplest form of an organic molecule is the hydrocarbon—a large family of organic molecules that are composed of hydrogen atoms bonded to a chain of carbon atoms. A hydrocarbon backbone can be substituted by other atoms, known as heteroatoms. Common heteroatoms that appear in organic compounds include oxygen, nitrogen, sulfur, phosphorus, and the nonradioactive halogens, as well as the metals lithium and magnesium. Organic compounds containing bonds to metal are known as organometallic compounds (see below). Certain groupings of atoms, often including heteroatoms, recur in large numbers of organic compounds. These collections, known as functional groups, confer common reactivity patterns and allow for the systematic study and categorization of organic compounds. Chain length, shape and functional groups all affect the properties of organic molecules.
In most stable compounds of carbon (and nearly all stable organic compounds), carbon obeys the octet rule and is tetravalent, meaning that a carbon atom forms a total of four covalent bonds (which may include double and triple bonds). Exceptions include a small number of stabilized carbocations (three bonds, positive charge), radicals (three bonds, neutral), carbanions (three bonds, negative charge) and carbenes (two bonds, neutral), although these species are much more likely to be encountered as unstable, reactive intermediates.
Carbon occurs in all known organic life and is the basis of organic chemistry. When united with hydrogen, it forms various hydrocarbons that are important to industry as refrigerants, lubricants, solvents, as chemical feedstock for the manufacture of plastics and petrochemicals, and as fossil fuels.
When combined with oxygen and hydrogen, carbon can form many groups of important biological compounds including sugars, lignans, chitins, alcohols, fats, aromatic esters, carotenoids and terpenes. With nitrogen it forms alkaloids, and with the addition of sulfur also it forms antibiotics, amino acids, and rubber products. With the addition of phosphorus to these other elements, it forms DNA and RNA, the chemical-code carriers of life, and adenosine triphosphate (ATP), the most important energy-transfer molecule in all living cells. Norman Horowitz, head of the Mariner and Viking missions to Mars (1965-1976), considered that the unique characteristics of carbon made it unlikely that any other element could replace carbon, even on another planet, to generate the biochemistry necessary for life.
Inorganic compounds
Commonly carbon-containing compounds which are associated with minerals or which do not contain bonds to other carbon atoms, halogens, or hydrogen, are treated separately from classical organic compounds; the definition is not rigid, and the classification of some compounds can vary from author to author (see reference articles above). Among these are the simple oxides of carbon. The most prominent oxide is carbon dioxide (CO2). This was once the principal constituent of the paleoatmosphere, but is a minor component of the Earth's atmosphere today. Dissolved in water, it forms carbonic acid (H2CO3), but, like most compounds with multiple single-bonded oxygens on a single carbon, it is unstable. Through this intermediate, though, resonance-stabilized carbonate ions are produced. Some important minerals are carbonates, notably calcite. Carbon disulfide (CS2) is similar. Nevertheless, due to its physical properties and its association with organic synthesis, carbon disulfide is sometimes classified as an organic solvent.
The other common oxide is carbon monoxide (CO). It is formed by incomplete combustion, and is a colorless, odorless gas. The molecules each contain a triple bond and are fairly polar, resulting in a tendency to bind permanently to hemoglobin molecules, displacing oxygen, which has a lower binding affinity. Cyanide (CN−) has a similar structure, but behaves much like a halide ion (pseudohalogen). For example, it can form the nitride cyanogen molecule ((CN)2), similar to diatomic halides. Likewise, the heavier analog of cyanide, cyaphide (CP−), is also considered inorganic, though most simple derivatives are highly unstable. Other uncommon oxides are carbon suboxide (C3O2), the unstable dicarbon monoxide (C2O), carbon trioxide (CO3), cyclopentanepentone (C5O5), cyclohexanehexone (C6O6), and mellitic anhydride (C12O9). However, mellitic anhydride is the triple acyl anhydride of mellitic acid; moreover, it contains a benzene ring. Thus, many chemists consider it to be organic.
With reactive metals, such as tungsten, carbon forms either carbides (C⁴⁻) or acetylides (C₂²⁻) to give alloys with high melting points. These anions are also associated with methane and acetylene, both very weak acids. With an electronegativity of 2.5, carbon prefers to form covalent bonds. A few carbides are covalent lattices, like carborundum (SiC), which resembles diamond. Nevertheless, even the most polar and salt-like of carbides are not completely ionic compounds.
Organometallic compounds
Organometallic compounds by definition contain at least one carbon-metal covalent bond. A wide range of such compounds exist; major classes include simple alkyl-metal compounds (for example, tetraethyllead), η²-alkene compounds (for example, Zeise's salt), and η³-allyl compounds (for example, allylpalladium chloride dimer); metallocenes containing cyclopentadienyl ligands (for example, ferrocene); and transition metal carbene complexes. Many metal carbonyls and metal cyanides exist (for example, tetracarbonylnickel and potassium ferricyanide); some workers consider metal carbonyl and cyanide complexes without other carbon ligands to be purely inorganic, and not organometallic. However, most organometallic chemists consider metal complexes with any carbon ligand, even 'inorganic carbon' (e.g., carbonyls, cyanides, and certain types of carbides and acetylides) to be organometallic in nature. Metal complexes containing organic ligands without a carbon-metal covalent bond (e.g., metal carboxylates) are termed metalorganic compounds.
While carbon is understood to strongly prefer formation of four covalent bonds, other exotic bonding schemes are also known. Carboranes are highly stable dodecahedral derivatives of the [B12H12]2- unit, with one BH replaced with a CH+. Thus, the carbon is bonded to five boron atoms and one hydrogen atom. The cation [(Ph3PAu)6C]2+ contains an octahedral carbon bound to six phosphine-gold fragments. This phenomenon has been attributed to the aurophilicity of the gold ligands, which provide additional stabilization of an otherwise labile species. In nature, the iron-molybdenum cofactor (FeMoco) responsible for microbial nitrogen fixation likewise has an octahedral carbon center (formally a carbide, C(-IV)) bonded to six iron atoms. In 2016, it was confirmed that, in line with earlier theoretical predictions, the hexamethylbenzene dication contains a carbon atom with six bonds. More specifically, the dication could be described structurally by the formulation [MeC(η5-C5Me5)]2+, making it an "organic metallocene" in which a MeC3+ fragment is bonded to a η5-C5Me5− fragment through all five of the carbons of the ring.
In the cases above, each of the bonds to carbon contains fewer than two formal electron pairs. Thus, the formal electron count of these species does not exceed an octet. This makes them hypercoordinate but not hypervalent. Even in cases of alleged 10-C-5 species (that is, a carbon with five ligands and a formal electron count of ten), as reported by Akiba and co-workers, electronic structure calculations conclude that the electron population around carbon is still less than eight, as is true for other compounds featuring four-electron three-center bonding.
History and etymology
The English name carbon comes from the Latin carbo for coal and charcoal, whence also comes the French charbon, meaning charcoal. In German, Dutch and Danish, the names for carbon are Kohlenstoff, koolstof, and kulstof respectively, all literally meaning coal-substance.
Carbon was discovered in prehistory and was known in the forms of soot and charcoal to the earliest human civilizations. Diamonds were known probably as early as 2500 BCE in China, while carbon in the form of charcoal was made around Roman times by the same chemistry as it is today, by heating wood in a pyramid covered with clay to exclude air.
In 1722, René Antoine Ferchault de Réaumur demonstrated that iron was transformed into steel through the absorption of some substance, now known to be carbon. In 1772, Antoine Lavoisier showed that diamonds are a form of carbon: he burned samples of charcoal and diamond and found that neither produced any water and that both released the same amount of carbon dioxide per gram. In 1779, Carl Wilhelm Scheele showed that graphite, which had been thought of as a form of lead, was instead identical with charcoal but with a small admixture of iron, and that it gave "aerial acid" (his name for carbon dioxide) when oxidized with nitric acid. In 1786, the French scientists Claude Louis Berthollet, Gaspard Monge and C. A. Vandermonde confirmed that graphite was mostly carbon by oxidizing it in oxygen in much the same way Lavoisier had done with diamond. Some iron again was left, which the French scientists thought was necessary to the graphite structure. In their publication they proposed the name carbone (Latin carbonum) for the element in graphite which was given off as a gas upon burning graphite. Antoine Lavoisier then listed carbon as an element in his 1789 textbook.
A new allotrope of carbon, fullerene, discovered in 1985, includes nanostructured forms such as buckyballs and nanotubes. Their discoverers – Robert Curl, Harold Kroto, and Richard Smalley – received the Nobel Prize in Chemistry in 1996. The resulting renewed interest in new forms led to the discovery of further exotic allotropes, including glassy carbon, and the realization that "amorphous carbon" is not strictly amorphous.
Production
Graphite
Commercially viable natural deposits of graphite occur in many parts of the world, but the most important sources economically are in China, India, Brazil, and North Korea. Graphite deposits are of metamorphic origin, found in association with quartz, mica, and feldspars in schists, gneisses, and metamorphosed sandstones and limestone as lenses or veins, sometimes of a metre or more in thickness. Deposits of graphite in Borrowdale, Cumberland, England were at first of sufficient size and purity that, until the 19th century, pencils were made by sawing blocks of natural graphite into strips before encasing the strips in wood. Today, smaller deposits of graphite are obtained by crushing the parent rock and floating the lighter graphite out on water.
There are three types of natural graphite—amorphous, flake or crystalline flake, and vein or lump. Amorphous graphite is the lowest quality and most abundant. Contrary to the scientific use of the term, in industry "amorphous" refers to very small crystal size rather than complete lack of crystal structure. Amorphous graphite is used for lower value graphite products and is the lowest priced graphite. Large amorphous graphite deposits are found in China, Europe, Mexico and the United States. Flake graphite is less common and of higher quality than amorphous; it occurs as separate plates that crystallized in metamorphic rock. Flake graphite can be four times the price of amorphous. Good quality flakes can be processed into expandable graphite for many uses, such as flame retardants. The foremost deposits are found in Austria, Brazil, Canada, China, Germany and Madagascar. Vein or lump graphite is the rarest, most valuable, and highest quality type of natural graphite. It occurs in veins along intrusive contacts in solid lumps, and it is only commercially mined in Sri Lanka.
According to the USGS, world production of natural graphite was 1.1 million tonnes in 2010, to which China contributed 800,000 t, India 130,000 t, Brazil 76,000 t, North Korea 30,000 t and Canada 25,000 t. No natural graphite was reported mined in the United States, but 118,000 t of synthetic graphite with an estimated value of $998 million was produced in 2009.
Diamond
The diamond supply chain is controlled by a limited number of powerful businesses, and is also highly concentrated in a small number of locations around the world (see figure).
Only a very small fraction of the diamond ore consists of actual diamonds. The ore is crushed, with care taken to prevent larger diamonds from being destroyed in the process, and the particles are then sorted by density. Today, diamonds are located in the diamond-rich density fraction with the help of X-ray fluorescence, after which the final sorting steps are done by hand. Before the use of X-rays became commonplace, the separation was done with grease belts; diamonds have a stronger tendency to stick to grease than the other minerals in the ore.
Historically diamonds were known to be found only in alluvial deposits in southern India. India led the world in diamond production from the time of their discovery in approximately the 9th century BC to the mid-18th century AD, but the commercial potential of these sources had been exhausted by the late 18th century and at that time India was eclipsed by Brazil where the first non-Indian diamonds were found in 1725.
Diamond production of primary deposits (kimberlites and lamproites) only started in the 1870s after the discovery of the diamond fields in South Africa. Production has increased over time and an accumulated total of over 4.5 billion carats have been mined since that date. Most commercially viable diamond deposits were in Russia, Botswana, Australia and the Democratic Republic of Congo. By 2005, Russia produced almost one-fifth of the global diamond output (mostly in Yakutia territory; for example, Mir pipe and Udachnaya pipe) but the Argyle mine in Australia became the single largest source, producing 14 million carats in 2018. New finds, the Canadian mines at Diavik and Ekati, are expected to become even more valuable owing to their production of gem quality stones.
In the United States, diamonds have been found in Arkansas, Colorado, and Montana. In 2004, a startling discovery of a microscopic diamond in the United States led to the January 2008 bulk-sampling of kimberlite pipes in a remote part of Montana.
Applications
Carbon is essential to all known living systems, and without it life as we know it could not exist (see alternative biochemistry). The major economic use of carbon other than food and wood is in the form of hydrocarbons, most notably the fossil fuel methane gas and crude oil (petroleum). Crude oil is distilled in refineries by the petrochemical industry to produce gasoline, kerosene, and other products. Cellulose is a natural, carbon-containing polymer produced by plants in the form of wood, cotton, linen, and hemp. Cellulose is used primarily for maintaining structure in plants. Commercially valuable carbon polymers of animal origin include wool, cashmere, and silk. Plastics are made from synthetic carbon polymers, often with oxygen and nitrogen atoms included at regular intervals in the main polymer chain. The raw materials for many of these synthetic substances come from crude oil.
The uses of carbon and its compounds are extremely varied. It can form alloys with iron, of which the most common is carbon steel. Graphite is combined with clays to form the 'lead' used in pencils used for writing and drawing. It is also used as a lubricant and a pigment, as a molding material in glass manufacture, in electrodes for dry batteries and in electroplating and electroforming, in brushes for electric motors, and as a neutron moderator in nuclear reactors.
Charcoal is used as a drawing material in artwork, barbecue grilling, iron smelting, and in many other applications. Wood, coal and oil are used as fuel for production of energy and heating. Gem quality diamond is used in jewelry, and industrial diamonds are used in drilling, cutting and polishing tools for machining metals and stone. Plastics are made from fossil hydrocarbons, and carbon fiber, made by pyrolysis of synthetic polyester fibers, is used to reinforce plastics to form advanced, lightweight composite materials.
Carbon fiber is made by pyrolysis of extruded and stretched filaments of polyacrylonitrile (PAN) and other organic substances. The crystallographic structure and mechanical properties of the fiber depend on the type of starting material, and on the subsequent processing. Carbon fibers made from PAN have structure resembling narrow filaments of graphite, but thermal processing may re-order the structure into a continuous rolled sheet. The result is fibers with higher specific tensile strength than steel.
Carbon black is used as the black pigment in printing ink, artist's oil paint, and water colours, carbon paper, automotive finishes, India ink and laser printer toner. Carbon black is also used as a filler in rubber products such as tyres and in plastic compounds. Activated charcoal is used as an absorbent and adsorbent in filter material in applications as diverse as gas masks, water purification, and kitchen extractor hoods, and in medicine to absorb toxins, poisons, or gases from the digestive system. Carbon is used in chemical reduction at high temperatures. Coke is used to reduce iron ore into iron (smelting). Case hardening of steel is achieved by heating finished steel components in carbon powder. Carbides of silicon, tungsten, boron, and titanium are among the hardest known materials, and are used as abrasives in cutting and grinding tools. Carbon compounds make up most of the materials used in clothing, such as natural and synthetic textiles and leather, and almost all of the interior surfaces in the built environment other than glass, stone, drywall and metal.
Diamonds
The diamond industry falls into two categories: one dealing with gem-grade diamonds and the other, with industrial-grade diamonds. While a large trade in both types of diamonds exists, the two markets function dramatically differently.
Unlike precious metals such as gold or platinum, gem diamonds do not trade as a commodity: there is a substantial mark-up in the sale of diamonds, and there is not a very active market for resale of diamonds.
Industrial diamonds are valued mostly for their hardness and heat conductivity, with the gemological qualities of clarity and color being mostly irrelevant. About 80% of mined diamonds (equal to about 100 million carats or 20 tonnes annually) are unsuitable for use as gemstones and relegated for industrial use (known as bort). Synthetic diamonds, invented in the 1950s, found almost immediate industrial applications; 3 billion carats (600 tonnes) of synthetic diamond is produced annually.
The dominant industrial use of diamond is in cutting, drilling, grinding, and polishing. Most of these applications do not require large diamonds; in fact, most diamonds of gem-quality except for their small size can be used industrially. Diamonds are embedded in drill tips or saw blades, or ground into a powder for use in grinding and polishing applications. Specialized applications include use in laboratories as containment for high-pressure experiments (see diamond anvil cell), high-performance bearings, and limited use in specialized windows. With the continuing advances in the production of synthetic diamonds, new applications are becoming feasible. Garnering much excitement is the possible use of diamond as a semiconductor suitable for microchips, and because of its exceptional heat conductance property, as a heat sink in electronics.
Precautions
Pure carbon has extremely low toxicity to humans and can be handled safely in the form of graphite or charcoal. It is resistant to dissolution or chemical attack, even in the acidic contents of the digestive tract. Consequently, once it enters into the body's tissues it is likely to remain there indefinitely. Carbon black was probably one of the first pigments to be used for tattooing, and Ötzi the Iceman was found to have carbon tattoos that survived during his life and for 5200 years after his death. Inhalation of coal dust or soot (carbon black) in large quantities can be dangerous, irritating lung tissues and causing the congestive lung disease, coalworker's pneumoconiosis. Diamond dust used as an abrasive can be harmful if ingested or inhaled. Microparticles of carbon are produced in diesel engine exhaust fumes, and may accumulate in the lungs. In these examples, the harm may result from contaminants (e.g., organic chemicals, heavy metals) rather than from the carbon itself.
Carbon generally has low toxicity to life on Earth, but carbon nanoparticles are deadly to Drosophila.
Carbon may burn vigorously and brightly in the presence of air at high temperatures. Large accumulations of coal, which have remained inert for hundreds of millions of years in the absence of oxygen, may spontaneously combust when exposed to air in coal mine waste tips, ship cargo holds and coal bunkers, and storage dumps.
In nuclear applications where graphite is used as a neutron moderator, accumulation of Wigner energy followed by a sudden, spontaneous release may occur. Annealing to at least 250 °C can release the energy safely, although in the Windscale fire the procedure went wrong, causing other reactor materials to combust.
The great variety of carbon compounds include such lethal poisons as tetrodotoxin, the lectin ricin from seeds of the castor oil plant Ricinus communis, cyanide (CN), and carbon monoxide; and such essentials to life as glucose and protein.
See also
Carbon chauvinism
Carbon detonation
Carbon footprint
Carbon star
Carbon planet
Gas carbon
Low-carbon economy
Timeline of carbon nanotubes
References
Bibliography
External links
Carbon at The Periodic Table of Videos (University of Nottingham)
Carbon on Britannica
Extensive Carbon page at asu.edu (archived 18 June 2010)
Electrochemical uses of carbon (archived 9 November 2001)
Carbon—Super Stuff. Animation with sound and interactive 3D-models. (archived 9 November 2012)
Allotropes of carbon
Chemical elements with hexagonal planar structure
Chemical elements
Native element minerals
Polyatomic nonmetals
Reactive nonmetals
Reducing agents
|
https://en.wikipedia.org/wiki/Combination
|
In mathematics, a combination is a selection of items from a set that has distinct members, such that the order of selection does not matter (unlike permutations). For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally, a k-combination of a set S is a subset of k distinct elements of S. So, two combinations are identical if and only if each combination has the same members. (The arrangement of the members in each set does not matter.) If the set has n elements, the number of k-combinations, denoted by C(n, k) or \binom{n}{k}, is equal to the binomial coefficient
\binom{n}{k} = \frac{n(n-1)\cdots(n-k+1)}{k(k-1)\cdots 1},
which can be written using factorials as \frac{n!}{k!(n-k)!} whenever k \le n, and which is zero when k > n. This formula can be derived from the fact that each k-combination of a set S of n members has k! permutations, so the number of k-permutations of n, P(n, k), equals \binom{n}{k} \, k!, or \binom{n}{k} = P(n, k)/k!. The set of all k-combinations of a set S is often denoted by \binom{S}{k}.
A k-combination is also referred to as a combination of n things taken k at a time without repetition. To refer to combinations in which repetition is allowed, the terms k-combination with repetition, k-multiset, or k-selection, are often used. If, in the above example, it were possible to have two of any one kind of fruit there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears.
Although the set of three fruits was small enough to write a complete list of combinations, this becomes impractical as the size of the set increases. For example, a poker hand can be described as a 5-combination (k = 5) of cards from a 52 card deck (n = 52). The 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1 / 2,598,960.
Number of k-combinations
The number of k-combinations from a given set S of n elements is often denoted in elementary combinatorics texts by C(n, k), or by a variation such as C^n_k, {}_nC_k, {}^nC_k, or even C_n^k (the last form is standard in French, Romanian, Russian, Chinese and Polish texts). The same number however occurs in many other mathematical contexts, where it is denoted by \binom{n}{k} (often read as "n choose k"); notably it occurs as a coefficient in the binomial formula, hence its name binomial coefficient. One can define \binom{n}{k} for all natural numbers k at once by the relation
(1 + X)^n = \sum_{k \ge 0} \binom{n}{k} X^k,
from which it is clear that
\binom{n}{0} = \binom{n}{n} = 1,
and further,
\binom{n}{k} = 0
for k > n.
To see that these coefficients count k-combinations from S, one can first consider a collection of n distinct variables X_s labeled by the elements s of S, and expand the product over all elements of S:
\prod_{s \in S} (1 + X_s);
it has 2^n distinct terms corresponding to all the subsets of S, each subset giving the product of the corresponding variables X_s. Now setting all of the X_s equal to the unlabeled variable X, so that the product becomes (1 + X)^n, the term for each k-combination from S becomes X^k, so that the coefficient of that power in the result equals the number of such k-combinations.
Binomial coefficients can be computed explicitly in various ways. To get all of them for the expansions up to (1 + X)^n, one can use (in addition to the basic cases already given) the recursion relation
\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k},
for 0 < k < n, which follows from (1 + X)^n = (1 + X)^{n-1} (1 + X); this leads to the construction of Pascal's triangle.
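As an illustration of this recursion, the following Python sketch (not part of the original article; the function name pascal_rows is arbitrary) builds the first few rows of Pascal's triangle:

```python
def pascal_rows(n_max):
    """Build rows 0..n_max of Pascal's triangle using the recursion
    C(n, k) = C(n-1, k-1) + C(n-1, k), with C(n, 0) = C(n, n) = 1."""
    rows = [[1]]
    for n in range(1, n_max + 1):
        prev = rows[-1]
        row = [1]                      # C(n, 0) = 1
        for k in range(1, n):
            row.append(prev[k - 1] + prev[k])
        row.append(1)                  # C(n, n) = 1
        rows.append(row)
    return rows

print(pascal_rows(5)[5])  # [1, 5, 10, 10, 5, 1]
```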
For determining an individual binomial coefficient, it is more practical to use the formula
\binom{n}{k} = \frac{n(n-1)(n-2)\cdots(n-k+1)}{k!}.
The numerator gives the number of k-permutations of n, i.e., of sequences of k distinct elements of S, while the denominator gives the number of such k-permutations that give the same k-combination when the order is ignored.
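A short Python sketch of this multiplicative formula (an illustrative helper, not taken from the article; Python's standard library also offers math.comb for the same purpose):

```python
def binomial(n, k):
    """Compute C(n, k) as n(n-1)...(n-k+1) / k!, using the symmetry
    C(n, k) = C(n, n-k) to keep the number of factors small."""
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)
    numerator = 1
    denominator = 1
    for i in range(k):
        numerator *= n - i        # n, n-1, ..., n-k+1
        denominator *= i + 1      # 1, 2, ..., k
    return numerator // denominator   # exact: k! divides the numerator

assert binomial(52, 5) == 2598960
```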
When k exceeds n/2, the above formula contains factors common to the numerator and the denominator, and canceling them out gives the relation
\binom{n}{k} = \binom{n}{n-k},
for 0 ≤ k ≤ n. This expresses a symmetry that is evident from the binomial formula, and can also be understood in terms of k-combinations by taking the complement of such a combination, which is an (n − k)-combination.
Finally there is a formula which exhibits this symmetry directly, and has the merit of being easy to remember:
\binom{n}{k} = \frac{n!}{k!(n-k)!},
where n! denotes the factorial of n. It is obtained from the previous formula by multiplying denominator and numerator by (n − k)!, so it is certainly computationally less efficient than that formula.
The last formula can be understood directly, by considering the n! permutations of all the elements of S. Each such permutation gives a k-combination by selecting its first k elements. There are many duplicate selections: any combined permutation of the first k elements among each other, and of the final (n − k) elements among each other produces the same combination; this explains the division in the formula.
From the above formulas follow relations between adjacent numbers in Pascal's triangle in all three directions:
\binom{n}{k} = \binom{n}{k-1} \frac{n-k+1}{k} for k > 0,
\binom{n}{k} = \binom{n-1}{k} \frac{n}{n-k} for k < n,
\binom{n}{k} = \binom{n-1}{k-1} \frac{n}{k} for k > 0.
Together with the basic cases \binom{n}{0} = 1 = \binom{n}{n}, these allow successive computation of respectively all numbers of combinations from the same set (a row in Pascal's triangle), of k-combinations of sets of growing sizes, and of combinations with a complement of fixed size n − k.
Example of counting combinations
As a specific example, one can compute the number of five-card hands possible from a standard fifty-two card deck as:
\binom{52}{5} = \frac{52 \times 51 \times 50 \times 49 \times 48}{5 \times 4 \times 3 \times 2 \times 1} = \frac{311875200}{120} = 2598960.
Alternatively one may use the formula in terms of factorials and cancel the factors in the numerator against parts of the factors in the denominator, after which only multiplication of the remaining factors is required:
\binom{52}{5} = \frac{52!}{5!\,47!} = \frac{48 \times 49 \times 50 \times 51 \times 52}{1 \times 2 \times 3 \times 4 \times 5} = 2598960.
Another alternative computation, equivalent to the first, is based on writing
\binom{n}{k} = \frac{n-0}{1} \times \frac{n-1}{2} \times \frac{n-2}{3} \times \cdots \times \frac{n-(k-1)}{k},
which gives
\binom{52}{5} = \frac{52}{1} \times \frac{51}{2} \times \frac{50}{3} \times \frac{49}{4} \times \frac{48}{5} = 2598960.
When evaluated in the following order, 52 ÷ 1 × 51 ÷ 2 × 50 ÷ 3 × 49 ÷ 4 × 48 ÷ 5, this can be computed using only integer arithmetic. The reason is that when each division occurs, the intermediate result that is produced is itself a binomial coefficient, so no remainders ever occur.
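A minimal Python sketch of this evaluation order, checking at each step that the division leaves no remainder (the variable names are illustrative only):

```python
n, k = 52, 5
result = 1
for i in range(1, k + 1):
    result *= n - i + 1        # multiply by 52, 51, 50, 49, 48 in turn
    assert result % i == 0     # each division is exact: the quotient is C(n, i)
    result //= i               # divide by 1, 2, 3, 4, 5 in turn
print(result)  # 2598960
```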
Using the symmetric formula in terms of factorials without performing simplifications gives a rather extensive calculation:
\binom{52}{5} = \frac{52!}{5!\,47!} = 2598960,
which requires computing the very large numbers 52! and 47! in full.
Enumerating k-combinations
One can enumerate all k-combinations of a given set S of n elements in some fixed order, which establishes a bijection from an interval of integers with the set of those k-combinations. Assuming S is itself ordered, for instance S = { 1, 2, ..., n }, there are two natural possibilities for ordering its k-combinations: by comparing their smallest elements first (as in the illustrations above) or by comparing their largest elements first. The latter option has the advantage that adding a new largest element to S will not change the initial part of the enumeration, but just add the new k-combinations of the larger set after the previous ones. Repeating this process, the enumeration can be extended indefinitely with k-combinations of ever larger sets. If moreover the intervals of the integers are taken to start at 0, then the k-combination at a given place i in the enumeration can be computed easily from i, and the bijection so obtained is known as the combinatorial number system. It is also known as "rank"/"ranking" and "unranking" in computational mathematics.
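As an illustration, the following Python sketch (an assumed implementation, not taken from the article) unranks a 0-based position i into the corresponding k-combination of non-negative integers under the largest-elements-first ordering described above; math.comb is the standard-library binomial coefficient:

```python
from math import comb

def unrank_combination(i, k):
    """Return the k-combination (sorted list of non-negative integers)
    at 0-based position i in the ordering that compares largest
    elements first (the combinatorial number system)."""
    combo = []
    for j in range(k, 0, -1):
        c = j - 1
        while comb(c + 1, j) <= i:   # largest c with comb(c, j) <= i
            c += 1
        combo.append(c)
        i -= comb(c, j)
    return combo[::-1]

print([unrank_combination(i, 3) for i in range(4)])
# [[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]]
```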
There are many ways to enumerate k-combinations. One way is to visit all the binary numbers less than 2^n and choose those numbers having k nonzero bits, although this is very inefficient even for small n (e.g. n = 20 would require visiting about one million numbers while the maximum number of allowed k-combinations is about 186 thousand for k = 10). The positions of these 1 bits in such a number give a specific k-combination of the set { 1, ..., n }. Another simple, faster way is to track k index numbers of the elements selected, starting with {0 .. k−1} (zero-based) or {1 .. k} (one-based) as the first allowed k-combination, and then repeatedly moving to the next allowed k-combination by incrementing either the last index number, if it is lower than n − 1 (zero-based) or n (one-based), or otherwise the last index number x that is less than the index number following it minus one, if such an index exists, and resetting the index numbers after x to {x+1, x+2, ...}.
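A minimal Python sketch of the second, index-tracking method (zero-based), written as a generator; it is one illustrative reading of the procedure described above rather than a canonical implementation:

```python
def k_combinations(n, k):
    """Yield all k-combinations of {0, ..., n-1} as sorted index lists,
    starting from [0, ..., k-1] and repeatedly advancing the rightmost
    index that can still be incremented."""
    indices = list(range(k))
    while True:
        yield list(indices)
        i = k - 1
        while i >= 0 and indices[i] == n - k + i:   # index already at its maximum
            i -= 1
        if i < 0:
            return                                  # last combination reached
        indices[i] += 1
        for j in range(i + 1, k):                   # reset the following indices
            indices[j] = indices[j - 1] + 1

print(list(k_combinations(4, 2)))
# [[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]
```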
Number of combinations with repetition
A k-combination with repetitions, or k-multicombination, or multisubset of size k from a set S of size n is given by a set of k not necessarily distinct elements of S, where order is not taken into account: two sequences define the same multiset if one can be obtained from the other by permuting the terms. In other words, it is a sample of k elements from a set of n elements allowing for duplicates (i.e., with replacement) but disregarding different orderings (e.g. {2,1,2} = {1,2,2}). Associate an index to each element of S and think of the elements of S as types of objects, then we can let x_i denote the number of elements of type i in a multisubset. The number of multisubsets of size k is then the number of nonnegative integer (so allowing zero) solutions of the Diophantine equation:
x_1 + x_2 + \cdots + x_n = k.
If S has n elements, the number of such k-multisubsets is denoted by \left(\binom{n}{k}\right),
a notation that is analogous to the binomial coefficient which counts k-subsets. This expression, n multichoose k, can also be given in terms of binomial coefficients:
\left(\binom{n}{k}\right) = \binom{n + k - 1}{k}.
This relationship can be easily proved using a representation known as stars and bars.
A solution of the above Diophantine equation can be represented by x_1 stars, a separator (a bar), then x_2 more stars, another separator, and so on. The total number of stars in this representation is k and the number of bars is n − 1 (since a separation into n parts needs n − 1 separators). Thus, a string of k + n − 1 (or n + k − 1) symbols (stars and bars) corresponds to a solution if there are k stars in the string. Any solution can be represented by choosing k out of the k + n − 1 positions to place stars and filling the remaining positions with bars. For example, one solution of the equation x_1 + x_2 + x_3 + x_4 = 10 (n = 4 and k = 10), say x_1 = 3, x_2 = 2, x_3 = 0, x_4 = 5, can be represented by
* * * | * * | | * * * * *
The number of such strings is the number of ways to place 10 stars in 13 positions, \binom{13}{10} = \binom{13}{3} = 286, which is the number of 10-multisubsets of a set with 4 elements.
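The count of 286 can be checked by brute force; the following Python sketch (an illustrative check, not part of the article) counts the ways to choose which k of the n + k − 1 positions hold stars:

```python
from itertools import combinations
from math import comb

n, k = 4, 10
length = n + k - 1                     # 13 positions of stars and bars
# choose which k of the positions hold stars; the rest hold bars
star_patterns = sum(1 for _ in combinations(range(length), k))
assert star_patterns == comb(n + k - 1, k) == 286
```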
As with binomial coefficients, there are several relationships between these multichoose expressions. For example, for n \ge 1, k \ge 0,
\left(\binom{n}{k}\right) = \left(\binom{k+1}{n-1}\right).
This identity follows from interchanging the stars and bars in the above representation.
Example of counting multisubsets
For example, if you have four types of donuts (n = 4) on a menu to choose from and you want three donuts (k = 3), the number of ways to choose the donuts with repetition can be calculated as
\left(\binom{4}{3}\right) = \binom{4 + 3 - 1}{3} = \binom{6}{3} = 20.
This result can be verified by listing all the 3-multisubsets of the set S = {1, 2, 3, 4}: for each one, one can tabulate the donuts actually chosen, the corresponding nonnegative integer solution of the equation x_1 + x_2 + x_3 + x_4 = 3, and the stars and bars representation of that solution.
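The same 20 selections can be generated directly with Python's standard library; this brief verification sketch is illustrative only and the variable names are arbitrary:

```python
from itertools import combinations_with_replacement
from math import comb

donut_types = [1, 2, 3, 4]   # n = 4 types on the menu
# all multisubsets of size k = 3, i.e. selections with repetition allowed
selections = list(combinations_with_replacement(donut_types, 3))
assert len(selections) == comb(4 + 3 - 1, 3) == 20
```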
Number of k-combinations for all k
The number of k-combinations for all k is the number of subsets of a set of n elements. There are several ways to see that this number is 2^n. In terms of combinations, \sum_{0 \le k \le n} \binom{n}{k} = 2^n, which is the sum of the nth row (counting from 0) of the binomial coefficients in Pascal's triangle. These combinations (subsets) are enumerated by the 1 digits of the set of base 2 numbers counting from 0 to 2^n − 1, where each digit position is an item from the set of n.
Given 3 cards numbered 1 to 3, there are 8 distinct combinations (subsets), including the empty set: {}, {1}, {2}, {1, 2}, {3}, {1, 3}, {2, 3}, {1, 2, 3} (listed here so that card i corresponds to the i-th binary digit from the right).
Representing these subsets (in the same order) as base 2 numerals:
0 – 000
1 – 001
2 – 010
3 – 011
4 – 100
5 – 101
6 – 110
7 – 111
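A short Python sketch of this bitmask correspondence (illustrative only), using the convention above that card i corresponds to the i-th binary digit from the right:

```python
cards = [1, 2, 3]
n = len(cards)
for mask in range(2 ** n):                     # numerals 0 .. 2^n - 1
    subset = [cards[i] for i in range(n) if mask & (1 << i)]
    print(format(mask, "03b"), subset)         # e.g. "011 [1, 2]"
```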
Probability: sampling a random combination
There are various algorithms to pick out a random combination from a given set or list. Rejection sampling is extremely slow for large sample sizes. One way to select a k-combination efficiently from a population of size n is to iterate across each element of the population, and at each step pick that element with a dynamically changing probability of (k − number of elements already chosen)/(n − number of elements already visited) (see Reservoir sampling). Another is to pick a random non-negative integer less than \binom{n}{k} and convert it into a combination using the combinatorial number system.
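A minimal Python sketch of the first approach, often called selection sampling (an illustrative implementation of the stated probability rule, not a prescribed one):

```python
import random

def random_combination(population, k):
    """Select a uniformly random k-combination in one pass, keeping each
    element with probability (still needed) / (still remaining)."""
    chosen = []
    remaining = len(population)
    needed = k
    for item in population:
        if random.random() < needed / remaining:
            chosen.append(item)
            needed -= 1
        remaining -= 1
    return chosen

print(random_combination(list(range(52)), 5))   # e.g. a random 5-card hand
```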
Number of ways to put objects into bins
A combination can also be thought of as a selection of two sets of items: those that go into the chosen bin and those that go into the unchosen bin. This can be generalized to any number of bins with the constraint that every item must go to exactly one bin. The number of ways to put n objects into m bins is given by the multinomial coefficient
\binom{n}{k_1, k_2, \ldots, k_m} = \frac{n!}{k_1!\, k_2! \cdots k_m!},
where n is the number of items, m is the number of bins, and k_i is the number of items that go into bin i.
One way to see why this equation holds is to first number the objects arbitrarily from 1 to n and put the objects with numbers 1, ..., k_1 into the first bin in order, the objects with numbers k_1 + 1, ..., k_1 + k_2 into the second bin in order, and so on. There are n! distinct numberings, but many of them are equivalent, because only the set of items in a bin matters, not their order in it. Every combined permutation of each bin's contents produces an equivalent way of putting items into bins. As a result, every equivalence class consists of k_1!\, k_2! \cdots k_m! distinct numberings, and the number of equivalence classes is \frac{n!}{k_1!\, k_2! \cdots k_m!}.
The binomial coefficient is the special case where k items go into the chosen bin and the remaining n − k items go into the unchosen bin:
\binom{n}{k, n-k} = \binom{n}{k} = \frac{n!}{k!(n-k)!}.
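A small Python sketch of this count (an illustrative helper, not from the article; the bin sizes passed in are assumed to sum to the number of items):

```python
from math import factorial

def multinomial(*bin_sizes):
    """Number of ways to put sum(bin_sizes) distinct items into bins of
    the given sizes: n! / (k_1! k_2! ... k_m!)."""
    n = sum(bin_sizes)
    result = factorial(n)
    for k in bin_sizes:
        result //= factorial(k)      # each division is exact
    return result

assert multinomial(5, 47) == 2598960     # the binomial special case C(52, 5)
assert multinomial(2, 2, 1) == 30        # 5 items into bins of sizes 2, 2, 1
```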
See also
Binomial coefficient
Combinatorics
Block design
Kneser graph
List of permutation topics
Multiset
Pascal's triangle
Permutation
Probability
Subset
Notes
References
External links
Topcoder tutorial on combinatorics
C code to generate all combinations of n elements chosen as k
Many Common types of permutation and combination math problems, with detailed solutions
The Unknown Formula For combinations when choices can be repeated and order does not matter
Combinations with repetitions (by: Akshatha AG and Smitha B)
The dice roll with a given sum problem An application of the combinations with repetition to rolling multiple dice
Combinatorics
|
https://en.wikipedia.org/wiki/Software
|
Software is a set of computer programs and associated documentation and data. This is in contrast to hardware, from which the system is built and which actually performs the work.
At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit (CPU) or a graphics processing unit (GPU). Machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example, displaying some text on a computer screen, causing state changes that should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to "jump" to a different instruction or is interrupted by the operating system. Today, most personal computers, smartphone devices, and servers have processors with multiple execution units, or multiple processors performing computation together, so computing has become a much more concurrent activity than in the past.
The majority of software is written in high-level programming languages. They are easier and more efficient for programmers because they are closer to natural languages than machine languages. High-level languages are translated into machine language using a compiler, an interpreter, or a combination of the two. Software may also be written in a low-level assembly language that has a strong correspondence to the computer's machine language instructions and is translated into machine language using an assembler.
History
An algorithm for what would have been the first piece of software was written by Ada Lovelace in the 19th century, for the planned Analytical Engine. She created proofs to show how the engine would calculate Bernoulli numbers. Because of the proofs and the algorithm, she is considered the first computer programmer.
The first theory about software, prior to the creation of computers as we know them today, was proposed by Alan Turing in his 1936 essay, On Computable Numbers, with an Application to the Entscheidungsproblem (decision problem). This eventually led to the creation of the academic fields of computer science and software engineering; both fields study software and its creation. Computer science is the theoretical study of computers and software (Turing's essay is an example of computer science), whereas software engineering is the application of engineering principles to the development of software.
In 2000, Fred Shapiro, a librarian at the Yale Law School, published a letter revealing that John Wilder Tukey's 1958 paper "The Teaching of Concrete Mathematics" contained the earliest known usage of the term "software" found in a search of JSTOR's electronic archives, predating the Oxford English Dictionary's citation by two years. This led many to credit Tukey with coining the term, particularly in obituaries published that same year, although Tukey never claimed credit for any such coinage. In 1995, Paul Niquette claimed he had originally coined the term in October 1953, although he could not find any documents supporting his claim. The earliest known publication of the term "software" in an engineering context was in August 1953 by Richard R. Carhart, in a Rand Corporation Research Memorandum.
Types
On virtually all computer platforms, software can be grouped into a few broad categories.
Purpose, or domain of use
Based on the goal, computer software can be divided into:
Application software uses the computer system to perform special functions beyond the basic operation of the computer itself. There are many different types of application software because the range of tasks that can be performed with a modern computer is so large—see list of software.
System software manages hardware behaviour, so as to provide basic functionalities that are required by users, or for other software to run properly, if at all. System software is also designed for providing a platform for running application software, and it includes the following:
Operating systems are essential collections of software that manage resources and provide common services for other software that runs "on top" of them. Supervisory programs, boot loaders, shells and window systems are core parts of operating systems. In practice, an operating system comes bundled with additional software (including application software) so that a user can potentially do some work with a computer that only has one operating system.
Device drivers operate or control a particular type of device that is attached to a computer. Each device needs at least one corresponding device driver; because a computer typically has at least one input device and at least one output device, a computer typically needs more than one device driver.
Utilities are computer programs designed to assist users in the maintenance and care of their computers.
Malicious software, or malware, is software that is developed to harm or disrupt computers. Malware is closely associated with computer-related crimes, though some malicious programs may have been designed as practical jokes.
Nature or domain of execution
Desktop applications such as web browsers, Microsoft Office, LibreOffice and WordPerfect, as well as smartphone and tablet applications (called "apps").
JavaScript scripts are pieces of software traditionally embedded in web pages that are run directly inside the web browser when a web page is loaded without the need for a web browser plugin. Software written in other programming languages can also be run within the web browser if the software is either translated into JavaScript, or if a web browser plugin that supports that language is installed; the most common example of the latter is ActionScript scripts, which are supported by the Adobe Flash plugin.
Server software, including:
Web applications, which usually run on the web server and output dynamically generated web pages to web browsers, using e.g. PHP, Java, ASP.NET, or even JavaScript that runs on the server. In modern times these commonly include some JavaScript to be run in the web browser as well, in which case they typically run partly on the server, partly in the web browser.
Plugins and extensions are software that extends or modifies the functionality of another piece of software, and require that software be used in order to function.
Embedded software resides as firmware within embedded systems, devices dedicated to a single use or a few uses such as cars and televisions (although some embedded devices such as wireless chipsets can themselves be part of an ordinary, non-embedded computer system such as a PC or smartphone). In the embedded system context there is sometimes no clear distinction between the system software and the application software. However, some embedded systems run embedded operating systems, and these systems do retain the distinction between system software and application software (although typically there will only be one, fixed application which is always run).
Microcode is a special, relatively obscure type of embedded software which tells the processor itself how to execute machine code, so it is actually a lower level than machine code. It is typically proprietary to the processor manufacturer, and any necessary correctional microcode software updates are supplied by them to users (which is much cheaper than shipping replacement processor hardware). Thus an ordinary programmer would not expect to ever have to deal with it.
Programming tools
Programming tools are also software in the form of programs or applications that developers use to create, debug, maintain, or otherwise support software.
Software is written in one or more programming languages; there are many programming languages in existence, and each has at least one implementation, each of which consists of its own set of programming tools. These tools may be relatively self-contained programs such as compilers, debuggers, interpreters, linkers, and text editors, that can be combined to accomplish a task; or they may form an integrated development environment (IDE), which combines much or all of the functionality of such self-contained tools. IDEs may do this by either invoking the relevant individual tools or by re-implementing their functionality in a new way. An IDE can make it easier to do specific tasks, such as searching in files in a particular project. Many programming language implementations provide the option of using both individual tools or an IDE.
Topics
Architecture
People who use modern general purpose computers (as opposed to embedded systems, analog computers and supercomputers) usually see three layers of software performing a variety of tasks: platform, application, and user software.
Platform software: The platform includes the firmware, device drivers, an operating system, and typically a graphical user interface which, in total, allow a user to interact with the computer and its peripherals (associated equipment). Platform software often comes bundled with the computer. On a PC one will usually have the ability to change the platform software.
Application software: Application software is what most people think of when they think of software. Typical examples include office suites and video games. Application software is often purchased separately from computer hardware. Sometimes applications are bundled with the computer, but that does not change the fact that they run as independent applications. Applications are usually independent programs from the operating system, though they are often tailored for specific platforms. Most users think of compilers, databases, and other "system software" as applications.
User-written software: End-user development tailors systems to meet users' specific needs. User software includes spreadsheet templates and word processor templates. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. Depending on how competently the user-written software has been integrated into default application packages, many users may not be aware of the distinction between the original packages, and what has been added by co-workers.
Execution
Computer software has to be "loaded" into the computer's storage (such as the hard drive or memory). Once the software has loaded, the computer is able to execute the software. This involves passing instructions from the application software, through the system software, to the hardware which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation—moving data, carrying out a computation, or altering the control flow of instructions.
Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly; this is sometimes avoided by using "pointers" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together.
Quality and reliability
Software quality is very important, especially for commercial and system software. If software is faulty, it can delete a person's work, crash the computer and do other unexpected things. Faults and errors are called "bugs" which are often discovered during alpha and beta testing. Software is often also a victim of what is known as software aging, the progressive performance degradation resulting from a combination of unseen bugs.
Many bugs are discovered and fixed through software testing. However, software testing rarely—if ever—eliminates every bug; some programmers say that "every program has at least one more bug" (Lubarsky's Law). In the waterfall method of software development, separate testing teams are typically employed, but in newer approaches, collectively termed agile software development, developers often do all their own testing, and demonstrate the software to users/clients regularly to obtain feedback. Software can be tested through unit testing, regression testing and other methods, which are done manually, or most commonly, automatically, since the amount of code to be tested can be large. Programs containing command software enable hardware engineering and system operations to function together much more easily.
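For example, an automated unit test written with Python's standard unittest module might look like the following sketch; the function add and the test cases are invented purely for illustration:

```python
import unittest

def add(a, b):
    """A trivial function under test."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()   # discovers and runs the test methods automatically
```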
License
The software's license gives the user the right to use the software in the licensed environment, and in the case of free software licenses, also grants other rights such as the right to make copies.
Proprietary software can be divided into two types:
freeware, which includes the category of "free trial" software or "freemium" software (in the past, the term shareware was often used for free trial/freemium software). As the name suggests, freeware can be used for free, although in the case of free trials or freemium software, this is sometimes only true for a limited period of time or with limited functionality.
software available for a fee, which can only be legally used on purchase of a license.
Open-source software comes with a free software license, granting the recipient the rights to modify and redistribute the software.
Patents
Software patents, like other types of patents, are theoretically supposed to give an inventor an exclusive, time-limited license for a detailed idea (e.g. an algorithm) on how to implement a piece of software, or a component of a piece of software. Ideas for useful things that software could do, and user requirements, are not supposed to be patentable, and concrete implementations (i.e. the actual software packages implementing the patent) are not supposed to be patentable either—the latter are already covered by copyright, generally automatically. So software patents are supposed to cover the middle area, between requirements and concrete implementation. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid—although since all useful software has effects on the physical world, this requirement may be open to debate. Meanwhile, American copyright law was applied to various aspects of the writing of the software code.
Software patents are controversial in the software industry with many people holding different views about them. One of the sources of controversy is that the aforementioned split between initial ideas and patent does not seem to be honored in practice by patent lawyers—for example the patent for aspect-oriented programming (AOP), which purported to claim rights over any programming tool implementing the idea of AOP, howsoever implemented. Another source of controversy is the effect on innovation, with many distinguished experts and companies arguing that software is such a fast-moving field that software patents merely create vast additional litigation costs and risks, and actually retard innovation. In the case of debates about software patents outside the United States, the argument has been made that large American corporations and patent lawyers are likely to be the primary beneficiaries of allowing or continuing to allow software patents.
Design and implementation
Design and implementation of software vary depending on the complexity of the software. For instance, the design and creation of Microsoft Word took much more time than designing and developing Microsoft Notepad because the latter has much more basic functionality.
Software is usually developed in integrated development environments (IDE) like Eclipse, IntelliJ and Microsoft Visual Studio that can simplify the process and compile the software. As noted in a different section, software is usually created on top of existing software and the application programming interface (API) that the underlying software provides like GTK+, JavaBeans or Swing. Libraries (APIs) can be categorized by their purpose. For instance, the Spring Framework is used for implementing enterprise applications, the Windows Forms library is used for designing graphical user interface (GUI) applications like Microsoft Word, and Windows Communication Foundation is used for designing web services. When a program is designed, it relies upon the API. For instance, a Microsoft Windows desktop application might call API functions in the .NET Windows Forms library like Form1.Close() and Form1.Show() to close or open the application. Without these APIs, the programmer needs to write these functionalities entirely themselves. Companies like Oracle and Microsoft provide their own APIs so that many applications are written using their software libraries that usually have numerous APIs in them.
Data structures such as hash tables, arrays, and binary trees, and algorithms such as quicksort, can be useful for creating software.
Computer software has special economic characteristics that make its design, creation, and distribution different from most other economic goods.
A person who creates software is called a programmer, software engineer or software developer, terms that all have a similar meaning. More informal terms for programmer also exist such as "coder" and "hacker", although use of the latter word may cause confusion, because it is more often used to mean someone who illegally breaks into computer systems.
See also
Computer program
Independent software vendor
Open-source software
Outline of software
Software asset management
Software release life cycle
References
Sources
External links
Software at Encyclopædia Britannica
|
https://en.wikipedia.org/wiki/Creationism
|
Creationism is the religious belief that nature, and aspects such as the universe, Earth, life, and humans, originated with supernatural acts of divine creation. In its broadest sense, creationism includes a continuum of religious views, which vary in their acceptance or rejection of scientific explanations such as evolution that describe the origin and development of natural phenomena.
The term creationism most often refers to belief in special creation; the claim that the universe and lifeforms were created as they exist today by divine action, and that the only true explanations are those which are compatible with a Christian fundamentalist literal interpretation of the creation myth found in the Bible's Genesis creation narrative. Since the 1970s, the most common form of this has been Young Earth creationism which posits special creation of the universe and lifeforms within the last 10,000 years on the basis of flood geology, and promotes pseudoscientific creation science. From the 18th century onward, Old Earth creationism accepted geological time harmonized with Genesis through gap or day-age theory, while supporting anti-evolution. Modern old-Earth creationists support progressive creationism and continue to reject evolutionary explanations. Following political controversy, creation science was reformulated as intelligent design and neo-creationism.
Mainline Protestants and the Catholic Church reconcile modern science with their faith in Creation through forms of theistic evolution which hold that God purposefully created through the laws of nature, and accept evolution. Some groups call their belief evolutionary creationism. Less prominently, there are also members of the Islamic and Hindu faiths who are creationists. Use of the term "creationist" in this context dates back to Charles Darwin's unpublished 1842 sketch draft for what became On the Origin of Species, and he used the term later in letters to colleagues. In 1873, Asa Gray published an article in The Nation saying a "special creationist" who held that species "were supernaturally originated just as they are, by the very terms of his doctrine places them out of the reach of scientific explanation."
Biblical basis
The basis for many creationists' beliefs is a literal or quasi-literal interpretation of the Book of Genesis. The Genesis creation narratives (Genesis 1–2) describe how God brings the Universe into being in a series of creative acts over six days and places the first man and woman (Adam and Eve) in the Garden of Eden. This story is the basis of creationist cosmology and biology. The Genesis flood narrative (Genesis 6–9) tells how God destroys the world and all life through a great flood, saving representatives of each form of life by means of Noah's Ark. This forms the basis of creationist geology, better known as flood geology.
Recent decades have seen attempts to de-link creationism from the Bible and recast it as science; these include creation science and intelligent design.
Types
To counter the common misunderstanding that the creation–evolution controversy was a simple dichotomy of views, with "creationists" set against "evolutionists", Eugenie Scott of the National Center for Science Education produced a diagram and description of a continuum of religious views as a spectrum ranging from extreme literal biblical creationism to materialist evolution, grouped under main headings. This was used in public presentations, then published in 1999 in Reports of the NCSE. Other versions of a taxonomy of creationists were produced, and comparisons made between the different groupings. In 2009 Scott produced a revised continuum taking account of these issues, emphasizing that intelligent design creationism overlaps other types, and each type is a grouping of various beliefs and positions. The revised diagram is labelled to show a spectrum relating to positions on the age of the Earth, and the part played by special creation as against evolution. This was published in the book Evolution Vs. Creationism: An Introduction, and the NCSE website was rewritten on the basis of the book version.
The main general types are listed below.
Young Earth creationism
Young Earth creationists such as Ken Ham and Doug Phillips believe that God created the Earth within the last ten thousand years, with a literalist interpretation of the Genesis creation narrative, within the approximate time-frame of biblical genealogies. Most young Earth creationists believe that the universe has a similar age as the Earth. A few assign a much older age to the universe than to Earth. Young Earth creationism gives the universe an age consistent with the Ussher chronology and other young Earth time frames. Other young Earth creationists believe that the Earth and the universe were created with the appearance of age, so that the world appears to be much older than it is, and that this appearance is what gives the geological findings and other methods of dating the Earth and the universe their much longer timelines.
The Christian organizations Answers in Genesis (AiG), Institute for Creation Research (ICR) and the Creation Research Society (CRS) promote young Earth creationism in the United States. Carl Baugh's Creation Evidence Museum in Texas, United States, and AiG's Creation Museum and Ark Encounter in Kentucky, United States, were opened to promote young Earth creationism. Creation Ministries International promotes young Earth views in Australia, Canada, South Africa, New Zealand, the United States, and the United Kingdom.
Among Roman Catholics, the Kolbe Center for the Study of Creation promotes similar ideas.
Old Earth creationism
Old Earth creationism holds that the physical universe was created by God, but that the creation event described in the Book of Genesis is to be taken figuratively. This group generally believes that the age of the universe and the age of the Earth are as described by astronomers and geologists, but that details of modern evolutionary theory are questionable.
Old Earth creationism itself comes in at least three types:
Gap creationism
Gap creationism (also known as ruin-restoration creationism, restoration creationism, or the Gap Theory) is a form of old Earth creationism that posits that the six-yom creation period, as described in the Book of Genesis, involved six literal 24-hour days, but that there was a gap of time between two distinct creations in the first and the second verses of Genesis, which the theory states explains many scientific observations, including the age of the Earth. Thus, the six days of creation (verse 3 onwards) start sometime after the Earth was "without form and void." This allows an indefinite gap of time to be inserted after the original creation of the universe, but prior to the Genesis creation narrative, (when present biological species and humanity were created). Gap theorists can therefore agree with the scientific consensus regarding the age of the Earth and universe, while maintaining a literal interpretation of the biblical text.
Some gap creationists expand the basic version of creationism by proposing a "primordial creation" of biological life within the "gap" of time. This is thought to be "the world that then was" mentioned in 2 Peter 3:3–6. Discoveries of fossils and archaeological ruins older than 10,000 years are generally ascribed to this "world that then was," which may also be associated with Lucifer's rebellion.
Day-age creationism
Day-age creationism, a type of old Earth creationism, is a metaphorical interpretation of the creation accounts in Genesis. It holds that the six days referred to in the Genesis account of creation are not ordinary 24-hour days, but are much longer periods (from thousands to billions of years). The Genesis account is then reconciled with the age of the Earth. Proponents of the day-age theory can be found among both theistic evolutionists, who accept the scientific consensus on evolution, and progressive creationists, who reject it. The theories are said to be built on the understanding that the Hebrew word yom is also used to refer to a time period, with a beginning and an end and not necessarily that of a 24-hour day.
The day-age theory attempts to reconcile the Genesis creation narrative and modern science by asserting that the creation "days" were not ordinary 24-hour days, but actually lasted for long periods of time (as day-age implies, the "days" each lasted an age). According to this view, the sequence and duration of the creation "days" may be paralleled to the scientific consensus for the age of the earth and the universe.
Progressive creationism
Progressive creationism is the religious belief that God created new forms of life gradually over a period of hundreds of millions of years. As a form of old Earth creationism, it accepts mainstream geological and cosmological estimates for the age of the Earth, some tenets of biology such as microevolution as well as archaeology to make its case. In this view creation occurred in rapid bursts in which all "kinds" of plants and animals appear in stages lasting millions of years. The bursts are followed by periods of stasis or equilibrium to accommodate new arrivals. These bursts represent instances of God creating new types of organisms by divine intervention. As viewed from the archaeological record, progressive creationism holds that "species do not gradually appear by the steady transformation of its ancestors; [but] appear all at once and "fully formed."
The view rejects macroevolution, claiming it is biologically untenable and not supported by the fossil record, as well as rejects the concept of common descent from a last universal common ancestor. Thus the evidence for macroevolution is claimed to be false, but microevolution is accepted as a genetic parameter designed by the Creator into the fabric of genetics to allow for environmental adaptations and survival. Generally, it is viewed by proponents as a middle ground between literal creationism and evolution. Organizations such as Reasons To Believe, founded by Hugh Ross, promote this version of creationism.
Progressive creationism can be held in conjunction with hermeneutic approaches to the Genesis creation narrative such as the day-age creationism or framework/metaphoric/poetic views.
Philosophic and scientific creationism
Creation science
Creation science, or initially scientific creationism, is a pseudoscience that emerged in the 1960s with proponents aiming to have young Earth creationist beliefs taught in school science classes as a counter to teaching of evolution. Common features of creation science argument include: creationist cosmologies which accommodate a universe on the order of thousands of years old, criticism of radiometric dating through a technical argument about radiohalos, explanations for the fossil record as a record of the Genesis flood narrative (see flood geology), and explanations for the present diversity as a result of pre-designed genetic variability and partially due to the rapid degradation of the perfect genomes God placed in "created kinds" or "baramins" due to mutations.
Neo-creationism
Neo-creationism is a pseudoscientific movement which aims to restate creationism in terms more likely to be well received by the public, by policy makers, by educators and by the scientific community. It aims to re-frame the debate over the origins of life in non-religious terms and without appeals to scripture. This comes in response to the 1987 ruling by the United States Supreme Court in Edwards v. Aguillard that creationism is an inherently religious concept and that advocating it as correct or accurate in public-school curricula violates the Establishment Clause of the First Amendment.
One of the principal claims of neo-creationism propounds that ostensibly objective orthodox science, with a foundation in naturalism, is actually a dogmatically atheistic religion. Its proponents argue that the scientific method excludes certain explanations of phenomena, particularly where they point towards supernatural elements, thus effectively excluding religious insight from contributing to understanding the universe. This leads to an open and often hostile opposition to what neo-creationists term "Darwinism", which they generally mean to refer to evolution, but which they may extend to include such concepts as abiogenesis, stellar evolution and the Big Bang theory.
Unlike their philosophical forebears, neo-creationists largely do not believe in many of the traditional cornerstones of creationism such as a young Earth, or in a dogmatically literal interpretation of the Bible.
Intelligent design
Intelligent design (ID) is the pseudoscientific view that "certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection." All of its leading proponents are associated with the Discovery Institute, a think tank whose wedge strategy aims to replace the scientific method with "a science consonant with Christian and theistic convictions" which accepts supernatural explanations. It is widely accepted in the scientific and academic communities that intelligent design is a form of creationism, and is sometimes referred to as "intelligent design creationism."
ID originated as a re-branding of creation science in an attempt to avoid a series of court decisions ruling out the teaching of creationism in American public schools, and the Discovery Institute has run a series of campaigns to change school curricula. In Australia, where curricula are under the control of state governments rather than local school boards, there was a public outcry when the notion of ID being taught in science classes was raised by the Federal Education Minister Brendan Nelson; the minister quickly conceded that the correct forum for ID, if it were to be taught, is in religious or philosophy classes.
In the US, teaching of intelligent design in public schools has been decisively ruled by a federal district court to be in violation of the Establishment Clause of the First Amendment to the United States Constitution. In Kitzmiller v. Dover, the court found that intelligent design is not science and "cannot uncouple itself from its creationist, and thus religious, antecedents," and hence cannot be taught as an alternative to evolution in public school science classrooms under the jurisdiction of that court. This sets a persuasive precedent, based on previous US Supreme Court decisions in Edwards v. Aguillard and Epperson v. Arkansas (1968), and by the application of the Lemon test, that creates a legal hurdle to teaching intelligent design in public school districts in other federal court jurisdictions.
Geocentrism
In astronomy, the geocentric model (also known as geocentrism, or the Ptolemaic system) is a description of the cosmos in which Earth is at the orbital center of all celestial bodies. This model served as the predominant cosmological system in many ancient civilizations, such as ancient Greece, where it was assumed that the Sun, Moon, stars, and naked-eye planets circled Earth; notable versions include the systems of Aristotle (see Aristotelian physics) and Ptolemy.
Articles arguing that geocentrism was the biblical perspective appeared in some early creation science newsletters associated with the Creation Research Society, pointing to passages in the Bible which, when taken literally, indicate that the daily apparent motions of the Sun and the Moon are due to their actual motions around the Earth rather than to the rotation of the Earth about its axis: for example, passages where the Sun and Moon are said to stop in the sky, and where the world is described as immobile. Contemporary advocates for such religious beliefs include Robert Sungenis, co-author of the self-published Galileo Was Wrong: The Church Was Right (2006). These advocates subscribe to the view that a plain reading of the Bible contains an accurate account of the manner in which the universe was created and requires a geocentric worldview.
Most contemporary creationist organizations reject such perspectives.
Omphalos hypothesis
The Omphalos hypothesis is one attempt to reconcile the scientific evidence that the universe is billions of years old with a literal interpretation of the Genesis creation narrative, which implies that the Earth is only a few thousand years old. It is based on the religious belief that the universe was created by a divine being, within the past six to ten thousand years (in keeping with flood geology), and that the presence of objective, verifiable evidence that the universe is older than approximately ten millennia is due to the creator introducing false evidence that makes the universe appear significantly older.
The idea was named after the title of an 1857 book, Omphalos by Philip Henry Gosse, in which Gosse argued that in order for the world to be functional, God must have created the Earth with mountains and canyons, trees with growth rings, Adam and Eve with fully grown hair, fingernails, and navels (ὀμφαλός omphalos is Greek for "navel"), and all living creatures with fully formed evolutionary features, and so on, and that, therefore, no empirical evidence about the age of the Earth or universe can be taken as reliable.
Various supporters of Young Earth creationism have given different explanations for their belief that the universe is filled with false evidence of the universe's age, including a belief that some things needed to be created at a certain age for the ecosystems to function, or their belief that the creator was deliberately planting deceptive evidence. The idea has seen some revival in the 20th century by some modern creationists, who have extended the argument to address the "starlight problem". The idea has been criticised as Last Thursdayism, and on the grounds that it requires a deliberately deceptive creator.
Theistic evolution
Theistic evolution, or evolutionary creation, is a belief that "the personal God of the Bible created the universe and life through evolutionary processes." According to the American Scientific Affiliation:
Through the 19th century the term creationism most commonly referred to direct creation of individual souls, in contrast to traducianism. Following the publication of Vestiges of the Natural History of Creation, there was interest in ideas of Creation by divine law. In particular, the liberal theologian Baden Powell argued that this illustrated the Creator's power better than the idea of miraculous creation, which he thought ridiculous. When On the Origin of Species was published, the cleric Charles Kingsley wrote of evolution as "just as noble a conception of Deity." Darwin's view at the time was of God creating life through the laws of nature, and the book makes several references to "creation," though he later regretted using the term rather than calling it an unknown process. In America, Asa Gray argued that evolution is the secondary effect, or modus operandi, of the first cause, design, and published a pamphlet defending the book in theistic terms, Natural Selection not inconsistent with Natural Theology. Theistic evolution, also called evolutionary creation, became a popular compromise, and St. George Jackson Mivart was among those accepting evolution but attacking Darwin's naturalistic mechanism. Eventually it was realised that supernatural intervention could not be a scientific explanation, and naturalistic mechanisms such as neo-Lamarckism were favoured as being more compatible with purpose than natural selection.
Some theists took the general view that, instead of faith being in opposition to biological evolution, some or all classical religious teachings about Christian God and creation are compatible with some or all of modern scientific theory, including specifically evolution; it is also known as "evolutionary creation." In Evolution versus Creationism, Eugenie Scott and Niles Eldredge state that it is in fact a type of evolution.
It generally views evolution as a tool used by God, who is both the first cause and immanent sustainer/upholder of the universe; it is therefore well accepted by people of strong theistic (as opposed to deistic) convictions. Theistic evolution can synthesize with the day-age creationist interpretation of the Genesis creation narrative; however most adherents consider that the first chapters of the Book of Genesis should not be interpreted as a "literal" description, but rather as a literary framework or allegory.
From a theistic viewpoint, the underlying laws of nature were designed by God for a purpose, and are so self-sufficient that the complexity of the entire physical universe evolved from fundamental particles in processes such as stellar evolution, life forms developed in biological evolution, and in the same way the origin of life by natural causes has resulted from these laws.
In one form or another, theistic evolution is the view of creation taught at the majority of mainline Protestant seminaries. For Roman Catholics, human evolution is not a matter of religious teaching, and must stand or fall on its own scientific merits. Evolution and the Roman Catholic Church are not in conflict. The Catechism of the Catholic Church comments positively on the theory of evolution, which is neither precluded nor required by the sources of faith, stating that scientific studies "have splendidly enriched our knowledge of the age and dimensions of the cosmos, the development of life-forms and the appearance of man." Roman Catholic schools teach evolution without controversy on the basis that scientific knowledge does not extend beyond the physical, and scientific truth and religious truth cannot be in conflict. Theistic evolution can be described as "creationism" in holding that divine intervention brought about the origin of life or that divine laws govern formation of species, though many creationists (in the strict sense) would deny that the position is creationism at all. In the creation–evolution controversy, its proponents generally take the "evolutionist" side. This sentiment was expressed by Fr. George Coyne (the Vatican's chief astronomer between 1978 and 2006): "...in America, creationism has come to mean some fundamentalistic, literal, scientific interpretation of Genesis. Judaic-Christian faith is radically creationist, but in a totally different sense. It is rooted in a belief that everything depends upon God, or better, all is a gift from God."
While supporting the methodological naturalism inherent in modern science, the proponents of theistic evolution reject the implication taken by some atheists that this gives credence to ontological materialism. In fact, many modern philosophers of science, including atheists, refer to the long-standing convention in the scientific method that observable events in nature should be explained by natural causes, with the distinction that it does not assume the actual existence or non-existence of the supernatural.
Religious views
There are also non-Christian forms of creationism, notably Islamic creationism and Hindu creationism.
Bahá'í Faith
In the creation myth taught by Bahá'u'lláh, the founder of the Bahá'í Faith, the universe has "neither beginning nor ending," and the component elements of the material world have always existed and will always exist. With regard to evolution and the origin of human beings, 'Abdu'l-Bahá gave extensive comments on the subject when he addressed western audiences at the beginning of the 20th century. Transcripts of these comments can be found in Some Answered Questions, Paris Talks and The Promulgation of Universal Peace. 'Abdu'l-Bahá described the human species as having evolved from a primitive form to modern man, but held that the capacity to form human intelligence was always in existence.
Buddhism
Buddhism denies a creator deity and posits that mundane deities such as Mahabrahma are sometimes misperceived to be a creator. While Buddhism includes belief in divine beings called devas, it holds that they are mortal, limited in their power, and that none of them are creators of the universe. In the Saṃyutta Nikāya, the Buddha also states that the cycle of rebirths stretches back hundreds of thousands of eons, without discernible beginning.
Major Buddhist Indian philosophers such as Nagarjuna, Vasubandhu, Dharmakirti and Buddhaghosa, consistently critiqued Creator God views put forth by Hindu thinkers.
Christianity
Most Christians around the world accept evolution as the most likely explanation for the origins of species, and do not take a literal view of the Genesis creation narrative. The United States is an exception, where belief in religious fundamentalism is much more likely to affect attitudes towards evolution than it is for believers elsewhere. Political partisanship affecting religious belief may be a factor, because political partisanship in the US is highly correlated with fundamentalist thinking, unlike in Europe.
Most contemporary Christian leaders and scholars from mainstream churches, such as Anglicans and Lutherans, consider that there is no conflict between the spiritual meaning of creation and the science of evolution. According to the former archbishop of Canterbury, Rowan Williams, "for most of the history of Christianity, and I think this is fair enough, most of the history of Christianity there's been an awareness that a belief that everything depends on the creative act of God, is quite compatible with a degree of uncertainty or latitude about how precisely that unfolds in creative time."
Leaders of the Anglican and Roman Catholic churches have made statements in favor of evolutionary theory, as have scholars such as the physicist John Polkinghorne, who argues that evolution is one of the principles through which God created living beings. Earlier supporters of evolutionary theory include Frederick Temple, Asa Gray and Charles Kingsley who were enthusiastic supporters of Darwin's theories upon their publication, and the French Jesuit priest and geologist Pierre Teilhard de Chardin saw evolution as confirmation of his Christian beliefs, despite condemnation from Church authorities for his more speculative theories. Another example is that of Liberal theology, not providing any creation models, but instead focusing on the symbolism in beliefs of the time of authoring Genesis and the cultural environment.
Many Christians and Jews had been considering the idea of the creation history as an allegory (instead of historical) long before the development of Darwin's theory of evolution. For example, Philo, whose works were taken up by early Church writers, wrote that it would be a mistake to think that creation happened in six days, or in any set amount of time. Augustine, writing in the late fourth century and himself a former Neoplatonist, argued that everything in the universe was created by God at the same moment in time (and not in six days as a literal reading of the Book of Genesis would seem to require); it appears that both Philo and Augustine felt uncomfortable with the idea of a seven-day creation because it detracted from the notion of God's omnipotence. In 1950, Pope Pius XII stated limited support for the idea in his encyclical Humani generis. In 1996, Pope John Paul II stated that "new knowledge has led to the recognition of the theory of evolution as more than a hypothesis," but, referring to previous papal writings, he concluded that "if the human body takes its origin from pre-existent living matter, the spiritual soul is immediately created by God."
In the US, Evangelical Christians have continued to believe in a literal Genesis. Members of evangelical Protestant (70%), Mormon (76%) and Jehovah's Witnesses (90%) denominations were the most likely to reject the evolutionary interpretation of the origins of life.
Jehovah's Witnesses adhere to a combination of gap creationism and day-age creationism, asserting that scientific evidence about the age of the universe is compatible with the Bible, but that the 'days' after Genesis 1:1 were each thousands of years in length.
The historic Christian literal interpretation of creation requires the harmonization of the two creation stories, Genesis 1:1–2:3 and Genesis 2:4–25, for there to be a consistent interpretation. Biblical literalists sometimes seek to ensure that their belief is taught in science classes, mainly in American schools. Opponents reject the claim that the literalistic biblical view meets the criteria required to be considered scientific. Many religious groups teach that God created the Cosmos. From the days of the early Christian Church Fathers there were allegorical interpretations of the Book of Genesis as well as literal aspects.
Christian Science, a system of thought and practice derived from the writings of Mary Baker Eddy, interprets the Book of Genesis figuratively rather than literally. It holds that the material world is an illusion, and consequently not created by God: the only real creation is the spiritual realm, of which the material world is a distorted version. Christian Scientists regard the story of the creation in the Book of Genesis as having symbolic rather than literal meaning. According to Christian Science, both creationism and evolution are false from an absolute or "spiritual" point of view, as they both proceed from a (false) belief in the reality of a material universe. However, Christian Scientists do not oppose the teaching of evolution in schools, nor do they demand that alternative accounts be taught: they believe that both material science and literalist theology are concerned with the illusory, mortal and material, rather than the real, immortal and spiritual. With regard to material theories of creation, Eddy showed a preference for Darwin's theory of evolution over others.
Hinduism
Hindu creationists claim that species of plants and animals are material forms adopted by pure consciousness which live an endless cycle of births and rebirths. Ronald Numbers says that: "Hindu Creationists have insisted on the antiquity of humans, who they believe appeared fully formed as long, perhaps, as trillions of years ago." Hindu creationism is a form of old Earth creationism, according to Hindu creationists the universe may even be older than billions of years. These views are based on the Vedas, the creation myths of which depict an extreme antiquity of the universe and history of the Earth.
In Hindu cosmology, time cyclically repeats general events of creation and destruction, with many a "first man", each known as Manu, the progenitor of mankind. Each Manu successively reigns over a 306.72 million year period known as a manvantara, each ending with the destruction of mankind followed by a sandhya (period of non-activity) before the next manvantara. 120.53 million years have elapsed in the current manvantara (current mankind) according to calculations on Hindu units of time. The universe is cyclically created at the start and destroyed at the end of a kalpa (day of Brahma), lasting for 4.32 billion years, which is followed by a pralaya (period of dissolution) of equal length. 1.97 billion years have elapsed in the current kalpa (current universe). The universal elements or building blocks (unmanifest matter) exist for a period known as a maha-kalpa, lasting for 311.04 trillion years, which is followed by a maha-pralaya (period of great dissolution) of equal length. 155.52 trillion years have elapsed in the current maha-kalpa.
Islam
Islamic creationism is the belief that the universe (including humanity) was directly created by God as explained in the Quran. It usually views the Book of Genesis as a corrupted version of God's message. The creation myths in the Quran are vaguer and allow for a wider range of interpretations similar to those in other Abrahamic religions.
Islam also has its own school of theistic evolutionism, which holds that mainstream scientific analysis of the origin of the universe is supported by the Quran. Some Muslims believe in evolutionary creation, especially among liberal movements within Islam.
Writing for The Boston Globe, Drake Bennett noted: "Without a Book of Genesis to account for[...] Muslim creationists have little interest in proving that the age of the Earth is measured in the thousands rather than the billions of years, nor do they show much interest in the problem of the dinosaurs. And the idea that animals might evolve into other animals also tends to be less controversial, in part because there are passages of the Koran that seem to support it. But the issue of whether human beings are the product of evolution is just as fraught among Muslims." Khalid Anees, president of the Islamic Society of Britain, states that Muslims do not agree that one species can develop from another.
Since the 1980s, Turkey has been a site of strong advocacy for creationism, supported by American adherents.
There are several verses in the Qur'an which some modern writers have interpreted as being compatible with the expansion of the universe, Big Bang and Big Crunch theories:
Ahmadiyya
The Ahmadiyya movement actively promotes evolutionary theory. Ahmadis interpret scripture from the Qur'an to support the concept of macroevolution and give precedence to scientific theories. Furthermore, unlike orthodox Muslims, Ahmadis believe that humans have gradually evolved from different species. Ahmadis regard Adam as being the first Prophet of God, as opposed to being the first man on Earth. Rather than wholly adopting the theory of natural selection, Ahmadis promote the idea of a "guided evolution," viewing each stage of the evolutionary process as having been selectively woven by God. Mirza Tahir Ahmad, Fourth Caliph of the Ahmadiyya Muslim Community, stated in his magnum opus Revelation, Rationality, Knowledge & Truth (1998) that evolution did occur, but only through God being the One who brings it about. It does not occur by itself, according to the Ahmadiyya Muslim Community.
Judaism
For Orthodox Jews who seek to reconcile discrepancies between science and the creation myths in the Bible, the notion that science and the Bible should even be reconciled through traditional scientific means is questioned. To these groups, science is as true as the Torah and if there seems to be a problem, epistemological limits are to blame for apparently irreconcilable points. They point to discrepancies between what is expected and what actually is to demonstrate that things are not always as they appear. They note that even the root word for 'world' in the Hebrew language, olam, means 'hidden' (he'elem). Just as they know from the Torah that God created man and trees and the light on its way from the stars in their observed state, so too can they know that the world was created, over the six days of Creation, in a state that reflects progression to its currently-observed state, with the understanding that physical ways to verify this may eventually be identified. This knowledge has been advanced by Rabbi Dovid Gottlieb, former philosophy professor at Johns Hopkins University. Relatively old Kabbalistic sources from well before the scientifically apparent age of the universe was first determined are also in close concord with modern scientific estimates of the age of the universe, according to Rabbi Aryeh Kaplan, and based on Sefer Temunah, an early kabbalistic work attributed to the first-century Tanna Nehunya ben HaKanah. Many kabbalists accepted the teachings of the Sefer HaTemunah, including the medieval Jewish scholar Nahmanides, his close student Isaac ben Samuel of Acre, and David ben Solomon ibn Abi Zimra. Other parallels are derived, among other sources, from Nahmanides, who expounds that there was a Neanderthal-like species with which Adam mated (he did this long before Neanderthals had even been discovered scientifically). Reform Judaism does not take the Torah as a literal text, but rather as a symbolic or open-ended work.
Some contemporary writers such as Rabbi Gedalyah Nadel have sought to reconcile the discrepancy between the account in the Torah and scientific findings by arguing that each day referred to in the Bible was not 24 hours, but billions of years long. Others claim that the Earth was created a few thousand years ago, but was deliberately made to look as if it was five billion years old, e.g. by being created with ready-made fossils. The best-known exponent of this approach was Rabbi Menachem Mendel Schneerson. Others state that although the world was physically created in six 24-hour days, the Torah accounts can be interpreted to mean that there was a period of billions of years before the six days of creation.
Prevalence
Most vocal literalist creationists are from the US, and strict creationist views are much less common in other developed countries. According to a study published in Science, a survey of the US, Turkey, Japan and Europe showed that public acceptance of evolution is most prevalent in Iceland, Denmark and Sweden at 80% of the population. There seems to be no significant correlation between believing in evolution and understanding evolutionary science.
Australia
A 2009 Nielsen poll showed that 23% of Australians believe "the biblical account of human origins," 42% believe in a "wholly scientific" explanation for the origins of life, while 32% believe in an evolutionary process "guided by God".
A 2013 survey conducted by Auspoll and the Australian Academy of Science found that 80% of Australians believe in evolution (70% believe it is currently occurring, 10% believe in evolution but do not think it is currently occurring), 12% were not sure and 9% stated they do not believe in evolution.
Brazil
A 2011 Ipsos survey found that 47% of responders in Brazil identified themselves as "creationists and believe that human beings were in fact created by a spiritual force such as the God they believe in and do not believe that the origin of man came from evolving from other species such as apes".
In 2004, IBOPE conducted a poll in Brazil that asked questions about creationism and the teaching of creationism in schools. When asked whether creationism should be taught in schools, 89% of respondents said it should. When asked whether the teaching of creationism should replace the teaching of evolution in schools, 75% said it should.
Canada
A 2012 survey by Angus Reid Public Opinion revealed that 61 percent of Canadians believe in evolution. The poll asked: "Where did human beings come from: did we start as singular cells millions of years ago and evolve into our present form, or did God create us in his image 10,000 years ago?"
In 2019, a Research Co. poll asked people in Canada if creationism "should be part of the school curriculum in their province". 38% of Canadians said that creationism should be part of the school curriculum, 39% of Canadians said that it should not be part of the school curriculum, and 23% of Canadians were undecided.
In 2023, a Research Co. poll found that 21% of Canadians "believe God created human beings in their present form within the last 10,000 years". The poll also found that "More than two-in-five Canadians (43%) think creationism should be part of the school curriculum in their province."
Europe
In Europe, literalist creationism is more widely rejected, though regular opinion polls are not available. Most people accept evolution, which is taught as the prevailing scientific theory in most schools. In countries with a Roman Catholic majority, papal acceptance of evolutionary creationism as worthy of study has essentially ended debate on the matter for many people.
In the UK, a 2006 poll on the "origin and development of life" asked participants to choose between three different perspectives on the origin of life: 22% chose creationism, 17% opted for intelligent design, 48% selected evolutionary theory, and the rest did not know. A subsequent 2010 YouGov poll on the correct explanation for the origin of humans found that 9% opted for creationism, 12% intelligent design, 65% evolutionary theory and 13% did not know. The former Archbishop of Canterbury Rowan Williams, head of the worldwide Anglican Communion, views the idea of teaching creationism in schools as a mistake. In 2009, an Ipsos Mori survey in the United Kingdom found that 54% of Britons agreed with the view: "Evolutionary theories should be taught in science lessons in schools together with other possible perspectives, such as intelligent design and creationism."
In Italy, Education Minister Letizia Moratti wanted to remove evolution from the secondary school curriculum; after one week of massive protests, she reversed her decision.
There continue to be scattered and possibly mounting efforts on the part of religious groups throughout Europe to introduce creationism into public education. In response, the Parliamentary Assembly of the Council of Europe released a draft report titled The dangers of creationism in education on June 8, 2007, reinforced by a further proposal to ban it in schools dated October 4, 2007.
Serbia suspended the teaching of evolution for one week in September 2004, under education minister Ljiljana Čolić, only allowing schools to reintroduce evolution into the curriculum if they also taught creationism. "After a deluge of protest from scientists, teachers and opposition parties" says the BBC report, Čolić's deputy made the statement, "I have come here to confirm Charles Darwin is still alive" and announced that the decision was reversed. Čolić resigned after the government said that she had caused "problems that had started to reflect on the work of the entire government."
Poland saw a major controversy over creationism in 2006, when the Deputy Education Minister, Mirosław Orzechowski, denounced evolution as "one of many lies" taught in Polish schools. His superior, Minister of Education Roman Giertych, has stated that the theory of evolution would continue to be taught in Polish schools, "as long as most scientists in our country say that it is the right theory." Giertych's father, Member of the European Parliament Maciej Giertych, has opposed the teaching of evolution and has claimed that dinosaurs and humans co-existed.
A June 2015 - July 2016 Pew poll of Eastern European countries found that 56% of people from Armenia say that humans and other living things have "Existed in present state since the beginning of time". Armenia is followed by 52% from Bosnia, 42% from Moldova, 37% from Lithuania, 34% from Georgia and Ukraine, 33% from Croatia and Romania, 31% from Bulgaria, 29% from Greece and Serbia, 26% from Russia, 25% from Latvia, 23% from Belarus and Poland, 21% from Estonia and Hungary, and 16% from the Czech Republic.
South Africa
A 2011 Ipsos survey found that 56% of responders in South Africa identified themselves as "creationists and believe that human beings were in fact created by a spiritual force such as the God they believe in and do not believe that the origin of man came from evolving from other species such as apes".
South Korea
In 2009, an EBS survey in South Korea found that 63% of people believed that creation and evolution should both be taught in schools simultaneously.
United States
A 2017 poll by Pew Research found that 62% of Americans believe humans have evolved over time and 34% of Americans believe humans and other living things have existed in their present form since the beginning of time. A 2019 Gallup creationism survey found that 40% of adults in the United States were inclined to the view that "God created humans in their present form at one time within the last 10,000 years" when asked for their views on the origin and development of human beings.
According to a 2014 Gallup poll, about 42% of Americans believe that "God created human beings pretty much in their present form at one time within the last 10,000 years or so." Another 31% believe that "human beings have developed over millions of years from less advanced forms of life, but God guided this process," and 19% believe that "human beings have developed over millions of years from less advanced forms of life, but God had no part in this process."
Belief in creationism is inversely correlated to education; of those with postgraduate degrees, 74% accept evolution. In 1987, Newsweek reported: "By one count there are some 700 scientists with respectable academic credentials (out of a total of 480,000 U.S. earth and life scientists) who give credence to creation-science, the general theory that complex life forms did not evolve but appeared 'abruptly.'"
A 2000 poll for People for the American Way found 70% of the US public felt that evolution was compatible with a belief in God.
According to a study published in Science, between 1985 and 2005 the number of adult North Americans who accept evolution declined from 45% to 40%, the number of adults who reject evolution declined from 48% to 39% and the number of people who were unsure increased from 7% to 21%. Besides the US the study also compared data from 32 European countries, Turkey, and Japan. The only country where acceptance of evolution was lower than in the US was Turkey (25%).
According to a 2011 Fox News poll, 45% of Americans believe in creationism, down from 50% in a similar poll in 1999. 21% believe in 'the theory of evolution as outlined by Darwin and other scientists' (up from 15% in 1999), and 27% answered that both are true (up from 26% in 1999).
In September 2012, educator and television personality Bill Nye spoke with the Associated Press and aired his fears about acceptance of creationism, believing that teaching children that creationism is the only true answer without letting them understand the way science works will prevent any future innovation in the world of science. In February 2014, Nye defended evolution in the classroom in a debate with creationist Ken Ham on the topic of whether creation is a viable model of origins in today's modern, scientific era.
Education controversies
In the US, creationism has become centered in the political controversy over creation and evolution in public education, and whether teaching creationism in science classes conflicts with the separation of church and state. Currently, the controversy comes in the form of whether advocates of the intelligent design movement who wish to "Teach the Controversy" in science classes have conflated science with religion.
People for the American Way polled 1500 North Americans about the teaching of evolution and creationism in November and December 1999. They found that most North Americans were not familiar with creationism, and most North Americans had heard of evolution, but many did not fully understand the basics of the theory. The main findings were:
In such political contexts, creationists argue that their particular religiously based origin belief is superior to those of other belief systems, in particular those made through secular or scientific rationale. Political creationists are opposed by many individuals and organizations who have made detailed critiques and given testimony in various court cases that the alternatives to scientific reasoning offered by creationists are opposed by the consensus of the scientific community.
Criticism
Christian criticism
Most Christians disagree with the teaching of creationism as an alternative to evolution in schools. Several religious organizations, among them the Catholic Church, hold that their faith does not conflict with the scientific consensus regarding evolution. The Clergy Letter Project, which has collected more than 13,000 signatures, is an "endeavor designed to demonstrate that religion and science can be compatible."
In his 2002 article "Intelligent Design as a Theological Problem," George Murphy argues against the view that life on Earth, in all its forms, is direct evidence of God's act of creation (Murphy quotes Phillip E. Johnson's claim that he is speaking "of a God who acted openly and left his fingerprints on all the evidence."). Murphy argues that this view of God is incompatible with the Christian understanding of God as "the one revealed in the cross and resurrection of Christ." The basis of this theology is Isaiah 45:15, "Verily thou art a God that hidest thyself, O God of Israel, the Saviour."
Murphy observes that the execution of a Jewish carpenter by Roman authorities is in and of itself an ordinary event and did not require divine action. On the contrary, for the crucifixion to occur, God had to limit or "empty" himself. It was for this reason that Paul the Apostle wrote, in Philippians 2:5-8:
Let this mind be in you, which was also in Christ Jesus: Who, being in the form of God, thought it not robbery to be equal with God: But made himself of no reputation, and took upon him the form of a servant, and was made in the likeness of men: And being found in fashion as a man, he humbled himself, and became obedient unto death, even the death of the cross.
Murphy concludes: "Just as the Son of God limited himself by taking human form and dying on a cross, God limits divine action in the world to be in accord with rational laws which God has chosen. This enables us to understand the world on its own terms, but it also means that natural processes hide God from scientific observation." For Murphy, a theology of the cross requires that Christians accept a methodological naturalism, meaning that one cannot invoke God to explain natural phenomena, while recognizing that such acceptance does not require one to accept a metaphysical naturalism, which proposes that nature is all that there is.
The Jesuit priest George Coyne has stated that it is "unfortunate that, especially here in America, creationism has come to mean...some literal interpretation of Genesis." He argues that "...Judaic-Christian faith is radically creationist, but in a totally different sense. It is rooted in belief that everything depends on God, or better, all is a gift from God."
Teaching of creationism
Other Christians have expressed qualms about teaching creationism. In March 2006, then Archbishop of Canterbury Rowan Williams, the leader of the world's Anglicans, stated his discomfort about teaching creationism, saying that creationism was "a kind of category mistake, as if the Bible were a theory like other theories." He also said: "My worry is creationism can end up reducing the doctrine of creation rather than enhancing it." The views of the Episcopal Church, a major American-based branch of the Anglican Communion, on teaching creationism resemble those of Williams.
The National Science Teachers Association is opposed to teaching creationism as a science, as is the Association for Science Teacher Education, the National Association of Biology Teachers, the American Anthropological Association, the American Geosciences Institute, the Geological Society of America, the American Geophysical Union, and numerous other professional teaching and scientific societies.
In April 2010, the American Academy of Religion issued Guidelines for Teaching About Religion in K‐12 Public Schools in the United States, which included guidance that creation science or intelligent design should not be taught in science classes, as "Creation science and intelligent design represent worldviews that fall outside of the realm of science that is defined as (and limited to) a method of inquiry based on gathering observable and measurable evidence subject to specific principles of reasoning." However, they, as well as other "worldviews that focus on speculation regarding the origins of life represent another important and relevant form of human inquiry that is appropriately studied in literature or social sciences courses. Such study, however, must include a diversity of worldviews representing a variety of religious and philosophical perspectives and must avoid privileging one view as more legitimate than others."
Randy Moore and Sehoya Cotner, from the biology program at the University of Minnesota, reflect on the relevance of teaching creationism in the article "The Creationist Down the Hall: Does It Matter When Teachers Teach Creationism?", in which they write: "Despite decades of science education reform, numerous legal decisions declaring the teaching of creationism in public-school science classes to be unconstitutional, overwhelming evidence supporting evolution, and the many denunciations of creationism as nonscientific by professional scientific societies, creationism remains popular throughout the United States."
Scientific criticism
Science is a system of knowledge based on observation, empirical evidence, and the development of theories that yield testable explanations and predictions of natural phenomena. By contrast, creationism is often based on literal interpretations of the narratives of particular religious texts. Creationist beliefs involve purported forces that lie outside of nature, such as supernatural intervention, and often do not allow predictions at all. Therefore, these can neither be confirmed nor disproved by scientists. However, many creationist beliefs can be framed as testable predictions about phenomena such as the age of the Earth, its geological history and the origins, distributions and relationships of living organisms found on it. Early science incorporated elements of these beliefs, but as science developed these beliefs were gradually falsified and were replaced with understandings based on accumulated and reproducible evidence that often allows the accurate prediction of future results.
Some scientists, such as Stephen Jay Gould, consider science and religion to be two compatible and complementary fields, with authorities in distinct areas of human experience, so-called non-overlapping magisteria. This view is also held by many theologians, who believe that ultimate origins and meaning are addressed by religion, but favor verifiable scientific explanations of natural phenomena over those of creationist beliefs. Other scientists, such as Richard Dawkins, reject the non-overlapping magisteria and argue that, in disproving literal interpretations of creationists, the scientific method also undermines religious texts as a source of truth. Irrespective of this diversity in viewpoints, since creationist beliefs are not supported by empirical evidence, the scientific consensus is that any attempt to teach creationism as science should be rejected.
Organizations
See also
Biblical inerrancy
Biogenesis
Evolution of complexity
Flying Spaghetti Monster
History of creationism
Religious cosmology
Notes
References
Citations
Works cited
"Presented as a Paleontological Society short course at the annual meeting of the Geological Society of America, Denver, Colorado, October 24, 1999."
Further reading
External links
"Creationism" at the Stanford Encyclopedia of Philosophy by Michael Ruse
"How Creationism Works" at HowStuffWorks by Julia Layton
"TIMELINE: Evolution, Creationism and Intelligent Design" Focuses on major historical and recent events in the scientific and political debate
by Warren D. Allmon, Director of the Museum of the Earth
"What is creationism?" at talk.origins by Mark Isaak
"The Creation/Evolution Continuum" by Eugenie Scott
"15 Answers to Creationist Nonsense" by John Rennie, editor in chief of Scientific American magazine
"Race, Evolution and the Science of Human Origins" by Allison Hopper, Scientific American (July 5, 2021).
Human Timeline (Interactive) Smithsonian, National Museum of Natural History (August 2016)
Christian terminology
Creation myths
Denialism
Obsolete biology theories
Origin of life
Pseudoscience
Religious cosmologies
Theism
https://en.wikipedia.org/wiki/Colloid
A colloid is a mixture in which one substance consisting of microscopically dispersed insoluble particles is suspended throughout another substance. Some definitions specify that the particles must be dispersed in a liquid, while others extend the definition to include substances like aerosols and gels. The term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word suspension is distinguished from colloids by larger particle size). A colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension). The dispersed phase particles have a diameter of approximately 1 nanometre to 1 micrometre.
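The quoted 1 nanometre to 1 micrometre range is the usual operational criterion for calling a dispersed particle "colloidal." As a rough illustration only, the following Python sketch applies that criterion; the function name and thresholds are taken directly from the range above and are not a standard from any particular reference.

```python
def classify_dispersed_particle(diameter_nm: float) -> str:
    """Classify a dispersed particle by diameter, using the approximate
    1 nm - 1000 nm (1 micrometre) range quoted for colloids."""
    if diameter_nm < 1.0:
        return "molecular or ionic scale (true solution)"
    elif diameter_nm <= 1000.0:          # 1000 nm = 1 micrometre
        return "colloidal"
    else:
        return "coarse suspension (particles tend to settle)"

# Example: fat globules in homogenized milk are typically a few hundred nm
print(classify_dispersed_particle(500.0))   # -> "colloidal"
```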
Some colloids are translucent because of the Tyndall effect, which is the scattering of light by particles in the colloid. Other colloids may be opaque or have a slight color.
Colloidal suspensions are the subject of interface and colloid science. This field of study was introduced in 1845 by Francesco Selmi and expanded by Michael Faraday and Thomas Graham, who coined the term colloid in 1861.
Classification of colloids
Colloids can be classified as follows:
Homogeneous mixtures with a dispersed phase in this size range may be called colloidal aerosols, colloidal emulsions, colloidal suspensions, colloidal foams, colloidal dispersions, or hydrosols.
Hydrocolloids
Hydrocolloids are certain chemicals (mostly polysaccharides and proteins) that are colloidally dispersible in water. By becoming effectively "soluble" they change the rheology of water, raising its viscosity and/or inducing gelation. They may also interact with other chemicals, in some cases synergistically, in others antagonistically. Because of these attributes, hydrocolloids are widely used in many areas of technology, from foods through pharmaceuticals, personal care and industrial applications, to provide stabilization, destabilization and separation, gelation, flow control, crystallization control and numerous other effects. Apart from uses of the soluble forms, some hydrocolloids have additional useful functionality in a dry form if, after solubilization, the water is removed, as in the formation of films for breath strips or sausage casings, or wound-dressing fibers, some being more compatible with skin than others. There are many different types of hydrocolloids, each with differences in structure, function and utility, that are generally best suited to particular application areas in the control of rheology and the physical modification of form and texture. Some hydrocolloids, like starch and casein, are useful foods as well as rheology modifiers; others have limited nutritive value, usually providing a source of fiber.
The term hydrocolloids also refers to a type of dressing designed to lock moisture in the skin and help the natural healing process of skin to reduce scarring, itching and soreness.
Components
Hydrocolloids contain some type of gel-forming agent, such as sodium carboxymethylcellulose (NaCMC) or gelatin. They are normally combined with some type of sealant, such as polyurethane, to 'stick' to the skin.
Colloid compared with solution
A colloid has a dispersed phase and a continuous phase, whereas in a solution, the solute and solvent constitute only one phase. The solute in a solution consists of individual molecules or ions, whereas colloidal particles are bigger. For example, in a solution of salt in water, the sodium chloride (NaCl) crystal dissolves, and the Na+ and Cl− ions are surrounded by water molecules. However, in a colloid such as milk, the colloidal particles are globules of fat, rather than individual fat molecules. Because a colloid consists of multiple phases, it has very different properties from a fully mixed, continuous solution.
Interaction between particles
The following forces play an important role in the interaction of colloid particles:
Excluded volume repulsion: This refers to the impossibility of any overlap between hard particles.
Electrostatic interaction: Colloidal particles often carry an electrical charge and therefore attract or repel each other. The charge of both the continuous and the dispersed phase, as well as the mobility of the phases are factors affecting this interaction.
van der Waals forces: This is due to interaction between two dipoles that are either permanent or induced. Even if the particles do not have a permanent dipole, fluctuations of the electron density give rise to a temporary dipole in a particle. This temporary dipole induces a dipole in particles nearby. The temporary dipole and the induced dipoles are then attracted to each other. This is known as the van der Waals force, which is always present (unless the refractive indexes of the dispersed and continuous phases are matched), is short-range, and is attractive.
Steric forces between polymer-covered surfaces or in solutions containing non-adsorbing polymer can modulate interparticle forces, producing an additional steric repulsive force (which is predominantly entropic in origin) or an attractive depletion force between them.
Sedimentation velocity
The Earth's gravitational field acts upon colloidal particles. Therefore, if the colloidal particles are denser than the medium of suspension, they will sediment (fall to the bottom), or if they are less dense, they will cream (float to the top). Larger particles also have a greater tendency to sediment because the Brownian motion counteracting this movement is weaker for them.
The sedimentation or creaming velocity is found by equating the Stokes drag force with the gravitational force:

$$m_{\text{Arch}}\, g = 6 \pi \eta r v$$

where

$m_{\text{Arch}}\, g$ is the Archimedean weight of the colloidal particles,

$\eta$ is the viscosity of the suspension medium,

$r$ is the radius of the colloidal particle,

and $v$ is the sedimentation or creaming velocity.

The mass of the colloidal particle is found using:

$$m_{\text{Arch}} = \Delta\rho\, V$$

where

$V$ is the volume of the colloidal particle, calculated using the volume of a sphere $V = \tfrac{4}{3}\pi r^{3}$,

and $\Delta\rho$ is the difference in mass density between the colloidal particle and the suspension medium.

By rearranging, the sedimentation or creaming velocity is:

$$v = \frac{2\, \Delta\rho\, g\, r^{2}}{9\, \eta}$$
There is an upper size-limit for the diameter of colloidal particles because particles larger than 1 μm tend to sediment, and thus the substance would no longer be considered a colloidal suspension.
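To make the rearranged expression concrete, the sketch below evaluates the Stokes settling velocity in Python. The particle and fluid properties are illustrative assumptions (a silica-like sphere in water), not values taken from this article.

```python
def stokes_velocity(radius_m: float, delta_rho: float, viscosity: float,
                    g: float = 9.81) -> float:
    """Sedimentation (or, if negative, creaming) velocity
    v = 2 * delta_rho * g * r**2 / (9 * eta), obtained by balancing
    Stokes drag against the buoyancy-corrected weight."""
    return 2.0 * delta_rho * g * radius_m**2 / (9.0 * viscosity)

# Illustrative assumptions: 0.5 um radius silica-like sphere in water
radius = 0.5e-6                 # m
delta_rho = 2200.0 - 1000.0     # kg/m^3, particle density minus water density
eta = 1.0e-3                    # Pa*s, viscosity of water near room temperature

v = stokes_velocity(radius, delta_rho, eta)
print(f"settling velocity ~ {v:.1e} m/s")   # ~6.5e-7 m/s, well under 1 um/s
```

Because the velocity scales with the square of the radius, halving the particle size cuts the settling rate by a factor of four, which is one reason sub-micrometre particles can remain suspended for long periods.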
The colloidal particles are said to be in sedimentation equilibrium if the rate of sedimentation is equal to the rate of movement from Brownian motion.
Preparation
There are two principal ways to prepare colloids:
Dispersion of large particles or droplets to the colloidal dimensions by milling, spraying, or application of shear (e.g., shaking, mixing, or high shear mixing).
Condensation of small dissolved molecules into larger colloidal particles by precipitation, condensation, or redox reactions. Such processes are used in the preparation of colloidal silica or gold.
Stabilization
The stability of a colloidal system is defined by particles remaining suspended in solution and depends on the interaction forces between the particles. These include electrostatic interactions and van der Waals forces, because they both contribute to the overall free energy of the system.
A colloid is stable if the interaction energy due to attractive forces between the colloidal particles is less than kT, where k is the Boltzmann constant and T is the absolute temperature. If this is the case, then the colloidal particles will repel or only weakly attract each other, and the substance will remain a suspension.
If the interaction energy is greater than kT, the attractive forces will prevail, and the colloidal particles will begin to clump together. This process is referred to generally as aggregation, but is also referred to as flocculation, coagulation or precipitation. While these terms are often used interchangeably, for some definitions they have slightly different meanings. For example, coagulation can be used to describe irreversible, permanent aggregation where the forces holding the particles together are stronger than any external forces caused by stirring or mixing. Flocculation can be used to describe reversible aggregation involving weaker attractive forces, and the aggregate is usually called a floc. The term precipitation is normally reserved for describing a phase change from a colloid dispersion to a solid (precipitate) when it is subjected to a perturbation. Aggregation causes sedimentation or creaming, therefore the colloid is unstable: if either of these processes occur the colloid will no longer be a suspension.
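The kT criterion described above can be written down directly. In the minimal sketch below, the attractive interaction energy is assumed to be already known (for example from a DLVO-type estimate, see the stabilization discussion below); the only physics used is the comparison against kT stated in this section.

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K

def is_colloid_stable(attraction_energy_J: float,
                      temperature_K: float = 298.0) -> bool:
    """Stable (remains a suspension) if the attractive interaction
    energy between particles is less than k*T; otherwise aggregation
    (flocculation, coagulation, or precipitation) is expected."""
    return attraction_energy_J < K_B * temperature_K

# Illustrative assumption: an attraction of 0.5 kT at room temperature
print(is_colloid_stable(0.5 * K_B * 298.0))   # True -> stable suspension
```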
Electrostatic stabilization and steric stabilization are the two main mechanisms for stabilization against aggregation.
Electrostatic stabilization is based on the mutual repulsion of like electrical charges. The charge of colloidal particles is structured in an electrical double layer, where the particles are charged on the surface, but then attract counterions (ions of opposite charge) which surround the particle. The electrostatic repulsion between suspended colloidal particles is most readily quantified in terms of the zeta potential. The combined effect of van der Waals attraction and electrostatic repulsion on aggregation is described quantitatively by the DLVO theory. A common method of stabilising a colloid (converting it from a precipitate) is peptization, a process where it is shaken with an electrolyte.
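As a rough illustration of how DLVO theory combines the van der Waals attraction with the screened electrostatic repulsion just described, the sketch below evaluates a textbook approximation of the pair energy for two identical spheres (Derjaguin form for the attraction, weak-overlap form for the repulsion). The Hamaker constant, surface potential, particle radius, separation and salt concentrations are assumed example values, not data from this article.

```python
import math

K_B  = 1.380649e-23      # J/K, Boltzmann constant
E_CH = 1.602176634e-19   # C, elementary charge
EPS0 = 8.8541878128e-12  # F/m, vacuum permittivity
N_A  = 6.02214076e23     # 1/mol, Avogadro constant

def debye_length(ionic_strength_M: float, eps_r: float = 78.5,
                 T: float = 298.0) -> float:
    """Debye screening length (m) for a 1:1 electrolyte; adding salt
    shrinks it, which is how salt destabilizes charged colloids."""
    n_ions = 2.0 * ionic_strength_M * 1000.0 * N_A      # ions per m^3
    return math.sqrt(eps_r * EPS0 * K_B * T / (n_ions * E_CH**2))

def dlvo_pair_energy(h: float, radius: float, hamaker: float, psi0: float,
                     ionic_strength_M: float, eps_r: float = 78.5,
                     T: float = 298.0) -> float:
    """Approximate DLVO energy (J) at surface separation h between two
    identical spheres: van der Waals attraction plus weak-overlap
    double-layer repulsion."""
    kappa = 1.0 / debye_length(ionic_strength_M, eps_r, T)
    v_vdw = -hamaker * radius / (12.0 * h)
    v_edl = 2.0 * math.pi * eps_r * EPS0 * radius * psi0**2 * math.exp(-kappa * h)
    return v_vdw + v_edl

# Illustrative assumptions: 100 nm radius, Hamaker constant 1e-20 J,
# 25 mV surface potential, 2 nm surface separation
for salt_M in (0.001, 0.1):
    e = dlvo_pair_energy(h=2e-9, radius=100e-9, hamaker=1e-20,
                         psi0=0.025, ionic_strength_M=salt_M)
    print(f"{salt_M} M salt: pair energy = {e / (K_B * 298.0):+.1f} kT")
# Low salt gives a strong net repulsion (stable); high salt screens the
# charge and leaves a net attraction, favouring aggregation.
```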
Steric stabilization consists of adsorbing a layer of a polymer or surfactant onto the particles to prevent them from getting close enough to be within the range of the attractive forces. The polymer consists of chains that are attached to the particle surface, and the part of the chain that extends out is soluble in the suspension medium. This technique is used to stabilize colloidal particles in all types of solvents, including organic solvents.
A combination of the two mechanisms is also possible (electrosteric stabilization).
A method called gel network stabilization represents the principal way to produce colloids stable to both aggregation and sedimentation. The method consists of adding to the colloidal suspension a polymer able to form a gel network. Particle settling is hindered by the stiffness of the polymeric matrix in which particles are trapped, and the long polymeric chains can provide a steric or electrosteric stabilization to the dispersed particles. Examples of such substances are xanthan and guar gum.
Destabilization
Destabilization can be accomplished by different methods:
Removal of the electrostatic barrier that prevents aggregation of the particles. This can be accomplished by the addition of salt to a suspension to reduce the Debye screening length (the width of the electrical double layer) of the particles. It is also accomplished by changing the pH of a suspension to effectively neutralise the surface charge of the particles in suspension. This removes the repulsive forces that keep colloidal particles separate and allows for aggregation due to van der Waals forces. Minor changes in pH can manifest as significant alterations to the zeta potential. When the magnitude of the zeta potential falls below a certain threshold, typically around ±5 mV, rapid coagulation or aggregation tends to occur.
Addition of a charged polymer flocculant. Polymer flocculants can bridge individual colloidal particles by attractive electrostatic interactions. For example, negatively charged colloidal silica or clay particles can be flocculated by the addition of a positively charged polymer.
Addition of non-adsorbed polymers called depletants that cause aggregation due to entropic effects.
Unstable colloidal suspensions of low-volume fraction form clustered liquid suspensions, wherein individual clusters of particles sediment if they are more dense than the suspension medium, or cream if they are less dense. However, colloidal suspensions of higher-volume fraction form colloidal gels with viscoelastic properties. Viscoelastic colloidal gels, such as bentonite and toothpaste, flow like liquids under shear, but maintain their shape when shear is removed. It is for this reason that toothpaste can be squeezed from a toothpaste tube, but stays on the toothbrush after it is applied.
Monitoring stability
The most widely used technique to monitor the dispersion state of a product, and to identify and quantify destabilization phenomena, is multiple light scattering coupled with vertical scanning. This method, known as turbidimetry, is based on measuring the fraction of light that, after being sent through the sample, is backscattered by the colloidal particles. The backscattering intensity is directly proportional to the average particle size and volume fraction of the dispersed phase. Therefore, local changes in concentration caused by sedimentation or creaming, and the clumping together of particles caused by aggregation, can be detected and monitored. These phenomena are associated with unstable colloids.
Dynamic light scattering can be used to detect the size of colloidal particles by measuring how fast they diffuse. This method involves directing laser light towards a colloid. The scattered light forms an interference pattern, and the fluctuation in light intensity in this pattern is caused by the Brownian motion of the particles. If the apparent size of the particles increases because they clump together via aggregation, the result is slower Brownian motion. This technique can confirm that aggregation has occurred if the apparent particle size is determined to be beyond the typical size range for colloidal particles.
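The conversion from the measured diffusion to a particle size uses the Stokes–Einstein relation. The short sketch below performs that conversion; the diffusion coefficient, solvent viscosity and temperature are assumed example values rather than measurements discussed here.

```python
import math

K_B = 1.380649e-23   # J/K, Boltzmann constant

def hydrodynamic_radius(diffusion_coeff: float, viscosity: float = 1.0e-3,
                        T: float = 298.0) -> float:
    """Stokes-Einstein relation r = k*T / (6 * pi * eta * D).
    Slower diffusion (smaller D) means a larger apparent particle,
    which is how dynamic light scattering reveals aggregation."""
    return K_B * T / (6.0 * math.pi * viscosity * diffusion_coeff)

# Illustrative assumption: D = 4.3e-12 m^2/s measured in water at 25 C
r = hydrodynamic_radius(4.3e-12)
print(f"hydrodynamic radius ~ {r * 1e9:.0f} nm")   # ~50 nm, colloidal range
```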
Accelerating methods for shelf life prediction
The kinetic process of destabilisation can be rather long (up to several months or even years for some products), so formulators often use accelerating methods to reach a reasonable development time for new product design. Thermal methods are the most commonly used and consist of increasing the temperature to accelerate destabilisation (while staying below the critical temperatures of phase inversion or chemical degradation). Temperature affects not only the viscosity but also the interfacial tension in the case of non-ionic surfactants, and more generally the interaction forces inside the system. Storing a dispersion at high temperature makes it possible to simulate real-life conditions for a product (e.g. a tube of sunscreen cream in a car in the summer), but also to accelerate destabilisation processes by up to 200 times.
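The text does not name a kinetic model for this speed-up; a common working assumption is Arrhenius-type behaviour, under which an acceleration factor between the storage and test temperatures can be estimated. In the sketch below the activation energy is an illustrative assumption, not a measured value.

    import math

    R = 8.314  # gas constant, J/(mol*K)

    def acceleration_factor(T_test_C, T_storage_C, activation_energy_J_mol=80e3):
        """Arrhenius-type acceleration factor between a test temperature and the
        normal storage temperature (the activation energy is an assumed, illustrative value)."""
        T_test = T_test_C + 273.15
        T_store = T_storage_C + 273.15
        return math.exp(activation_energy_J_mol / R * (1.0 / T_store - 1.0 / T_test))

    # Testing at 50 degC instead of storing at 25 degC gives roughly a 12x speed-up
    # under these assumptions; higher temperatures or activation energies give more.
    print(round(acceleration_factor(50, 25), 1))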
Mechanical acceleration, including vibration, centrifugation and agitation, is sometimes used. These methods subject the product to forces that push the particles or droplets against one another, hence helping film drainage. Some emulsions that would never coalesce under normal gravity do coalesce under artificial gravity. Segregation of different populations of particles has also been highlighted when using centrifugation and vibration.
As a model system for atoms
In physics, colloids are an interesting model system for atoms. Micrometre-scale colloidal particles are large enough to be observed by optical techniques such as confocal microscopy. Many of the forces that govern the structure and behavior of matter, such as excluded volume interactions or electrostatic forces, govern the structure and behavior of colloidal suspensions. For example, the same techniques used to model ideal gases can be applied to model the behavior of a hard sphere colloidal suspension. Phase transitions in colloidal suspensions can be studied in real time using optical techniques, and are analogous to phase transitions in liquids. In many interesting cases optical fluidity is used to control colloid suspensions.
Crystals
A colloidal crystal is a highly ordered array of particles that can be formed over a very long range (typically on the order of a few millimeters to one centimeter) and that appear analogous to their atomic or molecular counterparts. One of the finest natural examples of this ordering phenomenon can be found in precious opal, in which brilliant regions of pure spectral color result from close-packed domains of amorphous colloidal spheres of silicon dioxide (or silica, SiO2). These spherical particles precipitate in highly siliceous pools in Australia and elsewhere, and form these highly ordered arrays after years of sedimentation and compression under hydrostatic and gravitational forces. The periodic arrays of submicrometre spherical particles provide similar arrays of interstitial voids, which act as a natural diffraction grating for visible light waves, particularly when the interstitial spacing is of the same order of magnitude as the incident lightwave.
Thus, it has been known for many years that, due to repulsive Coulombic interactions, electrically charged macromolecules in an aqueous environment can exhibit long-range crystal-like correlations, with interparticle separation distances often considerably greater than the individual particle diameter. In all of these cases in nature, the same brilliant iridescence (or play of colors) can be attributed to the diffraction and constructive interference of visible lightwaves that satisfy Bragg’s law, in a manner analogous to the scattering of X-rays in crystalline solids.
The large number of experiments exploring the physics and chemistry of these so-called "colloidal crystals" has emerged as a result of the relatively simple methods that have evolved in the last 20 years for preparing synthetic monodisperse colloids (both polymer and mineral) and, through various mechanisms, implementing and preserving their long-range order formation.
In biology
Colloidal phase separation is an important organising principle for compartmentalisation of both the cytoplasm and nucleus of cells into biomolecular condensates—similar in importance to compartmentalisation via lipid bilayer membranes, a type of liquid crystal. The term biomolecular condensate has been used to refer to clusters of macromolecules that arise via liquid-liquid or liquid-solid phase separation within cells. Macromolecular crowding strongly enhances colloidal phase separation and formation of biomolecular condensates.
In the environment
Colloidal particles can also serve as transport vectors for diverse contaminants in surface water (sea water, lakes, rivers, fresh water bodies) and in underground water circulating in fissured rocks (e.g. limestone, sandstone, granite). Radionuclides and heavy metals easily sorb onto colloids suspended in water. Various types of colloids are recognised: inorganic colloids (e.g. clay particles, silicates, iron oxy-hydroxides) and organic colloids (humic and fulvic substances). When heavy metals or radionuclides form their own pure colloids, the term "eigencolloid" is used to designate pure phases, i.e. pure Tc(OH)4, U(OH)4, or Am(OH)3. Colloids have been suspected of enabling the long-range transport of plutonium at the Nevada Nuclear Test Site, and they have been the subject of detailed studies for many years. However, the mobility of inorganic colloids is very low in compacted bentonites and in deep clay formations because of the process of ultrafiltration occurring in the dense clay membrane. The question is less clear for small organic colloids, which are often mixed in porewater with truly dissolved organic molecules.
In soil science, the colloidal fraction of soils consists of tiny clay and humus particles that are less than 1 μm in diameter and carry positive and/or negative electrostatic charges that vary depending on the chemical conditions of the soil sample, i.e. the soil pH.
Intravenous therapy
Colloid solutions used in intravenous therapy belong to a major group of volume expanders and can be used for intravenous fluid replacement. Colloids preserve a high colloid osmotic pressure in the blood, and therefore they should theoretically preferentially increase the intravascular volume, whereas other types of volume expanders, called crystalloids, also increase the interstitial and intracellular volumes. However, there is still controversy regarding the actual difference in efficacy arising from this distinction, and much of the research related to this use of colloids is based on fraudulent research by Joachim Boldt. Another difference is that crystalloids are generally much cheaper than colloids.
|
https://en.wikipedia.org/wiki/Concrete
|
Concrete is a composite material composed of aggregate bonded together with a fluid cement that cures over time. Concrete is the second-most-used substance in the world after water, and is the most widely used building material. Its usage worldwide, ton for ton, is twice that of steel, wood, plastics, and aluminium combined.
When aggregate is mixed with dry Portland cement and water, the mixture forms a fluid slurry that is easily poured and molded into shape. The cement reacts with the water through a process called concrete hydration that hardens it over several hours to form a hard matrix that binds the materials together into a durable stone-like material that has many uses. This time allows concrete to not only be cast in forms, but also to have a variety of tooled processes performed. The hydration process is exothermic, which means ambient temperature plays a significant role in how long it takes concrete to set. Often, additives (such as pozzolans or superplasticizers) are included in the mixture to improve the physical properties of the wet mix, delay or accelerate the curing time, or otherwise change the finished material. Most concrete is poured with reinforcing materials (such as steel rebar) embedded to provide tensile strength, yielding reinforced concrete.
In the past, lime-based cement binders, such as lime putty, were often used, sometimes together with other hydraulic (water-resistant) cements such as calcium aluminate cement, or with Portland cement to form Portland cement concrete (named for its visual resemblance to Portland stone). Many other non-cementitious types of concrete exist with other methods of binding aggregate together, including asphalt concrete with a bitumen binder, which is frequently used for road surfaces, and polymer concretes that use polymers as a binder. Concrete is distinct from mortar. Whereas concrete is itself a building material, mortar is a bonding agent that typically holds bricks, tiles and other masonry units together. Grout is another material associated with concrete and cement. It does not contain coarse aggregates and is usually either pourable or thixotropic, and is used to fill gaps between masonry components or coarse aggregate which has already been put in place. Some methods of concrete manufacture and repair involve pumping grout into the gaps to make up a solid mass in situ.
Etymology
The word concrete comes from the Latin word "concretus" (meaning compact or condensed), the perfect passive participle of "concrescere", from "con-" (together) and "crescere" (to grow).
History
Ancient times
Mayan concrete at the ruins of Uxmal (850-925 A.D.) is referenced in Incidents of Travel in the Yucatán by John L. Stephens. "The roof is flat and had been covered with cement". "The floors were cement, in some places hard, but, by long exposure, broken, and now crumbling under the feet." "But throughout the wall was solid, and consisting of large stones imbedded in mortar, almost as hard as rock."
Small-scale production of concrete-like materials was pioneered by the Nabatean traders who occupied and controlled a series of oases and developed a small empire in the regions of southern Syria and northern Jordan from the 4th century BC. They discovered the advantages of hydraulic lime, with some self-cementing properties, by 700 BC. They built kilns to supply mortar for the construction of rubble masonry houses, concrete floors, and underground waterproof cisterns. They kept the cisterns secret as these enabled the Nabataeans to thrive in the desert. Some of these structures survive to this day.
Classical era
In the Ancient Egyptian and later Roman eras, builders discovered that adding volcanic ash to lime allowed the mix to set underwater; in doing so, they discovered the pozzolanic reaction.
Concrete floors were found in the royal palace of Tiryns, Greece, which dates roughly to 1400-1200 BC. Lime mortars were used in Greece, such as in Crete and Cyprus, in 800 BC. The Assyrian Jerwan Aqueduct (688 BC) made use of waterproof concrete. Concrete was used for construction in many ancient structures.
The Romans used concrete extensively from 300 BC to 476 AD. During the Roman Empire, Roman concrete (or opus caementicium) was made from quicklime, pozzolana and an aggregate of pumice. Its widespread use in many Roman structures, a key event in the history of architecture termed the Roman architectural revolution, freed Roman construction from the restrictions of stone and brick materials. It enabled revolutionary new designs in terms of both structural complexity and dimension. The Colosseum in Rome was built largely of concrete, and the Pantheon has the world's largest unreinforced concrete dome.
Concrete, as the Romans knew it, was a new and revolutionary material. Laid in the shape of arches, vaults and domes, it quickly hardened into a rigid mass, free from many of the internal thrusts and strains that troubled the builders of similar structures in stone or brick.
Modern tests show that opus caementicium had as much compressive strength as modern Portland-cement concrete. However, due to the absence of reinforcement, its tensile strength was far lower than that of modern reinforced concrete, and its mode of application also differed:
Modern structural concrete differs from Roman concrete in two important details. First, its mix consistency is fluid and homogeneous, allowing it to be poured into forms rather than requiring hand-layering together with the placement of aggregate, which, in Roman practice, often consisted of rubble. Second, integral reinforcing steel gives modern concrete assemblies great strength in tension, whereas Roman concrete could depend only upon the strength of the concrete bonding to resist tension.
The long-term durability of Roman concrete structures has been found to be due to its use of pyroclastic (volcanic) rock and ash, whereby the crystallization of strätlingite (a specific and complex calcium aluminosilicate hydrate) and the coalescence of this and similar calcium–aluminium-silicate–hydrate cementing binders helped give the concrete a greater degree of fracture resistance even in seismically active environments. Roman concrete is significantly more resistant to erosion by seawater than modern concrete; it used pyroclastic materials which react with seawater to form Al-tobermorite crystals over time.
The widespread use of concrete in many Roman structures ensured that many survive to the present day. The Baths of Caracalla in Rome are just one example. Many Roman aqueducts and bridges, such as the magnificent Pont du Gard in southern France, have masonry cladding on a concrete core, as does the dome of the Pantheon.
Middle Ages
After the Roman Empire, the use of burned lime and pozzolana was greatly reduced. Low kiln temperatures in the burning of lime, lack of pozzolana, and poor mixing all contributed to a decline in the quality of concrete and mortar. From the 11th century, the increased use of stone in church and castle construction led to an increased demand for mortar. Quality began to improve in the 12th century through better grinding and sieving. Medieval lime mortars and concretes were non-hydraulic and were used for binding masonry, "hearting" (binding rubble masonry cores) and foundations. Bartholomaeus Anglicus in his De proprietatibus rerum (1240) describes the making of mortar. In an English translation from 1397, it reads "lyme ... is a stone brent; by medlynge thereof with sonde and water sement is made". From the 14th century, the quality of mortar was again excellent, but only from the 17th century was pozzolana commonly added.
The Canal du Midi was built using concrete in 1670.
Industrial era
Perhaps the greatest step forward in the modern use of concrete was Smeaton's Tower, built by British engineer John Smeaton in Devon, England, between 1756 and 1759. This third Eddystone Lighthouse pioneered the use of hydraulic lime in concrete, using pebbles and powdered brick as aggregate.
A method for producing Portland cement was developed in England and patented by Joseph Aspdin in 1824. Aspdin chose the name for its similarity to Portland stone, which was quarried on the Isle of Portland in Dorset, England. His son William continued developments into the 1840s, earning him recognition for the development of "modern" Portland cement.
Reinforced concrete was invented in 1849 by Joseph Monier, and the first reinforced concrete house was built by François Coignet in 1853.
The first concrete reinforced bridge was designed and built by Joseph Monier in 1875.
Prestressed concrete and post-tensioned concrete were pioneered by Eugène Freyssinet, a French structural and civil engineer. Concrete components or structures are compressed by tendon cables during, or after, their fabrication in order to strengthen them against tensile forces developing when put in service. Freyssinet patented the technique on 2 October 1928.
Composition
Concrete is an artificial composite material, comprising a matrix of cementitious binder (typically Portland cement paste or asphalt) and a dispersed phase or "filler" of aggregate (typically a rocky material, loose stones, and sand). The binder "glues" the filler together to form a synthetic conglomerate. Many types of concrete are available, determined by the formulations of binders and the types of aggregate used to suit the application of the engineered material. These variables determine strength and density, as well as chemical and thermal resistance of the finished product.
Construction aggregates consist of large chunks of material in a concrete mix, generally a coarse gravel or crushed rocks such as limestone, or granite, along with finer materials such as sand.
Cement paste, most commonly made of Portland cement, is the most prevalent kind of concrete binder. For cementitious binders, water is mixed with the dry cement powder and aggregate, which produces a semi-liquid slurry (paste) that can be shaped, typically by pouring it into a form. The concrete solidifies and hardens through a chemical process called hydration. The water reacts with the cement, which bonds the other components together, creating a robust, stone-like material. Other cementitious materials, such as fly ash and slag cement, are sometimes added—either pre-blended with the cement or directly as a concrete component—and become a part of the binder for the aggregate. Fly ash and slag can enhance some properties of concrete such as fresh properties and durability. Alternatively, other materials can also be used as a concrete binder: the most prevalent substitute is asphalt, which is used as the binder in asphalt concrete.
Admixtures are added to modify the cure rate or properties of the material. Mineral admixtures use recycled materials as concrete ingredients. Conspicuous materials include fly ash, a by-product of coal-fired power plants; ground granulated blast furnace slag, a by-product of steelmaking; and silica fume, a by-product of industrial electric arc furnaces.
Structures employing Portland cement concrete usually include steel reinforcement because this type of concrete can be formulated with high compressive strength, but always has lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension, typically steel rebar.
The mix design depends on the type of structure being built, how the concrete is mixed and delivered, and how it is placed to form the structure.
Cement
Portland cement is the most common type of cement in general usage. It is a basic ingredient of concrete, mortar, and many plasters. British masonry worker Joseph Aspdin patented Portland cement in 1824. It was named because of the similarity of its color to Portland limestone, quarried from the English Isle of Portland and used extensively in London architecture. It consists of a mixture of calcium silicates (alite, belite), aluminates and ferrites—compounds which combine calcium, silicon, aluminium and iron in forms which will react with water. Portland cement and similar materials are made by heating limestone (a source of calcium) with clay or shale (a source of silicon, aluminium and iron) and grinding this product (called clinker) with a source of sulfate (most commonly gypsum).
In modern cement kilns, many advanced features are used to lower the fuel consumption per ton of clinker produced. Cement kilns are extremely large, complex, and inherently dusty industrial installations, and have emissions which must be controlled. Of the various ingredients used to produce a given quantity of concrete, the cement is the most energetically expensive. Even complex and efficient kilns require 3.3 to 3.6 gigajoules of energy to produce a ton of clinker and then grind it into cement. Many kilns can be fueled with difficult-to-dispose-of wastes, the most common being used tires. The extremely high temperatures and long periods of time at those temperatures allow cement kilns to efficiently and completely burn even difficult-to-use fuels.
Water
Combining water with a cementitious material forms a cement paste by the process of hydration. The cement paste glues the aggregate together, fills voids within it, and makes it flow more freely.
As stated by Abrams' law, a lower water-to-cement ratio yields a stronger, more durable concrete, whereas more water gives a freer-flowing concrete with a higher slump. Impure water used to make concrete can cause problems when setting or in causing premature failure of the structure.
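Abrams' law is usually written as an exponential relation between compressive strength and the water-to-cement ratio. The sketch below illustrates its shape; the constants A and B are empirical and depend on the materials, age and curing regime, so the values used here are illustrative assumptions rather than design figures.

    def abrams_strength_MPa(w_c_ratio, A=96.5, B=7.0):
        """Abrams' law: compressive strength falls as the water/cement ratio rises.
        A and B are empirical constants; the values here are illustrative only."""
        return A / (B ** w_c_ratio)

    # Lowering w/c strengthens the concrete at the cost of workability (lower slump).
    for wc in (0.40, 0.50, 0.60):
        print(f"w/c = {wc:.2f}  ->  ~{abrams_strength_MPa(wc):.0f} MPa")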
Portland cement consists of five major compounds of calcium silicates and aluminates, each ranging from 5 to 50% by weight, which all undergo hydration and contribute to the final material's strength. Thus, the hydration of cement involves many reactions, often occurring at the same time. As the reactions proceed, the products of the cement hydration process gradually bond together the individual sand and gravel particles and other components of the concrete to form a solid mass.
Hydration of tricalcium silicate
Cement chemist notation: C3S + H → C-S-H + CH + heat
Standard notation: Ca3SiO5 + H2O → CaO・SiO2・H2O (gel) + Ca(OH)2 + heat
Balanced: 2 Ca3SiO5 + 7 H2O → 3 CaO・2 SiO2・4 H2O (gel) + 3 Ca(OH)2 + heat
(approximate, as the exact ratios of CaO, SiO2 and H2O in C-S-H can vary)
Due to the nature of the chemical bonds created in these reactions and the final characteristics of the hardened cement paste formed, the process of cement hydration is considered irreversible.
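The balanced tricalcium silicate reaction above also fixes the minimum water demand of that phase. As a back-of-the-envelope check (ignoring the other cement phases and the extra water needed for workability), the mass of water consumed per unit mass of C3S follows directly from the molar masses:

    # Molar masses in g/mol
    M_C3S = 3 * 40.08 + 28.09 + 5 * 16.00   # Ca3SiO5, about 228.3 g/mol
    M_H2O = 2 * 1.008 + 16.00               # about 18.0 g/mol

    # Balanced reaction above: 2 Ca3SiO5 + 7 H2O -> C-S-H gel + 3 Ca(OH)2
    water_per_kg_C3S = (7 * M_H2O) / (2 * M_C3S)
    print(f"~{water_per_kg_C3S:.2f} kg of water per kg of tricalcium silicate")  # ~0.28 kg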
Aggregates
Fine and coarse aggregates make up the bulk of a concrete mixture. Sand, natural gravel, and crushed stone are used mainly for this purpose. Recycled aggregates (from construction, demolition, and excavation waste) are increasingly used as partial replacements for natural aggregates, while a number of manufactured aggregates, including air-cooled blast furnace slag and bottom ash, are also permitted.
The size distribution of the aggregate determines how much binder is required. Aggregate of a single, uniform size leaves the largest gaps, whereas adding aggregate with smaller particles tends to fill these gaps. The binder must fill the gaps between the aggregate as well as paste the surfaces of the aggregate together, and is typically the most expensive component. Thus, variation in the sizes of the aggregate reduces the cost of concrete. The aggregate is nearly always stronger than the binder, so its use does not negatively affect the strength of the concrete.
Redistribution of aggregates after compaction often creates non-homogeneity due to the influence of vibration. This can lead to strength gradients.
Decorative stones such as quartzite, small river stones or crushed glass are sometimes added to the surface of concrete for a decorative "exposed aggregate" finish, popular among landscape designers.
Admixtures
Admixtures are materials in the form of powder or fluids that are added to the concrete to give it certain characteristics not obtainable with plain concrete mixes. Admixtures are defined as additions "made as the concrete mix is being prepared". The most common admixtures are retarders and accelerators. In normal use, admixture dosages are less than 5% by mass of cement and are added to the concrete at the time of batching/mixing. (See below.) The common types of admixtures are as follows:
Accelerators speed up the hydration (hardening) of the concrete. Typical materials used are calcium chloride, calcium nitrate and sodium nitrate. However, use of chlorides may cause corrosion in steel reinforcing and is prohibited in some countries, so that nitrates may be favored, even though they are less effective than the chloride salt. Accelerating admixtures are especially useful for modifying the properties of concrete in cold weather.
Air entraining agents add and entrain tiny air bubbles in the concrete, which reduces damage during freeze-thaw cycles, increasing durability. However, entrained air entails a tradeoff with strength, as each 1% of air may decrease compressive strength by 5%. If too much air becomes trapped in the concrete as a result of the mixing process, defoamers can be used to encourage the air bubbles to agglomerate, rise to the surface of the wet concrete and then disperse.
Bonding agents are used to create a bond between old and new concrete (typically a type of polymer) with wide temperature tolerance and corrosion resistance.
Corrosion inhibitors are used to minimize the corrosion of steel and steel bars in concrete.
Crystalline admixtures are typically added during batching of the concrete to lower permeability. The reaction takes place when exposed to water and un-hydrated cement particles to form insoluble needle-shaped crystals, which fill capillary pores and micro-cracks in the concrete to block pathways for water and waterborne contaminants. Concrete with a crystalline admixture can be expected to self-seal, as constant exposure to water will continuously initiate crystallization to ensure permanent waterproof protection.
Pigments can be used to change the color of concrete, for aesthetics.
Plasticizers increase the workability of plastic, or "fresh", concrete, allowing it to be placed more easily, with less consolidating effort. A typical plasticizer is lignosulfonate. Plasticizers can be used to reduce the water content of a concrete while maintaining workability and are sometimes called water-reducers due to this use. Such treatment improves its strength and durability characteristics.
Superplasticizers (also called high-range water-reducers) are a class of plasticizers that have fewer deleterious effects and can be used to increase workability more than is practical with traditional plasticizers. Superplasticizers are used to increase compressive strength: they increase the workability of the concrete and lower the water demand by 15–30%.
Pumping aids improve pumpability, thicken the paste and reduce separation and bleeding.
Retarders slow the hydration of concrete and are used in large or difficult pours where partial setting is undesirable before completion of the pour. Typical polyol retarders are sugar, sucrose, sodium gluconate, glucose, citric acid, and tartaric acid.
Mineral admixtures and blended cements
These very fine-grained inorganic materials, which have pozzolanic or latent hydraulic properties, are added to the concrete mix to improve the properties of the concrete (mineral admixtures) or as a replacement for Portland cement (blended cements). Products which incorporate limestone, fly ash, blast furnace slag, and other useful materials with pozzolanic properties into the mix are being tested and used. These developments are ever growing in relevance to minimize the impacts caused by cement use, notorious for being one of the largest producers (at about 5 to 10%) of global greenhouse gas emissions. The use of alternative materials is also capable of lowering costs, improving concrete properties, and recycling wastes, the latter being relevant for circular economy aspects of the construction industry, whose demand is ever growing with greater impacts on raw material extraction, waste generation and landfill practices.
Fly ash: A by-product of coal-fired electric generating plants, it is used to partially replace Portland cement (by up to 60% by mass). The properties of fly ash depend on the type of coal burnt. In general, siliceous fly ash is pozzolanic, while calcareous fly ash has latent hydraulic properties.
Ground granulated blast furnace slag (GGBFS or GGBS): A by-product of steel production is used to partially replace Portland cement (by up to 80% by mass). It has latent hydraulic properties.
Silica fume: A by-product of the production of silicon and ferrosilicon alloys. Silica fume is similar to fly ash, but has a particle size 100 times smaller. This results in a higher surface-to-volume ratio and a much faster pozzolanic reaction. Silica fume is used to increase strength and durability of concrete, but generally requires the use of superplasticizers for workability.
High reactivity metakaolin (HRM): Metakaolin produces concrete with strength and durability similar to concrete made with silica fume. While silica fume is usually dark gray or black in color, high-reactivity metakaolin is usually bright white in color, making it the preferred choice for architectural concrete where appearance is important.
Carbon nanofibers can be added to concrete to enhance compressive strength and gain a higher Young's modulus, and also to improve the electrical properties required for strain monitoring, damage evaluation and self-health monitoring of concrete. Carbon fiber has many advantages in terms of mechanical and electrical properties (e.g., higher strength) and self-monitoring behavior due to the high tensile strength and high electrical conductivity.
Carbon products have been added to make concrete electrically conductive, for deicing purposes.
New research from Japan's University of Kitakyushu shows that a washed and dried recycled mix of used diapers can be an environmental solution to producing less landfill and using less sand in concrete production. A model home was built in Indonesia to test the strength and durability of the new diaper-cement composite.
Production
Concrete production is the process of mixing together the various ingredients—water, aggregate, cement, and any additives—to produce concrete. Concrete production is time-sensitive. Once the ingredients are mixed, workers must put the concrete in place before it hardens. In modern usage, most concrete production takes place in a large type of industrial facility called a concrete plant, or often a batch plant. The usual method of placement is casting in formwork, which holds the mix in shape until it has set enough to hold its shape unaided.
In general usage, concrete plants come in two main types, ready mix plants and central mix plants. A ready-mix plant mixes all the ingredients except water, while a central mix plant mixes all the ingredients including water. A central-mix plant offers more accurate control of the concrete quality through better measurements of the amount of water added, but must be placed closer to the work site where the concrete will be used, since hydration begins at the plant.
A concrete plant consists of large storage hoppers for various reactive ingredients like cement, storage for bulk ingredients like aggregate and water, mechanisms for the addition of various additives and amendments, machinery to accurately weigh, move, and mix some or all of those ingredients, and facilities to dispense the mixed concrete, often to a concrete mixer truck.
Modern concrete is usually prepared as a viscous fluid, so that it may be poured into forms, which are containers erected in the field to give the concrete its desired shape. Concrete formwork can be prepared in several ways, such as slip forming and steel plate construction. Alternatively, concrete can be mixed into dryer, non-fluid forms and used in factory settings to manufacture precast concrete products.
A wide variety of equipment is used for processing concrete, from hand tools to heavy industrial machinery. Whichever equipment builders use, however, the objective is to produce the desired building material; ingredients must be properly mixed, placed, shaped, and retained within time constraints. Any interruption in pouring the concrete can cause the initially placed material to begin to set before the next batch is added on top. This creates a horizontal plane of weakness called a cold joint between the two batches. Once the mix is where it should be, the curing process must be controlled to ensure that the concrete attains the desired attributes. During concrete preparation, various technical details may affect the quality and nature of the product.
Design mix
Design mix ratios are decided by an engineer after analyzing the properties of the specific ingredients being used. Instead of using a 'nominal mix' such as 1 part cement, 2 parts sand, and 4 parts aggregate, a civil engineer will custom-design a concrete mix to exactly meet the requirements of the site and conditions, setting material ratios and often designing an admixture package to fine-tune the properties or increase the performance envelope of the mix. Design-mix concrete can have very broad specifications that cannot be met with more basic nominal mixes, but the involvement of the engineer often increases the cost of the concrete mix.
Concrete mixes are primarily divided into nominal mix, standard mix and design mix.
Nominal mix ratios are given as volume proportions of cement, sand and aggregate. Nominal mixes are a simple, fast way of getting a basic idea of the properties of the finished concrete without having to perform testing in advance.
Various governing bodies (such as British Standards) define nominal mix ratios into a number of grades, usually ranging from lower compressive strength to higher compressive strength. The grades usually indicate the 28-day cube strength.
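For a nominal mix, the batching arithmetic is just a proportional split of the dry materials. The sketch below shows that split for the 1:2:4 example above; it deliberately ignores bulking, voids between particles and the water addition, which real batching must account for.

    def nominal_mix_volumes(total_dry_volume_m3, parts=(1, 2, 4)):
        """Split a volume of dry materials into cement : sand : coarse aggregate
        according to a nominal mix ratio (1:2:4 used here as an example).
        Voids, bulking and water are ignored -- this is only the ratio arithmetic."""
        total_parts = sum(parts)
        return tuple(p / total_parts * total_dry_volume_m3 for p in parts)

    cement, sand, aggregate = nominal_mix_volumes(1.0)  # one cubic meter of dry materials
    print(f"cement {cement:.2f} m3, sand {sand:.2f} m3, aggregate {aggregate:.2f} m3")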
Mixing
Thorough mixing is essential to produce uniform, high-quality concrete.
Premixing the cement and water into a paste before combining these materials with aggregates has been shown to increase the compressive strength of the resulting concrete. The paste is generally mixed in a shear-type mixer at a w/c (water-to-cement) ratio of 0.30 to 0.45 by mass. The cement paste premix may include admixtures such as accelerators or retarders, superplasticizers, pigments, or silica fume. The premixed paste is then blended with aggregates and any remaining batch water, and final mixing is completed in conventional concrete mixing equipment.
Sample analysis – Workability
Workability is the ability of a fresh (plastic) concrete mix to fill the form/mold properly with the desired work (pouring, pumping, spreading, tamping, vibration) and without reducing the concrete's quality. Workability depends on water content, aggregate (shape and size distribution), cementitious content and age (level of hydration) and can be modified by adding chemical admixtures, like superplasticizer. Raising the water content or adding chemical admixtures increases concrete workability. Excessive water leads to increased bleeding or segregation of aggregates (when the cement and aggregates start to separate), with the resulting concrete having reduced quality. Changes in gradation can also affect workability of the concrete, although a wide range of gradation can be used for various applications. An undesirable gradation can mean using a large aggregate that is too large for the size of the formwork, or which has too few smaller aggregate grades to serve to fill the gaps between the larger grades, or using too little or too much sand for the same reason, or using too little water, or too much cement, or even using jagged crushed stone instead of smoother round aggregate such as pebbles. Any combination of these factors and others may result in a mix which is too harsh, i.e., which does not flow or spread out smoothly, is difficult to get into the formwork, and which is difficult to surface finish.
Workability can be measured by the concrete slump test, a simple measure of the plasticity of a fresh batch of concrete following the ASTM C 143 or EN 12350-2 test standards. Slump is normally measured by filling an "Abrams cone" with a sample from a fresh batch of concrete. The cone is placed with the wide end down onto a level, non-absorptive surface. It is then filled in three layers of equal volume, with each layer being tamped with a steel rod to consolidate the layer. When the cone is carefully lifted off, the enclosed material slumps a certain amount, owing to gravity. A relatively dry sample slumps very little, having a slump value of only one or two inches (25 or 50 mm). A relatively wet concrete sample may slump as much as eight inches (about 200 mm). Workability can also be measured by the flow table test.
Slump can be increased by addition of chemical admixtures such as plasticizer or superplasticizer without changing the water-cement ratio. Some other admixtures, especially air-entraining admixture, can increase the slump of a mix.
High-flow concrete, like self-consolidating concrete, is tested by other flow-measuring methods. One of these methods includes placing the cone on the narrow end and observing how the mix flows through the cone while it is gradually lifted.
After mixing, concrete is a fluid and can be pumped to the location where needed.
Curing
Maintaining optimal conditions for cement hydration
Concrete must be kept moist during curing in order to achieve optimal strength and durability. During curing hydration occurs, allowing calcium-silicate hydrate (C-S-H) to form. Over 90% of a mix's final strength is typically reached within four weeks, with the remaining 10% achieved over years or even decades. The conversion of calcium hydroxide in the concrete into calcium carbonate from absorption of CO2 over several decades further strengthens the concrete and makes it more resistant to damage. This carbonation reaction, however, lowers the pH of the cement pore solution and can corrode the reinforcement bars.
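The time dependence of strength gain is often summarised with an empirical relation of the ACI 209 type, in which the strength at age t is expressed as a fraction of the 28-day strength. The coefficients below are typical published values for moist-cured ordinary Portland cement concrete and are shown purely as an illustration of the curve's shape.

    def strength_fraction_of_28day(age_days, a=4.0, b=0.85):
        """ACI 209-style strength-gain curve: f(t) = t / (a + b*t) times the 28-day strength.
        The coefficients a and b are typical values for moist-cured ordinary Portland
        cement concrete, used here only as an illustration."""
        return age_days / (a + b * age_days)

    # The curve passes ~100% near 28 days and tends towards ~118% of the
    # 28-day value at very long ages.
    for t in (3, 7, 14, 28, 365):
        print(f"day {t:3d}: ~{100 * strength_fraction_of_28day(t):.0f}% of the 28-day strength")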
Hydration and hardening of concrete during the first three days is critical. Abnormally fast drying and shrinkage due to factors such as evaporation from wind during placement may lead to increased tensile stresses at a time when the concrete has not yet gained sufficient strength, resulting in greater shrinkage cracking. The early strength of the concrete can be increased if it is kept damp during the curing process. Minimizing stress prior to curing minimizes cracking. High-early-strength concrete is designed to hydrate faster, often by increased use of cement, which increases shrinkage and cracking. The strength of concrete continues to increase for up to three years, depending on the cross-sectional dimensions of the elements and the service conditions of the structure. The addition of short-cut polymer fibers can reduce shrinkage-induced stresses during curing and increase early and ultimate compressive strength.
Properly curing concrete leads to increased strength and lower permeability and avoids cracking where the surface dries out prematurely. Care must also be taken to avoid freezing or overheating due to the exothermic setting of cement. Improper curing can cause scaling, reduced strength, poor abrasion resistance and cracking.
Curing techniques avoiding water loss by evaporation
During the curing period, concrete is ideally maintained at controlled temperature and humidity. To ensure full hydration during curing, concrete slabs are often sprayed with "curing compounds" that create a water-retaining film over the concrete. Typical films are made of wax or related hydrophobic compounds. After the concrete is sufficiently cured, the film is allowed to abrade from the concrete through normal use.
Traditional conditions for curing involve spraying or ponding the concrete surface with water. One common approach is ponding: submerging the setting concrete in water and wrapping it in plastic to prevent dehydration. Additional common curing methods include wet burlap and plastic sheeting covering the fresh concrete.
For higher-strength applications, accelerated curing techniques may be applied to the concrete. A common technique involves heating the poured concrete with steam, which serves to both keep it damp and raise the temperature so that the hydration process proceeds more quickly and more thoroughly.
Alternative types
Asphalt
Asphalt concrete (commonly called asphalt, blacktop, or pavement in North America, and tarmac, bitumen macadam, or rolled asphalt in the United Kingdom and the Republic of Ireland) is a composite material commonly used to surface roads, parking lots, airports, as well as the core of embankment dams. Asphalt mixtures have been used in pavement construction since the beginning of the twentieth century. It consists of mineral aggregate bound together with asphalt, laid in layers, and compacted. The process was refined and enhanced by Belgian inventor and U.S. immigrant Edward De Smedt.
The terms asphalt (or asphaltic) concrete, bituminous asphalt concrete, and bituminous mixture are typically used only in engineering and construction documents, which define concrete as any composite material composed of mineral aggregate adhered with a binder. The abbreviation, AC, is sometimes used for asphalt concrete but can also denote asphalt content or asphalt cement, referring to the liquid asphalt portion of the composite material.
Graphene enhanced concrete
Graphene enhanced concretes are standard designs of concrete mixes, except that during the cement-mixing or production process, a small amount of chemically engineered graphene is added. These enhanced graphene concretes are designed around the concrete application.
Microbial
Bacteria such as Bacillus pasteurii, Bacillus pseudofirmus, Bacillus cohnii, Sporosarcina pasteuri, and Arthrobacter crystallopoietes increase the compression strength of concrete through their biomass. However, some forms of bacteria can also be concrete-destroying. Bacillus sp. CT-5 can reduce corrosion of reinforcement in reinforced concrete by up to four times. Sporosarcina pasteurii reduces water and chloride permeability. B. pasteurii increases resistance to acid. Bacillus pasteurii and B. sphaericus can induce calcium carbonate precipitation on the surface of cracks, adding compression strength.
Nanoconcrete
Nanoconcrete (also spelled "nano concrete" or "nano-concrete") is a class of materials that contains Portland cement particles that are no greater than 100 μm and particles of silica no greater than 500 μm, which fill voids that would otherwise occur in normal concrete, thereby substantially increasing the material's strength. It is widely used in foot and highway bridges where high flexural and compressive strength are indicated.
Pervious
Pervious concrete is a mix of specially graded coarse aggregate, cement, water, and little-to-no fine aggregates. This concrete is also known as "no-fines" or porous concrete. Mixing the ingredients in a carefully controlled process creates a paste that coats and bonds the aggregate particles. The hardened concrete contains interconnected air voids totaling approximately 15 to 25 percent. Water runs through the voids in the pavement to the soil underneath. Air entrainment admixtures are often used in freeze-thaw climates to minimize the possibility of frost damage. Pervious concrete also permits rainwater to filter through roads and parking lots, to recharge aquifers, instead of contributing to runoff and flooding.
Polymer
Polymer concretes are mixtures of aggregate and any of various polymers and may be reinforced. The cement is costlier than lime-based cements, but polymer concretes nevertheless have advantages; they have significant tensile strength even without reinforcement, and they are largely impervious to water. Polymer concretes are frequently used for the repair and construction of other applications, such as drains.
Volcanic
Volcanic concrete substitutes volcanic rock for the limestone that is burned to form clinker. It consumes a similar amount of energy, but does not directly emit carbon as a byproduct. Volcanic rock/ash are used as supplementary cementitious materials in concrete to improve the resistance to sulfate, chloride and alkali silica reaction due to pore refinement. Also, they are generally cost effective in comparison to other aggregates, good for semi and light weight concretes, and good for thermal and acoustic insulation.
Pyroclastic materials, such as pumice, scoria, and ashes are formed from cooling magma during explosive volcanic eruptions. They are used as supplementary cementitious materials (SCM) or as aggregates for cements and concretes. They have been extensively used since ancient times to produce materials for building applications. For example, pumice and other volcanic glasses were added as a natural pozzolanic material for mortars and plasters during the construction of the Villa San Marco in the Roman period (89 BC – 79 AD), which remain one of the best-preserved otium villae of the Bay of Naples in Italy.
Waste light
Waste light concrete is a form of polymer-modified concrete. The specific polymer admixture allows the replacement of all the traditional aggregates (gravel, sand, stone) by any mixture of solid waste materials with a grain size of 3–10 mm to form a low-compressive-strength (3–20 N/mm2) product for road and building construction. One cubic meter of waste light concrete contains 1.1–1.3 m3 of shredded waste and no other aggregates.
Sulfur concrete
Sulfur concrete is a special concrete that uses sulfur as a binder and does not require cement or water.
Properties
Concrete has relatively high compressive strength, but much lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension (often steel). The elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. Concrete has a very low coefficient of thermal expansion and shrinks as it matures. All concrete structures crack to some extent, due to shrinkage and tension. Concrete that is subjected to long-duration forces is prone to creep.
Tests can be performed to ensure that the properties of concrete correspond to specifications for the application.
The ingredients affect the strengths of the material. Concrete strength values are usually specified as the lower-bound compressive strength of either a cylindrical or cubic specimen as determined by standard test procedures.
The strength of concrete is dictated by its function. Very low-strength concrete may be used when the concrete must be lightweight. Lightweight concrete is often achieved by adding air, foams, or lightweight aggregates, with the side effect that the strength is reduced. For most routine uses, moderate-strength concrete is used, while somewhat stronger mixes are readily commercially available as a more durable, although more expensive, option. Higher-strength concrete is often used for larger civil projects, and the highest strengths are reserved for specific building elements. For example, the lower floor columns of high-rise concrete buildings may use very high-strength concrete to keep the size of the columns small. Bridges may use long beams of high-strength concrete to lower the number of spans required. Occasionally, other structural needs may require high-strength concrete. If a structure must be very rigid, concrete of very high strength may be specified, even much stronger than is required to bear the service loads. Strengths far above routine values have been used commercially for these reasons.
Energy efficiency
The cement produced for making concrete accounts for about 8% of worldwide CO2 emissions per year (compared to, e.g., global aviation at 1.9%). The two largest sources of CO2 in cement manufacturing are (1) the decarbonation reaction of limestone in the cement kiln (T ≈ 950 °C), and (2) the combustion of fossil fuel to reach the sintering temperature (T ≈ 1450 °C) of cement clinker in the kiln. The energy required for extracting, crushing, and mixing the raw materials (construction aggregates used in the concrete production, and also the limestone and clay feeding the cement kiln) is lower. The energy requirement for transportation of ready-mix concrete is also low because it is produced near the construction site from local resources, typically manufactured within 100 kilometers of the job site. The overall embodied energy of concrete, at roughly 1 to 1.5 megajoules per kilogram, is therefore lower than for many structural and construction materials.
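Per unit volume, the figure of 1 to 1.5 MJ/kg translates into only a few gigajoules per cubic meter. The short calculation below uses an assumed density of 2400 kg/m3, a typical value for normal-weight concrete, purely as an illustration.

    def embodied_energy_GJ_per_m3(energy_MJ_per_kg, density_kg_m3=2400.0):
        """Embodied energy of one cubic meter of concrete.
        2400 kg/m3 is an assumed, typical density for normal-weight concrete."""
        return energy_MJ_per_kg * density_kg_m3 / 1000.0

    # Using the 1 to 1.5 MJ/kg range quoted above: roughly 2.4 to 3.6 GJ per cubic meter.
    for e in (1.0, 1.5):
        print(f"{e:.1f} MJ/kg  ->  {embodied_energy_GJ_per_m3(e):.1f} GJ/m3")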
Once in place, concrete offers a great energy efficiency over the lifetime of a building. Concrete walls leak air far less than those made of wood frames. Air leakage accounts for a large percentage of energy loss from a home. The thermal mass properties of concrete increase the efficiency of both residential and commercial buildings. By storing and releasing the energy needed for heating or cooling, concrete's thermal mass delivers year-round benefits by reducing temperature swings inside and minimizing heating and cooling costs. While insulation reduces energy loss through the building envelope, thermal mass uses walls to store and release energy. Modern concrete wall systems use both external insulation and thermal mass to create an energy-efficient building. Insulating concrete forms (ICFs) are hollow blocks or panels made of either insulating foam or rastra that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.
Fire safety
Concrete buildings are more resistant to fire than those constructed using steel frames, since concrete has lower heat conductivity than steel and can thus last longer under the same fire conditions. Concrete is sometimes used as a fire protection for steel frames, for the same effect as above. Concrete as a fire shield, for example Fondu fyre, can also be used in extreme environments like a missile launch pad.
Options for non-combustible construction include floors, ceilings and roofs made of cast-in-place and hollow-core precast concrete. For walls, concrete masonry technology and Insulating Concrete Forms (ICFs) are additional options. ICFs are hollow blocks or panels made of fireproof insulating foam that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.
Concrete also provides good resistance against externally applied forces such as high winds, hurricanes, and tornadoes owing to its lateral stiffness, which results in minimal horizontal movement. However, this stiffness can work against certain types of concrete structures, particularly where a relatively higher flexing structure is required to resist more extreme forces.
Earthquake safety
As discussed above, concrete is very strong in compression, but weak in tension. Larger earthquakes can generate very large shear loads on structures. These shear loads subject the structure to both tensile and compressional loads. Concrete structures without reinforcement, like other unreinforced masonry structures, can fail during severe earthquake shaking. Unreinforced masonry structures constitute one of the largest earthquake risks globally. These risks can be reduced through seismic retrofitting of at-risk buildings (e.g. school buildings in Istanbul, Turkey).
Construction with concrete
Concrete is one of the most durable building materials. It provides superior fire resistance compared with wooden construction and gains strength over time. Structures made of concrete can have a long service life. Concrete is used more than any other artificial material in the world. As of 2006, about 7.5 billion cubic meters of concrete are made each year, more than one cubic meter for every person on Earth.
Reinforced concrete
The use of reinforcement, in the form of iron, was introduced in the 1850s by French industrialist François Coignet, and it was not until the 1880s that German civil engineer G. A. Wayss used steel as reinforcement. Concrete is a relatively brittle material that is strong under compression but much weaker in tension. Plain, unreinforced concrete is unsuitable for many structures as it is relatively poor at withstanding stresses induced by vibrations, wind loading, and so on. Hence, to increase its overall strength, steel rods, wires, mesh or cables can be embedded in concrete before it is set. This reinforcement, often known as rebar, resists tensile forces.
Reinforced concrete (RC) is a versatile composite and one of the most widely used materials in modern construction. It is made up of different constituent materials with very different properties that complement each other. In the case of reinforced concrete, the component materials are almost always concrete and steel. These two materials form a strong bond together and are able to resist a variety of applied forces, effectively acting as a single structural element.
Reinforced concrete can be precast or cast-in-place (in situ) concrete, and is used in a wide range of applications such as slab, wall, beam, column, foundation, and frame construction. Reinforcement is generally placed in areas of the concrete that are likely to be subject to tension, such as the lower portion of beams. Usually, there is a minimum of 50 mm of cover, both above and below the steel reinforcement, to resist spalling and corrosion, which can lead to structural instability. Other types of non-steel reinforcement, such as fibre-reinforced concretes, are used for specialized applications, predominantly as a means of controlling cracking.
Precast concrete
Precast concrete is concrete which is cast in one place for use elsewhere and is a mobile material. The largest part of precast production is carried out in the works of specialist suppliers, although in some instances, due to economic and geographical factors, scale of product or difficulty of access, the elements are cast on or adjacent to the construction site. Precasting offers considerable advantages because it is carried out in a controlled environment, protected from the elements, but the downside of this is the contribution to greenhouse gas emission from transportation to the construction site.
Advantages to be achieved by employing precast concrete:
Preferred dimension schemes exist, with elements of tried and tested designs available from a catalogue.
Major savings in time result from manufacture of structural elements apart from the series of events which determine overall duration of the construction, known by planning engineers as the 'critical path'.
Availability of laboratory facilities capable of the required control tests, many being certified for specific testing in accordance with National Standards.
Equipment with capability suited to specific types of production such as stressing beds with appropriate capacity, moulds and machinery dedicated to particular products.
High-quality finishes achieved direct from the mould eliminate the need for interior decoration and ensure low maintenance costs.
Mass structures
Due to cement's exothermic chemical reaction while setting up, large concrete structures such as dams, navigation locks, large mat foundations, and large breakwaters generate excessive heat during hydration and associated expansion. To mitigate these effects, post-cooling is commonly applied during construction. An early example at Hoover Dam used a network of pipes between vertical concrete placements to circulate cooling water during the curing process to avoid damaging overheating. Similar systems are still used; depending on volume of the pour, the concrete mix used, and ambient air temperature, the cooling process may last for many months after the concrete is placed. Various methods also are used to pre-cool the concrete mix in mass concrete structures.
Another approach to mass concrete structures that minimizes cement's thermal by-product is the use of roller-compacted concrete, which uses a dry mix which has a much lower cooling requirement than conventional wet placement. It is deposited in thick layers as a semi-dry material then roller compacted into a dense, strong mass.
Surface finishes
Raw concrete surfaces tend to be porous and have a relatively uninteresting appearance. Many finishes can be applied to improve the appearance and preserve the surface against staining, water penetration, and freezing.
Examples of improved appearance include stamped concrete where the wet concrete has a pattern impressed on the surface, to give a paved, cobbled or brick-like effect, and may be accompanied with coloration. Another popular effect for flooring and table tops is polished concrete where the concrete is polished optically flat with diamond abrasives and sealed with polymers or other sealants.
Other finishes can be achieved with chiseling, or more conventional techniques such as painting or covering it with other materials.
The proper treatment of the surface of concrete, and therefore its characteristics, is an important stage in the construction and renovation of architectural structures.
Prestressed structures
Prestressed concrete is a form of reinforced concrete that builds in compressive stresses during construction to oppose tensile stresses experienced in use. This can greatly reduce the weight of beams or slabs, by better distributing the stresses in the structure to make optimal use of the reinforcement. For example, a horizontal beam tends to sag. Prestressed reinforcement along the bottom of the beam counteracts this.
In pre-tensioned concrete, the prestressing is achieved by using steel or polymer tendons or bars that are subjected to a tensile force prior to casting, or for post-tensioned concrete, after casting.
There are two different systems being used:
Pretensioned concrete is almost always precast, and contains steel wires (tendons) that are held in tension while the concrete is placed and sets around them.
Post-tensioned concrete has ducts through it. After the concrete has gained strength, tendons are pulled through the ducts and stressed. The ducts are then filled with grout. Bridges built in this way have experienced considerable corrosion of the tendons, so external post-tensioning may now be used in which the tendons run along the outer surface of the concrete.
More than of highways in the United States are paved with this material. Reinforced concrete, prestressed concrete and precast concrete are the most widely used types of concrete functional extension in modern construction. For more information see Brutalist architecture.
Placement
Once mixed, concrete is typically transported to the place where it is intended to become a structural item. Various methods of transportation and placement are used depending on the distances involved, the quantity needed, and other details of the application. Large amounts are often transported by truck, poured free under gravity or through a tremie, or pumped through a pipe. Smaller amounts may be carried in a skip (a metal container which can be tilted or opened to release the contents, usually transported by crane or hoist), or wheelbarrow, or carried in toggle bags for manual placement underwater.
Cold weather placement
Extreme weather conditions (extreme heat or cold; windy conditions, and humidity variations) can significantly alter the quality of concrete. Many precautions are observed in cold weather placement. Low temperatures significantly slow the chemical reactions involved in hydration of cement, thus affecting the strength development. Preventing freezing is the most important precaution, as formation of ice crystals can cause damage to the crystalline structure of the hydrated cement paste. If the surface of the concrete pour is insulated from the outside temperatures, the heat of hydration will prevent freezing.
The American Concrete Institute (ACI) definition of cold weather placement, ACI 306, is:
A period when for more than three successive days the average daily air temperature drops below 40 °F (~ 4.5 °C), and
Temperature stays below for more than one-half of any 24-hour period.
In Canada, where temperatures tend to be much lower during the cold season, the following criteria are used by CSA A23.1:
When the air temperature is ≤ 5 °C, and
When there is a probability that the temperature may fall below 5 °C within 24 hours of placing the concrete.
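The CSA criteria above lend themselves to a simple check. The following is a minimal sketch in Python (the function and parameter names are the editor's, not part of the standard); the ACI definition could be encoded the same way, but one of its temperature thresholds is not quoted above, so only the CSA version is shown:

    def csa_cold_weather_placement(air_temp_c, may_fall_below_5c_within_24h):
        """Return True when the CSA A23.1 criteria quoted above apply: the air
        temperature is at or below 5 degrees C, and there is a probability that
        it may fall below 5 degrees C within 24 hours of placing the concrete."""
        return air_temp_c <= 5.0 and may_fall_below_5c_within_24h

    # Example: 3 degrees C at placement, with frost forecast overnight.
    print(csa_cold_weather_placement(3.0, True))  # True -> cold-weather precautions apply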
The minimum strength before exposing concrete to extreme cold is . CSA A23.1 specifies a compressive strength of 7.0 MPa as safe for exposure to freezing.
Underwater placement
Concrete may be placed and cured underwater. Care must be taken in the placement method to prevent washing out the cement. Underwater placement methods include the tremie, pumping, skip placement, manual placement using toggle bags, and bagwork.
is an alternative method of forming a concrete mass underwater, where the forms are filled with coarse aggregate and the voids then completely filled with pumped grout.
Roads
Concrete roads are more fuel efficient to drive on, more reflective and last significantly longer than other paving surfaces, yet have a much smaller market share than other paving solutions. Modern paving methods and design practices have changed the economics of concrete paving, so that a well-designed and placed concrete pavement will be less expensive in initial cost and significantly less expensive over the life cycle. Another major benefit is that pervious concrete can be used, which eliminates the need to place storm drains near the road and reduces the need for slightly sloped roadways to help rainwater run off. Because rainwater no longer needs to be carried away by drains, less electricity is needed (more pumping would otherwise be required in the water-distribution system), and the rainwater does not become polluted by mixing with contaminated water; instead, it is immediately absorbed by the ground.
Environment, health and safety
The manufacture and use of concrete produce a wide range of environmental, economic and social impacts.
Concrete, cement and the environment
A major component of concrete is cement, a fine powder used mainly to bind sand and coarser aggregates together in concrete. Although a variety of cement types exist, the most common is "Portland cement", which is produced by mixing clinker with smaller quantities of other additives such as gypsum and ground limestone. The production of clinker, the main constituent of cement, is responsible for the bulk of the sector's greenhouse gas emissions, including both energy-related and process emissions.
The cement industry is one of the three primary producers of carbon dioxide, a major greenhouse gas – the other two being energy production and transportation industries. On average, every tonne of cement produced releases one tonne of CO2 into the atmosphere. Pioneer cement manufacturers have claimed to reach lower carbon intensities, with 590 kg of CO2eq per tonne of cement produced. The emissions are due to combustion and calcination processes, which roughly account for 40% and 60% of the greenhouse gases, respectively. Considering that cement is only a fraction of the constituents of concrete, it is estimated that a tonne of concrete is responsible for emitting about 100–200 kg of CO2. Every year more than 10 billion tonnes of concrete are used worldwide. In the coming years, large quantities of concrete will continue to be used, and the mitigation of CO2 emissions from the sector will be even more critical.
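As a back-of-the-envelope check of the per-tonne-of-concrete figure quoted above (the 12% cement fraction used here is an assumed typical value, not a number from this article):

    E_{concrete} \approx f_{cement} \times E_{cement} \approx 0.12 \times 1000 \text{ kg CO}_2\text{/t} \approx 120 \text{ kg CO}_2 \text{ per tonne of concrete}

which falls within the 100–200 kg range given above; richer mixes or more emission-intensive clinker push the figure toward the top of that range.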
Concrete is used to create hard surfaces that contribute to surface runoff, which can cause heavy soil erosion, water pollution, and flooding, but conversely can be used to divert, dam, and control flooding. Concrete dust released by building demolition and natural disasters can be a major source of dangerous air pollution. Concrete is a contributor to the urban heat island effect, though less so than asphalt.
Concrete and climate change mitigation
Reducing the cement clinker content might have positive effects on the environmental life-cycle assessment of concrete. Some research on reducing the cement clinker content in concrete has already been carried out, although different research strategies exist. Often, the replacement of some clinker with large amounts of slag or fly ash has been investigated based on conventional concrete technology; this could lead to a waste of scarce raw materials such as slag and fly ash. The aim of other research activities is the efficient use of cement and reactive materials like slag and fly ash in concrete, based on a modified mix design approach.
One environmental investigation found that the embodied carbon of a precast concrete facade can be reduced by 50% when a fiber-reinforced high-performance concrete is used in place of typical reinforced concrete cladding.
Studies have been conducted about commercialization of low-carbon concretes. Life cycle assessment (LCA) of low-carbon concrete was investigated according to the ground granulated blast-furnace slag (GGBS) and fly ash (FA) replacement ratios. Global warming potential (GWP) of GGBS decreased by 1.1 kg CO2 eq/m3, while FA decreased by 17.3 kg CO2 eq/m3 when the mineral admixture replacement ratio was increased by 10%. This study also compared the compressive strength properties of binary blended low-carbon concrete according to the replacement ratios, and the applicable range of mixing proportions was derived.
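A minimal sketch of how the reported per-10% figures scale with the replacement ratio, assuming the linear trend implied by the study (the function name and the 30% example are illustrative, not taken from the study):

    def gwp_reduction_kg_per_m3(replacement_pct, reduction_per_10pct):
        """Estimate the decrease in global warming potential (kg CO2 eq/m3) for a
        given mineral-admixture replacement percentage, assuming the linear
        per-10% trend reported above."""
        return (replacement_pct / 10.0) * reduction_per_10pct

    # Figures quoted above, per 10% replacement (kg CO2 eq/m3).
    print(gwp_reduction_kg_per_m3(30, 1.1))   # GGBS at 30% replacement: ~3.3
    print(gwp_reduction_kg_per_m3(30, 17.3))  # Fly ash at 30% replacement: ~51.9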
Researchers at University of Auckland are working on utilizing biochar in concrete applications to reduce carbon emissions during concrete production and to improve strength.
Concrete and climate change adaptation
High-performance building materials will be particularly important for enhancing resilience, including for flood defenses and critical-infrastructure protection. Risks to infrastructure and cities posed by extreme weather events are especially serious for those places exposed to flood and hurricane damage, but also where residents need protection from extreme summer temperatures. Traditional concrete can come under strain when exposed to humidity and higher concentrations of atmospheric CO2. While concrete is likely to remain important in applications where the environment is challenging, novel, smarter and more adaptable materials are also needed.
Concrete – health and safety
Grinding of concrete can produce hazardous dust. Exposure to cement dust can lead to issues such as silicosis, kidney disease, skin irritation and similar effects. The National Institute for Occupational Safety and Health in the United States recommends attaching local exhaust ventilation shrouds to electric concrete grinders to control the spread of this dust. In addition, the Occupational Safety and Health Administration (OSHA) has placed more stringent regulations on companies whose workers regularly come into contact with silica dust. An updated silica rule, which OSHA put into effect 23 September 2017 for construction companies, restricted the amount of respirable crystalline silica workers could legally come into contact with to 50 micrograms per cubic meter of air per 8-hour workday. The same rule went into effect 23 June 2018 for general industry, hydraulic fracturing and maritime; the deadline was extended to 23 June 2021 for engineering controls in the hydraulic fracturing industry. Companies which fail to meet the tightened safety regulations can face financial charges and extensive penalties. The presence of some substances in concrete, including useful and unwanted additives, can cause health concerns due to toxicity and radioactivity. Fresh concrete (before curing is complete) is highly alkaline and must be handled with proper protective equipment.
Circular economy
Concrete is an excellent material with which to make long-lasting and energy-efficient buildings. However, even with good design, human needs change and potential waste will be generated.
End-of-life: concrete degradation and waste
Concrete can be damaged by many processes, such as the expansion of corrosion products of the steel reinforcement bars, freezing of trapped water, fire or radiant heat, aggregate expansion, sea water effects, bacterial corrosion, leaching, erosion by fast-flowing water, physical damage and chemical damage (from carbonation, chlorides, sulfates and distilled water). The microfungi Aspergillus, Alternaria and Cladosporium were able to grow on samples of concrete used as a radioactive waste barrier in the Chernobyl reactor, leaching aluminium, iron, calcium, and silicon.
Concrete may be considered waste according to the European Commission decision 2014/955/EU on the List of Waste, under chapter 17 (construction and demolition wastes, including excavated soil from contaminated sites), sub-chapter 17 01 (concrete, bricks, tiles and ceramics) and the entries 17 01 01 (concrete), 17 01 06* (mixtures of, or separate fractions of, concrete, bricks, tiles and ceramics containing hazardous substances) and 17 01 07 (mixtures of, or separate fractions of, concrete, bricks, tiles and ceramics other than those mentioned in 17 01 06). It is estimated that in 2018 the European Union generated 371,910 thousand tons of mineral waste from construction and demolition, and close to 4% of this quantity is considered hazardous. Germany, France and the United Kingdom were the top three generators, with 86,412, 68,976 and 68,732 thousand tons of construction waste, respectively.
Currently, there are no End-of-Waste criteria for concrete materials in the EU. However, different sectors have been proposing alternatives for concrete waste, repurposing it as a secondary raw material in various applications, including concrete manufacturing itself.
Reuse of concrete
Reuse of blocks in original form, or by cutting into smaller blocks, has even less environmental impact; however, only a limited market currently exists. Improved building designs that allow for slab reuse and building transformation without demolition could increase this use. Hollow core concrete slabs are easy to dismantle and the span is normally constant, making them good for reuse.
Other cases of re-use are possible with pre-cast concrete pieces: through selective demolition, such pieces can be disassembled and collected for further use on other building sites. Studies show that back-building and remounting plans for building units (i.e., re-use of pre-fabricated concrete) are an alternative form of construction that protects resources and saves energy. Long-lived, durable, energy-intensive building materials such as concrete can in particular be kept in the life-cycle longer through recycling. Prefabricated construction is a prerequisite for structures capable of being taken apart. In the case of optimal application in the building carcass, cost savings are estimated at 26%, a lucrative complement to new building methods. However, this depends on several conditions being met, and the viability of this alternative has to be studied, as the logistics associated with transporting heavy pieces of concrete can affect the operation financially and also increase the carbon footprint of the project. Also, ever-changing regulations on new buildings worldwide may require higher quality standards for construction elements and inhibit the use of old elements which may be classified as obsolete.
Recycling of concrete
Concrete recycling is an increasingly common method for disposing of concrete structures. Concrete debris was once routinely shipped to landfills for disposal, but recycling is increasing due to improved environmental awareness, governmental laws and economic benefits.
Contrary to general belief, concrete recovery is achievable – concrete can be crushed and reused as aggregate in new projects.
Recycling or recovering concrete reduces natural resource exploitation and associated transportation costs, and reduces the amount of waste sent to landfill. However, it has little impact on reducing greenhouse gas emissions, as most emissions occur when cement is made and cement alone cannot be recycled. At present, most recovered concrete is used for road sub-base and civil engineering projects. From a sustainability viewpoint, these relatively low-grade uses currently provide the optimal outcome.
The recycling process can be done in situ, with mobile plants, or in specific recycling units. The input material can be returned concrete which is fresh (wet) from ready-mix trucks, production waste at a pre-cast production facility, or waste from construction and demolition. The most significant source is demolition waste, preferably pre-sorted from selective demolition processes.
By far the most common method for recycling dry and hardened concrete involves crushing. Mobile sorters and crushers are often installed on construction sites to allow on-site processing. In other situations, specific processing sites are established, which are usually able to produce higher quality aggregate. Screens are used to achieve desired particle size, and remove dirt, foreign particles and fine material from the coarse aggregate.
Chloride and sulfates are undesired contaminants originating from soil and weathering, and can provoke corrosion problems in aluminium and steel structures. The final product, recycled concrete aggregate (RCA), presents distinctive properties such as an angular shape, a rougher surface, lower specific gravity (about 20% lower), higher water absorption, and a pH greater than 11 – this elevated pH increases the risk of alkali reactions.
The lower density of RCA usually increases project efficiency and improves job cost – recycled concrete aggregates yield more volume by weight (up to 15%). The physical properties of coarse aggregates made from crushed demolition concrete make it the preferred material for applications such as road base and sub-base. This is because recycled aggregates often have better compaction properties and require less cement for sub-base uses. Furthermore, they are generally cheaper to obtain than virgin material.
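The volume-by-weight claim follows from the inverse relation between bulk density and yield (the 13% figure below is a worked illustration, not a value from this article):

    V = \frac{m}{\rho}, \qquad \frac{V_{RCA}}{V_{natural}} = \frac{\rho_{natural}}{\rho_{RCA}} \approx \frac{1}{0.87} \approx 1.15

that is, an aggregate with a bulk density about 13% lower than that of natural aggregate yields roughly 15% more volume for the same mass.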
Applications of recycled concrete aggregate
The main commercial applications of the final recycled concrete aggregate are:
Aggregate base course (road base), or the untreated aggregates used as foundation for roadway pavement, is the underlying layer (under pavement surfacing) which forms a structural foundation for paving. To date, this has been the most popular application for RCA owing to its technical and economic advantages.
Aggregate for ready-mix concrete, where recycled aggregate replaces 10 to 45% of the natural aggregates in the concrete mix alongside the cement, sand and water. Some concept buildings demonstrate the progress of this field. Because the RCA itself contains cement, the ratios of the mix have to be adjusted to achieve the desired structural requirements such as workability, strength and water absorption.
Soil stabilization, with the incorporation of recycled aggregate, lime, or fly ash into marginal quality subgrade material used to enhance the load-bearing capacity of that subgrade.
Pipe bedding: serving as a stable bed or firm foundation in which to lay underground utilities. Some countries' regulations prohibit the use of RCA and other construction and demolition wastes in filtration and drainage beds due to potential contamination with chromium and pH-value impacts.
Landscape materials: to promote green architecture. To date, recycled concrete aggregate has been used as boulder/stacked rock walls, underpass abutment structures, erosion structures, water features, retaining walls, and more.
Cradle-to-cradle challenges
The applications developed for RCA so far are not exhaustive, and many more uses remain to be developed as regulations, institutions and norms find ways to accommodate construction and demolition waste as secondary raw materials in a safe and economical way. However, considering the aim of circularity of resources in the concrete life cycle, the only application of RCA that could be considered recycling of concrete is its replacement of natural aggregates in concrete mixes. All other applications would fall under the category of downcycling. It is estimated that even near-complete recovery of concrete from construction and demolition waste would supply only about 20% of total aggregate needs in the developed world.
The path towards circularity goes beyond concrete technology itself, depending on multilateral advances in the cement industry, research and development of alternative materials, building design and management, and demolition as well as conscious use of spaces in urban areas to reduce consumption.
World records
The world record for the largest concrete pour in a single project is the Three Gorges Dam in Hubei Province, China by the Three Gorges Corporation. The amount of concrete used in the construction of the dam is estimated at 16 million cubic meters over 17 years. The previous record was 12.3 million cubic meters held by Itaipu hydropower station in Brazil.
The world record for concrete pumping was set on 7 August 2009 during the construction of the Parbati Hydroelectric Project, near the village of Suind, Himachal Pradesh, India, when the concrete mix was pumped through a vertical height of .
The Polavaram dam works in Andhra Pradesh entered the Guinness World Records on 6 January 2019 by pouring 32,100 cubic metres of concrete in 24 hours. The world record for the largest continuously poured concrete raft was achieved in August 2007 in Abu Dhabi by the contracting firm Al Habtoor-CCC Joint Venture, with concrete supplied by Unibeton Ready Mix. The pour (part of the foundation for Abu Dhabi's Landmark Tower) was 16,000 cubic meters of concrete poured within a two-day period. The previous record, 13,200 cubic meters poured in 54 hours despite a severe tropical storm that required the site to be covered with tarpaulins so work could continue, was achieved in 1992 by a joint Japanese and South Korean consortium of Hazama Corporation and Samsung C&T Corporation for the construction of the Petronas Towers in Kuala Lumpur, Malaysia.
The world record for largest continuously poured concrete floor was completed 8 November 1997, in Louisville, Kentucky by design-build firm EXXCEL Project Management. The monolithic placement consisted of of concrete placed in 30 hours, finished to a flatness tolerance of FF 54.60 and a levelness tolerance of FL 43.83. This surpassed the previous record by 50% in total volume and 7.5% in total area.
The record for the largest continuously placed underwater concrete pour was completed 18 October 2010, in New Orleans, Louisiana by contractor C. J. Mahan Construction Company, LLC of Grove City, Ohio. The placement consisted of 10,251 cubic yards of concrete placed in 58.5 hours using two concrete pumps and two dedicated concrete batch plants. Upon curing, this placement allows the cofferdam to be dewatered approximately below sea level to allow the construction of the Inner Harbor Navigation Canal Sill & Monolith Project to be completed in the dry.
See also
Further reading
References
External links
Advantage and Disadvantage of Concrete
Release of ultrafine particles from three simulated building processes
Concrete: The Quest for Greener Alternatives
|
https://en.wikipedia.org/wiki/Condom
|
A condom is a sheath-shaped barrier device used during sexual intercourse to reduce the probability of pregnancy or a sexually transmitted infection (STI). There are both male and female condoms.
The male condom is rolled onto an erect penis before intercourse and works by forming a physical barrier which blocks semen from entering the body of a sexual partner. Male condoms are typically made from latex and, less commonly, from polyurethane, polyisoprene, or lamb intestine. Male condoms have the advantages of ease of use, ease of access, and few side effects. Individuals with latex allergy should use condoms made from a material other than latex, such as polyurethane. Female condoms are typically made from polyurethane and may be used multiple times.
With proper use—and use at every act of intercourse—women whose partners use male condoms experience a 2% per-year pregnancy rate. With typical use, the rate of pregnancy is 18% per-year. Their use greatly decreases the risk of gonorrhea, chlamydia, trichomoniasis, hepatitis B, and HIV/AIDS. To a lesser extent, they also protect against genital herpes, human papillomavirus (HPV), and syphilis.
Condoms as a method of preventing STIs have been used since at least 1564. Rubber condoms became available in 1855, followed by latex condoms in the 1920s. It is on the World Health Organization's List of Essential Medicines. As of 2019, globally around 21% of those using birth control use the condom, making it the second-most common method after female sterilization (24%). Rates of condom use are highest in East and Southeast Asia, Europe and North America. About six to nine billion are sold a year.
Medical uses
Birth control
The effectiveness of condoms, as of most forms of contraception, can be assessed two ways. Perfect use or method effectiveness rates only include people who use condoms properly and consistently. Actual use, or typical use effectiveness rates are of all condom users, including those who use condoms incorrectly or do not use condoms at every act of intercourse. Rates are generally presented for the first year of use. Most commonly the Pearl Index is used to calculate effectiveness rates, but some studies use decrement tables.
The typical use pregnancy rate among condom users varies depending on the population being studied, ranging from 10 to 18% per year. The perfect use pregnancy rate of condoms is 2% per year. Condoms may be combined with other forms of contraception (such as spermicide) for greater protection.
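For reference, the Pearl Index mentioned above is conventionally calculated as the number of unintended pregnancies per 100 woman-years of exposure:

    \text{Pearl Index} = \frac{\text{number of pregnancies} \times 1200}{\text{total months of exposure}}

so the 2% perfect-use and 18% typical-use figures correspond to 2 and 18 pregnancies per 100 woman-years, respectively (studies that count menstrual cycles rather than months use a factor of 1300).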
Sexually transmitted infections
Condoms are widely recommended for the prevention of sexually transmitted infections (STIs). They have been shown to be effective in reducing infection rates in both men and women. While not perfect, the condom is effective at reducing the transmission of organisms that cause AIDS, genital herpes, cervical cancer, genital warts, syphilis, chlamydia, gonorrhea, and other diseases. Condoms are often recommended as an adjunct to more effective birth control methods (such as IUD) in situations where STD protection is also desired.
For this reason, condoms are frequently used by those in the swinging community.
According to a 2000 report by the National Institutes of Health (NIH), consistent use of latex condoms reduces the risk of HIV transmission by approximately 85% relative to risk when unprotected, putting the seroconversion rate (infection rate) at 0.9 per 100 person-years with condom use, down from 6.7 per 100 person-years. Analysis published in 2007 from the University of Texas Medical Branch and the World Health Organization found similar risk reductions of 80–95%.
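The quoted rates are consistent with the stated reduction:

    1 - \frac{0.9}{6.7} \approx 0.87

that is, a reduction of roughly 85–87% in the seroconversion rate relative to unprotected intercourse.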
The 2000 NIH review concluded that condom use significantly reduces the risk of gonorrhea for men. A 2006 study reports that proper condom use decreases the risk of transmission of human papillomavirus (HPV) to women by approximately 70%. Another study in the same year found consistent condom use was effective at reducing transmission of herpes simplex virus-2, also known as genital herpes, in both men and women.
Although a condom is effective in limiting exposure, some disease transmission may occur even with a condom. Infectious areas of the genitals, especially when symptoms are present, may not be covered by a condom, and as a result, some diseases like HPV and herpes may be transmitted by direct contact. The primary effectiveness issue with using condoms to prevent STDs, however, is inconsistent use.
Condoms may also be useful in treating potentially precancerous cervical changes. Exposure to human papillomavirus, even in individuals already infected with the virus, appears to increase the risk of precancerous changes. The use of condoms helps promote regression of these changes. In addition, researchers in the UK suggest that a hormone in semen can aggravate existing cervical cancer, and that condom use during sex can prevent exposure to the hormone.
Causes of failure
Condoms may slip off the penis after ejaculation, break due to improper application or physical damage (such as tears caused when opening the package), or break or slip due to latex degradation (typically from usage past the expiration date, improper storage, or exposure to oils). The rate of breakage is between 0.4% and 2.3%, while the rate of slippage is between 0.6% and 1.3%. Even if no breakage or slippage is observed, 1–3% of women will test positive for semen residue after intercourse with a condom. Failure rates are higher for anal sex, and until 2022, condoms were only approved by the FDA for vaginal sex. The One Male Condom received FDA approval for anal sex on February 23, 2022.
"Double bagging", using two condoms at once, is often believed to cause a higher rate of failure due to the friction of rubber on rubber. This claim is not supported by research. The limited studies that have been done found that the simultaneous use of multiple condoms decreases the risk of condom breakage.
Different modes of condom failure result in different levels of semen exposure. If a failure occurs during application, the damaged condom may be disposed of and a new condom applied before intercourse begins – such failures generally pose no risk to the user. One study found that semen exposure from a broken condom was about half that of unprotected intercourse; semen exposure from a slipped condom was about one-fifth that of unprotected intercourse.
Standard condoms will fit almost any penis, with varying degrees of comfort or risk of slippage. Many condom manufacturers offer "snug" or "magnum" sizes. Some manufacturers also offer custom sized-to-fit condoms, with claims that they are more reliable and offer improved sensation/comfort. Some studies have associated larger penises and smaller condoms with increased breakage and decreased slippage rates (and vice versa), but other studies have been inconclusive.
Condom manufacturers are advised to avoid producing very thick or very thin condoms, because both are considered less effective. Some authors encourage users to choose thinner condoms "for greater durability, sensation, and comfort", but others warn that "the thinner the condom, the smaller the force required to break it".
Experienced condom users are significantly less likely to have a condom slip or break compared to first-time users, although users who experience one slippage or breakage are more likely to suffer a second such failure. An article in Population Reports suggests that education on condom use reduces behaviors that increase the risk of breakage and slippage. A Family Health International publication also offers the view that education can reduce the risk of breakage and slippage, but emphasizes that more research needs to be done to determine all of the causes of breakage and slippage.
Among people who intend condoms to be their form of birth control, pregnancy may occur when the user has sex without a condom. The person may have run out of condoms, or be traveling and not have a condom with them, or dislike the feel of condoms and decide to "take a chance". This behavior is the primary cause of typical use failure (as opposed to method or perfect use failure).
Another possible cause of condom failure is sabotage. One motive is to have a child against a partner's wishes or consent. Some commercial sex workers from Nigeria reported clients sabotaging condoms in retaliation for being coerced into condom use. Using a fine needle to make several pinholes at the tip of the condom is believed to significantly impair its effectiveness. Cases of such condom sabotage have occurred.
Side effects
The use of latex condoms by people with an allergy to latex can cause allergic symptoms, such as skin irritation. In people with severe latex allergies, using a latex condom can potentially be life-threatening. Repeated use of latex condoms can also cause the development of a latex allergy in some people. Irritation may also occur due to spermicides that may be present.
Use
Male condoms are usually packaged inside a foil or plastic wrapper, in a rolled-up form, and are designed to be applied to the tip of the penis and then unrolled over the erect penis. It is important that some space be left in the tip of the condom so that semen has a place to collect; otherwise it may be forced out of the base of the device. Most condoms have a teat end for this purpose. After use, it is recommended the condom be wrapped in tissue or tied in a knot, then disposed of in a trash receptacle. Condoms are used to reduce the likelihood of pregnancy during intercourse and to reduce the likelihood of contracting sexually transmitted infections (STIs). Condoms are also used during fellatio to reduce the likelihood of contracting STIs.
Some couples find that putting on a condom interrupts sex, although others incorporate condom application as part of their foreplay. Some men and women find the physical barrier of a condom dulls sensation. Advantages of dulled sensation can include prolonged erection and delayed ejaculation; disadvantages might include a loss of some sexual excitement. Advocates of condom use also cite their advantages of being inexpensive, easy to use, and having few side effects.
Adult film industry
In 2012 proponents gathered 372,000 voter signatures through a citizens' initiative in Los Angeles County to put Measure B on the 2012 ballot. As a result, Measure B, a law requiring the use of condoms in the production of pornographic films, was passed. This requirement has received much criticism and is said by some to be counter-productive, merely forcing companies that make pornographic films to relocate to other places without this requirement. Producers claim that condom use depresses sales.
Sex education
Condoms are often used in sex education programs, because they have the capability to reduce the chances of pregnancy and the spread of some sexually transmitted diseases when used correctly. A recent American Psychological Association (APA) press release supported the inclusion of information about condoms in sex education, saying "comprehensive sexuality education programs ... discuss the appropriate use of condoms", and "promote condom use for those who are sexually active."
In the United States, teaching about condoms in public schools is opposed by some religious organizations. Planned Parenthood, which advocates family planning and sex education, argues that no studies have shown abstinence-only programs to result in delayed intercourse, and cites surveys showing that 76% of American parents want their children to receive comprehensive sexuality education including condom use.
Infertility treatment
Common procedures in infertility treatment such as semen analysis and intrauterine insemination (IUI) require collection of semen samples. These are most commonly obtained through masturbation, but an alternative to masturbation is use of a special collection condom to collect semen during sexual intercourse.
Collection condoms are made from silicone or polyurethane, as latex is somewhat harmful to sperm. Some religions prohibit masturbation entirely. Also, compared with samples obtained from masturbation, semen samples from collection condoms have higher total sperm counts, sperm motility, and percentage of sperm with normal morphology. For this reason, they are believed to give more accurate results when used for semen analysis, and to improve the chances of pregnancy when used in procedures such as intracervical or intrauterine insemination. Adherents of religions that prohibit contraception, such as Catholicism, may use collection condoms with holes pricked in them.
For fertility treatments, a collection condom may be used to collect semen during sexual intercourse where the semen is provided by the woman's partner. Private sperm donors may also use a collection condom to obtain samples through masturbation or by sexual intercourse with a partner and will transfer the ejaculate from the collection condom to a specially designed container. The sperm is transported in such containers, in the case of a donor, to a recipient woman to be used for insemination, and in the case of a woman's partner, to a fertility clinic for processing and use. However, transportation may reduce the fecundity of the sperm. Collection condoms may also be used where semen is produced at a sperm bank or fertility clinic.
Condom therapy is sometimes prescribed to infertile couples when the female has high levels of antisperm antibodies. The theory is that preventing exposure to her partner's semen will lower her level of antisperm antibodies, and thus increase her chances of pregnancy when condom therapy is discontinued. However, condom therapy has not been shown to increase subsequent pregnancy rates.
Other uses
Condoms excel as multipurpose containers and barriers because they are waterproof, elastic, durable, and (for military and espionage uses) will not arouse suspicion if found.
Ongoing military utilization began during World War II, and includes covering the muzzles of rifle barrels to prevent fouling, the waterproofing of firing assemblies in underwater demolitions, and storage of corrosive materials and garrotes by paramilitary agencies.
Condoms have also been used to smuggle alcohol, cocaine, heroin, and other drugs across borders and into prisons by filling the condom with drugs, tying it in a knot and then either swallowing it or inserting it into the rectum. These methods are very dangerous and potentially lethal; if the condom breaks, the drugs inside become absorbed into the bloodstream and can cause an overdose.
Medically, condoms can be used to cover endovaginal ultrasound probes, or in field chest needle decompressions they can be used to make a one-way valve.
Condoms have also been used to protect scientific samples from the environment, and to waterproof microphones for underwater recording.
Types
Most condoms have a reservoir tip or teat end, making it easier to accommodate the man's ejaculate. Condoms come in different sizes and shapes.
They also come in a variety of surfaces intended to stimulate the user's partner. Condoms are usually supplied with a lubricant coating to facilitate penetration, while flavored condoms are principally used for oral sex. As mentioned above, most condoms are made of latex, but polyurethane and lambskin condoms also exist.
Female condom
Male condoms have a tight ring to form a seal around the penis, while female condoms usually have a large stiff ring to prevent them from slipping into the body orifice. The Female Health Company produced a female condom that was initially made of polyurethane, but newer versions are made of nitrile rubber. Medtech Products produces a female condom made of latex.
Materials
Natural latex
Latex has outstanding elastic properties: Its tensile strength exceeds 30 MPa, and latex condoms may be stretched in excess of 800% before breaking. In 1990 the ISO set standards for condom production (ISO 4074, Natural latex rubber condoms), and the EU followed suit with its CEN standard (Directive 93/42/EEC concerning medical devices). Every latex condom is tested for holes with an electric current. If the condom passes, it is rolled and packaged. In addition, a portion of each batch of condoms is subject to water leak and air burst testing.
While the advantages of latex have made it the most popular condom material, it does have some drawbacks. Latex condoms are damaged when used with oil-based substances as lubricants, such as petroleum jelly, cooking oil, baby oil, mineral oil, skin lotions, suntan lotions, cold creams, butter or margarine. Contact with oil makes latex condoms more likely to break or slip off due to loss of elasticity caused by the oils. Additionally, latex allergy precludes use of latex condoms and is one of the principal reasons for the use of other materials. In May 2009, the U.S. Food and Drug Administration (FDA) granted approval for the production of condoms composed of Vytex, latex that has been treated to remove 90% of the proteins responsible for allergic reactions. An allergen-free condom made of synthetic latex (polyisoprene) is also available.
Synthetic
The most common non-latex condoms are made from polyurethane. Condoms may also be made from other synthetic materials, such as AT-10 resin and, most notably, polyisoprene.
Polyurethane condoms tend to be the same width and thickness as latex condoms, with most polyurethane condoms between 0.04 mm and 0.07 mm thick.
Polyurethane can be considered better than latex in several ways: it conducts heat better than latex, is not as sensitive to temperature and ultraviolet light (and so has less rigid storage requirements and a longer shelf life), can be used with oil-based lubricants, is less allergenic than latex, and does not have an odor. Polyurethane condoms have gained FDA approval for sale in the United States as an effective method of contraception and HIV prevention, and under laboratory conditions have been shown to be just as effective as latex for these purposes.
However, polyurethane condoms are less elastic than latex ones, and may be more likely to slip or break than latex, lose their shape or bunch up more than latex, and are more expensive.
Polyisoprene is a synthetic version of natural rubber latex. While significantly more expensive, it has the advantages of latex (such as being softer and more elastic than polyurethane condoms) without the protein which is responsible for latex allergies. Unlike polyurethane condoms, they cannot be used with an oil-based lubricant.
Lambskin
Condoms made from sheep intestines, labeled "lambskin", are also available. Although they are generally effective as a contraceptive by blocking sperm, it is presumed that they are less effective than latex in preventing the transmission of sexually transmitted infections because of pores in the material. This is based on the idea that intestines, by their nature, are porous, permeable membranes, and while sperm are too large to pass through the pores, viruses — such as HIV, herpes, and genital warts — are small enough to pass. However, there are to date no clinical data confirming or denying this theory.
As a result of laboratory data on condom porosity, in 1989, the FDA began requiring lambskin condom manufacturers to indicate that the products were not to be used for the prevention of sexually transmitted infections. This was based on the presumption that lambskin condoms would be less effective than latex in preventing HIV transmission, rather than a conclusion that lambskin condoms lack efficacy in STI prevention altogether. An FDA publication in 1992 states that lambskin condoms "provide good birth control and a varying degree of protection against some, but not all, sexually transmitted diseases" and that the labelling requirement was decided upon because the FDA "cannot expect people to know which STDs they need to be protected against", and since "the reality is that you don't know what your partner has, we wanted natural-membrane condoms to have labels that don't allow the user to assume they're effective against the small viral STDs."
Some believe that lambskin condoms provide a more "natural" sensation and lack the allergens inherent to latex. Still, because of their lesser protection against infection, other hypoallergenic materials such as polyurethane are recommended for latex-allergic users and partners. Lambskin condoms are also significantly more expensive than other types, and as slaughter by-products they are not vegetarian.
Spermicide
Some latex condoms are lubricated at the manufacturer with a small amount of nonoxynol-9, a spermicidal chemical. According to Consumer Reports, condoms lubricated with spermicide have no additional benefit in preventing pregnancy, have a shorter shelf life, and may cause urinary tract infections in women. In contrast, application of separately packaged spermicide is believed to increase the contraceptive efficacy of condoms.
Nonoxynol-9 was once believed to offer additional protection against STDs (including HIV), but recent studies have shown that, with frequent use, nonoxynol-9 may increase the risk of HIV transmission. The World Health Organization says that spermicidally lubricated condoms should no longer be promoted, although it recommends using a nonoxynol-9 lubricated condom over no condom at all. Nine condom manufacturers have stopped manufacturing condoms with nonoxynol-9, and Planned Parenthood has discontinued the distribution of condoms so lubricated.
Ribbed and studded
Textured condoms include studded and ribbed condoms which can provide extra sensations to both partners. The studs or ribs can be located on the inside, outside, or both; alternatively, they are located in specific sections to provide directed stimulation to either the G-spot or frenulum. Many textured condoms which advertise "mutual pleasure" also are bulb-shaped at the top, to provide extra stimulation to the penis. Some women experience irritation during vaginal intercourse with studded condoms.
Other
A Swiss company (Lamprecht A.G) produces extra small condoms aimed at the teenage market. Designed to be used by boys as young as fourteen, Ceylor 'Hotshot' condoms are aimed at reducing teenage pregnancies.
The anti-rape condom is another variation designed to be worn by women. It is designed to cause pain to the attacker, hopefully allowing the victim a chance to escape.
A collection condom is used to collect semen for fertility treatments or sperm analysis. These condoms are designed to maximize sperm life and may be coated on the inside with a sperm-friendly lubricant.
Some condom-like devices are intended for entertainment only, such as glow-in-the-dark condoms. These novelty condoms may not provide protection against pregnancy and STDs.
In February 2022, the U.S. Food and Drug Administration (FDA) approved the first condoms specifically indicated to help reduce transmission of sexually transmitted infections (STIs) during anal intercourse.
Prevalence
The prevalence of condom use varies greatly between countries. Most surveys of contraceptive use are among married women, or women in informal unions. Japan has the highest rate of condom usage in the world: in that country, condoms account for almost 80% of contraceptive use by married women. On average, in developed countries, condoms are the most popular method of birth control: 28% of married contraceptive users rely on condoms. In the average less-developed country, condoms are less common: only 6–8% of married contraceptive users choose condoms.
History
Before the 19th century
Whether condoms were used in ancient civilizations is debated by archaeologists and historians. In ancient Egypt, Greece, and Rome, pregnancy prevention was generally seen as a woman's responsibility, and the only well documented contraception methods were female-controlled devices. In Asia before the 15th century, some use of glans condoms (devices covering only the head of the penis) is recorded. Condoms seem to have been used for contraception, and to have been known only by members of the upper classes. In China, glans condoms may have been made of oiled silk paper, or of lamb intestines. In Japan, condoms called Kabuto-gata (甲形) were made of tortoise shell or animal horn.
In 16th-century Italy, anatomist and physician Gabriele Falloppio wrote a treatise on syphilis. The earliest documented strain of syphilis, first appearing in Europe in a 1490s outbreak, caused severe symptoms and often death within a few months of contracting the disease. Falloppio's treatise is the earliest uncontested description of condom use: it describes linen sheaths soaked in a chemical solution and allowed to dry before use. The cloths he described were sized to cover the glans of the penis, and were held on with a ribbon. Falloppio claimed that an experimental trial of the linen sheath demonstrated protection against syphilis.
After this, the use of penis coverings to protect from disease is described in a wide variety of literature throughout Europe. The first indication that these devices were used for birth control, rather than disease prevention, is the 1605 theological publication De iustitia et iure (On justice and law) by Catholic theologian Leonardus Lessius, who condemned them as immoral. In 1666, the English Birth Rate Commission attributed a recent downward fertility rate to use of "condons", the first documented use of that word or any similar spelling. Other early spellings include "condam" and "quondam", from which the Italian derivation guantone has been suggested, from guanto, "a glove".
In addition to linen, condoms during the Renaissance were made out of intestines and bladder. In the late 16th century, Dutch traders introduced condoms made from "fine leather" to Japan. Unlike the horn condoms used previously, these leather condoms covered the entire penis.
In the 18th century, Casanova was one of the first people reported to have used "assurance caps" to prevent impregnating his mistresses.
From at least the 18th century, condom use was opposed in some legal, religious, and medical circles for essentially the same reasons that are given today: condoms reduce the likelihood of pregnancy, which some thought immoral or undesirable for the nation; they do not provide full protection against sexually transmitted infections, while belief in their protective powers was thought to encourage sexual promiscuity; and, they are not used consistently due to inconvenience, expense, or loss of sensation.
Despite some opposition, the condom market grew rapidly. In the 18th century, condoms were available in a variety of qualities and sizes, made from either linen treated with chemicals, or "skin" (bladder or intestine softened by treatment with sulfur and lye). They were sold at pubs, barbershops, chemist shops, open-air markets, and at the theater throughout Europe and Russia. They later spread to America, although in every place they were generally used only by the middle and upper classes, due to both expense and lack of sex education.
1800 through 1920s
The early 19th century saw contraceptives promoted to the poorer classes for the first time. Writers on contraception tended to prefer other birth control methods to the condom. By the late 19th century, many feminists expressed distrust of the condom as a contraceptive, as its use was controlled and decided upon by men alone. They advocated instead for methods controlled by women, such as diaphragms and spermicidal douches. Other writers cited both the expense of condoms and their unreliability (they were often riddled with holes and often fell off or tore). Still, they discussed condoms as a good option for some and the only contraceptive that protects from disease.
Many countries passed laws impeding the manufacture and promotion of contraceptives. In spite of these restrictions, condoms were promoted by traveling lecturers and in newspaper advertisements, using euphemisms in places where such ads were illegal. Instructions on how to make condoms at home were distributed in the United States and Europe. Despite social and legal opposition, at the end of the 19th century the condom was the Western world's most popular birth control method.
Beginning in the second half of the 19th century, American rates of sexually transmitted diseases skyrocketed. Causes cited by historians include the effects of the American Civil War and the ignorance of prevention methods promoted by the Comstock laws. To fight the growing epidemic, sex education classes were introduced to public schools for the first time, teaching about venereal diseases and how they were transmitted. They generally taught abstinence was the only way to avoid sexually transmitted diseases. Condoms were not promoted for disease prevention because the medical community and moral watchdogs considered STDs to be punishment for sexual misbehavior. The stigma against people with these diseases was so significant that many hospitals refused to treat people with syphilis.
The German military was the first to promote condom use among its soldiers in the later 19th century. Early 20th century experiments by the American military concluded that providing condoms to soldiers significantly lowered rates of sexually transmitted diseases. During World War I, the United States and (at the beginning of the war only) Britain were the only countries with soldiers in Europe who did not provide condoms and promote their use.
In the decades after World War I, there remained social and legal obstacles to condom use throughout the U.S. and Europe. Founder of psychoanalysis Sigmund Freud opposed all methods of birth control because their failure rates were too high. Freud was especially opposed to the condom because he thought it cut down on sexual pleasure. Some feminists continued to oppose male-controlled contraceptives such as condoms. In 1920 the Church of England's Lambeth Conference condemned all "unnatural means of conception avoidance". The Bishop of London, Arthur Winnington-Ingram, complained of the huge number of condoms discarded in alleyways and parks, especially after weekends and holidays.
However, European militaries continued to provide condoms to their members for disease protection, even in countries where they were illegal for the general population. Through the 1920s, catchy names and slick packaging became an increasingly important marketing technique for many consumer items, including condoms and cigarettes. Quality testing became more common, involving filling each condom with air followed by one of several methods intended to detect loss of pressure. Worldwide, condom sales doubled in the 1920s.
Rubber and manufacturing advances
In 1839, Charles Goodyear discovered a way of processing natural rubber, which is too stiff when cold and too soft when warm, in such a way as to make it elastic. This proved to have advantages for the manufacture of condoms; unlike the sheep's gut condoms, they could stretch and did not tear quickly when used. The rubber vulcanization process was patented by Goodyear in 1844. The first rubber condom was produced in 1855. The earliest rubber condoms had a seam and were as thick as a bicycle inner tube. Besides this type, small rubber condoms covering only the glans were often used in England and the United States. There was more risk of losing them and if the rubber ring was too tight, it would constrict the penis. This type of condom was the original "capote" (French for condom), perhaps because of its resemblance to a woman's bonnet worn at that time, also called a capote.
For many decades, rubber condoms were manufactured by wrapping strips of raw rubber around penis-shaped molds, then dipping the wrapped molds in a chemical solution to cure the rubber. In 1912, Polish-born inventor Julius Fromm developed a new, improved manufacturing technique for condoms: dipping glass molds into a raw rubber solution. Called cement dipping, this method required adding gasoline or benzene to the rubber to make it liquid.
Around 1920 patent lawyer and vice-president of the United States Rubber Company Ernest Hopkinson invented a new technique of converting latex into rubber without a coagulant (demulsifier), which featured using water as a solvent and warm air to dry the solution, as well as optionally preserving liquid latex with ammonia. Condoms made this way, commonly called "latex" ones, required less labor to produce than cement-dipped rubber condoms, which had to be smoothed by rubbing and trimming. The use of water to suspend the rubber instead of gasoline and benzene eliminated the fire hazard previously associated with all condom factories. Latex condoms also performed better for the consumer: they were stronger and thinner than rubber condoms, and had a shelf life of five years (compared to three months for rubber).
Until the twenties, all condoms were individually hand-dipped by semi-skilled workers. Throughout the decade of the 1920s, advances in the automation of the condom assembly line were made. The first fully automated line was patented in 1930. Major condom manufacturers bought or leased conveyor systems, and small manufacturers were driven out of business. The skin condom, now significantly more expensive than the latex variety, became restricted to a niche high-end market.
1930 to present
In 1930 the Anglican Church's Lambeth Conference sanctioned the use of birth control by married couples. In 1931 the Federal Council of Churches in the U.S. issued a similar statement. The Roman Catholic Church responded by issuing the encyclical Casti connubii affirming its opposition to all contraceptives, a stance it has never reversed. In the 1930s, legal restrictions on condoms began to be relaxed. But during this period Fascist Italy and Nazi Germany increased restrictions on condoms (limited sales as disease preventatives were still allowed). During the Depression, condom lines by Schmid gained in popularity. Schmid still used the cement-dipping method of manufacture which had two advantages over the latex variety. Firstly, cement-dipped condoms could be safely used with oil-based lubricants. Secondly, while less comfortable, these older-style rubber condoms could be reused and so were more economical, a valued feature in hard times. More attention was brought to quality issues in the 1930s, and the U.S. Food and Drug Administration began to regulate the quality of condoms sold in the United States.
Throughout World War II, condoms were not only distributed to male U.S. military members, but also heavily promoted with films, posters, and lectures. European and Asian militaries on both sides of the conflict also provided condoms to their troops throughout the war, even Germany, which outlawed all civilian use of condoms in 1941. In part because condoms were readily available, soldiers found a number of non-sexual uses for the devices, many of which continue to this day. After the war, condom sales continued to grow. From 1955 to 1965, 42% of Americans of reproductive age relied on condoms for birth control. In Britain from 1950 to 1960, 60% of married couples used condoms. The birth control pill became the world's most popular method of birth control in the years after its 1960 début, but condoms remained a strong second. The U.S. Agency for International Development pushed condom use in developing countries to help solve the "world population crisis": by 1970 hundreds of millions of condoms were being used each year in India alone. (This number has grown in recent decades: in 2004, the government of India purchased 1.9 billion condoms for distribution at family planning clinics.)
In the 1960s and 1970s quality regulations tightened, and more legal barriers to condom use were removed. In Ireland, legal condom sales were allowed for the first time in 1978. Advertising, however, was one area that continued to have legal restrictions. In the late 1950s, the American National Association of Broadcasters banned condom advertisements from national television; this policy remained in place until 1979.
After it was discovered in the early 1980s that AIDS can be a sexually transmitted infection, the use of condoms was encouraged to prevent transmission of HIV. Despite opposition by some political, religious, and other figures, national condom promotion campaigns occurred in the U.S. and Europe. These campaigns increased condom use significantly.
Due to increased demand and greater social acceptance, condoms began to be sold in a wider variety of retail outlets, including in supermarkets and in discount department stores such as Walmart. Condom sales increased every year until 1994, when media attention to the AIDS pandemic began to decline. The phenomenon of decreasing use of condoms as disease preventatives has been called prevention fatigue or condom fatigue. Observers have cited condom fatigue in both Europe and North America. As one response, manufacturers have changed the tone of their advertisements from scary to humorous.
New developments continued to occur in the condom market, with the first polyurethane condom (branded Avanti and produced by the manufacturer of Durex) introduced in the 1990s. Worldwide condom use is expected to continue to grow: one study predicted that developing nations would need 18.6 billion condoms by 2015. Condoms are available inside prisons in Canada, most of the European Union, Australia, Brazil, Indonesia, South Africa, and the US state of Vermont (on September 17, 2013, the California Senate approved a bill for condom distribution inside the state's prisons, but the bill had not yet become law at the time).
The global condom market was estimated at US$9.2 billion in 2020.
Etymology and other terms
The term condom first appears in the early 18th century: early forms include condum (1706 and 1717), condon (1708) and cundum (1744). The word's etymology is unknown. In popular tradition, the invention and naming of the condom came to be attributed to an associate of England's King Charles II, one "Dr. Condom" or "Earl of Condom". There is however no evidence of the existence of such a person, and condoms had been used for over one hundred years before King Charles II acceded to the throne in 1660.
A variety of unproven Latin etymologies have been proposed, including words meaning receptacle, house, and scabbard or case. It has also been speculated to derive from the Italian word guantone, from guanto, meaning glove. William E. Kruck wrote an article in 1981 concluding that, "As for the word 'condom', I need state only that its origin remains completely unknown, and there ends this search for an etymology." Modern dictionaries may also list the etymology as "unknown".
Other terms are also commonly used to describe condoms. In North America condoms are also commonly known as prophylactics, or rubbers. In Britain they may be called French letters or rubber johnnies. Additionally, condoms may be referred to using the manufacturer's name.
Society and culture
Some moral and scientific criticism of condoms exists despite their many benefits being agreed on by scientific consensus and sexual health experts.
Condom usage is typically recommended for new couples who have yet to develop full trust in their partner with regard to STDs. Established couples, on the other hand, have few concerns about STDs and can use other methods of birth control such as the pill, which does not act as a barrier to intimate sexual contact. The polarized debate over condom usage is also tempered by the group at which the argument is directed: age and the question of a stable partner are factors, as is the distinction between heterosexuals and homosexuals, who have different kinds of sex and face different risk factors and consequences.
Among the prime objections to condom usage is the blocking of erotic sensation, or the intimacy that barrier-free sex provides. As the condom is held tightly to the skin of the penis, it diminishes the delivery of stimulation through rubbing and friction. Condom proponents claim this has the benefit of making sex last longer, by diminishing sensation and delaying male ejaculation. Those who promote condom-free heterosexual sex (slang: "bareback") claim that the condom puts a barrier between partners, diminishing what is normally a highly sensual, intimate, and spiritual connection between partners.
Religious
The United Church of Christ (UCC), a Reformed denomination of the Congregationalist tradition, promotes the distribution of condoms in churches and faith-based educational settings. Michael Shuenemeyer, a UCC minister, has stated that "The practice of safer sex is a matter of life and death. People of faith make condoms available because we have chosen life so that we and our children may live."
On the other hand, the Roman Catholic Church opposes all kinds of sexual acts outside of marriage, as well as any sexual act in which the chance of successful conception has been reduced by direct and intentional acts (for example, surgery to prevent conception) or foreign objects (for example, condoms).
The use of condoms to prevent STI transmission is not specifically addressed by Catholic doctrine, and is currently a topic of debate among theologians and high-ranking Catholic authorities. A few, such as Belgian Cardinal Godfried Danneels, believe the Catholic Church should actively support condoms used to prevent disease, especially serious diseases such as AIDS. However, the majority view—including all statements from the Vatican—is that condom-promotion programs encourage promiscuity, thereby actually increasing STI transmission. This view was most recently reiterated in 2009 by Pope Benedict XVI.
The Roman Catholic Church is the largest organized body of any world religion. The church has hundreds of programs dedicated to fighting the AIDS epidemic in Africa, but its opposition to condom use in these programs has been highly controversial.
In a November 2010 interview, Pope Benedict XVI discussed for the first time the use of condoms to prevent STI transmission. He said that the use of a condom can be justified in a few individual cases if the purpose is to reduce the risk of an HIV infection, giving male prostitutes as an example. There was some confusion at first whether the statement applied only to homosexual prostitutes and thus not to heterosexual intercourse at all. However, Federico Lombardi, spokesman for the Vatican, clarified that it applied to heterosexual and transsexual prostitutes, whether male or female, as well. He did, however, also clarify that the Vatican's principles on sexuality and contraception had not been changed.
Scientific and environmental
More generally, some scientific researchers have expressed objective concern over certain ingredients sometimes added to condoms, notably talc and nitrosamines. Dry dusting powders are applied to latex condoms before packaging to prevent the condom from sticking to itself when rolled up. Previously, talc was used by most manufacturers, but cornstarch is currently the most popular dusting powder. Although problems are rare during normal use, talc is known to be a potential irritant to mucous membranes (such as those in the vagina). Cornstarch is generally believed to be safe; however, some researchers have raised concerns over its use as well.
Nitrosamines, which are potentially carcinogenic in humans, are believed to be present in a substance used to improve elasticity in latex condoms. A 2001 review stated that humans regularly receive 1,000 to 10,000 times greater nitrosamine exposure from food and tobacco than from condom use and concluded that the risk of cancer from condom use is very low. However, a 2004 study in Germany detected nitrosamines in 29 out of 32 condom brands tested, and concluded that exposure from condoms might exceed the exposure from food by 1.5- to 3-fold.
In addition, the large-scale use of disposable condoms has resulted in concerns over their environmental impact via littering and in landfills, where they can eventually wind up in wildlife environments if not incinerated or otherwise permanently disposed of first. Polyurethane condoms in particular, being a form of plastic, are not biodegradable, and latex condoms take a very long time to break down. Experts, such as AVERT, recommend condoms be disposed of in a garbage receptacle, as flushing them down the toilet (which some people do) may cause plumbing blockages and other problems. Furthermore, the plastic and foil wrappers condoms are packaged in are also not biodegradable. However, the benefits condoms offer are widely considered to offset their small landfill mass. Frequent condom or wrapper disposal in public areas such as parks has been seen as a persistent litter problem.
While biodegradable, latex condoms damage the environment when disposed of improperly. According to the Ocean Conservancy, condoms, along with certain other types of trash, cover the coral reefs and smother sea grass and other bottom dwellers. The United States Environmental Protection Agency also has expressed concerns that many animals might mistake the litter for food.
Cultural barriers to use
In much of the Western world, the introduction of the pill in the 1960s was associated with a decline in condom use. In Japan, oral contraceptives were not approved for use until September 1999, and even then access was more restricted than in other industrialized nations. Perhaps because of this restricted access to hormonal contraception, Japan has the highest rate of condom usage in the world: in 2008, 80% of contraceptive users relied on condoms.
Cultural attitudes toward gender roles, contraception, and sexual activity vary greatly around the world, and range from extremely conservative to extremely liberal. But in places where condoms are misunderstood, mischaracterised, demonised, or looked upon with overall cultural disapproval, the prevalence of condom use is directly affected. In less-developed countries and among less-educated populations, misperceptions about how disease transmission and conception work negatively affect the use of condoms; additionally, in cultures with more traditional gender roles, women may feel uncomfortable demanding that their partners use condoms.
As an example, Latino immigrants in the United States often face cultural barriers to condom use. A study on female HIV prevention published in the Journal of Sex Health Research asserts that Latino women often lack the attitudes needed to negotiate safe sex due to traditional gender-role norms in the Latino community, and may be afraid to bring up the subject of condom use with their partners. Women who participated in the study often reported that because of the general machismo subtly encouraged in Latino culture, their male partners would be angry or possibly violent at the woman's suggestion that they use condoms. A similar phenomenon has been noted in a survey of low-income American black women; the women in this study also reported a fear of violence at the suggestion to their male partners that condoms be used.
A telephone survey conducted by Rand Corporation and Oregon State University, and published in the Journal of Acquired Immune Deficiency Syndromes showed that belief in AIDS conspiracy theories among United States black men is linked to rates of condom use. As conspiracy beliefs about AIDS grow in a given sector of these black men, consistent condom use drops in that same sector. Female use of condoms was not similarly affected.
In the African continent, condom promotion in some areas has been impeded by anti-condom campaigns by some Muslim and Catholic clerics. Among the Maasai in Tanzania, condom use is hampered by an aversion to "wasting" sperm, which is given sociocultural importance beyond reproduction. Sperm is believed to be an "elixir" to women and to have beneficial health effects. Maasai women believe that, after conceiving a child, they must have sexual intercourse repeatedly so that the additional sperm aids the child's development. Frequent condom use is also considered by some Maasai to cause impotence. Some women in Africa believe that condoms are "for prostitutes" and that respectable women should not use them. A few clerics even promote the lie that condoms are deliberately laced with HIV. In the United States, possession of many condoms has been used by police to accuse women of engaging in prostitution. The Presidential Advisory Council on HIV/AIDS has condemned this practice and there are efforts to end it.
Because of the strong desire and social pressure to establish fertility as soon as possible within marriage, Middle-Eastern couples who have not yet had children rarely use condoms.
In 2017, India restricted TV advertisements for condoms to between 10 pm and 6 am. Family planning advocates were against this, saying it was liable to "undo decades of progress on sexual and reproductive health".
Major manufacturers
One analyst described the size of the condom market as something that "boggles the mind". Numerous small manufacturers, nonprofit groups, and government-run manufacturing plants exist around the world. Within the condom market, there are several major contributors, among them both for-profit businesses and philanthropic organizations. Most large manufacturers have ties to the business that reach back to the end of the 19th century.
Economics
In the United States condoms usually cost less than US$1.00.
Research
A spray-on condom made of latex is intended to be easier to apply and more successful in preventing the transmission of diseases. At last report, the spray-on condom had not been brought to market because the drying time could not be reduced below two to three minutes.
The Invisible Condom, developed at Université Laval in Quebec, Canada, is a gel that hardens upon increased temperature after insertion into the vagina or rectum. In the lab, it has been shown to effectively block HIV and herpes simplex virus. The barrier breaks down and liquefies after several hours. The invisible condom is in the clinical trial phase and has not yet been approved for use.
A condom treated with an erectogenic compound was also developed in 2005. The drug-treated condom is intended to help the wearer maintain his erection, which should also help reduce slippage. If approved, the condom would be marketed under the Durex brand. At last report, it was still in clinical trials. In 2009, Ansell Healthcare, the makers of Lifestyle condoms, introduced the X2 condom lubricated with "Excite Gel", which contains the amino acid L-arginine and is intended to improve the strength of the erectile response.
In March 2013, philanthropist Bill Gates offered US$100,000 grants through his foundation for a condom design that "significantly preserves or enhances pleasure" to encourage more males to adopt the use of condoms for safer sex. The grant information stated: "The primary drawback from the male perspective is that condoms decrease pleasure as compared to no condom, creating a trade-off that many men find unacceptable, particularly given that the decisions about use must be made just prior to intercourse. Is it possible to develop a product without this stigma, or better, one that is felt to enhance pleasure?" In November of the same year, 11 research teams were selected to receive the grant money.
References
External links
"Sheathing Cupid's Arrow: the Oldest Artificial Contraceptive May Be Ripe for a Makeover", The Economist, February 2014.
16th-century introductions
HIV/AIDS
Prevention of HIV/AIDS
Penis
Sexual health
World Health Organization essential medicines
Contraception for males
|
https://en.wikipedia.org/wiki/Cladistics
|
Cladistics is an approach to biological classification in which organisms are categorized in groups ("clades") based on hypotheses of most recent common ancestry. The evidence for hypothesized relationships is typically shared derived characteristics (synapomorphies) that are not present in more distant groups and ancestors. However, from an empirical perspective, common ancestors are inferences based on a cladistic hypothesis of relationships of taxa whose character states can be observed. Theoretically, a last common ancestor and all its descendants constitute a (minimal) clade. Importantly, all descendants stay in their overarching ancestral clade. For example, if the terms worms or fishes were used within a strict cladistic framework, these terms would include humans. Many of these terms are normally used paraphyletically, outside of cladistics, e.g. as a 'grade'; such groupings are fruitless to delineate precisely, especially when extinct species are included. Radiation results in the generation of new subclades by bifurcation, but in practice sexual hybridization may blur very closely related groupings.
As a hypothesis, a clade can be rejected only if some groupings were explicitly excluded. It may then be found that an excluded group did actually descend from the last common ancestor of the group, and thus emerged within the group. ("Evolved from" is misleading, because in cladistics all descendants stay in the ancestral group.) Upon finding that the group is paraphyletic in this way, either such excluded groups should be admitted to the clade, or the group should be abolished.
Branches down to the divergence from the next significant (e.g. extant) sister group are considered stem groupings of the clade, but in principle each level stands on its own, to be assigned a unique name. For a fully bifurcated tree, adding a group to the tree also adds an additional (named) clade and a new level on that branch. In particular, extinct groups are always placed on a side branch, without distinguishing whether an actual ancestor of other groupings has been found.
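The rule that all descendants stay in their overarching ancestral clade can be made concrete with a small sketch. The Python code below is purely illustrative (the taxonomy is heavily simplified and the function name is invented for this example): membership in a clade is simply membership in the subtree rooted at that clade's ancestor, which is why a strictly cladistic "fishes" grouping such as Sarcopterygii contains humans.

```python
# Purely illustrative sketch (hypothetical code, heavily simplified taxonomy):
# a clade is an ancestor plus all of its descendants, so membership in a clade
# is membership in the subtree rooted at that ancestor.

parent = {                             # child taxon -> its immediate parent clade
    "Actinistia (coelacanths)": "Sarcopterygii",
    "Dipnoi (lungfishes)":      "Sarcopterygii",
    "Tetrapoda":                "Sarcopterygii",
    "Aves":                     "Tetrapoda",
    "Mammalia":                 "Tetrapoda",
    "Homo sapiens":             "Mammalia",
}

def in_clade(taxon, clade_root):
    """True if `taxon` is nested, at any depth, inside the clade `clade_root`."""
    while taxon is not None:
        if taxon == clade_root:
            return True
        taxon = parent.get(taxon)      # walk upward; None once we leave the tree
    return False

print(in_clade("Homo sapiens", "Sarcopterygii"))  # True: cladistic 'fishes' include humans
print(in_clade("Homo sapiens", "Aves"))           # False: humans are not nested in birds
```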
The techniques and nomenclature of cladistics have been applied to disciplines other than biology. (See phylogenetic nomenclature.)
Cladistic findings pose a difficulty for taxonomy, where the rank and (genus-)naming of established groupings may turn out to be inconsistent.
Cladistics is now the most commonly used method to classify organisms.
History
The original methods used in cladistic analysis and the school of taxonomy derived from the work of the German entomologist Willi Hennig, who referred to it as phylogenetic systematics (also the title of his 1966 book); the terms "cladistics" and "clade" were popularized by other researchers. Cladistics in the original sense refers to a particular set of methods used in phylogenetic analysis, although it is now sometimes used to refer to the whole field.
What is now called the cladistic method appeared as early as 1901 with a work by Peter Chalmers Mitchell for birds and subsequently by Robert John Tillyard (for insects) in 1921, and W. Zimmermann (for plants) in 1943. The term "clade" was introduced in 1958 by Julian Huxley after having been coined by Lucien Cuénot in 1940, "cladogenesis" in 1958, "cladistic" by Arthur Cain and Harrison in 1960, "cladist" (for an adherent of Hennig's school) by Ernst Mayr in 1965, and "cladistics" in 1966. Hennig referred to his own approach as "phylogenetic systematics". From the time of his original formulation until the end of the 1970s, cladistics competed as an analytical and philosophical approach to systematics with phenetics and so-called evolutionary taxonomy. Phenetics was championed at this time by the numerical taxonomists Peter Sneath and Robert Sokal, and evolutionary taxonomy by Ernst Mayr.
Originally conceived, if only in essence, by Willi Hennig in a book published in 1950, cladistics did not flourish until its translation into English in 1966 (Lewin 1997). Today, cladistics is the most popular method for inferring phylogenetic trees from morphological data.
In the 1990s, the development of effective polymerase chain reaction techniques allowed the application of cladistic methods to biochemical and molecular genetic traits of organisms, vastly expanding the amount of data available for phylogenetics. At the same time, cladistics rapidly became popular in evolutionary biology, because computers made it possible to process large quantities of data about organisms and their characteristics.
Methodology
The cladistic method interprets each shared character state transformation as a potential piece of evidence for grouping. Synapomorphies (shared, derived character states) are viewed as evidence of grouping, while symplesiomorphies (shared ancestral character states) are not. The outcome of a cladistic analysis is a cladogram – a tree-shaped diagram (dendrogram) that is interpreted to represent the best hypothesis of phylogenetic relationships. Although traditionally such cladograms were generated largely on the basis of morphological characters and originally calculated by hand, genetic sequencing data and computational phylogenetics are now commonly used in phylogenetic analyses, and the parsimony criterion has been abandoned by many phylogeneticists in favor of more "sophisticated" but less parsimonious evolutionary models of character state transformation. Cladists contend that these models are unjustified because there is no evidence that they recover more "true" or "correct" results from actual empirical data sets.
Every cladogram is based on a particular dataset analyzed with a particular method. Datasets are tables consisting of molecular, morphological, ethological and/or other characters and a list of operational taxonomic units (OTUs), which may be genes, individuals, populations, species, or larger taxa that are presumed to be monophyletic and therefore to form, all together, one large clade; phylogenetic analysis infers the branching pattern within that clade. Different datasets and different methods, not to mention violations of the mentioned assumptions, often result in different cladograms. Only scientific investigation can show which is more likely to be correct.
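As a rough illustration of such a dataset (not taken from any published analysis), the following Python sketch lays out a small hypothetical character matrix in which each row is an OTU and each column a binary character; the taxon names and character codings are assumptions chosen only for the example.

```python
# A rough, hypothetical illustration of a cladistic dataset: a table of
# operational taxonomic units (OTUs) against binary character states.
# Character choices and codings here are assumptions made for the example.

characters = ["amniotic egg", "two temporal fenestrae", "feathers"]

matrix = {                       # OTU -> one 0/1 state per character above
    "frog":      [0, 0, 0],      # treated as a distant outgroup
    "turtle":    [1, 0, 0],
    "lizard":    [1, 1, 0],
    "crocodile": [1, 1, 0],
    "bird":      [1, 1, 1],
}

# Each row is one OTU and each column one character; a phylogenetic analysis
# infers the branching pattern that best explains how these states arose.
for otu, states in matrix.items():
    print(f"{otu:10s} {states}")
```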
Until recently, for example, cladograms like the following have generally been accepted as accurate representations of the ancestral relations among turtles, lizards, crocodilians, and birds:
If this phylogenetic hypothesis is correct, then the last common ancestor of turtles and birds lived earlier than the last common ancestor of lizards and birds. Most molecular evidence, however, produces cladograms more like this:
If this is accurate, then the last common ancestor of turtles and birds lived later than the last common ancestor of lizards and birds. Since the cladograms show two mutually exclusive hypotheses to describe the evolutionary history, at most one of them is correct.
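A minimal sketch of how such competing hypotheses can be compared under the parsimony criterion is given below. The Python code scores each rooted topology by Fitch-style counting of the minimum number of character-state changes it requires; the character matrix is invented for illustration, and a frog is assumed as an outgroup so that the two rooted trees correspond to genuinely different hypotheses.

```python
# A minimal, hypothetical sketch: scoring two competing topologies for turtles,
# lizards, crocodilians, and birds under the parsimony criterion using
# Fitch-style counting. The character matrix is invented for illustration only;
# a frog is included as an assumed outgroup so that the two rooted trees
# really are different hypotheses.

toy_matrix = {                 # OTU -> hypothetical binary character states
    "frog":      [0, 0, 0, 0],
    "turtle":    [0, 1, 1, 0],
    "lizard":    [1, 0, 0, 1],
    "crocodile": [1, 1, 1, 0],
    "bird":      [1, 1, 1, 1],
}

# Trees as nested 2-tuples; leaves are OTU names.
traditional = ("frog", ("turtle", ("lizard", ("crocodile", "bird"))))
molecular   = ("frog", ("lizard", ("turtle", ("crocodile", "bird"))))

def fitch_score(tree, matrix):
    """Minimum number of character-state changes the tree requires (Fitch counting)."""
    n_chars = len(next(iter(matrix.values())))
    changes = 0
    for i in range(n_chars):
        def state_set(node):
            nonlocal changes
            if isinstance(node, str):              # leaf: its observed state
                return {matrix[node][i]}
            left, right = node
            a, b = state_set(left), state_set(right)
            if a & b:                              # children agree on a state
                return a & b
            changes += 1                           # disagreement = one inferred change
            return a | b
        state_set(tree)
    return changes

for name, tree in [("traditional", traditional), ("molecular", molecular)]:
    print(f"{name} topology requires {fitch_score(tree, toy_matrix)} changes")
```

With this made-up matrix the turtle-plus-archosaur ("molecular") topology needs one change fewer than the traditional one, but that outcome is entirely an artifact of the chosen codings; a real analysis would use many more taxa and characters.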
The currently accepted hypothesis is that all primates, including strepsirrhines like the lemurs and lorises, had a common ancestor all of whose descendants are or were primates, and so form a clade; the name Primates is therefore recognized for this clade. Within the primates, all anthropoids (monkeys, apes, and humans) are hypothesized to have had a common ancestor all of whose descendants are or were anthropoids, so they form the clade called Anthropoidea. The "prosimians", on the other hand, form a paraphyletic taxon. The name Prosimii is not used in phylogenetic nomenclature, which names only clades; the "prosimians" are instead divided between the clades Strepsirrhini and Haplorhini, where the latter contains Tarsiiformes and Anthropoidea.
From a human perspective, lemurs and tarsiers may seem closely related to humans, in the sense of being close to humans on the evolutionary tree. However, from the perspective of a tarsier, humans and lemurs would appear close in exactly the same sense. Cladistics enforces a neutral perspective, treating all branches (extant or extinct) in the same manner. It also forces one to try to make statements, and to honestly take findings into account, about the exact historical relationships between the groups.
Terminology for character states
The following terms, coined by Hennig, are used to identify shared or distinct character states among groups:
A plesiomorphy ("close form") or ancestral state is a character state that a taxon has retained from its ancestors. When two or more taxa that are not nested within each other share a plesiomorphy, it is a symplesiomorphy (from syn-, "together"). Symplesiomorphies do not mean that the taxa that exhibit that character state are necessarily closely related. For example, Reptilia is traditionally characterized by (among other things) being cold-blooded (i.e., not maintaining a constant high body temperature), whereas birds are warm-blooded. Since cold-bloodedness is a plesiomorphy, inherited from the common ancestor of traditional reptiles and birds, and thus a symplesiomorphy of turtles, snakes and crocodiles (among others), it does not mean that turtles, snakes and crocodiles form a clade that excludes the birds.
An apomorphy ("separate form") or derived state is an innovation. It can thus be used to diagnose a clade – or even to help define a clade name in phylogenetic nomenclature. Features that are derived in individual taxa (a single species or a group that is represented by a single terminal in a given phylogenetic analysis) are called autapomorphies (from auto-, "self"). Autapomorphies express nothing about relationships among groups; clades are identified (or defined) by synapomorphies (from syn-, "together"). For example, the possession of digits that are homologous with those of Homo sapiens is a synapomorphy within the vertebrates. The tetrapods can be singled out as consisting of the first vertebrate with such digits homologous to those of Homo sapiens together with all descendants of this vertebrate (an apomorphy-based phylogenetic definition). Importantly, snakes and other tetrapods that do not have digits are nonetheless tetrapods: other characters, such as amniotic eggs and diapsid skulls, indicate that they descended from ancestors that possessed digits which are homologous with ours.
A character state is homoplastic or "an instance of homoplasy" if it is shared by two or more organisms but is absent from their common ancestor or from a later ancestor in the lineage leading to one of the organisms. It is therefore inferred to have evolved by convergence or reversal. Both mammals and birds are able to maintain a high constant body temperature (i.e., they are warm-blooded). However, the accepted cladogram explaining their significant features indicates that their common ancestor is in a group lacking this character state, so the state must have evolved independently in the two clades. Warm-bloodedness is separately a synapomorphy of mammals (or a larger clade) and of birds (or a larger clade), but it is not a synapomorphy of any group including both these clades. Hennig's Auxiliary Principle states that shared character states should be considered evidence of grouping unless they are contradicted by the weight of other evidence; thus, homoplasy of some feature among members of a group may only be inferred after a phylogenetic hypothesis for that group has been established.
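The warm-bloodedness example can be checked mechanically: if the taxa sharing a state do not correspond to the leaf set of any subtree of the accepted tree, the shared state cannot be a synapomorphy uniting exactly those taxa. The Python sketch below is hypothetical illustration code on a drastically simplified amniote tree, not part of any cladistics package.

```python
# Hypothetical sketch: do the warm-blooded taxa form a clade on a simplified
# "accepted" amniote tree? The tree and taxon sampling are drastically reduced
# for illustration; leaves are taxon names, internal nodes are nested 2-tuples.

accepted_tree = ("mammal", ("turtle", ("lizard", ("crocodile", "bird"))))

def subtree_leaf_sets(node, collected=None):
    """Return (leaves under this node, list of leaf sets of every subtree)."""
    if collected is None:
        collected = []
    if isinstance(node, str):          # a leaf is its own (trivial) subtree
        collected.append({node})
        return {node}, collected
    leaves = set()
    for child in node:
        child_leaves, _ = subtree_leaf_sets(child, collected)
        leaves |= child_leaves
    collected.append(leaves)           # record this internal node's clade
    return leaves, collected

warm_blooded = {"mammal", "bird"}
_, candidate_clades = subtree_leaf_sets(accepted_tree)

if warm_blooded in candidate_clades:
    print("These taxa form a clade; the shared state could be its synapomorphy.")
else:
    print("These taxa do not form a clade; warm-bloodedness is not a synapomorphy"
          " of exactly these taxa and is inferred to be homoplastic here.")
```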
The terms plesiomorphy and apomorphy are relative; their application depends on the position of a group within a tree. For example, when trying to decide whether the tetrapods form a clade, an important question is whether having four limbs is a synapomorphy of the earliest taxa to be included within Tetrapoda: did all the earliest members of the Tetrapoda inherit four limbs from a common ancestor, whereas all other vertebrates did not, or at least not homologously? By contrast, for a group within the tetrapods, such as birds, having four limbs is a plesiomorphy. Using these two terms allows a greater precision in the discussion of homology, in particular allowing clear expression of the hierarchical relationships among different homologous features.
It can be difficult to decide whether a character state is in fact the same and thus can be classified as a synapomorphy, which may identify a monophyletic group, or whether it only appears to be the same and is thus a homoplasy, which cannot identify such a group. There is a danger of circular reasoning: assumptions about the shape of a phylogenetic tree are used to justify decisions about character states, which are then used as evidence for the shape of the tree. Phylogenetics uses various forms of parsimony to decide such questions; the conclusions reached often depend on the dataset and the methods. Such is the nature of empirical science, and for this reason, most cladists refer to their cladograms as hypotheses of relationship. Cladograms that are supported by a large number and variety of different kinds of characters are viewed as more robust than those based on more limited evidence.
Terminology for taxa
Mono-, para- and polyphyletic taxa can be understood based on the shape of the tree (as done above), as well as based on their character states: monophyletic groups are characterized by synapomorphies, paraphyletic groups by symplesiomorphies shared with the descendants they exclude, and polyphyletic groups by homoplasies.
Criticism
Cladistics, either generally or in specific applications, has been criticized from its beginnings. Decisions as to whether particular character states are homologous, a precondition of their being synapomorphies, have been challenged as involving circular reasoning and subjective judgements. Of course, the potential unreliability of evidence is a problem for any systematic method, or for that matter, for any empirical scientific endeavor at all.
Transformed cladistics arose in the late 1970s in an attempt to resolve some of these problems by removing a priori assumptions about phylogeny from cladistic analysis, but it has remained unpopular.
Issues
Ancestors
The cladistic method does not identify fossil species as actual ancestors of a clade. Instead, fossil taxa are identified as belonging to separate extinct branches. While a fossil species could be the actual ancestor of a clade, there is no way to know that. Therefore, a more conservative hypothesis is that the fossil taxon is related to other fossil and extant taxa, as implied by the pattern of shared apomorphic features.
Extinction status
An otherwise extinct group with any extant descendants is not considered (literally) extinct, and, for instance, does not have a date of extinction.
Hybridization, interbreeding
Anything having to do with biology and sex is complicated and messy, and cladistics is no exception. Many species reproduce sexually and are capable of interbreeding for millions of years. Worse, during such a period many branches may have radiated, and it may take hundreds of millions of years for them to be whittled down to just two. Only then can one theoretically assign proper last common ancestors of groupings that do not inadvertently include earlier branches. The process of true cladistic bifurcation can thus take much longer than one is usually aware of. In practice, for recent radiations, cladistically guided findings give only a coarse impression of the complexity; a more detailed account will give details about fractions of introgression between groupings, and even geographic variations thereof. This has been used as an argument for the use of paraphyletic groupings, but typically other reasons are quoted.
Horizontal gene transfer
Horizontal gene transfer is the movement of genetic information between different organisms, which can have immediate or delayed effects on the recipient host. Several processes in nature can cause horizontal gene transfer. It typically does not directly interfere with the ancestry of the organism, but it can complicate the determination of that ancestry. On another level, one can map the horizontal gene transfer processes themselves by determining the phylogeny of the individual genes using cladistics.
Naming stability
If mutual relationships are unclear, there are many possible trees, and assigning names to each possible clade may not be prudent. Furthermore, established names are discarded in cladistics, or alternatively carry connotations that may no longer hold, such as when additional groups are found to have emerged within them. Naming changes are the direct result of changes in the recognition of mutual relationships, which is often still in flux, especially for extinct species. Hanging on to older names and/or connotations is counter-productive, as they typically do not reflect actual mutual relationships precisely. For example, Archaea, Asgard archaea, protists, slime molds, worms, Invertebrata, fishes, Reptilia, monkeys, Ardipithecus, Australopithecus and Homo erectus all contain Homo sapiens cladistically, in their sensu lato meaning. For originally extinct stem groups, sensu lato generally means generously keeping previously included groups, which may then come to include even living species. A pruned sensu stricto meaning is often adopted instead, but the group would need to be restricted to a single branch on the stem; other branches then get their own name and level. This is commensurate with the fact that more senior stem branches are in fact more closely related to the resulting group than the more basal stem branches; that those stem branches may have lived only for a short time does not affect that assessment in cladistics.
In disciplines other than biology
The comparisons used to acquire data on which cladograms can be based are not limited to the field of biology. Any group of individuals or classes that are hypothesized to have a common ancestor, and to which a set of common characteristics may or may not apply, can be compared pairwise. Cladograms can be used to depict the hypothetical descent relationships within groups of items in many different academic realms. The only requirement is that the items have characteristics that can be identified and measured.
Anthropology and archaeology: Cladistic methods have been used to reconstruct the development of cultures or artifacts using groups of cultural traits or artifact features.
Comparative mythology and folklore: Cladistic methods have been used to reconstruct the protoversions of many myths. Mythological phylogenies constructed with mythemes clearly support low rates of horizontal transmission (borrowings), historical (sometimes Palaeolithic) diffusions, and punctuated evolution. They are also a powerful way to test hypotheses about cross-cultural relationships among folktales.
Literature: Cladistic methods have been used in the classification of the surviving manuscripts of the Canterbury Tales, and the manuscripts of the Sanskrit Charaka Samhita.
Historical linguistics: Cladistic methods have been used to reconstruct the phylogeny of languages using linguistic features. This is similar to the traditional comparative method of historical linguistics, but is more explicit in its use of parsimony and allows much faster analysis of large datasets (computational phylogenetics).
Textual criticism or stemmatics: Cladistic methods have been used to reconstruct the phylogeny of manuscripts of the same work (and reconstruct the lost original) using distinctive copying errors as apomorphies. This differs from traditional historical-comparative linguistics in enabling the editor to evaluate and place in genetic relationship large groups of manuscripts with large numbers of variants that would be impossible to handle manually. It also enables parsimony analysis of contaminated traditions of transmission that would be impossible to evaluate manually in a reasonable period of time.
Astrophysics: Cladistic methods have been used to infer the history of relationships between galaxies and to create branching-diagram hypotheses of galaxy diversification.
See also
Bioinformatics
Biomathematics
Coalescent theory
Common descent
Glossary of scientific naming
Language family
Patrocladogram
Phylogenetic network
Scientific classification
Stratocladistics
Subclade
Systematics
Three-taxon analysis
Tree model
Tree structure
Notes and references
Bibliography
Available free online at Gallica (No direct URL). This is the paper credited by for the first use of the term 'clade'.
responding to .
Translated from manuscript in German eventually published in 1982 (Phylogenetische Systematik, Verlag Paul Parey, Berlin).
d'Huy, Julien (2012b), "Le motif de Pygmalion : origine afrasienne et diffusion en Afrique". Sahara, 23: 49-59 .
d'Huy, Julien (2013a), "Polyphemus (Aa. Th. 1137)." "A phylogenetic reconstruction of a prehistoric tale". Nouvelle Mythologie Comparée / New Comparative Mythology 1,
d'Huy, Julien (2013c) "Les mythes évolueraient par ponctuations". Mythologie française, 252, 2013c: 8-12.
d'Huy, Julien (2013d) "A Cosmic Hunt in the Berber sky : a phylogenetic reconstruction of Palaeolithic mythology". Les Cahiers de l'AARS, 15, 2013d: 93-106.
Reissued 1997 in paperback. Includes a reprint of Mayr's 1974 anti-cladistics paper at pp. 433–476, "Cladistic analysis or cladistic classification." This is the paper to which is a response.
Tehrani, Jamshid J., 2013, "The Phylogeny of Little Red Riding Hood", PLOS ONE, 13 November.
External links
OneZoom: Tree of Life – all living species as intuitive and zoomable fractal explorer (responsive design)
Willi Hennig Society
Cladistics (scholarly journal of the Willi Hennig Society)
Phylogenetics
Evolutionary biology
Zoology
Philosophy of biology
|