https://en.wikipedia.org/wiki/Calendar
A calendar is a system of organizing days. This is done by giving names to periods of time, typically days, weeks, months and years. A date is the designation of a single and specific day within such a system. A calendar is also a physical record (often paper) of such a system. A calendar can also mean a list of planned events, such as a court calendar, or a partly or fully chronological list of documents, such as a calendar of wills. Periods in a calendar (such as years and months) are usually, though not necessarily, synchronized with the cycle of the sun or the moon. The most common type of pre-modern calendar was the lunisolar calendar, a lunar calendar that occasionally adds one intercalary month to remain synchronized with the solar year over the long term. Etymology The term calendar is taken from kalendae, the term for the first day of the month in the Roman calendar, related to the verb calare 'to call out', referring to the "calling" of the new moon when it was first seen. Latin calendarium meant 'account book, register' (as accounts were settled and debts were collected on the calends of each month). The Latin term was adopted in Old French as calendier and from there in Middle English as calender by the 13th century (the spelling calendar is early modern). History The courses of the sun and the moon are the most salient regularly recurring natural events useful for timekeeping, and in pre-modern societies around the world lunation and the year were most commonly used as time units. Nevertheless, the Roman calendar contained remnants of a very ancient pre-Etruscan 10-month solar year. The first recorded physical calendars, dependent on the development of writing in the Ancient Near East, are the Bronze Age Egyptian and Sumerian calendars. During the Vedic period India developed a sophisticated timekeeping methodology and calendars for Vedic rituals. According to Yukio Ohashi, the Vedanga calendar in ancient India was based on astronomical studies during the Vedic Period and was not derived from other cultures. A large number of calendar systems in the Ancient Near East were based on the Babylonian calendar dating from the Iron Age, among them the calendar system of the Persian Empire, which in turn gave rise to the Zoroastrian calendar and the Hebrew calendar. A great number of Hellenic calendars were developed in Classical Greece, and during the Hellenistic period they gave rise to the ancient Roman calendar and to various Hindu calendars. Calendars in antiquity were lunisolar, depending on the introduction of intercalary months to align the solar and the lunar years. This was mostly based on observation, but there may have been early attempts to model the pattern of intercalation algorithmically, as evidenced in the fragmentary 2nd-century Coligny calendar. The Roman calendar was reformed by Julius Caesar in 46 BC. His "Julian" calendar was no longer dependent on the observation of the new moon, but followed an algorithm of introducing a leap day every four years. This created a dissociation of the calendar month from lunation. The Gregorian calendar, introduced in 1582, corrected most of the remaining difference between the Julian calendar and the solar year. The Islamic calendar is based on the prohibition of intercalation (nasi') by Muhammad, in Islamic tradition dated to a sermon given on 9 Dhu al-Hijjah AH 10 (Julian date: 6 March 632). This resulted in an observation-based lunar calendar that shifts relative to the seasons of the solar year. 
There have been several modern proposals for reform of the modern calendar, such as the World Calendar, the International Fixed Calendar, the Holocene calendar, and the Hanke-Henry Permanent Calendar. Such ideas are mooted from time to time, but have failed to gain traction because of the loss of continuity and the massive upheaval that implementing them would involve, as well as their effect on cycles of religious activity. Systems A full calendar system has a different calendar date for every day. Thus the week cycle is by itself not a full calendar system; neither is a system to name the days within a year without a system for identifying the years. The simplest calendar system just counts time periods from a reference date. This applies for the Julian day or Unix Time. Virtually the only possible variation is using a different reference date, in particular, one less distant in the past to make the numbers smaller. Computations in these systems are just a matter of addition and subtraction. Other calendars have one (or multiple) larger units of time. Calendars that contain one level of cycles: week and weekday – this system (without year, the week number keeps on increasing) is not very common year and ordinal date within the year, e.g., the ISO 8601 ordinal date system Calendars with two levels of cycles: year, month, and day – most systems, including the Gregorian calendar (and its very similar predecessor, the Julian calendar), the Islamic calendar, the Solar Hijri calendar and the Hebrew calendar year, week, and weekday – e.g., the ISO week date Cycles can be synchronized with periodic phenomena: Lunar calendars are synchronized to the motion of the Moon (lunar phases); an example is the Islamic calendar. Solar calendars are based on perceived seasonal changes synchronized to the apparent motion of the Sun; an example is the Persian calendar. Lunisolar calendars are based on a combination of both solar and lunar reckonings; examples include the traditional calendar of China, the Hindu calendar in India and Nepal, and the Hebrew calendar. The week cycle is an example of one that is not synchronized to any external phenomenon (although it may have been derived from lunar phases, beginning anew every month). Very commonly a calendar includes more than one type of cycle or has both cyclic and non-cyclic elements. Most calendars incorporate more complex cycles. For example, the vast majority of them track years, months, weeks and days. The seven-day week is practically universal, though its use varies. It has run uninterrupted for millennia. Solar Solar calendars assign a date to each solar day. A day may consist of the period between sunrise and sunset, with a following period of night, or it may be a period between successive events such as two sunsets. The length of the interval between two such successive events may be allowed to vary slightly during the year, or it may be averaged into a mean solar day. Other types of calendar may also use a solar day. Lunar Not all calendars use the solar year as a unit. A lunar calendar is one in which days are numbered within each lunar phase cycle. Because the length of the lunar month is not an even fraction of the length of the tropical year, a purely lunar calendar quickly drifts against the seasons, which do not vary much near the equator. It does, however, stay constant with respect to other phenomena, notably tides. An example is the Islamic calendar. 
Alexander Marshack, in a controversial reading, believed that marks on a bone baton represented a lunar calendar. Other marked bones may also represent lunar calendars. Similarly, Michael Rappenglueck believes that marks on a 15,000-year-old cave painting represent a lunar calendar. Lunisolar A lunisolar calendar is a lunar calendar that compensates by adding an extra month as needed to realign the months with the seasons. Prominent examples of lunisolar calendars are the Hindu calendar and the Buddhist calendar, which are popular in South Asia and Southeast Asia. Another example is the Hebrew calendar, which uses a 19-year cycle. Subdivisions Nearly all calendar systems group consecutive days into "months" and also into "years". In a solar calendar a year approximates Earth's tropical year (that is, the time it takes for a complete cycle of seasons), traditionally used to facilitate the planning of agricultural activities. In a lunar calendar, the month approximates the cycle of the moon phase. Consecutive days may be grouped into other periods such as the week. Because the number of days in the tropical year is not a whole number, a solar calendar must have a different number of days in different years. This may be handled, for example, by adding an extra day in leap years. The same applies to months in a lunar calendar and also to the number of months in a year in a lunisolar calendar. This is generally known as intercalation. Even in a calendar that is solar but not lunar, the year cannot be divided entirely into months that never vary in length. Cultures may define other units of time, such as the week, for the purpose of scheduling regular activities that do not easily coincide with months or years. Many cultures use different baselines for their calendars' starting years. Historically, several countries have based their calendars on regnal years, a calendar based on the reign of their current sovereign. For example, the year 2006 in Japan is year 18 Heisei, with Heisei being the era name of Emperor Akihito. Other types Arithmetical and astronomical An astronomical calendar is based on ongoing observation; examples are the religious Islamic calendar and the old religious Jewish calendar in the time of the Second Temple. Such a calendar is also referred to as an observation-based calendar. The advantage of such a calendar is that it is perfectly and perpetually accurate. The disadvantage is that working out when a particular date would occur is difficult. An arithmetic calendar is one that is based on a strict set of rules; an example is the current Jewish calendar. Such a calendar is also referred to as a rule-based calendar. The advantage of such a calendar is the ease of calculating when a particular date occurs. The disadvantage is imperfect accuracy. Furthermore, even if the calendar is very accurate, its accuracy diminishes slowly over time, owing to changes in Earth's rotation. This limits the lifetime of an accurate arithmetic calendar to a few thousand years. After that, the rules would need to be modified from observations made since the invention of the calendar. Complete and incomplete Calendars may be either complete or incomplete. Complete calendars provide a way of naming each consecutive day, while incomplete calendars do not. The early Roman calendar, which had no way of designating the days of the winter months other than to lump them together as "winter", is an example of an incomplete calendar, while the Gregorian calendar is an example of a complete calendar. 
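To make the arithmetic (rule-based) idea concrete, here is a minimal sketch in Python (written only as an illustration for this article; the function name is invented) of the best-known such rule, the Gregorian leap-year rule, together with a check of the average year length it produces.

```python
def is_gregorian_leap_year(year: int) -> bool:
    """Gregorian rule: every 4th year is a leap year, except century years
    not divisible by 400 (1900 was not a leap year, 2000 was)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Over any full 400-year cycle the rule yields 97 leap days,
# giving an average year of 365 + 97/400 = 365.2425 days.
leap_days = sum(is_gregorian_leap_year(y) for y in range(2000, 2400))
print(leap_days)              # 97
print(365 + leap_days / 400)  # 365.2425
```

Because the rule is fixed, any past or future date can be computed without observation, which is exactly the trade-off described above: easy calculation, but accuracy that slowly degrades as Earth's rotation changes.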
Usage The primary practical use of a calendar is to identify days: to be informed about or to agree on a future event and to record an event that has happened. Days may be significant for agricultural, civil, religious, or social reasons. For example, a calendar provides a way to determine when to start planting or harvesting, which days are religious or civil holidays, which days mark the beginning and end of business accounting periods, and which days have legal significance, such as the day taxes are due or a contract expires. Also, a calendar may, by identifying a day, provide other useful information about the day such as its season. Calendars are also used as part of a complete timekeeping system: date and time of day together specify a moment in time. In the modern world, timekeepers can show time, date, and weekday. Some may also show the lunar phase. Gregorian The Gregorian calendar is the de facto international standard and is used almost everywhere in the world for civil purposes. The widely used solar aspect is a cycle of leap days in a 400-year cycle designed to keep the duration of the year aligned with the solar year. There is a lunar aspect which approximates the position of the moon during the year, and is used in the calculation of the date of Easter. Each Gregorian year has either 365 or 366 days (the leap day being inserted as 29 February), amounting to an average Gregorian year of 365.2425 days (compared to a solar year of 365.2422 days). The calendar was introduced in 1582 as a refinement to the Julian calendar, which had been in use throughout the European Middle Ages, amounting to a 0.002% correction in the length of the year. During the Early Modern period, its adoption was mostly limited to Roman Catholic nations, but by the 19th century it had become widely adopted for the sake of convenience in international trade. The last European country to adopt it was Greece, in 1923. The calendar epoch used by the Gregorian calendar is inherited from the medieval convention established by Dionysius Exiguus and associated with the Julian calendar. The year number is variously given as AD (for Anno Domini) or CE (for Common Era or Christian Era). Religious The most important use of pre-modern calendars is keeping track of the liturgical year and the observation of religious feast days. While the Gregorian calendar is itself historically motivated to the calculation of the Easter date, it is now in worldwide secular use as the de facto standard. Alongside the use of the Gregorian calendar for secular matters, there remain several calendars in use for religious purposes. Western Christian liturgical calendars are based on the cycle of the Roman Rite of the Catholic Church and generally include the liturgical seasons of Advent, Christmas, Ordinary Time (Time after Epiphany), Lent, Easter, and Ordinary Time (Time after Pentecost). Some Christian calendars do not include Ordinary Time and every day falls into a denominated season. Eastern Christians, including the Orthodox Church, use the Julian calendar. The Islamic calendar or Hijri calendar is a lunar calendar consisting of 12 lunar months in a year of 354 or 355 days. It is used to date events in most of the Muslim countries (concurrently with the Gregorian calendar) and used by Muslims everywhere to determine the proper day on which to celebrate Islamic holy days and festivals. 
Its epoch is the Hijra (corresponding to AD 622). With an annual drift of 11 or 12 days, the seasonal relation repeats approximately every 33 Islamic years. Various Hindu calendars remain in use in the Indian subcontinent, including the Nepali calendars, Bengali calendar, Malayalam calendar, Tamil calendar, Vikrama Samvat used in Northern India, and Shalivahana calendar in the Deccan states. The Buddhist calendar and the traditional lunisolar calendars of Cambodia, Laos, Myanmar, Sri Lanka and Thailand are also based on an older version of the Hindu calendar. Most of the Hindu calendars are inherited from a system first enunciated in Vedanga Jyotisha of Lagadha, standardized in the Sūrya Siddhānta and subsequently reformed by astronomers such as Āryabhaṭa (AD 499), Varāhamihira (6th century) and Bhāskara II (12th century). The Hebrew calendar is used by Jews worldwide for religious and cultural affairs, also influences civil matters in Israel (such as national holidays) and can be used in business dealings (such as for the dating of cheques). Followers of the Baháʼí Faith use the Baháʼí calendar. The Baháʼí Calendar, also known as the Badi Calendar, was first established by the Bab in the Kitab-i-Asma. The Baháʼí Calendar is a purely solar calendar comprising 19 months, each having nineteen days. National The Chinese, Hebrew, Hindu, and Julian calendars are widely used for religious and social purposes. The Iranian (Persian) calendar is used in Iran and some parts of Afghanistan. The Assyrian calendar is in use by the members of the Assyrian community in the Middle East (mainly Iraq, Syria, Turkey, and Iran) and the diaspora. The first year of the calendar is exactly 4750 years prior to the start of the Gregorian calendar. The Ethiopian calendar or Ethiopic calendar is the principal calendar used in Ethiopia and Eritrea, with the Oromo calendar also in use in some areas. In neighboring Somalia, the Somali calendar co-exists alongside the Gregorian and Islamic calendars. In Thailand, where the Thai solar calendar is used, the months and days have adopted the western standard, although the years are still based on the traditional Buddhist calendar. Fiscal A fiscal calendar generally means the accounting year of a government or a business. It is used for budgeting, keeping accounts, and taxation. It is a set of 12 months that may start at any date in a year. The US government's fiscal year starts on 1 October and ends on 30 September. The government of India's fiscal year starts on 1 April and ends on 31 March. Small traditional businesses in India start the fiscal year on Diwali festival and end the day before the next year's Diwali festival. In accounting (and particularly accounting software), a fiscal calendar (such as a 4/4/5 calendar) fixes each month at a specific number of weeks to facilitate comparisons from month to month and year to year. January always has exactly 4 weeks (Sunday through Saturday), February has 4 weeks, March has 5 weeks, etc. Note that this calendar will normally need to add a 53rd week to every 5th or 6th year, which might be added to December or might not be, depending on how the organization uses those dates. There exists an international standard way to do this (the ISO week). The ISO week starts on a Monday and ends on a Sunday. Week 1 is always the week that contains 4 January in the Gregorian calendar. 
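As a small illustration of the ISO week rule just described, Python's standard library exposes the ISO week date directly; the sketch below (the example dates are arbitrary) shows that week 1 is the week containing 4 January, so early-January days can belong to the previous ISO year.

```python
from datetime import date

# ISO 8601 week date: weeks run Monday-Sunday, and week 1 is the week
# that contains 4 January (equivalently, the first week with at least
# four of its days in the new year).
for d in [date(2021, 1, 1), date(2021, 1, 4), date(2022, 1, 1)]:
    iso_year, iso_week, iso_weekday = d.isocalendar()
    print(d, "-> ISO year", iso_year, "week", iso_week, "weekday", iso_weekday)

# 1 January 2021 falls in ISO week 53 of ISO year 2020, because
# 4 January 2021 is a Monday and therefore starts week 1 of 2021.
```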
Formats The term calendar applies not only to a given scheme of timekeeping but also to a specific record or device displaying such a scheme, for example, an appointment book in the form of a pocket calendar (or personal organizer), desktop calendar, a wall calendar, etc. In a paper calendar, one or two sheets can show a single day, a week, a month, or a year. If a sheet is for a single day, it easily shows the date and the weekday. If a sheet is for multiple days it shows a conversion table to convert from weekday to date and back. With a special pointing device, or by crossing out past days, it may indicate the current date and weekday. This is the most common usage of the word. In the US Sunday is considered the first day of the week and so appears on the far left and Saturday the last day of the week appearing on the far right. In Britain, the weekend may appear at the end of the week so the first day is Monday and the last day is Sunday. The US calendar display is also used in Britain. It is common to display the Gregorian calendar in separate monthly grids of seven columns (from Monday to Sunday, or Sunday to Saturday depending on which day is considered to start the week – this varies according to country) and five to six rows (or rarely, four rows when the month of February contains 28 days in common years beginning on the first day of the week), with the day of the month numbered in each cell, beginning with 1. The sixth row is sometimes eliminated by marking 23/30 and 24/31 together as necessary. When working with weeks rather than months, a continuous format is sometimes more convenient, where no blank cells are inserted to ensure that the first day of a new month begins on a fresh row. Software Calendaring software provides users with an electronic version of a calendar, and may additionally provide an appointment book, address book, or contact list. Calendaring is a standard feature of many PDAs, EDAs, and smartphones. The software may be a local package designed for individual use (e.g., Lightning extension for Mozilla Thunderbird, Microsoft Outlook without Exchange Server, or Windows Calendar) or maybe a networked package that allows for the sharing of information between users (e.g., Mozilla Sunbird, Windows Live Calendar, Google Calendar, or Microsoft Outlook with Exchange Server). See also General Roman Calendar List of calendars Advent calendar Calendar reform Calendrical calculation Docket (court) History of calendars Horology List of international common standards List of unofficial observances by date Real-time clock (RTC), which underlies the Calendar software on modern computers. Unit of time References Citations Sources Further reading External links Calendar converter, including all major civil, religious and technical calendars. Units of time
https://en.wikipedia.org/wiki/Candela
The candela (symbol: cd) is the unit of luminous intensity in the International System of Units (SI). It measures luminous power per unit solid angle emitted by a light source in a particular direction. Luminous intensity is analogous to radiant intensity, but instead of simply adding up the contributions of every wavelength of light in the source's spectrum, the contribution of each wavelength is weighted by the luminous efficiency function, the model of the sensitivity of the human eye to different wavelengths, standardized by the CIE and ISO. A common wax candle emits light with a luminous intensity of roughly one candela. If emission in some directions is blocked by an opaque barrier, the emission would still be approximately one candela in the directions that are not obscured. The word candela is Latin for candle. The old name "candle" is still sometimes used, as in foot-candle and the modern definition of candlepower. Definition The 26th General Conference on Weights and Measures (CGPM) redefined the candela in 2018. The new definition, which took effect on 20 May 2019, is: The candela [...] is defined by taking the fixed numerical value of the luminous efficacy of monochromatic radiation of frequency 540 × 10^12 Hz, Kcd, to be 683 when expressed in the unit lm W−1, which is equal to cd sr W−1, or cd sr kg−1 m−2 s3, where the kilogram, metre and second are defined in terms of h, c and ΔνCs. Explanation The frequency chosen is in the visible spectrum near green, corresponding to a wavelength of about 555 nanometres. The human eye, when adapted for bright conditions, is most sensitive near this frequency. Under these conditions, photopic vision dominates the visual perception of our eyes over the scotopic vision. At other frequencies, more radiant intensity is required to achieve the same luminous intensity, according to the frequency response of the human eye. The luminous intensity for light of a particular wavelength λ is given by I_v(λ) = 683 lm/W · ȳ(λ) · I_e(λ), where I_v(λ) is the luminous intensity, I_e(λ) is the radiant intensity and ȳ(λ) is the photopic luminous efficiency function. If more than one wavelength is present (as is usually the case), one must integrate over the spectrum of wavelengths to get the total luminous intensity. Examples A common candle emits light with roughly 1 cd luminous intensity. A 25 W compact fluorescent light bulb puts out around 1700 lumens; if that light is radiated equally in all directions (i.e. over 4π steradians), it will have an intensity of about 135 cd. Focused into a 20° beam (0.095 steradians), the same light bulb would have an intensity of around 18,000 cd within the beam. History Prior to 1948, various standards for luminous intensity were in use in a number of countries. These were typically based on the brightness of the flame from a "standard candle" of defined composition, or the brightness of an incandescent filament of specific design. One of the best-known of these was the English standard of candlepower. One candlepower was the light produced by a pure spermaceti candle weighing one sixth of a pound and burning at a rate of 120 grains per hour. Germany, Austria and Scandinavia used the Hefnerkerze, a unit based on the output of a Hefner lamp. A better standard for luminous intensity was needed. In 1884, Jules Violle proposed a standard based on the light emitted by 1 cm2 of platinum at its melting point (or freezing point). The resulting unit of intensity, called the "violle", was roughly equal to 60 English candlepower. 
Platinum was convenient for this purpose because it had a high enough melting point, was not prone to oxidation, and could be obtained in pure form. Violle showed that the intensity emitted by pure platinum was strictly dependent on its temperature, and so platinum at its melting point should have a consistent luminous intensity. In practice, realizing a standard based on Violle's proposal turned out to be more difficult than expected. Impurities on the surface of the platinum could directly affect its emissivity, and in addition impurities could affect the luminous intensity by altering the melting point. Over the following half century various scientists tried to make a practical intensity standard based on incandescent platinum. The successful approach was to suspend a hollow shell of thorium dioxide with a small hole in it in a bath of molten platinum. The shell (cavity) serves as a black body, producing black-body radiation that depends on the temperature and is not sensitive to details of how the device is constructed. In 1937, the Commission Internationale de l'Éclairage (International Commission on Illumination) and the CIPM proposed a "new candle" based on this concept, with value chosen to make it similar to the earlier unit candlepower. The decision was promulgated by the CIPM in 1946: The value of the new candle is such that the brightness of the full radiator at the temperature of solidification of platinum is 60 new candles per square centimetre. It was then ratified in 1948 by the 9th CGPM which adopted a new name for this unit, the candela. In 1967 the 13th CGPM removed the term "new candle" and gave an amended version of the candela definition, specifying the atmospheric pressure applied to the freezing platinum: The candela is the luminous intensity, in the perpendicular direction, of a surface of 1/600,000 square metre of a black body at the temperature of freezing platinum under a pressure of 101,325 newtons per square metre. In 1979, because of the difficulties in realizing a Planck radiator at high temperatures and the new possibilities offered by radiometry, the 16th CGPM adopted a new definition of the candela: The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 10^12 hertz and that has a radiant intensity in that direction of 1/683 watt per steradian. The definition describes how to produce a light source that (by definition) emits one candela, but does not specify the luminous efficiency function for weighting radiation at other frequencies. Such a source could then be used to calibrate instruments designed to measure luminous intensity with reference to a specified luminous efficiency function. An appendix to the SI Brochure makes it clear that the luminous efficiency function is not uniquely specified, but must be selected to fully define the candela. The arbitrary (1/683) term was chosen so that the new definition would precisely match the old definition. Although the candela is now defined in terms of the second (an SI base unit) and the watt (a derived SI unit), the candela remains a base unit of the SI system, by definition. The 26th CGPM approved the modern definition of the candela in 2018 as part of the 2019 redefinition of SI base units, which redefined the SI base units in terms of fundamental physical constants. 
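The weighting described in the definition can be illustrated with a short numerical sketch in Python. This is only an illustration: the function name is invented, and the two luminous-efficiency values below are rough approximations; a real calculation would use the full CIE photopic table.

```python
# Sketch: converting monochromatic radiant intensity (W/sr) to luminous
# intensity (cd) using the defining constant Kcd = 683 lm/W and a couple
# of illustrative values of the photopic efficiency function y_bar.
K_CD = 683.0  # lm/W, luminous efficacy of 540 THz radiation (fixed by definition)

Y_BAR = {
    555: 1.00,   # near the peak of photopic sensitivity (approximate)
    650: 0.107,  # red light looks much dimmer per watt (approximate)
}

def luminous_intensity(radiant_intensity_w_per_sr: float, wavelength_nm: int) -> float:
    """I_v = Kcd * y_bar(lambda) * I_e, for monochromatic light."""
    return K_CD * Y_BAR[wavelength_nm] * radiant_intensity_w_per_sr

print(luminous_intensity(1 / 683, 555))  # ~1 cd: the source of the 1979 definition
print(luminous_intensity(1 / 683, 650))  # ~0.107 cd for the same radiant intensity
```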
SI photometric light units Relationships between luminous intensity, luminous flux, and illuminance If a source emits a known luminous intensity I_v (in candelas) in a well-defined cone, the total luminous flux Φ_v in lumens is given by Φ_v = 2π I_v (1 − cos(A/2)), where A is the radiation angle of the lamp—the full vertex angle of the emission cone. For example, a lamp that emits 590 cd with a radiation angle of 40° emits about 224 lumens. See MR16 for emission angles of some common lamps. If the source emits light uniformly in all directions, the flux can be found by multiplying the intensity by 4π: a uniform 1 candela source emits 12.6 lumens. For the purpose of measuring illumination, the candela is not a practical unit, as it only applies to idealized point light sources, each approximated by a source small compared to the distance from which its luminous radiation is measured, also assuming that it is done so in the absence of other light sources. What gets directly measured by a light meter is incident light on a sensor of finite area, i.e. illuminance in lm/m2 (lux). However, if designing illumination from many point light sources, like light bulbs, of known approximate omnidirectionally uniform intensities, the contributions to illuminance from incoherent light being additive, it is mathematically estimated as follows. If r_i is the position of the i-th source of uniform intensity I_i relative to the illuminated elemental opaque area being measured, and n̂ is the unit vector normal to that area, and provided that all light sources lie in the same half-space divided by the plane of this area, the illuminance is E_v = Σ_i I_i (r̂_i · n̂) / |r_i|², where r̂_i = r_i/|r_i| is the unit vector pointing from the area toward the i-th source. In the case of a single point light source of intensity I_v, at a distance r and normally incident, this reduces to E_v = I_v / r². SI multiples Like other SI units, the candela can also be modified by adding a metric prefix that multiplies it by a power of 10, for example millicandela (mcd) for 10−3 candela. References SI base units Units of luminous intensity
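The flux and illuminance relationships above can be checked with a short sketch in Python (the function names are invented for this illustration; the formulas are those given in the text).

```python
import math

def flux_from_cone(intensity_cd: float, full_angle_deg: float) -> float:
    """Luminous flux (lm) of a source of uniform intensity emitting into a
    cone of full vertex angle A: flux = 2*pi * I_v * (1 - cos(A/2))."""
    half_angle = math.radians(full_angle_deg) / 2
    return 2 * math.pi * intensity_cd * (1 - math.cos(half_angle))

def flux_isotropic(intensity_cd: float) -> float:
    """Flux of a source radiating uniformly in all directions (4*pi steradians)."""
    return 4 * math.pi * intensity_cd

def illuminance_normal(intensity_cd: float, distance_m: float) -> float:
    """Illuminance (lux) from a single point source, normally incident: E = I_v / r^2."""
    return intensity_cd / distance_m ** 2

print(round(flux_from_cone(590, 40)))   # ~224 lm, the lamp example above
print(round(flux_isotropic(1.0), 1))    # ~12.6 lm for a uniform 1 cd source
print(illuminance_normal(1.0, 2.0))     # 0.25 lux at 2 m
```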
https://en.wikipedia.org/wiki/City
A city is a human settlement of a notable size. It can be defined as a permanent and densely settled place with administratively defined boundaries whose members work primarily on non-agricultural tasks. Cities generally have extensive systems for housing, transportation, sanitation, utilities, land use, production of goods, and communication. Their density facilitates interaction between people, government organizations, and businesses, sometimes benefiting different parties in the process, such as improving the efficiency of goods and service distribution. Historically, city dwellers have been a small proportion of humanity overall, but following two centuries of unprecedented and rapid urbanization, more than half of the world population now lives in cities, which has had profound consequences for global sustainability. Present-day cities usually form the core of larger metropolitan areas and urban areas—creating numerous commuters traveling toward city centres for employment, entertainment, and education. However, in a world of intensifying globalization, all cities are to varying degrees also connected globally beyond these regions. This increased influence means that cities also have significant influences on global issues, such as sustainable development, climate change, and global health. Because of these major influences on global issues, the international community has prioritized investment in sustainable cities through Sustainable Development Goal 11. Due to the efficiency of transportation and the smaller land consumption, dense cities hold the potential to have a smaller ecological footprint per inhabitant than more sparsely populated areas. Therefore, compact cities are often referred to as a crucial element in fighting climate change. However, this concentration can also have significant negative consequences, such as forming urban heat islands, concentrating pollution, and stressing water supplies and other resources. Other important traits of cities besides population include the capital status and relative continued occupation of the city. For example, country capitals such as Athens,Beijing, Jakarta, Kuala Lumpur, London, Manila, Mexico City, Moscow, Nairobi, New Delhi, Paris, Rome, Seoul, Singapore, Tokyo, and Washington, D.C. reflect the identity and apex of their respective nations. Some historic capitals, such as Kyoto, Yogyakarta, and Xi'an, maintain their reflection of cultural identity even without modern capital status. Religious holy sites offer another example of capital status within a religion; examples include Jerusalem, Mecca, Varanasi, Ayodhya, Haridwar, and Prayagraj. Meaning A city can be distinguished from other human settlements by its relatively great size, but also by its functions and its special symbolic status, which may be conferred by a central authority. The term can also refer either to the physical streets and buildings of the city or to the collection of people who dwell there and can be used in a general sense to mean urban rather than rural territory. National censuses use a variety of definitions – invoking factors such as population, population density, number of dwellings, economic function, and infrastructure – to classify populations as urban. Typical working definitions for small-city populations start at around 100,000 people. Common population definitions for an urban area (city or town) range between 1,500 and 50,000 people, with most U.S. states using a minimum between 1,500 and 5,000 inhabitants. 
Some jurisdictions set no such minima. In the United Kingdom, city status is awarded by the Crown and then remains permanent. (Historically, the qualifying factor was the presence of a cathedral, resulting in some very small cities such as Wells, with a population of 12,000 , and St Davids, with a population of 1,841 .) According to the "functional definition", a city is not distinguished by size alone, but also by the role it plays within a larger political context. Cities serve as administrative, commercial, religious, and cultural hubs for their larger surrounding areas. The presence of a literate elite is often associated with cities because of the cultural diversities present in a city. A typical city has professional administrators, regulations, and some form of taxation (food and other necessities or means to trade for them) to support the government workers. (This arrangement contrasts with the more typically horizontal relationships in a tribe or village accomplishing common goals through informal agreements between neighbors, or the leadership of a chief.) The governments may be based on heredity, religion, military power, work systems such as canal-building, food distribution, land-ownership, agriculture, commerce, manufacturing, finance, or a combination of these. Societies that live in cities are often called civilizations. The degree of urbanization is a modern metric to help define what comprises a city: "a population of at least 50,000 inhabitants in contiguous dense grid cells (>1,500 inhabitants per square kilometer)". This metric was "devised over years by the European Commission, OECD, World Bank and others, and endorsed in March [2021] by the United Nations ... largely for the purpose of international statistical comparison". Etymology The word city and the related civilization come from the Latin root civitas, originally meaning 'citizenship' or 'community member' and eventually coming to correspond with urbs, meaning 'city' in a more physical sense. The Roman civitas was closely linked with the Greek polis—another common root appearing in English words such as metropolis. In toponymic terminology, names of individual cities and towns are called astionyms (from Ancient Greek ἄστυ 'city or town' and ὄνομα 'name'). Geography Urban geography deals both with cities in their larger context and with their internal structure. Cities are estimated to cover about 3% of the land surface of the Earth. Site Town siting has varied through history according to natural, technological, economic, and military contexts. Access to water has long been a major factor in city placement and growth, and despite exceptions enabled by the advent of rail transport in the nineteenth century, through the present most of the world's urban population lives near the coast or on a river. Urban areas as a rule cannot produce their own food and therefore must develop some relationship with a hinterland that sustains them. Only in special cases such as mining towns which play a vital role in long-distance trade, are cities disconnected from the countryside which feeds them. Thus, centrality within a productive region influences siting, as economic forces would, in theory, favor the creation of marketplaces in optimal mutually reachable locations. Center The vast majority of cities have a central area containing buildings with special economic, political, and religious significance. Archaeologists refer to this area by the Greek term temenos or if fortified as a citadel. 
These spaces historically reflect and amplify the city's centrality and importance to its wider sphere of influence. Today cities have a city center or downtown, sometimes coincident with a central business district. Public space Cities typically have public spaces where anyone can go. These include privately owned spaces open to the public as well as forms of public land such as public domain and the commons. Western philosophy since the time of the Greek agora has considered physical public space as the substrate of the symbolic public sphere. Public art adorns (or disfigures) public spaces. Parks and other natural sites within cities provide residents with relief from the hardness and regularity of typical built environments. Urban green spaces are another component of public space that provides the benefit of mitigating the urban heat island effect, especially in cities that are in warmer climates. These spaces prevent carbon imbalances, extreme habitat losses, electricity and water consumption, and human health risks. Internal structure The urban structure generally follows one or more basic patterns: geomorphic, radial, concentric, rectilinear, and curvilinear. The physical environment generally constrains the form in which a city is built. If located on a mountainside, urban structures may rely on terraces and winding roads. It may be adapted to its means of subsistence (e.g. agriculture or fishing). And it may be set up for optimal defense given the surrounding landscape. Beyond these "geomorphic" features, cities can develop internal patterns, due to natural growth or to city planning. In a radial structure, main roads converge on a central point. This form could evolve from successive growth over a long time, with concentric traces of town walls and citadels marking older city boundaries. In more recent history, such forms were supplemented by ring roads moving traffic around the outskirts of a town. Dutch cities such as Amsterdam and Haarlem are structured as a central square surrounded by concentric canals marking every expansion. In cities such as Moscow, this pattern is still clearly visible. A system of rectilinear city streets and land plots, known as the grid plan, has been used for millennia in Asia, Europe, and the Americas. The Indus Valley civilization built Mohenjo-Daro, Harappa, and other cities on a grid pattern, using ancient principles described by Kautilya, and aligned with the compass points. The ancient Greek city of Priene exemplifies a grid plan with specialized districts used across the Hellenistic Mediterranean. Urban areas The urban-type settlement extends far beyond the traditional boundaries of the city proper in a form of development sometimes described critically as urban sprawl. Decentralization and dispersal of city functions (commercial, industrial, residential, cultural, political) has transformed the very meaning of the term and has challenged geographers seeking to classify territories according to an urban-rural binary. Metropolitan areas include suburbs and exurbs organized around the needs of commuters, and sometimes edge cities characterized by a degree of economic and political independence. (In the US these are grouped into metropolitan statistical areas for purposes of demography and marketing.) Some cities are now part of a continuous urban landscape called urban agglomeration, conurbation, or megalopolis (exemplified by the BosWash corridor of the Northeastern United States.) 
History The emergence of cities from proto-urban settlements, such as Çatalhöyük, is a non-linear development that demonstrates the varied experiences of early urbanization. The cities of Jericho, Aleppo, Faiyum, Yerevan, Athens, Matera, Damascus, and Argos are among those laying claim to the longest continual inhabitation. Cities, characterized by population density, symbolic function, and urban planning, have existed for thousands of years. In the conventional view, civilization and the city both followed from the development of agriculture, which enabled the production of surplus food and thus a social division of labor (with concomitant social stratification) and trade. Early cities often featured granaries, sometimes within a temple. A minority viewpoint considers that cities may have arisen without agriculture, due to alternative means of subsistence (fishing), to use as communal seasonal shelters, to their value as bases for defensive and offensive military organization, or to their inherent economic function. Cities played a crucial role in the establishment of political power over an area, and ancient leaders such as Alexander the Great founded and created them with zeal. Ancient times Jericho and Çatalhöyük, dated to the eighth millennium BC, are among the earliest proto-cities known to archaeologists. However, the Mesopotamian city of Uruk from the mid-fourth millennium BC (ancient Iraq) is considered by most archaeologists to be the first true city, innovating many characteristics for cities to follow, with its name attributed to the Uruk period. In the fourth and third millennia BC, complex civilizations flourished in the river valleys of Mesopotamia, India, China, and Egypt. Excavations in these areas have found the ruins of cities geared variously towards trade, politics, or religion. Some had large, dense populations, but others carried out urban activities in the realms of politics or religion without having large associated populations. Among the early Old World cities, Mohenjo-Daro of the Indus Valley civilization in present-day Pakistan, existing from about 2600 BC, was one of the largest, with a population of 50,000 or more and a sophisticated sanitation system. China's planned cities were constructed according to sacred principles to act as celestial microcosms. The Ancient Egyptian cities known physically by archaeologists are not extensive. They include (known by their Arab names) El Lahun, a workers' town associated with the pyramid of Senusret II, and the religious city Amarna built by Akhenaten and abandoned. These sites appear planned in a highly regimented and stratified fashion, with a minimalistic grid of rooms for the workers and increasingly more elaborate housing available for higher classes. In Mesopotamia, the civilization of Sumer, followed by Assyria and Babylon, gave rise to numerous cities, governed by kings, and fostered multiple languages written in cuneiform. The Phoenician trading empire, flourishing around the turn of the first millennium BC, encompassed numerous cities extending from Tyre, Sidon, and Byblos to Carthage and Cádiz. In the following centuries, independent city-states of Greece, especially Athens, developed the polis, an association of male landowning citizens who collectively constituted the city. The agora, meaning "gathering place" or "assembly", was the center of the athletic, artistic, spiritual, and political life of the polis. Rome was the first city that surpassed one million inhabitants. 
Under the authority of its empire, Rome transformed and founded many cities (), and with them brought its principles of urban architecture, design, and society. In the ancient Americas, early urban traditions developed in the Andes and Mesoamerica. In the Andes, the first urban centers developed in the Norte Chico civilization, Chavin and Moche cultures, followed by major cities in the Huari, Chimu, and Inca cultures. The Norte Chico civilization included as many as 30 major population centers in what is now the Norte Chico region of north-central coastal Peru. It is the oldest known civilization in the Americas, flourishing between the 30th and 18th centuries BC. Mesoamerica saw the rise of early urbanism in several cultural regions, beginning with the Olmec and spreading to the Preclassic Maya, the Zapotec of Oaxaca, and Teotihuacan in central Mexico. Later cultures such as the Aztec, Andean civilizations, Mayan, Mississippians, and Pueblo peoples drew on these earlier urban traditions. Many of their ancient cities continue to be inhabited, including major metropolitan cities such as Mexico City, in the same location as Tenochtitlan; while ancient continuously inhabited Pueblos are near modern urban areas in New Mexico, such as Acoma Pueblo near the Albuquerque metropolitan area and Taos Pueblo near Taos; while others like Lima are located nearby ancient Peruvian sites such as Pachacamac. Jenné-Jeno, located in present-day Mali and dating to the third century BC, lacked monumental architecture and a distinctive elite social class—but nevertheless had specialized production and relations with a hinterland. Pre-Arabic trade contacts probably existed between Jenné-Jeno and North Africa. Other early urban centers in sub-Saharan Africa, dated to around 500 AD, include Awdaghust, Kumbi-Saleh the ancient capital of Ghana, and Maranda a center located on a trade route between Egypt and Gao. Middle Ages In the remnants of the Roman Empire, cities of late antiquity gained independence but soon lost population and importance. The locus of power in the West shifted to Constantinople and to the ascendant Islamic civilization with its major cities Baghdad, Cairo, and Córdoba. From the 9th through the end of the 12th century, Constantinople, the capital of the Eastern Roman Empire, was the largest and wealthiest city in Europe, with a population approaching 1 million. The Ottoman Empire gradually gained control over many cities in the Mediterranean area, including Constantinople in 1453. In the Holy Roman Empire, beginning in the 12th century, free imperial cities such as Nuremberg, Strasbourg, Frankfurt, Basel, Zurich, and Nijmegen became a privileged elite among towns having won self-governance from their local lord or having been granted self-governance by the emperor and being placed under his immediate protection. By 1480, these cities, as far as still part of the empire, became part of the Imperial Estates governing the empire with the emperor through the Imperial Diet. By the 13th and 14th centuries, some cities become powerful states, taking surrounding areas under their control or establishing extensive maritime empires. In Italy, medieval communes developed into city-states including the Republic of Venice and the Republic of Genoa. In Northern Europe, cities including Lübeck and Bruges formed the Hanseatic League for collective defense and commerce. Their power was later challenged and eclipsed by the Dutch commercial cities of Ghent, Ypres, and Amsterdam. 
Similar phenomena existed elsewhere, as in the case of Sakai, which enjoyed considerable autonomy in late medieval Japan. In the first millennium AD, the Khmer capital of Angkor in Cambodia grew into the most extensive preindustrial settlement in the world by area, covering over 1,000 km2 and possibly supporting up to one million people. Early modern In the West, nation-states became the dominant unit of political organization following the Peace of Westphalia in the seventeenth century. Western Europe's larger capitals (London and Paris) benefited from the growth of commerce following the emergence of an Atlantic trade. However, most towns remained small. During the Spanish colonization of the Americas, the old Roman city concept was extensively used. Cities were founded in the middle of the newly conquered territories and were bound to several laws regarding administration, finances, and urbanism. Industrial age The growth of the modern industry from the late 18th century onward led to massive urbanization and the rise of new great cities, first in Europe and then in other regions, as new opportunities brought huge numbers of migrants from rural communities into urban areas. England led the way as London became the capital of a world empire and cities across the country grew in locations strategic for manufacturing. In the United States from 1860 to 1910, the introduction of railroads reduced transportation costs, and large manufacturing centers began to emerge, fueling migration from rural to city areas. Some industrialized cities were confronted with health challenges associated with overcrowding, occupational hazards of industry, contaminated water and air, poor sanitation, and communicable diseases such as typhoid and cholera. Factories and slums emerged as regular features of the urban landscape. Post-industrial age In the second half of the 20th century, deindustrialization (or "economic restructuring") in the West led to poverty, homelessness, and urban decay in formerly prosperous cities. America's "Steel Belt" became a "Rust Belt" and cities such as Detroit, Michigan, and Gary, Indiana began to shrink, contrary to the global trend of massive urban expansion. Such cities have shifted with varying success into the service economy and public-private partnerships, with concomitant gentrification, uneven revitalization efforts, and selective cultural development. Under the Great Leap Forward and subsequent five-year plans continuing today, China has undergone concomitant urbanization and industrialization and become the world's leading manufacturer. Amidst these economic changes, high technology and instantaneous telecommunication enable select cities to become centers of the knowledge economy. A new smart city paradigm, supported by institutions such as the RAND Corporation and IBM, is bringing computerized surveillance, data analysis, and governance to bear on cities and city dwellers. Some companies are building brand-new master-planned cities from scratch on greenfield sites. Urbanization Urbanization is the process of migration from rural to urban areas, driven by various political, economic, and cultural factors. Until the 18th century, an equilibrium existed between the rural agricultural population and towns featuring markets and small-scale manufacturing. With the agricultural and industrial revolutions urban population began its unprecedented growth, both through migration and demographic expansion. 
In England, the proportion of the population living in cities jumped from 17% in 1801 to 72% in 1891. In 1900, 15% of the world's population lived in cities. The cultural appeal of cities also plays a role in attracting residents. Urbanization rapidly spread across Europe and the Americas and since the 1950s has taken hold in Asia and Africa as well. The Population Division of the United Nations Department of Economic and Social Affairs reported in 2014 that for the first time, more than half of the world population lives in cities. Latin America is the most urban continent, with four-fifths of its population living in cities, including one-fifth of the population said to live in shantytowns (favelas, poblaciones callampas, etc.). Batam, Indonesia, Mogadishu, Somalia, Xiamen, China, and Niamey, Niger, are considered among the world's fastest-growing cities, with annual growth rates of 5–8%. In general, the more developed countries of the "Global North" remain more urbanized than the less developed countries of the "Global South"—but the difference continues to shrink because urbanization is happening faster in the latter group. Asia is home to by far the greatest absolute number of city-dwellers: over two billion and counting. The UN predicts an additional 2.5 billion city dwellers (and 300 million fewer country dwellers) worldwide by 2050, with 90% of urban population expansion occurring in Asia and Africa. Megacities, cities with populations in the multi-millions, have proliferated into the dozens, arising especially in Asia, Africa, and Latin America. Economic globalization fuels the growth of these cities, as new torrents of foreign capital arrange for rapid industrialization, as well as the relocation of major businesses from Europe and North America, attracting immigrants from near and far. A deep gulf divides the rich and poor in these cities, which usually contain a super-wealthy elite living in gated communities and large masses of people living in substandard housing with inadequate infrastructure and otherwise poor conditions. Cities around the world have expanded physically as they grow in population, with increases in their surface extent, with the creation of high-rise buildings for residential and commercial use, and with development underground. Urbanization can create rapid demand for water resources management, as formerly good sources of freshwater become overused and polluted, and the volume of sewage begins to exceed manageable levels. Government The local government of cities takes different forms including prominently the municipality (especially in England, in the United States, India, and other British colonies; legally, the municipal corporation; municipio in Spain and Portugal, and, along with municipalidad, in most former parts of the Spanish and Portuguese empires) and the commune (in France and Chile; or comune in Italy). The chief official of the city has the title of mayor. Whatever their true degree of political authority, mayors typically act as the figurehead or personification of their city. Legal conflicts and issues arise more frequently in cities than elsewhere due to the bare fact of their greater density. Modern city governments thoroughly regulate everyday life in many dimensions, including public and personal health, transport, burial, resource use and extraction, recreation, and the nature and use of buildings. Technologies, techniques, and laws governing these areas—developed in cities—have become ubiquitous in many areas. 
Municipal officials may be appointed from a higher level of government or elected locally. Municipal services Cities typically provide municipal services such as education, through school systems; policing, through police departments; and firefighting, through fire departments; as well as the city's basic infrastructure. These are provided more or less routinely, in a more or less equal fashion. Responsibility for administration usually falls on the city government, but some services may be operated by a higher level of government, while others may be privately run. Armies may assume responsibility for policing cities in states of domestic turmoil such as America's King assassination riots of 1968. Finance The traditional basis for municipal finance is local property tax levied on real estate within the city. Local government can also collect revenue for services, or by leasing land that it owns. However, financing municipal services, as well as urban renewal and other development projects, is a perennial problem, which cities address through appeals to higher governments, arrangements with the private sector, and techniques such as privatization (selling services into the private sector), corporatization (formation of quasi-private municipally-owned corporations), and financialization (packaging city assets into tradeable financial public contracts and other related rights). This situation has become acute in deindustrialized cities and in cases where businesses and wealthier citizens have moved outside of city limits and therefore beyond the reach of taxation. Cities in search of ready cash increasingly resort to the municipal bond, essentially a loan with interest and a repayment date. City governments have also begun to use tax increment financing, in which a development project is financed by loans based on future tax revenues which it is expected to yield. Under these circumstances, creditors and consequently city governments place a high importance on city credit ratings. Governance Governance includes government but refers to a wider domain of social control functions implemented by many actors including non-governmental organizations. The impact of globalization and the role of multinational corporations in local governments worldwide, has led to a shift in perspective on urban governance, away from the "urban regime theory" in which a coalition of local interests functionally govern, toward a theory of outside economic control, widely associated in academics with the philosophy of neoliberalism. In the neoliberal model of governance, public utilities are privatized, the industry is deregulated, and corporations gain the status of governing actors—as indicated by the power they wield in public-private partnerships and over business improvement districts, and in the expectation of self-regulation through corporate social responsibility. The biggest investors and real estate developers act as the city's de facto urban planners. The related concept of good governance places more emphasis on the state, with the purpose of assessing urban governments for their suitability for development assistance. The concepts of governance and good governance are especially invoked in emergent megacities, where international organizations consider existing governments inadequate for their large populations. 
Urban planning Urban planning, the application of forethought to city design, involves optimizing land use, transportation, utilities, and other basic systems, in order to achieve certain objectives. Urban planners and scholars have proposed overlapping theories as ideals for how plans should be formed. Planning tools, beyond the original design of the city itself, include public capital investment in infrastructure and land-use controls such as zoning. The continuous process of comprehensive planning involves identifying general objectives as well as collecting data to evaluate progress and inform future decisions. Government is legally the final authority on planning but in practice, the process involves both public and private elements. The legal principle of eminent domain is used by the government to divest citizens of their property in cases where its use is required for a project. Planning often involves tradeoffs—decisions in which some stand to gain and some to lose—and thus is closely connected to the prevailing political situation. The history of urban planning dates to some of the earliest known cities, especially in the Indus Valley and Mesoamerican civilizations, which built their cities on grids and apparently zoned different areas for different purposes. The effects of planning, ubiquitous in today's world, can be seen most clearly in the layout of planned communities, fully designed prior to construction, often with consideration for interlocking physical, economic, and cultural systems. Society Social structure Urban society is typically stratified. Spatially, cities are formally or informally segregated along ethnic, economic, and racial lines. People living relatively close together may live, work, and play in separate areas, and associate with different people, forming ethnic or lifestyle enclaves or, in areas of concentrated poverty, ghettoes. While in the US and elsewhere poverty became associated with the inner city, in France it has become associated with the banlieues, areas of urban development that surround the city proper. Meanwhile, across Europe and North America, the racially white majority is empirically the most segregated group. Suburbs in the West, and, increasingly, gated communities and other forms of "privatopia" around the world, allow local elites to self-segregate into secure and exclusive neighborhoods. Landless urban workers, contrasted with peasants and known as the proletariat, form a growing stratum of society in the age of urbanization. In Marxist doctrine, the proletariat will inevitably revolt against the bourgeoisie as their ranks swell with disenfranchised and disaffected people lacking all stake in the status quo. The global urban proletariat of today, however, generally lacks the status of factory workers which in the nineteenth century provided access to the means of production. Economics Historically, cities rely on rural areas for intensive farming to yield surplus crops, in exchange for which they provide money, political administration, manufactured goods, and culture. Urban economics tends to analyze larger agglomerations, stretching beyond city limits, in order to reach a more complete understanding of the local labor market. As hubs of trade, cities have long been home to retail commerce and consumption through the interface of shopping. 
In the 20th century, department stores, using new techniques of advertising, public relations, decoration, and design, transformed urban shopping areas into fantasy worlds encouraging self-expression and escape through consumerism. In general, the density of cities expedites commerce and facilitates knowledge spillovers, helping people and firms exchange information and generate new ideas. A thicker labor market allows for better skill matching between firms and individuals. Population density also enables the sharing of common infrastructure and production facilities; however, in very dense cities, increased crowding and waiting times may lead to some negative effects. Although manufacturing fueled the growth of cities, many now rely on a tertiary or service economy. The services in question range from tourism, hospitality, entertainment, and housekeeping to grey-collar work in law, financial consulting, and administration. According to a scientific model of cities by Professor Geoffrey West, with the doubling of a city's size, salaries per capita will generally increase by 15% (an illustrative calculation of this scaling relationship follows this section). Culture and communications Cities are typically hubs for education and the arts, supporting universities, museums, temples, and other cultural institutions. They feature impressive displays of architecture ranging from small to enormous and ornate to brutal; skyscrapers, providing thousands of offices or homes within a small footprint, and visible from miles away, have become iconic urban features. Cultural elites tend to live in cities, bound together by shared cultural capital, and themselves play some role in governance. By virtue of their status as centers of culture and literacy, cities can be described as the locus of civilization, human history, and social change. Density makes for effective mass communication and transmission of news, through heralds, printed proclamations, newspapers, and digital media. These communication networks, though still using cities as hubs, penetrate extensively into all populated areas. In the age of rapid communication and transportation, commentators have described urban culture as nearly ubiquitous or as no longer meaningful. Today, a city's promotion of its cultural activities dovetails with place branding and city marketing, public diplomacy techniques used to inform development strategy; attract businesses, investors, residents, and tourists; and to create shared identity and sense of place within the metropolitan area. Physical inscriptions, plaques, and monuments on display physically transmit a historical context for urban places. Some cities, such as Jerusalem, Mecca, and Rome, have indelible religious status and for hundreds of years have attracted pilgrims. Patriotic tourists visit Agra to see the Taj Mahal, or New York City to visit the World Trade Center. Elvis lovers visit Memphis to pay their respects at Graceland. Place brands (which include place satisfaction and place loyalty) have great economic value (comparable to the value of commodity brands) because of their influence on the decision-making process of people thinking about doing business in—"purchasing" (the brand of)—a city. Bread and circuses, among other forms of cultural appeal, attract and entertain the masses. Sports also play a major role in city branding and local identity formation. Cities go to considerable lengths in competing to host the Olympic Games, which bring global attention and tourism. Paris, a city known for its cultural history, hosted the Summer Olympics in 2024.
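The superlinear scaling claim quoted in the Economics passage above (per-capita salaries rising roughly 15% with each doubling of city size) can be made concrete with a short calculation. The sketch below assumes the power-law form Y = Y0·N^β commonly used in the urban-scaling literature associated with West; the specific exponent values are illustrative assumptions, not figures taken from this article.

```python
# Illustrative sketch of superlinear urban scaling (assumed form: Y = Y0 * N**beta).
# If total output Y grows as N**beta, per-capita output grows as N**(beta - 1).
import math

def per_capita_gain_per_doubling(beta: float) -> float:
    """Fractional per-capita increase when population doubles, for Y ~ N**beta."""
    return 2 ** (beta - 1) - 1

def beta_from_gain(gain: float) -> float:
    """Scaling exponent implied by a given per-capita gain per doubling of population."""
    return 1 + math.log2(1 + gain)

if __name__ == "__main__":
    # Exponent implied by the quoted "15% per doubling" shorthand: about 1.20.
    print(f"beta for a 15% gain per doubling: {beta_from_gain(0.15):.3f}")
    # Conversely, an exponent of 1.15 yields roughly an 11% per-capita gain per doubling.
    print(f"gain at beta = 1.15: {per_capita_gain_per_doubling(1.15):.1%}")
```

As the sketch suggests, the popular "15% per doubling" shorthand corresponds to an exponent of roughly 1.2, in the general range reported for socioeconomic urban indicators, though the shorthand and any single exponent value are only approximately consistent with each other.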
Warfare Cities play a crucial strategic role in warfare due to their economic, demographic, symbolic, and political centrality. For the same reasons, they are targets in asymmetric warfare. Many cities throughout history were founded under military auspices, a great many have incorporated fortifications, and military principles continue to influence urban design. Indeed, war may have served as the social rationale and economic basis for the very earliest cities. Powers engaged in geopolitical conflict have established fortified settlements as part of military strategies, as in the case of garrison towns, America's Strategic Hamlet Program during the Vietnam War, and Israeli settlements in Palestine. While occupying the Philippines, the US Army ordered local people to concentrate in cities and towns, in order to isolate committed insurgents and battle freely against them in the countryside. During World War II, national governments on occasion declared certain cities open, effectively surrendering them to an advancing enemy in order to avoid damage and bloodshed. Urban warfare proved decisive, however, in the Battle of Stalingrad, where Soviet forces repulsed German occupiers, with extreme casualties and destruction. In an era of low-intensity conflict and rapid urbanization, cities have become sites of long-term conflict waged both by foreign occupiers and by local governments against insurgency. Such warfare, known as counterinsurgency, involves techniques of surveillance and psychological warfare as well as close combat, and functionally extends modern urban crime prevention, which already uses concepts such as defensible space. Although capture is the more common objective, warfare has in some cases spelled complete destruction for a city. Mesopotamian tablets and ruins attest to such destruction, as does the Latin motto Carthago delenda est. Since the atomic bombings of Hiroshima and Nagasaki and throughout the Cold War, nuclear strategists continued to contemplate the use of "counter-value" targeting: crippling an enemy by annihilating its valuable cities, rather than aiming primarily at its military forces. Climate change Infrastructure Urban infrastructure involves various physical networks and spaces necessary for transportation, water use, energy, recreation, and public functions. Infrastructure carries a high initial cost in fixed capital but lower marginal costs and thus positive economies of scale. Because of the higher barriers to entry, these networks have been classified as natural monopolies, meaning that economic logic favors control of each network by a single organization, public or private. Infrastructure in general plays a vital role in a city's capacity for economic activity and expansion, underpinning the very survival of the city's inhabitants, as well as technological, commercial, industrial, and social activities. Structurally, many infrastructure systems take the form of networks with redundant links and multiple pathways, so that the system as a whole continue to operate even if parts of it fail. The particulars of a city's infrastructure systems have historical path dependence because new development must build from what exists already. Megaprojects such as the construction of airports, power plants, and railways require large upfront investments and thus tend to require funding from the national government or the private sector. Privatization may also extend to all levels of infrastructure construction and maintenance. 
Urban infrastructure ideally serves all residents equally but in practice may prove uneven—with, in some cities, clear first-class and second-class alternatives. Utilities Public utilities (literally, useful things with general availability) include basic and essential infrastructure networks, chiefly concerned with the supply of water, electricity, and telecommunications capability to the populace. Sanitation, necessary for good health in crowded conditions, requires water supply and waste management as well as individual hygiene. Urban water systems include principally a water supply network and a network (sewerage system) for sewage and stormwater. Historically, either local governments or private companies have administered urban water supply, with a tendency toward government water supply in the 20th century and a tendency toward private operation at the turn of the twenty-first. The market for private water services is dominated by two French companies, Veolia Water (formerly Vivendi) and Engie (formerly Suez), said to hold 70% of all water contracts worldwide. Modern urban life relies heavily on the energy transmitted through electricity for the operation of electric machines (from household appliances to industrial machines to now-ubiquitous electronic systems used in communications, business, and government) and for traffic lights, street lights, and indoor lighting. Cities rely to a lesser extent on hydrocarbon fuels such as gasoline and natural gas for transportation, heating, and cooking. Telecommunications infrastructure such as telephone lines and coaxial cables also traverse cities, forming dense networks for mass and point-to-point communications. Transportation Because cities rely on specialization and an economic system based on wage labor, their inhabitants must have the ability to regularly travel between home, work, commerce, and entertainment. City dwellers travel by foot or by wheel on roads and walkways, or use special rapid transit systems based on underground, overground, and elevated rail. Cities also rely on long-distance transportation (truck, rail, and airplane) for economic connections with other cities and rural areas. City streets historically were the domain of horses and their riders and pedestrians, who only sometimes had sidewalks and special walking areas reserved for them. In the West, bicycles or (velocipedes), efficient human-powered machines for short- and medium-distance travel, enjoyed a period of popularity at the beginning of the twentieth century before the rise of automobiles. Soon after, they gained a more lasting foothold in Asian and African cities under European influence. In Western cities, industrializing, expanding, and electrifying public transit systems, and especially streetcars enabled urban expansion as new residential neighborhoods sprung up along transit lines and workers rode to and from work downtown. Since the mid-20th century, cities have relied heavily on motor vehicle transportation, with major implications for their layout, environment, and aesthetics. (This transformation occurred most dramatically in the US—where corporate and governmental policies favored automobile transport systems—and to a lesser extent in Europe.) The rise of personal cars accompanied the expansion of urban economic areas into much larger metropolises, subsequently creating ubiquitous traffic issues with the accompanying construction of new highways, wider streets, and alternative walkways for pedestrians. 
However, severe traffic jams still occur regularly in cities around the world, as private car ownership and urbanization continue to increase, overwhelming existing urban street networks. The urban bus system, the world's most common form of public transport, uses a network of scheduled routes to move people through the city, alongside cars, on the roads. The economic function itself also became more decentralized as concentration became impractical and employers relocated to more car-friendly locations (including edge cities). Some cities have introduced bus rapid transit systems which include exclusive bus lanes and other methods for prioritizing bus traffic over private cars. Many big American cities still operate conventional public transit by rail, as exemplified by the ever-popular New York City Subway system. Rapid transit is widely used in Europe and has increased in Latin America and Asia. Walking and cycling ("non-motorized transport") enjoy increasing favor (more pedestrian zones and bike lanes) in American and Asian urban transportation planning, under the influence of such trends as the Healthy Cities movement, the drive for sustainable development, and the idea of a carfree city. Techniques such as road space rationing and road use charges have been introduced to limit urban car traffic. Housing The housing of residents presents one of the major challenges every city must face. Adequate housing entails not only physical shelters but also the physical systems necessary to sustain life and economic activity. Homeownership represents status and a modicum of economic security, compared to renting, which may consume much of the income of low-wage urban workers. Homelessness, or lack of housing, is a challenge currently faced by millions of people in countries rich and poor. Because cities generally have higher population densities than rural areas, city dwellers are more likely to reside in apartments and less likely to live in a single-family home. Ecology Urban ecosystems, influenced as they are by the density of human buildings and activities, differ considerably from those of their rural surroundings. Anthropogenic buildings and waste, as well as cultivation in gardens, create physical and chemical environments which have no equivalents in the wilderness, in some cases enabling exceptional biodiversity. They provide homes not only for immigrant humans but also for immigrant plants, bringing about interactions between species that never previously encountered each other. They introduce frequent disturbances (construction, walking) to plant and animal habitats, creating opportunities for recolonization and thus favoring young ecosystems with r-selected species dominant. On the whole, urban ecosystems are less complex and productive than others, due to the diminished absolute amount of biological interactions. Typical urban fauna includes insects (especially ants), rodents (mice, rats), and birds, as well as cats and dogs (domesticated and feral). Large predators are scarce. However, in North America, larger animals such as coyotes and white-tailed deer do roam in some urban areas. Cities generate considerable ecological footprints, locally and at longer distances, due to concentrated populations and technological activities. From one perspective, cities are not ecologically sustainable due to their resource needs. From another, proper management may be able to ameliorate a city's ill effects.
Air pollution arises from various forms of combustion, including fireplaces, wood or coal-burning stoves, other heating systems, and internal combustion engines. Industrialized cities, and today third-world megacities, are notorious for veils of smog (industrial haze) that envelop them, posing a chronic threat to the health of their millions of inhabitants. Urban soil contains higher concentrations of heavy metals (especially lead, copper, and nickel) and has a lower pH than comparable soil in the wilderness. Modern cities are known for creating their own microclimates, due to concrete, asphalt, and other artificial surfaces, which heat up in sunlight and channel rainwater into underground ducts. The temperature in New York City exceeds nearby rural temperatures by an average of 2–3 °C, and at times differences of 5–10 °C have been recorded. This effect varies nonlinearly with population changes (independently of the city's physical size). Aerial particulates increase rainfall by 5–10%. Thus, urban areas experience unique climates, with earlier flowering and later leaf dropping than in the nearby countryside. Poor and working-class people face disproportionate exposure to environmental risks (known as environmental racism when it intersects with racial segregation). For example, within the urban microclimate, less-vegetated poor neighborhoods bear more of the heat (but have fewer means of coping with it). One of the main methods of improving urban ecology is to include more green spaces in cities: parks, gardens, lawns, and trees. These areas improve the health and well-being of the human, animal, and plant populations of the cities. Well-maintained urban trees can provide many social, ecological, and physical benefits to the residents of the city. A study published in Nature's Scientific Reports journal in 2019 found that people who spent at least two hours per week in nature were 23 percent more likely to be satisfied with their life and were 59 percent more likely to be in good health than those who had zero exposure. The study used data from almost 20,000 people in the UK. Benefits increased for up to 300 minutes of exposure. The benefits applied to men and women of all ages, across different ethnicities and socioeconomic statuses, and even to those with long-term illnesses and disabilities. People who did not get at least two hours – even if they surpassed an hour per week – did not get the benefits. The study is the latest addition to a compelling body of evidence for the health benefits of nature. Many doctors already give nature prescriptions to their patients. The study didn't count time spent in a person's own yard or garden as time in nature, but the majority of nature visits in the study took place within two miles of home. "Even visiting local urban green spaces seems to be a good thing," Dr. White said in a press release. "Two hours a week is hopefully a realistic target for many people, especially given that it can be spread over an entire week to get the benefit." World city system As the world becomes more closely linked through economics, politics, technology, and culture (a process called globalization), cities have come to play a leading role in transnational affairs, exceeding the limitations of international relations conducted by national governments. This phenomenon, resurgent today, can be traced back to the Silk Road, Phoenicia, and the Greek city-states, through the Hanseatic League and other alliances of cities.
Today the information economy based on high-speed internet infrastructure enables instantaneous telecommunication around the world, effectively eliminating the distance between cities for the purposes of the international markets and other high-level elements of the world economy, as well as personal communications and mass media. Global city A global city, also known as a world city, is a prominent centre of trade, banking, finance, innovation, and markets. Saskia Sassen used the term "global city" in her 1991 work, The Global City: New York, London, Tokyo to refer to a city's power, status, and cosmopolitanism, rather than to its size. Following this view of cities, it is possible to rank the world's cities hierarchically. Global cities form the capstone of the global hierarchy, exerting command and control through their economic and political influence. Global cities may have reached their status due to early transition to post-industrialism or through inertia which has enabled them to maintain their dominance from the industrial era. This type of ranking exemplifies an emerging discourse in which cities, considered variations on the same ideal type, must compete with each other globally to achieve prosperity. Critics of the notion point to the different realms of power and interchange. The term "global city" is heavily influenced by economic factors and, thus, may not account for places that are otherwise significant. Paul James, for example argues that the term is "reductive and skewed" in its focus on financial systems. Multinational corporations and banks make their headquarters in global cities and conduct much of their business within this context. American firms dominate the international markets for law and engineering and maintain branches in the biggest foreign global cities. Large cities have a great divide between populations of both ends of the financial spectrum. Regulations on immigration promote the exploitation of low- and high-skilled immigrant workers from poor areas. During employment, migrant workers may be subject to unfair working conditions, including working overtime, low wages, and lack of safety in workplaces. Transnational activity Cities increasingly participate in world political activities independently of their enclosing nation-states. Early examples of this phenomenon are the sister city relationship and the promotion of multi-level governance within the European Union as a technique for European integration. Cities including Hamburg, Prague, Amsterdam, The Hague, and City of London maintain their own embassies to the European Union at Brussels. New urban dwellers are increasingly transmigrants, keeping one foot each (through telecommunications if not travel) in their old and their new homes. Global governance Cities participate in global governance by various means including membership in global networks which transmit norms and regulations. At the general, global level, United Cities and Local Governments (UCLG) is a significant umbrella organization for cities; regionally and nationally, Eurocities, Asian Network of Major Cities 21, the Federation of Canadian Municipalities the National League of Cities, and the United States Conference of Mayors play similar roles. UCLG took responsibility for creating Agenda 21 for culture, a program for cultural policies promoting sustainable development, and has organized various conferences and reports for its furtherance. 
Networks have become especially prevalent in the arena of environmentalism and specifically climate change following the adoption of Agenda 21. Environmental city networks include the C40 Cities Climate Leadership Group, the United Nations Global Compact Cities Programme, the Carbon Neutral Cities Alliance (CNCA), the Covenant of Mayors and the Compact of Mayors, ICLEI – Local Governments for Sustainability, and the Transition Towns network. Cities with world political status serve as meeting places for advocacy groups, non-governmental organizations, lobbyists, educational institutions, intelligence agencies, military contractors, information technology firms, and other groups with a stake in world policymaking. They are consequently also sites for symbolic protest. South Africa has one of the highest rates of protest in the world. Pretoria, a city in South Africa, had a rally in which 5,000 people took part to advocate for wage increases sufficient to afford living costs. United Nations System The United Nations System has been involved in a series of events and declarations dealing with the development of cities during this period of rapid urbanization. The Habitat I conference in 1976 adopted the "Vancouver Declaration on Human Settlements" which identifies urban management as a fundamental aspect of development and establishes various principles for maintaining urban habitats. Citing the Vancouver Declaration, the UN General Assembly in December 1977 authorized the United Nations Commission on Human Settlements and the HABITAT Centre for Human Settlements, intended to coordinate UN activities related to housing and settlements. The 1992 Earth Summit in Rio de Janeiro resulted in a set of international agreements including Agenda 21 which establishes principles and plans for sustainable development. The Habitat II conference in 1996 called for cities to play a leading role in this program, which subsequently advanced the Millennium Development Goals and Sustainable Development Goals. In January 2002 the UN Commission on Human Settlements became an umbrella agency called the United Nations Human Settlements Programme or UN-Habitat, a member of the United Nations Development Group. The Habitat III conference of 2016 focused on implementing these goals under the banner of a "New Urban Agenda". The four mechanisms envisioned for effecting the New Urban Agenda are (1) national policies promoting integrated sustainable development, (2) stronger urban governance, (3) long-term integrated urban and territorial planning, and (4) effective financing frameworks. Just before this conference, the European Union concurrently approved an "Urban Agenda for the European Union" known as the Pact of Amsterdam. UN-Habitat coordinates the U.N. urban agenda, working with the UN Environment Programme, the UN Development Programme, the Office of the High Commissioner for Human Rights, the World Health Organization, and the World Bank. The World Bank, a U.N. specialized agency, has been a primary force in promoting the Habitat conferences, and since the first Habitat conference has used their declarations as a framework for issuing loans for urban infrastructure. The bank's structural adjustment programs contributed to urbanization in the Third World by creating incentives to move to cities. The World Bank and UN-Habitat in 1999 jointly established the Cities Alliance (based at the World Bank headquarters in Washington, D.C.)
to guide policymaking, knowledge sharing, and grant distribution around the issue of urban poverty. (UN-Habitat plays an advisory role in evaluating the quality of a locality's governance.) The Bank's policies have tended to focus on bolstering real estate markets through credit and technical assistance. The United Nations Educational, Scientific and Cultural Organization, UNESCO has increasingly focused on cities as key sites for influencing cultural governance. It has developed various city networks including the International Coalition of Cities against Racism and the Creative Cities Network. UNESCO's capacity to select World Heritage Sites gives the organization significant influence over cultural capital, tourism, and historic preservation funding. Representation in culture Cities figure prominently in traditional Western culture, appearing in the Bible in both evil and holy forms, symbolized by Babylon and Jerusalem. Cain and Nimrod are the first city builders in the Book of Genesis. In Sumerian mythology Gilgamesh built the walls of Uruk. Cities can be perceived in terms of extremes or opposites: at once liberating and oppressive, wealthy and poor, organized and chaotic. The name anti-urbanism refers to various types of ideological opposition to cities, whether because of their culture or their political relationship with the country. Such opposition may result from identification of cities with oppression and the ruling elite. This and other political ideologies strongly influence narratives and themes in discourse about cities. In turn, cities symbolize their home societies. Writers, painters, and filmmakers have produced innumerable works of art concerning the urban experience. Classical and medieval literature includes a genre of descriptiones which treat of city features and history. Modern authors such as Charles Dickens and James Joyce are famous for evocative descriptions of their home cities. Fritz Lang conceived the idea for his influential 1927 film Metropolis while visiting Times Square and marveling at the nighttime neon lighting. Other early cinematic representations of cities in the twentieth century generally depicted them as technologically efficient spaces with smoothly functioning systems of automobile transport. By the 1960s, however, traffic congestion began to appear in such films as The Fast Lady (1962) and Playtime (1967). Literature, film, and other forms of popular culture have supplied visions of future cities both utopian and dystopian. The prospect of expanding, communicating, and increasingly interdependent world cities has given rise to images such as Nylonkong (New York, London, Hong Kong) and visions of a single world-encompassing ecumenopolis. See also Lists of cities List of adjectivals and demonyms for cities Lost city Metropolis Compact city Megacity Settlement hierarchy Urbanization Notes References Bibliography Abrahamson, Mark (2004). Global Cities. Oxford University Press. Ashworth, G.J. War and the City. London & New York: Routledge, 1991. . Bridge, Gary, and Sophie Watson, eds. (2000). A Companion to the City. Malden, MA: Blackwell, 2000/2003. Brighenti, Andrea Mubi, ed. (2013). Urban Interstices: The Aesthetics and the Politics of the In-between. Farnham: Ashgate Publishing. . Carter, Harold (1995). The Study of Urban Geography. 4th ed. London: Arnold. Clark, Peter (ed.) (2013). The Oxford Handbook of Cities in World History. Oxford University Press. Curtis, Simon (2016). Global Cities and Global Order. Oxford University Press. 
Ellul, Jacques (1970). The Meaning of the City. Translated by Dennis Pardee. Grand Rapids, Michigan: Eerdmans, 1970. ; French original (written earlier, published later as): Sans feu ni lieu : Signification biblique de la Grande Ville; Paris: Gallimard, 1975. Republished 2003 with Gupta, Joyetta, Karin Pfeffer, Hebe Verrest, & Mirjam Ros-Tonen, eds. (2015). Geographies of Urban Governance: Advanced Theories, Methods and Practices. Springer, 2015. . Hahn, Harlan, & Charles Levine (1980). Urban Politics: Past, Present, & Future. New York & London: Longman. Hanson, Royce (ed.). Perspectives on Urban Infrastructure. Committee on National Urban Policy, Commission on Behavioral and Social Sciences and Education, National Research Council. Washington: National Academy Press, 1984. Herrschel, Tassilo & Peter Newman (2017). Cities as International Actors: Urban and Regional Governance Beyond the Nation State. Palgrave Macmillan (Springer Nature). Grava, Sigurd (2003). Urban Transportation Systems: Choices for Communities. McGraw Hill, e-book. Kaplan, David H.; James O. Wheeler; Steven R. Holloway; & Thomas W. Hodler, cartographer (2004). Urban Geography. John Wiley & Sons, Inc. Kavaratzis, Mihalis, Gary Warnaby, & Gregory J. Ashworth, eds. (2015). Rethinking Place Branding: Comprehensive Brand Development for Cities and Regions. Springer. . Kraas, Frauke, Surinder Aggarwal, Martin Coy, & Günter Mertins, eds. (2014). Megacities: Our Global Urban Future. United Nations "International Year of Planet Earth" book series. Springer. . Latham, Alan, Derek McCormack, Kim McNamara, & Donald McNeil (2009). Key Concepts in Urban Geography. London: SAGE. . Leach, William (1993). Land of Desire: Merchants, Power, and the Rise of a New American Culture. New York: Vintage Books (Random House), 1994. . Levy, John M. (2017). Contemporary Urban Planning. 11th ed. New York: Routledge (Taylor & Francis). Magnusson, Warren. Politics of Urbanism: Seeing like a city. London & New York: Routledge, 2011. . Marshall, John U. (1989). The Structure of Urban Systems. University of Toronto Press. . Marzluff, John M., Eric Schulenberger, Wilfried Endlicher, Marina Alberti, Gordon Bradley, Clre Ryan, Craig ZumBrunne, & Ute Simon (2008). Urban Ecology: An International Perspective on the Interaction Between Humans and Nature. New York: Springer Science+Business Media. . McQuillan, Eugene. The Law of Municipal Corporations, 3rd ed. 1987 revised volume by Charles R.P. Keating, Esq. Wilmette, Illinois: Callaghan & Company. Moholy-Nagy, Sibyl (1968). Matrix of Man: An Illustrated History of Urban Environment. New York: Frederick A Praeger. Mumford, Lewis (1961). The City in History: Its Origins, Its Transformations, and Its Prospects. New York: Harcourt, Brace & World. Paddison, Ronan, ed. (2001). Handbook of Urban Studies. London; Thousand Oaks, California; and New Delhi: Sage Publications. . Rybczynski, W., City Life: Urban Expectations in a New World, (1995) Smith, Michael E. (2002) The Earliest Cities. In Urban Life: Readings in Urban Anthropology, edited by George Gmelch and Walter Zenner, pp. 3–19. 4th ed. Waveland Press, Prospect Heights, IL. Southall, Aidan (1998). The City in Time and Space. Cambridge University Press. Wellman, Kath & Marcus Spiller, eds. (2012). Urban Infrastructure: Finance and Management. Chichester, UK: Wiley-Blackwell. . Further reading Berger, Alan S., The City: Urban Communities and Their Problems, Dubuque, Iowa : William C. Brown, 1978. Chandler, T. 
Four Thousand Years of Urban Growth: An Historical Census. Lewiston, NY: Edwin Mellen Press, 1987. Geddes, Patrick, City Development (1904) Kemp, Roger L. Managing America's Cities: A Handbook for Local Government Productivity, McFarland and Company, Inc., Publisher, Jefferson, North Carolina and London, 2007. (). Kemp, Roger L. How American Governments Work: A Handbook of City, County, Regional, State, and Federal Operations, McFarland and Company, Inc., Publisher, Jefferson, North Carolina and London. (). Kemp, Roger L. "City and Gown Relations: A Handbook of Best Practices", McFarland and Company, Inc., Publisher, Jefferson, North Carolina, US, and London, (2013). (). Monti, Daniel J. Jr., The American City: A Social and Cultural History. Oxford, England and Malden, Massachusetts: Blackwell Publishers, 1999. 391 pp. . Reader, John (2005) Cities. Vintage, New York. Robson, W.A., and Regan, D.E., ed., Great Cities of the World, (3d ed., 2 vol., 1972) Smethurst, Paul (2015). The Bicycle – Towards a Global History. Palgrave Macmillan. . Smith, L. Monica (2020) Cities: The First 6,000 Years. Penguin Books. Thernstrom, S., and Sennett, R., ed., Nineteenth-Century Cities (1969) Toynbee, Arnold J. (ed), Cities of Destiny, New York: McGraw-Hill, 1967. Pan historical/geographical essays, many images. Starts with "Athens", ends with "The Coming World City-Ecumenopolis". Weber, Max, The City, 1921. (tr. 1958) External links World Urbanization Prospects, Website of the United Nations Population Division (archived 10 July 2017) Urban population (% of total) – World Bank website based on UN data. Degree of urbanization (percentage of urban population in total population) by continent in 2016 – Statista, based on Population Reference Bureau data.
https://en.wikipedia.org/wiki/Capricornus
Capricornus is one of the constellations of the zodiac. Its name is Latin for "horned goat" or "goat horn" or "having horns like a goat's", and it is commonly represented in the form of a sea goat: a mythical creature that is half goat, half fish. Capricornus is one of the 88 modern constellations, and was also one of the 48 constellations listed by the 2nd century astronomer Claudius Ptolemy. Its old astronomical symbol is (♑︎). Under its modern boundaries it is bordered by Aquila, Sagittarius, Microscopium, Piscis Austrinus, and Aquarius. The constellation is located in an area of sky called the Sea or the Water, consisting of many water-related constellations such as Aquarius, Pisces and Eridanus. It is the smallest constellation in the zodiac. Notable features Stars Capricornus is a faint constellation, with only one star above magnitude 3; its alpha star has a magnitude of only 3.6. The brightest star in Capricornus is δ Capricorni, also called Deneb Algedi, with a magnitude of 2.9, located 39 light-years from Earth. Like several other stars such as Denebola and Deneb, it is named for the Arabic word for "tail" (deneb); its traditional name means "the tail of the goat". Deneb Algedi is a Beta Lyrae variable star (a type of eclipsing binary). It ranges by about 0.2 magnitudes with a period of 24.5 hours. The other bright stars in Capricornus range in magnitude from 3.1 to 5.1. α Capricorni is a multiple star. The primary (α2 Cap), 109 light-years from Earth, is a yellow-hued giant star of magnitude 3.6; the secondary (α1 Cap), 690 light-years from Earth, is a yellow-hued supergiant star of magnitude 4.3. The two stars are distinguishable by the naked eye, and both are themselves multiple stars. α1 Capricorni is accompanied by a star of magnitude 9.2; α2 Capricornus is accompanied by a star of magnitude 11.0; this faint star is itself a binary star with two components of magnitude 11. Also called Algedi or Giedi, the traditional names of α Capricorni come from the Arabic word for "the kid", which references the constellation's mythology. β Capricorni is a double star also known as Dabih. It is a yellow-hued giant star of magnitude 3.1, 340 light-years from Earth. The secondary is a blue-white hued star of magnitude 6.1. The two stars are distinguishable in binoculars. β Capricorni's traditional name comes from the Arabic phrase for "the lucky stars of the slaughterer," a reference to ritual sacrifices performed by ancient Arabs at the heliacal rising of Capricornus. Another star visible to the naked eye is γ Capricorni, sometimes called Nashira ("bringing good tidings"); it is a white-hued giant star of magnitude 3.7, 139 light-years from Earth. π Capricorni is a double star with a blue-white hued primary of magnitude 5.1 and a white-hued secondary of magnitude 8.3. It is 670 light-years from Earth and the components are distinguishable in a small telescope. Deep-sky objects Several galaxies and star clusters are contained within Capricornus. Messier 30 is a globular cluster located 1 degree south of the galaxy group that contains NGC 7103. The constellation also harbors the wide spiral galaxy NGC 6907. Messier 30 (NGC 7099) is a centrally-condensed globular cluster of magnitude 7.5 . At a distance of 30,000 light-years, it has chains of stars extending to the north that are resolvable in small amateur telescopes. One galaxy group located in Capricornus is HCG 87, a group of at least three galaxies located 400 million light-years from Earth (redshift 0.0296). 
It contains a large elliptical galaxy, a face-on spiral galaxy, and an edge-on spiral galaxy. The face-on spiral galaxy is experiencing abnormally high rates of star formation, indicating that it is interacting with one or both members of the group. Furthermore, the large elliptical galaxy and the edge-on spiral galaxy, both of which have active nuclei, are connected by a stream of stars and dust, indicating that they too are interacting. Astronomers predict that the three galaxies may merge millions of years in the future to form a giant elliptical galaxy. History The constellation was first attested in depictions on a cylinder-seal from around the 21st century BCE; it was explicitly recorded in the Babylonian star catalogues before 1000 BCE. In the Early Bronze Age the winter solstice occurred in the constellation, but due to the precession of the equinoxes, the December solstice now takes place in the constellation Sagittarius. The Sun is now in the constellation Capricorn (as distinct from the astrological sign) from late January through mid-February. Although the solstice during the northern hemisphere's winter no longer takes place while the sun is in the constellation Capricornus, as it did until 130 BCE, the astrological sign called Capricorn is still used to denote the position of the solstice, and the latitude of the sun's most southerly position continues to be called the Tropic of Capricorn, a term which also applies to the line on the Earth at which the sun is directly overhead at local noon on the day of the December solstice. The planet Neptune was discovered by German astronomer Johann Galle, near Deneb Algedi (δ Capricorni) on 23 September 1846, as Capricornus can be seen best from Europe at 4:00 in September (although, by modern constellation boundaries established in the early 20th century CE, Neptune lay within the confines of Aquarius at the time of its discovery). Mythology Despite its faintness, the constellation Capricornus has one of the oldest mythological associations, having been consistently represented as a hybrid of a goat and a fish since the Middle Bronze Age, when the Babylonians used "The Goat-Fish" as a symbol of their god Ea. In Greek mythology, the constellation is sometimes identified as Amalthea, the goat that suckled the infant Zeus after his mother, Rhea, saved him from being devoured by his father, Cronos. Amalthea's broken horn was transformed into the cornucopia or "horn of plenty". Capricornus is also sometimes identified as Pan, the god with a goat's horns and legs, who saved himself from the monster Typhon by giving himself a fish's tail and diving into a river. Visualizations Capricornus's brighter stars are found on a triangle whose vertices are α2 Capricorni (Giedi), δ Capricorni (Deneb Algiedi), and ω Capricorni. Ptolemy's method of connecting the stars of Capricornus has been influential. Capricornus is usually drawn as a goat with the tail of a fish. H. A. Rey has suggested an alternative visualization, which graphically shows a goat. The goat's head is formed by the triangle of stars ι Cap, θ Cap, and ζ Cap. The goat's horn sticks out with stars γ Cap and δ Cap. Star δ Cap, at the tip of the horn, is of the third magnitude. The goat's tail consists of stars β Cap and α2 Cap: star β Cap being of the third magnitude. The goat's hind foot consists of stars ψ Cap and ω Cap. Both of these stars are of the fourth magnitude. Equivalents In Chinese astronomy, the constellation Capricornus lies in The Black Tortoise of the North.
The Nakh peoples called this constellation Roofing Towers. In the Society Islands, the figure of Capricornus was called Rua-o-Mere, "Cavern of parental yearnings". In Indian astronomy and Indian astrology, it is called Makara, the crocodile. See also Capricornus in Chinese astronomy Hippocampus (mythology), the mythological sea horse IC 1337, galaxy Citations References External links The Deep Photographic Guide to the Constellations: Capricornus Ian Ridpath's Star Tales – Capricornus Warburg Institute Iconographic Database (medieval and early modern images of Capricornus)
https://en.wikipedia.org/wiki/Canal
Canals or artificial waterways are waterways or engineered channels built for drainage management (e.g. flood control and irrigation) or for conveying water transport vehicles (e.g. water taxis). They carry free, calm surface flow under atmospheric pressure, and can be thought of as artificial rivers. In most cases, a canal has a series of dams and locks that create reservoirs of low speed current flow. These reservoirs are referred to as slack water levels, often just called levels. A canal can be called a navigation canal when it parallels a natural river and shares part of the latter's discharges and drainage basin, and leverages its resources by building dams and locks to increase and lengthen its stretches of slack water levels while staying in its valley. A canal can cut across a drainage divide atop a ridge, generally requiring an external water source above the highest elevation. The best-known example of such a canal is the Panama Canal. Many canals have been built at elevations above valleys and other waterways. Canals with sources of water at a higher level can deliver water to a destination such as a city where water is needed. The Roman Empire's aqueducts were such water supply canals. The term was once used to describe linear features seen on the surface of Mars, Martian canals, an optical illusion. Types of artificial waterways A navigation is a series of channels that run roughly parallel to the valley and stream bed of an unimproved river. A navigation always shares the drainage basin of the river. A vessel uses the calm parts of the river itself as well as improvements, traversing the same changes in height. A true canal is a channel that cuts across a drainage divide, making a navigable channel connecting two different drainage basins. Structures used in artificial waterways Both navigations and canals use engineered structures to improve navigation: weirs and dams to raise river water levels to usable depths; looping descents to create a longer and gentler channel around a stretch of rapids or falls; locks to allow ships and barges to ascend/descend. Since they cut across drainage divides, canals are more difficult to construct and often need additional improvements, like viaducts and aqueducts to bridge waters over streams and roads, and ways to keep water in the channel. Types of canals There are two broad types of canal: Waterways: canals and navigations used for carrying vessels transporting goods and people. These can be subdivided into two kinds: Those connecting existing lakes, rivers, other canals or seas and oceans. Those connected in a city network: such as the Canal Grande and others of Venice; the grachten of Amsterdam or Utrecht, and the waterways of Bangkok. Aqueducts: water supply canals that are used for the conveyance and delivery of potable water, municipal uses, hydro power canals and agricultural irrigation. Importance Historically, canals were of immense importance to commerce and the development, growth and vitality of a civilization. In 1855 the Lehigh Canal carried over 1.2 million tons of anthracite coal; by the 1930s the company which had built and operated it for over a century pulled the plug. The few canals still in operation in our modern age are a fraction of the numbers that once fueled and enabled economic growth; indeed, they were practically a prerequisite to further urbanization and industrialization. The movement of bulk raw materials such as coal and ores is difficult and only marginally affordable without water transport.
Such raw materials fueled the industrial developments and new metallurgy resulting from the spiral of increasing mechanization during the 17th–20th centuries, leading to new research disciplines, new industries and economies of scale, raising the standard of living for any industrialized society. The surviving canals Most ship canals today primarily service bulk cargo and large ship transportation industries, whereas the once critical smaller inland waterways conceived and engineered as boat and barge canals have largely been supplanted and filled in, abandoned and left to deteriorate, or kept in service and staffed by state employees, where dams and locks are maintained for flood control or pleasure boating. Their replacement was gradual, beginning in the United States in the mid-1850s, where canal shipping was first augmented by, and then replaced by, railways, which were much faster, less geographically constrained, and generally cheaper to maintain. By the early 1880s, canals, which had little ability to compete economically with rail transport, were off the map. In the next couple of decades, coal was increasingly displaced by oil as the heating fuel of choice, and growth of coal shipments leveled off. Later, after World War I, when motor trucks came into their own, the last small U.S. barge canals saw a steady decline in cargo ton-miles, alongside many railways, as the flexibility and slope-climbing capability of lorries increasingly took over cargo hauling as road networks were improved; lorries also had the freedom to make deliveries well away from rail-lined road beds or the canals ("ditches in the dirt"), which could not operate in the winter. The longest extant canal today, the Grand Canal in northern China, still remains in heavy use, especially the portion south of the Yellow River. It stretches 1,794 kilometres (1,115 miles) from Beijing to Hangzhou. Construction Canals are built in one of three ways, or a combination of the three, depending on available water and available path: Human made streams A canal can be created where no stream presently exists. Either the body of the canal is dug or the sides of the canal are created by making dykes or levees by piling dirt, stone, concrete or other building materials. The finished shape of the canal as seen in cross section is known as the canal prism. The water for the canal must be provided from an external source, like streams or reservoirs. Where the new waterway must change elevation, engineering works like locks, lifts or elevators are constructed to raise and lower vessels. Examples include canals that connect valleys over a higher body of land, like Canal du Midi, Canal de Briare and the Panama Canal. A canal can be constructed by dredging a channel in the bottom of an existing lake. When the channel is complete, the lake is drained and the channel becomes a new canal, serving both drainage of the surrounding polder and providing transport there. Examples include the . One can also build two parallel dikes in an existing lake, forming the new canal in between, and then drain the remaining parts of the lake. The eastern and central parts of the North Sea Canal were constructed in this way. In both cases pumping stations are required to keep the land surrounding the canal dry, either pumping water from the canal into surrounding waters, or pumping it from the land into the canal. Canalization and navigations A stream can be canalized to make its navigable path more predictable and easier to maneuver.
Canalization modifies the stream to carry traffic more safely by controlling the flow of the stream by dredging, damming and modifying its path. This frequently includes the incorporation of locks and spillways, that make the river a navigation. Examples include the Lehigh Canal in Northeastern Pennsylvania's coal Region, Basse Saône, Canal de Mines de Fer de la Moselle, and canal Aisne. Riparian zone restoration may be required. Lateral canals When a stream is too difficult to modify with canalization, a second stream can be created next to or at least near the existing stream. This is called a lateral canal, and may meander in a large horseshoe bend or series of curves some distance from the source waters stream bed lengthening the effective length in order to lower the ratio of rise over run (slope or pitch). The existing stream usually acts as the water source and the landscape around its banks provide a path for the new body. Examples include the Chesapeake and Ohio Canal, Canal latéral à la Loire, Garonne Lateral Canal, Welland Canal and Juliana Canal. Smaller transportation canals can carry barges or narrowboats, while ship canals allow seagoing ships to travel to an inland port (e.g., Manchester Ship Canal), or from one sea or ocean to another (e.g., Caledonian Canal, Panama Canal). Features At their simplest, canals consist of a trench filled with water. Depending on the stratum the canal passes through, it may be necessary to line the cut with some form of watertight material such as clay or concrete. When this is done with clay, it is known as puddling. Canals need to be level, and while small irregularities in the lie of the land can be dealt with through cuttings and embankments, for larger deviations other approaches have been adopted. The most common is the pound lock, which consists of a chamber within which the water level can be raised or lowered connecting either two pieces of canal at a different level or the canal with a river or the sea. When there is a hill to be climbed, flights of many locks in short succession may be used. Prior to the development of the pound lock in 984 AD in China by Chhaio Wei-Yo and later in Europe in the 15th century, either flash locks consisting of a single gate were used or ramps, sometimes equipped with rollers, were used to change the level. Flash locks were only practical where there was plenty of water available. Locks use a lot of water, so builders have adopted other approaches for situations where little water is available. These include boat lifts, such as the Falkirk Wheel, which use a caisson of water in which boats float while being moved between two levels; and inclined planes where a caisson is hauled up a steep railway. To cross a stream, road or valley (where the delay caused by a flight of locks at either side would be unacceptable) the valley can be spanned by a navigable aqueduct – a famous example in Wales is the Pontcysyllte Aqueduct (now a UNESCO World Heritage Site) across the valley of the River Dee. Another option for dealing with hills is to tunnel through them. An example of this approach is the Harecastle Tunnel on the Trent and Mersey Canal. Tunnels are only practical for smaller canals. Some canals attempted to keep changes in level down to a minimum. These canals known as contour canals would take longer, winding routes, along which the land was a uniform altitude. Other, generally later, canals took more direct routes requiring the use of various methods to deal with the change in level. 
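The "rise over run" point in the lateral-canal passage above lends itself to a simple numerical illustration: spreading the same total fall over a longer, meandering route reduces the channel's gradient proportionally, giving calmer flow between structures. The figures in the sketch below are hypothetical, chosen only to show the arithmetic.

```python
# Hypothetical illustration: lengthening a lateral canal to reduce its gradient.
# Gradient (slope) = rise / run; the same drop spread over a longer route is gentler.

def gradient(rise_m: float, run_m: float) -> float:
    """Slope as a dimensionless ratio (metres of drop per metre of channel length)."""
    return rise_m / run_m

drop = 12.0                  # total fall between the two ends, in metres (assumed)
direct_route = 8_000.0       # straight-line channel length, in metres (assumed)
meandering_route = 20_000.0  # lateral canal following horseshoe bends (assumed)

print(f"direct route gradient:     {gradient(drop, direct_route):.5f}")      # 0.00150
print(f"meandering route gradient: {gradient(drop, meandering_route):.5f}")  # 0.00060
```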
Canals have various features to tackle the problem of water supply. In some cases, like the Suez Canal, the canal is open to the sea. Where the canal is not at sea level, a number of approaches have been adopted. Taking water from existing rivers or springs was an option in some cases, sometimes supplemented by other methods to deal with seasonal variations in flow. Where such sources were unavailable, reservoirs – either separate from the canal or built into its course – and back pumping were used to provide the required water. In other cases, water pumped from mines was used to feed the canal. In certain cases, extensive "feeder canals" were built to bring water from sources located far from the canal. Where large amounts of goods are loaded or unloaded, such as at the end of a canal, a canal basin may be built. This would normally be a section of water wider than the general canal. In some cases, the canal basins contain wharfs and cranes to assist with movement of goods. When a section of the canal needs to be sealed off so it can be drained for maintenance, stop planks are frequently used. These consist of planks of wood placed across the canal to form a dam. They are generally placed in pre-existing grooves in the canal bank. On more modern canals, "guard locks" or gates were sometimes placed to allow a section of the canal to be quickly closed off, either for maintenance, or to prevent a major loss of water due to a canal breach. Canal falls A canal fall, or canal drop, is a vertical drop in the canal bed. These are built when the natural ground slope is steeper than the desired canal gradient. They are constructed so the falling water's kinetic energy is dissipated in order to prevent it from scouring the bed and sides of the canal. A canal fall is constructed by cut and fill. It may be combined with a regulator, bridge, or other structure to save costs. There are various types of canal falls, based on their shape. One type is the ogee fall, where the drop follows an s-shaped curve to create a smooth transition and reduce turbulence. However, this smooth transition does not dissipate the water's kinetic energy, which leads to heavy scouring. As a result, the canal needs to be reinforced with concrete or masonry to protect it from eroding. Another type of canal fall is the vertical fall, which is "simple and economical". These feature a "cistern", or depressed area just downstream from the fall, to "cushion" the water by providing a deep pool for its kinetic energy to be diffused in. Vertical falls work for drops of up to 1.5 m in height, and for discharge of up to 15 cubic meters per second. History The transport capacity of pack animals and carts is limited. A mule can carry an eighth-ton maximum load over a journey measured in days and weeks, though much more for shorter distances and periods with appropriate rest. Besides, carts need roads. Transport over water is much more efficient and cost-effective for large cargoes. Ancient canals The oldest known canals were irrigation canals, built in Mesopotamia circa 4000 BC, in what is now Iraq. The Indus Valley civilization of ancient India (circa 3000 BC) developed sophisticated irrigation and storage systems, including the reservoirs built at Girnar in 3000 BC. This was the first time that such a planned civil project had taken place in the ancient world. In Egypt, canals date back at least to the time of Pepi I Meryre (reigned 2332–2283 BC), who ordered a canal built to bypass the cataract on the Nile near Aswan.
In ancient China, large canals for river transport were established as far back as the Spring and Autumn Period (8th–5th centuries BC), the longest one of that period being the Hong Gou (Canal of the Wild Geese), which according to the ancient historian Sima Qian connected the old states of Song, Zhang, Chen, Cai, Cao, and Wei. The Caoyun System of canals was essential for imperial taxation, which was largely assessed in kind and involved enormous shipments of rice and other grains. By far the longest canal was the Grand Canal of China, still the longest canal in the world today and the oldest extant one. It is long and was built to carry the Emperor Yang Guang between Zhuodu (Beijing) and Yuhang (Hangzhou). The project began in 605 and was completed in 609, although much of the work combined older canals, the oldest section of the canal existing since at least 486 BC. Even in its narrowest urban sections it is rarely less than wide. In the 5th century BC, Achaemenid king Xerxes I of Persia ordered the construction of the Xerxes Canal through the base of Mount Athos peninsula, Chalkidiki, northern Greece. It was constructed as part of his preparations for the Second Persian invasion of Greece, a part of the Greco-Persian Wars. It is one of the few monuments left by the Persian Empire in Europe. Greek engineers were also among the first to use canal locks, by which they regulated the water flow in the Ancient Suez Canal as early as the 3rd century BC. There was little experience moving bulk loads by carts, while a pack-horse would [i.e. 'could'] carry only an eighth of a ton. On a soft road a horse might be able to draw 5/8ths of a ton. But if the load were carried by a barge on a waterway, then up to 30 tons could be drawn by the same horse.— technology historian Ronald W. Clark referring to transport realities before the industrial revolution and the Canal age. Hohokam was a society in the North American Southwest in what is now part of Arizona, United States, and Sonora, Mexico. Their irrigation systems supported the largest population in the Southwest by 1300 CE. Archaeologists working at a major archaeological dig in the 1990s in the Tucson Basin, along the Santa Cruz River, identified a culture and people that may have been the ancestors of the Hohokam. This prehistoric group occupied southern Arizona as early as 2000 BCE, and in the Early Agricultural Period grew corn, lived year-round in sedentary villages, and developed sophisticated irrigation canals. The large-scale Hohokam irrigation network in the Phoenix metropolitan area was the most complex in ancient North America. A portion of the ancient canals has been renovated for the Salt River Project and now helps to supply the city's water. Middle Ages In the Middle Ages, water transport was several times cheaper and faster than transport overland. Overland transport by animal drawn conveyances was used around settled areas, but unimproved roads required pack animal trains, usually of mules to carry any degree of mass, and while a mule could carry an eighth ton, it also needed teamsters to tend it and one man could only tend perhaps five mules, meaning overland bulk transport was also expensive, as men expect compensation in the form of wages, room and board. This was because long-haul roads were unpaved, more often than not too narrow for carts, much less wagons, and in poor condition, wending their way through forests, marshy or muddy quagmires as often as unimproved but dry footing. 
In that era, as today, greater cargoes, especially bulk goods and raw materials, could be transported by ship far more economically than by land; in the pre-railroad days of the industrial revolution, water transport was the gold standard of fast transportation. The first artificial canal in Western Europe was the Fossa Carolina, built at the end of the 8th century under the personal supervision of Charlemagne. In Britain, the Glastonbury Canal is believed to be the first post-Roman canal and was built in the middle of the 10th century to link the River Brue at Northover with Glastonbury Abbey, a distance of about . Its initial purpose is believed to be the transport of building stone for the abbey, but later it was used for delivering produce, including grain, wine and fish, from the abbey's outlying properties. It remained in use until at least the 14th century, but possibly as late as the mid-16th century. More lasting and of more economic impact were canals like the Naviglio Grande, built between 1127 and 1257 to connect Milan with the river Ticino. The Naviglio Grande is the most important of the Lombard "navigli" and the oldest functioning canal in Europe. Later, canals were built in the Netherlands and Flanders to drain the polders and assist the transportation of goods and people. Canal building was revived in this age because of commercial expansion from the 12th century. River navigations were improved progressively by the use of single, or flash, locks. Taking boats through these used large amounts of water, leading to conflicts with watermill owners; to correct this, the pound or chamber lock first appeared, in the 10th century in China and in Europe in 1373 in Vreeswijk, Netherlands. Another important development was the mitre gate, which was, it is presumed, introduced in Italy by Bertola da Novate in the 16th century. This allowed wider gates and also removed the height restriction of guillotine locks. To break out of the limitations caused by river valleys, the first summit level canals were developed, with the Grand Canal of China in 581–617 AD, whilst in Europe the first, also using single locks, was the Stecknitz Canal in Germany in 1398. Africa In the Songhai Empire of West Africa, several canals were constructed under Sunni Ali and Askia Muhammad I between Kabara and Timbuktu in the 15th century. These were used primarily for irrigation and transport. Sunni Ali also attempted to construct a canal from the Niger River to Walata to facilitate conquest of the city, but his progress was halted when he went to war with the Mossi Kingdoms. Early modern period Around 1500–1800, the first summit level canal to use pound locks in Europe was the Briare Canal connecting the Loire and Seine (1642), followed by the more ambitious Canal du Midi (1683) connecting the Atlantic to the Mediterranean. This included a staircase of 8 locks at Béziers, a tunnel, and three major aqueducts. Canal building progressed steadily in Germany in the 17th and 18th centuries, with three great rivers, the Elbe, Oder and Weser, being linked by canals. In post-Roman Britain, the first canal built in the early modern period appears to have been the Exeter Canal, which was surveyed in 1563 and opened in 1566. The oldest canal in the European settlements of North America, technically a mill race built for industrial purposes, is Mother Brook between the Boston, Massachusetts neighbourhoods of Dedham and Hyde Park, connecting the higher waters of the Charles River with the mouth of the Neponset River and the sea.
It was constructed in 1639 to provide water power for mills. In Russia, the Volga–Baltic Waterway, a nationwide canal system connecting the Baltic Sea and Caspian Sea via the Neva and Volga rivers, was opened in 1718. Industrial Revolution See also: History of the British canal system See also: History of turnpikes and canals in the United States The modern canal system was mainly a product of the 18th century and early 19th century. It came into being because the Industrial Revolution (which began in Britain during the mid-18th century) demanded an economic and reliable way to transport goods and commodities in large quantities. By the early 18th century, river navigations such as the Aire and Calder Navigation were becoming quite sophisticated, with pound locks and longer and longer "cuts" (some with intermediate locks) to avoid circuitous or difficult stretches of river. Eventually, the experience of building long multi-level cuts with their own locks gave rise to the idea of building a "pure" canal, a waterway designed on the basis of where goods needed to go, not where a river happened to be. The claim for the first pure canal in Great Britain is debated between "Sankey" and "Bridgewater" supporters. The first true canal in what is now the United Kingdom was the Newry Canal in Northern Ireland constructed by Thomas Steers in 1741. The Sankey Brook Navigation, which connected St Helens with the River Mersey, is often claimed as the first modern "purely artificial" canal because although originally a scheme to make the Sankey Brook navigable, it included an entirely new artificial channel that was effectively a canal along the Sankey Brook valley. However, "Bridgewater" supporters point out that the last quarter-mile of the navigation is indeed a canalized stretch of the Brook, and that it was the Bridgewater Canal (less obviously associated with an existing river) that captured the popular imagination and inspired further canals. In the mid-eighteenth century the 3rd Duke of Bridgewater, who owned a number of coal mines in northern England, wanted a reliable way to transport his coal to the rapidly industrializing city of Manchester. He commissioned the engineer James Brindley to build a canal for that purpose. Brindley's design included an aqueduct carrying the canal over the River Irwell. This was an engineering wonder which immediately attracted tourists. The construction of this canal was funded entirely by the Duke and was called the Bridgewater Canal. It opened in 1761 and was the first major British canal. The new canals proved highly successful. The boats on the canal were horse-drawn with a towpath alongside the canal for the horse to walk along. This horse-drawn system proved to be highly economical and became standard across the British canal network. Commercial horse-drawn canal boats could be seen on the UK's canals until as late as the 1950s, although by then diesel-powered boats, often towing a second unpowered boat, had become standard. The canal boats could carry thirty tons at a time with only one horse pulling – more than ten times the amount of cargo per horse that was possible with a cart. Because of this huge increase in supply, the Bridgewater canal reduced the price of coal in Manchester by nearly two-thirds within just a year of its opening. The Bridgewater was also a huge financial success, with it earning what had been spent on its construction within just a few years. 
This success proved the viability of canal transport, and soon industrialists in many other parts of the country wanted canals. After the Bridgewater Canal, early canals were built by groups of private individuals with an interest in improving communications. In Staffordshire, the famous potter Josiah Wedgwood saw an opportunity to bring bulky cargoes of clay to his factory doors and to transport his fragile finished goods to market in Manchester, Birmingham or further away, by water, minimizing breakages. Within just a few years of the Bridgewater's opening, an embryonic national canal network came into being, with the construction of canals such as the Oxford Canal and the Trent & Mersey Canal. The new canal system was both cause and effect of the rapid industrialization of the Midlands and the north. The period between the 1770s and the 1830s is often referred to as the "Golden Age" of British canals. For each canal, an Act of Parliament was necessary to authorize construction, and as people saw the high incomes achieved from canal tolls, canal proposals came to be put forward by investors interested in profiting from dividends, at least as much as by people whose businesses would profit from cheaper transport of raw materials and finished goods. In a further development, there was often out-and-out speculation, where people would try to buy shares in a newly floated company simply to sell them on for an immediate profit, regardless of whether the canal was ever profitable, or even built. During this period of "canal mania", huge sums were invested in canal building, and although many schemes came to nothing, the canal system rapidly expanded to nearly 4,000 miles (over 6,400 kilometres) in length. Many rival canal companies were formed and competition was rampant. Perhaps the best example was Worcester Bar in Birmingham, a point where the Worcester and Birmingham Canal and the Birmingham Canal Navigations Main Line were only seven feet apart. For many years, a dispute about tolls meant that goods travelling through Birmingham had to be portaged from boats in one canal to boats in the other. Canal companies were initially chartered by individual states in the United States. These early canals were constructed, owned, and operated by private joint-stock companies. Four had been completed when the War of 1812 broke out; these were the South Hadley Canal (opened 1795) in Massachusetts, the Santee Canal (opened 1800) in South Carolina, the Middlesex Canal (opened 1802) also in Massachusetts, and the Dismal Swamp Canal (opened 1805) in Virginia. The Erie Canal (opened 1825) was chartered and owned by the state of New York and financed by bonds bought by private investors. The Erie Canal runs about from Albany, New York, on the Hudson River to Buffalo, New York, at Lake Erie. The Hudson River connects Albany to the Atlantic port of New York City, and the Erie Canal completed a navigable water route from the Atlantic Ocean to the Great Lakes. The canal contains 36 locks and encompasses a total elevation differential of around 565 ft (172 m). The Erie Canal, with its easy connections to most of the U.S. Midwest and New York City, quickly paid back all its invested capital (US$7 million) and started turning a profit. By cutting transportation costs in half or more, it became a large profit center for Albany and New York City, as it allowed the cheap transportation of many of the agricultural products grown in the Midwest of the United States to the rest of the world.
From New York City these agricultural products could easily be shipped to other U.S. states or overseas. With farmers thus assured of a market for their products, the settlement of the U.S. Midwest was greatly accelerated by the Erie Canal. The profits generated by the Erie Canal project started a canal building boom in the United States that lasted until about 1850, when railroads started becoming seriously competitive in price and convenience. The Blackstone Canal (finished in 1828) in Massachusetts and Rhode Island fulfilled a similar role in the early industrial revolution between 1828 and 1848. The Blackstone Valley was a major contributor to the American Industrial Revolution; it was where Samuel Slater built his first textile mill. Power canals See also: Power canal A power canal refers to a canal used for hydraulic power generation, rather than for transport. Nowadays power canals are built almost exclusively as parts of hydroelectric power stations. Parts of the United States, particularly in the Northeast, had enough fast-flowing rivers that water power was the primary means of powering factories (usually textile mills) until after the American Civil War. For example, Lowell, Massachusetts, considered to be "The Cradle of the American Industrial Revolution," has of canals, built from around 1790 to 1850, that provided water power and a means of transportation for the city. The output of the system is estimated at 10,000 horsepower. Other cities with extensive power canal systems include Lawrence, Massachusetts, Holyoke, Massachusetts, Manchester, New Hampshire, and Augusta, Georgia. The most notable power canal was built in 1862 for the Niagara Falls Hydraulic Power and Manufacturing Company. 19th century Competition from railways from the 1830s, and from roads in the 20th century, made the smaller canals obsolete for most commercial transport, and many of the British canals fell into decay. Only the Manchester Ship Canal and the Aire and Calder Canal bucked this trend. Yet in other countries canals grew in size as construction techniques improved. During the 19th century in the US, the length of canals grew from to over 4,000, with a complex network making the Great Lakes navigable, in conjunction with Canada, although some canals were later drained and used as railroad rights-of-way. In the United States, navigable canals reached into isolated areas and brought them in touch with the world beyond. By 1825 the Erie Canal, long with 36 locks, opened up a connection from the populated Northeast to the Great Lakes. Settlers flooded into regions serviced by such canals, since access to markets was available. The Erie Canal (as well as other canals) was instrumental in lowering the differences in commodity prices between these various markets across America. The canals caused price convergence between different regions because of their reduction in transportation costs, which allowed Americans to ship and buy goods from farther distances much more cheaply. Ohio built many miles of canal, Indiana had working canals for a few decades, and the Illinois and Michigan Canal connected the Great Lakes to the Mississippi River system until replaced by a channelized river waterway. Three major canals with very different purposes were built in what is now Canada. The first Welland Canal, which opened in 1829 between Lake Ontario and Lake Erie to bypass Niagara Falls, and the Lachine Canal (1825), which allowed ships to skirt the nearly impassable rapids on the St. Lawrence River at Montreal, were built for commerce.
The Rideau Canal, completed in 1832, connects Ottawa on the Ottawa River to Kingston, Ontario, on Lake Ontario. The Rideau Canal was built as a result of the War of 1812 to provide military transportation between the British colonies of Upper Canada and Lower Canada as an alternative to part of the St. Lawrence River, which was susceptible to blockade by the United States. In France, a steady linking of all the river systems – Rhine, Rhône, Saône and Seine – and the North Sea was boosted in 1879 by the establishment of the Freycinet gauge, which specified the minimum size of locks. Canal traffic doubled in the first decades of the 20th century. Many notable sea canals were completed in this period, starting with the Suez Canal (1869) – which carries tonnage many times that of most other canals – and the Kiel Canal (1897), though the Panama Canal was not opened until 1914. In the 19th century, a number of canals were built in Japan, including the Biwako canal and the Tone canal. These canals were partially built with the help of engineers from the Netherlands and other countries. A major question was how to connect the Atlantic and the Pacific with a canal through narrow Central America. (The Panama Railroad opened in 1855.) The original proposal was for a sea-level canal through what is today Nicaragua, taking advantage of the relatively large Lake Nicaragua. This canal has never been built, in part because of political instability, which scared off potential investors. It remains an active project (the geography has not changed), and in the 2010s Chinese involvement was developing. The second choice for a Central American canal was a Panama canal. The de Lesseps company, which ran the Suez Canal, first attempted to build a Panama Canal in the 1880s. The difficulty of the terrain and the weather (rain) encountered caused the company to go bankrupt. High worker mortality from disease also discouraged further investment in the project. De Lesseps's abandoned excavating equipment still sits where it was left, isolated and decaying, and is today a tourist attraction. Twenty years later, an expansionist United States, which had just acquired colonies after defeating Spain in the 1898 Spanish–American War and whose navy had become more important, decided to reactivate the project. The United States and Colombia did not reach agreement on the terms of a canal treaty (see Hay–Herrán Treaty). Panama, which did not have (and still does not have) a land connection with the rest of Colombia, was already thinking of independence. In 1903 the United States, with support from Panamanians who expected the canal to provide substantial wages, revenues, and markets for local goods and services, took Panama province away from Colombia, and set up a puppet republic (Panama). Its currency, the Balboa – a name that suggests the country began as a way to get from one hemisphere to the other – was a replica of the US dollar. The US dollar was and remains legal tender (used as currency). A U.S. military zone, the Canal Zone, wide, with U.S. military stationed there (bases, 2 TV stations, channels 8 and 10, PXs, a U.S.-style high school), split Panama in half. The Canal – a major engineering project – was built. The U.S. did not feel that conditions were stable enough to withdraw until 1979. The withdrawal from Panama contributed to President Jimmy Carter's defeat in 1980. Modern uses Large-scale ship canals such as the Panama Canal and Suez Canal continue to operate for cargo transportation, as do European barge canals.
Due to globalization, they are becoming increasingly important, resulting in expansion projects such as the Panama Canal expansion project. The expanded canal began commercial operation on 26 June 2016. The new set of locks allow transit of larger, Post-Panamax and New Panamax ships. The narrow early industrial canals, however, have ceased to carry significant amounts of trade and many have been abandoned to navigation, but may still be used as a system for transportation of untreated water. In some cases railways have been built along the canal route, an example being the Croydon Canal. A movement that began in Britain and France to use the early industrial canals for pleasure boats, such as hotel barges, has spurred rehabilitation of stretches of historic canals. In some cases, abandoned canals such as the Kennet and Avon Canal have been restored and are now used by pleasure boaters. In Britain, canalside housing has also proven popular in recent years. The Seine–Nord Europe Canal is being developed into a major transportation waterway, linking France with Belgium, Germany, and the Netherlands. Canals have found another use in the 21st century, as easements for the installation of fibre optic telecommunications network cabling, avoiding having them buried in roadways while facilitating access and reducing the hazard of being damaged from digging equipment. Canals are still used to provide water for agriculture. An extensive canal system exists within the Imperial Valley in the Southern California desert to provide irrigation to agriculture within the area. Cities on water Canals are so deeply identified with Venice that many canal cities have been nicknamed "the Venice of…". The city is built on marshy islands, with wooden piles supporting the buildings, so that the land is man-made rather than the waterways. The islands have a long history of settlement; by the 12th century, Venice was a powerful city state. Amsterdam was built in a similar way, with buildings on wooden piles. It became a city around 1300. Many Amsterdam canals were built as part of fortifications. They became grachten when the city was enlarged and houses were built alongside the water. Its nickname as the "Venice of the North" is shared with Hamburg of Germany, St. Petersburg of Russia and Bruges of Belgium. Suzhou was dubbed the "Venice of the East" by Marco Polo during his travels there in the 13th century, with its modern canalside Pingjiang Road and Shantang Street becoming major tourist attractions. Other nearby cities including Nanjing, Shanghai, Wuxi, Jiaxing, Huzhou, Nantong, Taizhou, Yangzhou, and Changzhou are located along the lower mouth of the Yangtze River and Lake Tai, yet another source of small rivers and creeks, which have been canalized and developed for centuries. Other cities with extensive canal networks include: Alkmaar, Amersfoort, Bolsward, Brielle, Delft, Den Bosch, Dokkum, Dordrecht, Enkhuizen, Franeker, Gouda, Haarlem, Harlingen, Leeuwarden, Leiden, Sneek and Utrecht in the Netherlands; Brugge and Gent in Flanders, Belgium; Birmingham in England; Saint Petersburg in Russia; Bydgoszcz, Gdańsk, Szczecin and Wrocław in Poland; Aveiro in Portugal; Hamburg and Berlin in Germany; Fort Lauderdale and Cape Coral in Florida, United States, Wenzhou in China, Cần Thơ in Vietnam, Bangkok in Thailand, and Lahore in Pakistan. 
Liverpool Maritime Mercantile City was a UNESCO World Heritage Site near the centre of Liverpool, England, where a system of intertwining waterways and docks is now being developed for mainly residential and leisure use. Canal estates (sometimes known as bayous in the United States) are a form of subdivision popular in cities like Miami, Florida, Texas City, Texas and the Gold Coast, Queensland; the Gold Coast has over 890 km of residential canals. Wetlands are difficult areas upon which to build housing estates, so dredging part of the wetland down to a navigable channel provides fill to build up another part of the wetland above the flood level for houses. Land is built up in a finger pattern that provides a suburban street layout of waterfront housing blocks. Boats Inland canals have often had boats specifically built for them. An example of this is the British narrowboat, which is up to long and wide and was primarily built for British Midland canals. In this case the limiting factor was the size of the locks. This is also the limiting factor on the Panama canal where Panamax ships were limited to a length of and a beam of until 26 June 2016 when the opening of larger locks allowed for the passage of larger New Panamax ships. For the lockless Suez Canal the limiting factor for Suezmax ships is generally draft, which is limited to . At the other end of the scale, tub-boat canals such as the Bude Canal were limited to boats of under 10 tons for much of their length due to the capacity of their inclined planes or boat lifts. Most canals have a limit on height imposed either by bridges or by tunnels. Lists of canals Africa Bahr Yussef El Salam Canal Egypt Ibrahimiya Canal Egypt Mahmoudiyah Canal Egypt Suez Canal Egypt Asia see List of canals in India see List of canals in Pakistan see History of canals in China Europe Danube–Black Sea Canal (Romania) North Crimean Canal (Ukraine) Canals of France Canals of Amsterdam Canals of Germany Canals of Ireland Canals of Russia Canals of the United Kingdom List of canals in the United Kingdom Great Bačka Canal (Serbia) North America Canals of Canada Canals of the United States Lists of proposed canals Eurasia Canal Istanbul Canal Nicaragua Canal Salwa Canal Thai Canal Sulawesi Canal Two Seas Canal Northern river reversal Balkan Canal or Danube–Morava–Vardar–Aegean Canal Iranrud See also Barges of all types Beaver, a non-human animal also known for canal building Canal elevator Calle canal Canal & River Trust Canal tunnel Channel Ditch Environment Agency History of the British canal system Horse-drawn boat Infrastructure Irrigation district Lists of canals List of navigation authorities in the United Kingdom List of waterways List of waterway societies in the United Kingdom Lock Mooring Navigable aqueduct Navigation authority Narrowboat Power canal Proposed canals River Ship canal Tow path Roman canals – (Torksey) Volumetric flow rate Water bridge Waterscape Water transportation Waterway Waterway restoration Waterways in the United Kingdom Weigh lock References Notes Bibliography External links British Waterways' leisure website – Britain's official guide to canals, rivers and lakes Leeds Liverpool Canal Photographic Guide Information and Boater's Guide to the New York State Canal System "Canals and Navigable Rivers" by James S. Aber, Emporia State University National Canal Museum (US) London Canal Museum (UK) Canals in Amsterdam Canal du Midi Canal des Deux Mers Canal flow measurement using a sensor. 
Coastal construction Water transport infrastructure Artificial bodies of water Infrastructure
https://en.wikipedia.org/wiki/Combustion
Combustion, or burning, is a high-temperature exothermic redox chemical reaction between a fuel (the reductant) and an oxidant, usually atmospheric oxygen, that produces oxidized, often gaseous products, in a mixture termed smoke. Combustion does not always result in fire, because a flame is only visible when substances undergoing combustion vaporize, but when it does, a flame is a characteristic indicator of the reaction. While activation energy must be supplied to initiate combustion (e.g., using a lit match to light a fire), the heat from a flame may provide enough energy to make the reaction self-sustaining. Combustion is often a complicated sequence of elementary radical reactions. Solid fuels, such as wood and coal, first undergo endothermic pyrolysis to produce gaseous fuels, whose combustion then supplies the heat required to produce more of them. Combustion is often hot enough that incandescent light in the form of either glowing or a flame is produced. A simple example can be seen in the combustion of hydrogen and oxygen into water vapor, a reaction which is commonly used to fuel rocket engines. This reaction releases 242 kJ/mol of heat and reduces the enthalpy accordingly (at constant temperature and pressure): 2H_2(g) + O_2(g) \rightarrow 2H_2O(g) Uncatalyzed combustion in air requires relatively high temperatures. Complete combustion is stoichiometric with respect to the fuel, where there is no remaining fuel and, ideally, no residual oxidant. Thermodynamically, the chemical equilibrium of combustion in air is overwhelmingly on the side of the products. However, complete combustion is almost impossible to achieve, since the chemical equilibrium is not necessarily reached, or the products may contain unburnt species such as carbon monoxide, hydrogen and even carbon (soot or ash). Thus, the produced smoke is usually toxic and contains unburned or partially oxidized products. Any combustion at high temperatures in atmospheric air, which is 78 percent nitrogen, will also create small amounts of several nitrogen oxides, commonly referred to as NOx, since the combustion of nitrogen is thermodynamically favored at high, but not low, temperatures. Since burning is rarely clean, flue gas cleaning or catalytic converters may be required by law. Fires occur naturally, ignited by lightning strikes or by volcanic products. Combustion (fire) was the first controlled chemical reaction discovered by humans, in the form of campfires and bonfires, and continues to be the main method to produce energy for humanity. Usually, the fuel is carbon, hydrocarbons, or more complicated mixtures such as wood that contain partially oxidized hydrocarbons. The thermal energy produced from the combustion of either fossil fuels such as coal or oil, or from renewable fuels such as firewood, is harvested for diverse uses such as cooking, production of electricity or industrial or domestic heating. Combustion is also currently the only reaction used to power rockets. Combustion is also used to destroy (incinerate) waste, both nonhazardous and hazardous. Oxidants for combustion have high oxidation potential and include atmospheric or pure oxygen, chlorine, fluorine, chlorine trifluoride, nitrous oxide and nitric acid. For instance, hydrogen burns in chlorine to form hydrogen chloride with the liberation of heat and light characteristic of combustion. Although usually not catalyzed, combustion can be catalyzed by platinum or vanadium, as in the contact process.
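Referring back to the hydrogen–oxygen example above, a minimal Python sketch (assuming, as quoted, roughly 242 kJ released per mole of H2 burned to water vapour) gives the heat released by a given quantity of hydrogen:

```python
# Heat released when n moles of hydrogen burn to water vapour, using the ~242 kJ/mol
# figure quoted above (per mole of H2, i.e. per mole of H2O formed). Illustrative only.

HEAT_PER_MOL_H2_KJ = 242.0
MOLAR_MASS_H2_G = 2.016

def heat_released_kj(mol_h2: float) -> float:
    return mol_h2 * HEAT_PER_MOL_H2_KJ

mol = 1000.0 / MOLAR_MASS_H2_G          # moles of H2 in 1 kg of hydrogen
print(f"{heat_released_kj(mol):,.0f} kJ per kg of H2")   # ~120,000 kJ, i.e. about 120 MJ/kg
```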
Types Complete and incomplete Complete In complete combustion, the reactant burns in oxygen and produces a limited number of products. When a hydrocarbon burns in oxygen, the reaction will primarily yield carbon dioxide and water. When elements are burned, the products are primarily the most common oxides. Carbon will yield carbon dioxide, sulfur will yield sulfur dioxide, and iron will yield iron(III) oxide. Nitrogen is not considered to be a combustible substance when oxygen is the oxidant. Still, small amounts of various nitrogen oxides (commonly designated NOx species) form when air is the oxidant. Combustion is not necessarily favorable to the maximum degree of oxidation, and it can be temperature-dependent. For example, sulfur trioxide is not produced quantitatively by the combustion of sulfur. NOx species appear in significant amounts above about , and more is produced at higher temperatures. The amount of NOx is also a function of oxygen excess. In most industrial applications and in fires, air is the source of oxygen (O2). In air, each mole of oxygen is mixed with approximately of nitrogen. Nitrogen does not take part in combustion, but at high temperatures, some nitrogen will be converted to NOx (mostly NO, with much smaller amounts of NO2). On the other hand, when there is insufficient oxygen to combust the fuel completely, some fuel carbon is converted to carbon monoxide, and some of the hydrogen remains unreacted. A complete set of equations for the combustion of a hydrocarbon in air, therefore, requires an additional calculation for the distribution of oxygen between the carbon and hydrogen in the fuel. The amount of air required for complete combustion is known as the "theoretical air" or "stoichiometric air". The amount of air above this value actually needed for optimal combustion is known as the "excess air", and can vary from 5% for a natural gas boiler, to 40% for anthracite coal, to 300% for a gas turbine. Incomplete Incomplete combustion will occur when there is not enough oxygen to allow the fuel to react completely to produce carbon dioxide and water. It also happens when the combustion is quenched by a heat sink, such as a solid surface or flame trap. As is the case with complete combustion, water is produced by incomplete combustion; however, carbon and carbon monoxide are produced instead of carbon dioxide. For most fuels, such as diesel oil, coal, or wood, pyrolysis occurs before combustion. In incomplete combustion, products of pyrolysis remain unburnt and contaminate the smoke with noxious particulate matter and gases. Partially oxidized compounds are also a concern; partial oxidation of ethanol can produce harmful acetaldehyde, and carbon can produce toxic carbon monoxide. The design of combustion devices, such as burners and internal combustion engines, can improve the quality of combustion. Further improvements are achievable by catalytic after-burning devices (such as catalytic converters) or by the simple partial return of the exhaust gases into the combustion process. Such devices are required by environmental legislation for cars in most countries. They may be necessary to enable large combustion devices, such as thermal power stations, to reach legal emission standards. The degree of combustion can be measured and analyzed with test equipment. HVAC contractors, firefighters and engineers use combustion analyzers to test the efficiency of a burner during the combustion process.
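As a sketch of the "theoretical air" and "excess air" ideas above, the following Python fragment computes the stoichiometric air requirement for a simple CxHy fuel and scales it by an excess-air percentage. The factor of 4.76 mol of air per mol of O2 is an assumption (dry air with about 21% O2 by volume), and the figures are illustrative:

```python
# Theoretical (stoichiometric) air and excess air for a CxHy fuel, assuming dry air
# contains ~21% O2 by volume, i.e. ~4.76 mol of air per mol of O2. Illustrative sketch.

def theoretical_air_mol(x: int, y: int) -> float:
    o2_needed = x + y / 4            # mol O2 per mol fuel for complete combustion
    return o2_needed * 4.76          # mol air per mol fuel

def actual_air_mol(x: int, y: int, excess_air_pct: float) -> float:
    return theoretical_air_mol(x, y) * (1 + excess_air_pct / 100)

# Methane (CH4) in a natural-gas boiler run with 5% excess air, as quoted above:
print(round(theoretical_air_mol(1, 4), 2))    # 9.52 mol air per mol CH4
print(round(actual_air_mol(1, 4, 5), 2))      # 10.0 mol air per mol CH4
```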
Also, the efficiency of an internal combustion engine can be measured in this way, and some U.S. states and local municipalities use combustion analysis to define and rate the efficiency of vehicles on the road today. Carbon monoxide is one of the products of incomplete combustion. The formation of carbon monoxide produces less heat than the formation of carbon dioxide, so complete combustion is greatly preferred, especially as carbon monoxide is a poisonous gas. When breathed, carbon monoxide takes the place of oxygen and combines with some of the hemoglobin in the blood, rendering it unable to transport oxygen. Problems associated with incomplete combustion Environmental problems These oxides combine with water and oxygen in the atmosphere, creating nitric acid and sulfuric acids, which return to Earth's surface as acid deposition, or "acid rain." Acid deposition harms aquatic organisms and kills trees. Because it makes certain nutrients that plants need, such as calcium and phosphorus, less available, it reduces the productivity of ecosystems and farms. An additional problem associated with nitrogen oxides is that they, along with hydrocarbon pollutants, contribute to the formation of ground level ozone, a major component of smog. Human health problems Breathing carbon monoxide causes headache, dizziness, vomiting, and nausea. If carbon monoxide levels are high enough, humans become unconscious or die. Exposure to moderate and high levels of carbon monoxide over long periods is positively correlated with the risk of heart disease. People who survive severe carbon monoxide poisoning may suffer long-term health problems. Carbon monoxide from the air is absorbed in the lungs and then binds with hemoglobin in human red blood cells. This reduces the capacity of red blood cells to carry oxygen throughout the body. Smoldering Smoldering is the slow, low-temperature, flameless form of combustion, sustained by the heat evolved when oxygen directly attacks the surface of a condensed-phase fuel. It is a typically incomplete combustion reaction. Solid materials that can sustain a smoldering reaction include coal, cellulose, wood, cotton, tobacco, peat, duff, humus, synthetic foams, charring polymers (including polyurethane foam) and dust. Common examples of smoldering phenomena are the initiation of residential fires on upholstered furniture by weak heat sources (e.g., a cigarette, a short-circuited wire) and the persistent combustion of biomass behind the flaming fronts of wildfires. Spontaneous Spontaneous combustion is a type of combustion that occurs by self-heating (increase in temperature due to exothermic internal reactions), followed by thermal runaway (self-heating which rapidly accelerates to high temperatures) and finally, ignition. For example, phosphorus self-ignites at room temperature without the application of heat. Organic materials undergoing bacterial composting can generate enough heat to reach the point of combustion. Turbulent Combustion resulting in a turbulent flame is the type most used for industrial applications (e.g. gas turbines, gasoline engines, etc.) because the turbulence helps the mixing process between the fuel and oxidizer. Micro-gravity The term 'micro' gravity refers to a gravitational state that is 'low' (i.e., 'micro' in the sense of 'small' and not necessarily a millionth of Earth's normal gravity) such that the influence of buoyancy on physical processes may be considered small relative to other flow processes that would be present at normal gravity.
In such an environment, the thermal and flow transport dynamics can behave quite differently than in normal gravity conditions (e.g., a candle flame takes the shape of a sphere). Microgravity combustion research contributes to the understanding of a wide variety of aspects that are relevant to both the environment of a spacecraft (e.g., fire dynamics relevant to crew safety on the International Space Station) and terrestrial (Earth-based) conditions (e.g., droplet combustion dynamics to assist in developing new fuel blends for improved combustion, materials fabrication processes, thermal management of electronic systems, multiphase flow boiling dynamics, and many others). Micro-combustion Combustion processes that happen in very small volumes are considered micro-combustion. The high surface-to-volume ratio increases specific heat loss. Quenching distance plays a vital role in stabilizing the flame in such combustion chambers. Chemical equations Stoichiometric combustion of a hydrocarbon in oxygen Generally, the chemical equation for stoichiometric combustion of a hydrocarbon in oxygen is: C_xH_y + zO_2 -> xCO_2 + (y/2)H_2O, where z = x + y/4. For example, the stoichiometric burning of propane (the fuel) in oxygen is: C_3H_8 + 5O_2 -> 3CO_2 + 4H_2O. Stoichiometric combustion of a hydrocarbon in air If the stoichiometric combustion takes place using air as the oxygen source, the nitrogen present in the air (Atmosphere of Earth) can be added to the equation (although it does not react) to show the stoichiometric composition of the fuel in air and the composition of the resultant flue gas. Treating all non-oxygen components in air as nitrogen gives a 'nitrogen' to oxygen ratio of 3.77, i.e. (100% - O2%) / O2% where O2% is 20.95% vol: C_xH_y + zO_2 + 3.77zN_2 -> xCO_2 + (y/2)H_2O + 3.77zN_2, where z = x + y/4. For example, the stoichiometric combustion of propane (C3H8) in air is: C_3H_8 + 5O_2 + 18.87N_2 -> 3CO_2 + 4H_2O + 18.87N_2. The stoichiometric composition of propane in air is 1 / (1 + 5 + 18.87) = 4.02% vol. The stoichiometric combustion reaction for a fuel C_aH_bO_c in air: C_aH_bO_c + (a + b/4 - c/2)(O_2 + 3.77N_2) -> aCO_2 + (b/2)H_2O + 3.77(a + b/4 - c/2)N_2. The stoichiometric combustion reaction for C_aH_bO_cS_d: C_aH_bO_cS_d + (a + b/4 - c/2 + d)(O_2 + 3.77N_2) -> aCO_2 + (b/2)H_2O + dSO_2 + 3.77(a + b/4 - c/2 + d)N_2. The stoichiometric combustion reaction for C_aH_bO_cN_dS_e, with fuel-bound nitrogen conventionally assumed to leave as N_2: C_aH_bO_cN_dS_e + (a + b/4 - c/2 + e)(O_2 + 3.77N_2) -> aCO_2 + (b/2)H_2O + eSO_2 + (d/2)N_2 + 3.77(a + b/4 - c/2 + e)N_2. The stoichiometric combustion reaction for C_aH_bO_cF_d, with fluorine leaving as HF and so consuming hydrogen that would otherwise form water: C_aH_bO_cF_d + (a + (b - d)/4 - c/2)(O_2 + 3.77N_2) -> aCO_2 + ((b - d)/2)H_2O + dHF + 3.77(a + (b - d)/4 - c/2)N_2. Trace combustion products Various other substances begin to appear in significant amounts in combustion products when the flame temperature is above about . When excess air is used, nitrogen may oxidize to NO and, to a much lesser extent, to NO_2. CO forms by disproportionation of CO_2, and H_2 and OH form by disproportionation of H_2O. For example, when of propane is burned with of air (120% of the stoichiometric amount), the combustion products contain 3.3% . At , the equilibrium combustion products contain 0.03% and 0.002% . At , the combustion products contain 0.17% , 0.05% , 0.01% , and 0.004% . Diesel engines are run with an excess of oxygen to combust small particles that tend to form with only a stoichiometric amount of oxygen, necessarily producing nitrogen oxide emissions. Both the United States and European Union enforce limits on vehicle nitrogen oxide emissions, which necessitate the use of special catalytic converters or treatment of the exhaust with urea (see Diesel exhaust fluid). Incomplete combustion of a hydrocarbon in oxygen The incomplete (partial) combustion of a hydrocarbon with oxygen produces a gas mixture containing mainly CO_2, CO, H_2O, and H_2. Such gas mixtures are commonly prepared for use as protective atmospheres for the heat-treatment of metals and for gas carburizing.
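The propane-in-air figures above can be checked with a few lines of Python; the sketch below uses the 3.77 nitrogen-to-oxygen ratio quoted in the text and reproduces the 4.02 vol% fuel fraction (the function name is only for illustration):

```python
# Fuel fraction of a CxHy fuel in its stoichiometric mixture with air, lumping all
# non-oxygen components of air as "nitrogen" with the 3.77 N2-per-O2 ratio quoted above.

N2_PER_O2 = 3.77

def stoich_mixture_fuel_fraction(x: int, y: int) -> float:
    o2 = x + y / 4                    # mol O2 per mol fuel
    n2 = N2_PER_O2 * o2               # mol N2 accompanying that oxygen
    return 1.0 / (1.0 + o2 + n2)

# Propane, C3H8: 5 mol O2 and about 18.9 mol N2 per mol of fuel
print(f"{stoich_mixture_fuel_fraction(3, 8):.2%}")   # ~4.02%
```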
The general reaction equation for incomplete combustion of one mole of a hydrocarbon in oxygen is: C_xH_y (fuel) + zO_2 (oxygen) -> aCO_2 (carbon dioxide) + bCO (carbon monoxide) + cH_2O (water) + dH_2 (hydrogen). When z falls below roughly 50% of the stoichiometric value, CH_4 can become an important combustion product; when z falls below roughly 35% of the stoichiometric value, elemental carbon may become stable. The products of incomplete combustion can be calculated with the aid of a material balance, together with the assumption that the combustion products reach equilibrium. For example, in the combustion of one mole of propane (C_3H_8) with four moles of O_2, seven moles of combustion gas are formed, and z is 80% of the stoichiometric value. The three elemental balance equations are: Carbon: a + b = 3; Hydrogen: 2c + 2d = 8; Oxygen: 2a + b + c = 8. These three equations are insufficient in themselves to calculate the combustion gas composition. However, at the equilibrium position, the water-gas shift reaction gives another equation: CO + H_2O -> CO_2 + H_2, with equilibrium constant K = (a·d)/(b·c); at the temperature considered, the value of K is 0.728. Solving, the combustion gas consists of 42.4% H_2O, 29.0% CO_2, 14.7% H_2, and 13.9% CO. Carbon becomes a stable phase at a given temperature and pressure when z is less than 30% of the stoichiometric value, at which point the combustion products consist of more than 98% CO and H_2, together with about 0.5% of other species (such as CH_4). Substances or materials which undergo combustion are called fuels. The most common examples are natural gas, propane, kerosene, diesel, petrol, charcoal, coal, wood, etc. Liquid fuels Combustion of a liquid fuel in an oxidizing atmosphere actually happens in the gas phase. It is the vapor that burns, not the liquid. Therefore, a liquid will normally catch fire only above a certain temperature: its flash point. The flash point of a liquid fuel is the lowest temperature at which it can form an ignitable mix with air. It is the minimum temperature at which there is enough evaporated fuel in the air to start combustion. Gaseous fuels Combustion of gaseous fuels may occur through one of four distinctive types of burning: diffusion flame, premixed flame, autoignitive reaction front, or as a detonation. The type of burning that actually occurs depends on the degree to which the fuel and oxidizer are mixed prior to heating: for example, a diffusion flame is formed if the fuel and oxidizer are separated initially, whereas a premixed flame is formed otherwise. Similarly, the type of burning also depends on the pressure: a detonation, for example, is an autoignitive reaction front coupled to a strong shock wave, giving it its characteristic high-pressure peak and high detonation velocity. Solid fuels The act of combustion consists of three relatively distinct but overlapping phases: Preheating phase, when the unburned fuel is heated up to its flash point and then fire point. Flammable gases start being evolved in a process similar to dry distillation. Distillation phase or gaseous phase, when the mix of evolved flammable gases with oxygen is ignited. Energy is produced in the form of heat and light. Flames are often visible. Heat transfer from the combustion to the solid maintains the evolution of flammable vapours. Charcoal phase or solid phase, when the output of flammable gases from the material is too low for the persistent presence of flame and the charred fuel does not burn rapidly, but just glows and later only smoulders.
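The worked propane example above can be reproduced directly: the three element balances plus the water-gas shift equilibrium reduce to a quadratic in the amount of CO2. A short Python sketch, using only the numbers quoted in the text, is:

```python
# 1 mol propane burned with 4 mol O2 (z = 80% of stoichiometric):
#   products a CO2 + b CO + c H2O + d H2, with
#   carbon:   a + b = 3
#   hydrogen: 2c + 2d = 8
#   oxygen:   2a + b + c = 8
# closed by the water-gas shift equilibrium K = (a*d)/(b*c) = 0.728 as quoted above.

K = 0.728

# Eliminating b = 3 - a, c = 5 - a, d = a - 1 gives (1 - K)a^2 + (8K - 1)a - 15K = 0.
A, B, C = 1 - K, 8 * K - 1, -15 * K
a = (-B + (B * B - 4 * A * C) ** 0.5) / (2 * A)   # physically meaningful root
b, c, d = 3 - a, 5 - a, a - 1
total = a + b + c + d                              # 7 mol of combustion gas

for name, n in (("H2O", c), ("CO2", a), ("H2", d), ("CO", b)):
    print(f"{name}: {n / total:.1%}")
# Prints approximately H2O 42.4%, CO2 29.0%, H2 14.7%, CO 13.8-13.9%,
# matching the composition quoted in the text to within rounding.
```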
Combustion management Efficient process heating requires recovery of the largest possible part of a fuel's heat of combustion into the material being processed. There are many avenues of loss in the operation of a heating process. Typically, the dominant loss is sensible heat leaving with the offgas (i.e., the flue gas). The temperature and quantity of offgas indicates its heat content (enthalpy), so keeping its quantity low minimizes heat loss. In a perfect furnace, the combustion air flow would be matched to the fuel flow to give each fuel molecule the exact amount of oxygen needed to cause complete combustion. However, in the real world, combustion does not proceed in a perfect manner. Unburned fuel (usually and ) discharged from the system represents a heating value loss (as well as a safety hazard). Since combustibles are undesirable in the offgas, while the presence of unreacted oxygen there presents minimal safety and environmental concerns, the first principle of combustion management is to provide more oxygen than is theoretically needed to ensure that all the fuel burns. For methane () combustion, for example, slightly more than two molecules of oxygen are required. The second principle of combustion management, however, is to not use too much oxygen. The correct amount of oxygen requires three types of measurement: first, active control of air and fuel flow; second, offgas oxygen measurement; and third, measurement of offgas combustibles. For each heating process, there exists an optimum condition of minimal offgas heat loss with acceptable levels of combustibles concentration. Minimizing excess oxygen pays an additional benefit: for a given offgas temperature, the NOx level is lowest when excess oxygen is kept lowest. Adherence to these two principles is furthered by making material and heat balances on the combustion process. The material balance directly relates the air/fuel ratio to the percentage of in the combustion gas. The heat balance relates the heat available for the charge to the overall net heat produced by fuel combustion. Additional material and heat balances can be made to quantify the thermal advantage from preheating the combustion air, or enriching it in oxygen. Reaction mechanism Combustion in oxygen is a chain reaction in which many distinct radical intermediates participate. The high energy required for initiation is explained by the unusual structure of the dioxygen molecule. The lowest-energy configuration of the dioxygen molecule is a stable, relatively unreactive diradical in a triplet spin state. Bonding can be described with three bonding electron pairs and two antibonding electrons, with spins aligned, such that the molecule has nonzero total angular momentum. Most fuels, on the other hand, are in a singlet state, with paired spins and zero total angular momentum. Interaction between the two is quantum mechanically a "forbidden transition", i.e. possible with a very low probability. To initiate combustion, energy is required to force dioxygen into a spin-paired state, or singlet oxygen. This intermediate is extremely reactive. The energy is supplied as heat, and the reaction then produces additional heat, which allows it to continue. Combustion of hydrocarbons is thought to be initiated by hydrogen atom abstraction (not proton abstraction) from the fuel to oxygen, to give a hydroperoxide radical (HOO). This reacts further to give hydroperoxides, which break up to give hydroxyl radicals. 
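As an illustration of how the material balance ties the air/fuel ratio to the oxygen reading in the offgas, the following Python sketch computes the dry flue-gas O2 fraction for methane at a few excess-air levels. Air is assumed to be 21% O2 / 79% N2; this is a simplified single-fuel example, not a general analyser model:

```python
# Link between excess air and dry flue-gas O2 for methane (CH4 + 2 O2 -> CO2 + 2 H2O),
# assuming 3.76 mol N2 accompanies each mol of O2 in air. Illustrative sketch only.

def dry_flue_gas_o2_fraction(excess_air: float) -> float:
    """excess_air as a fraction, e.g. 0.15 for 15% excess air."""
    o2_stoich = 2.0                          # mol O2 needed per mol CH4
    o2_supplied = o2_stoich * (1 + excess_air)
    n2 = 3.76 * o2_supplied
    o2_left = o2_stoich * excess_air         # unreacted oxygen leaving with the offgas
    co2 = 1.0
    dry_total = co2 + o2_left + n2           # water vapour is excluded from a "dry" reading
    return o2_left / dry_total

for excess in (0.05, 0.15, 0.40):
    print(f"{excess:.0%} excess air -> {dry_flue_gas_o2_fraction(excess):.1%} O2 in dry flue gas")
```

Readings of this kind are what a flue-gas oxygen probe reports, which is why minimizing excess oxygen and minimizing offgas heat loss go hand in hand.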
There are a great variety of these processes that produce fuel radicals and oxidizing radicals. Oxidizing species include singlet oxygen, hydroxyl, monatomic oxygen, and hydroperoxyl. Such intermediates are short-lived and cannot be isolated. However, non-radical intermediates are stable and are produced in incomplete combustion. An example is acetaldehyde produced in the combustion of ethanol. An intermediate in the combustion of carbon and hydrocarbons, carbon monoxide, is of special importance because it is a poisonous gas, but also economically useful for the production of syngas. Solid and heavy liquid fuels also undergo a great number of pyrolysis reactions that give more easily oxidized, gaseous fuels. These reactions are endothermic and require constant energy input from the ongoing combustion reactions. A lack of oxygen or other improperly designed conditions result in these noxious and carcinogenic pyrolysis products being emitted as thick, black smoke. The rate of combustion is the amount of a material that undergoes combustion over a period of time. It can be expressed in grams per second (g/s) or kilograms per second (kg/s). Detailed descriptions of combustion processes, from the chemical kinetics perspective, require the formulation of large and intricate webs of elementary reactions. For instance, combustion of hydrocarbon fuels typically involve hundreds of chemical species reacting according to thousands of reactions. The inclusion of such mechanisms within computational flow solvers still represents a pretty challenging task mainly in two aspects. First, the number of degrees of freedom (proportional to the number of chemical species) can be dramatically large; second, the source term due to reactions introduces a disparate number of time scales which makes the whole dynamical system stiff. As a result, the direct numerical simulation of turbulent reactive flows with heavy fuels soon becomes intractable even for modern supercomputers. Therefore, a plethora of methodologies have been devised for reducing the complexity of combustion mechanisms without resorting to high detail levels. Examples are provided by: The Relaxation Redistribution Method (RRM) The Intrinsic Low-Dimensional Manifold (ILDM) approach and further developments The invariant-constrained equilibrium edge preimage curve method. A few variational approaches The Computational Singular perturbation (CSP) method and further developments. The Rate Controlled Constrained Equilibrium (RCCE) and Quasi Equilibrium Manifold (QEM) approach. The G-Scheme. The Method of Invariant Grids (MIG). Kinetic modelling The kinetic modelling may be explored for insight into the reaction mechanisms of thermal decomposition in the combustion of different materials by using for instance Thermogravimetric analysis. Temperature Assuming perfect combustion conditions, such as complete combustion under adiabatic conditions (i.e., no heat loss or gain), the adiabatic combustion temperature can be determined. The formula that yields this temperature is based on the first law of thermodynamics and takes note of the fact that the heat of combustion is used entirely for heating the fuel, the combustion air or oxygen, and the combustion product gases (commonly referred to as the flue gas). In the case of fossil fuels burnt in air, the combustion temperature depends on all of the following: the heating value; the stoichiometric air to fuel ratio ; the specific heat capacity of fuel and air; the air and fuel inlet temperatures. 
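A very rough first-law estimate along the lines described above (all of the heat of combustion heating the product gases, with a single average heat capacity) can be sketched as follows; the heating value, air-fuel ratio and heat capacity used are illustrative assumptions, not values from the text:

```python
# Crude adiabatic flame temperature estimate from an energy balance:
#   LHV * m_fuel = (m_fuel + m_air) * cp_avg * (T_ad - T_in)
# Real calculations use temperature-dependent heat capacities and account for dissociation.

def adiabatic_flame_temp_K(t_in_K: float, lhv_J_per_kg: float,
                           air_fuel_ratio: float, cp_J_per_kgK: float) -> float:
    return t_in_K + lhv_J_per_kg / ((1 + air_fuel_ratio) * cp_J_per_kgK)

# Methane-like fuel: LHV ~50 MJ/kg, stoichiometric air-fuel ratio ~17.2,
# average product-gas cp ~1400 J/(kg K) (all assumed values).
print(f"{adiabatic_flame_temp_K(298.0, 50e6, 17.2, 1400.0):.0f} K")   # ~2260 K
```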
The adiabatic combustion temperature (also known as the adiabatic flame temperature) increases for higher heating values and inlet air and fuel temperatures and for stoichiometric air ratios approaching one. Most commonly, the adiabatic combustion temperatures for coals are around (for inlet air and fuel at ambient temperatures and for ), around for oil and for natural gas. In industrial fired heaters, power station steam generators, and large gas-fired turbines, the more common way of expressing the usage of more than the stoichiometric combustion air is percent excess combustion air. For example, excess combustion air of 15 percent means that 15 percent more than the required stoichiometric air is being used. Instabilities Combustion instabilities are typically violent pressure oscillations in a combustion chamber. These pressure oscillations can be as high as 180 dB, and long-term exposure to these cyclic pressure and thermal loads reduces the life of engine components. In rockets, such as the F-1 engine used in the Saturn V program, instabilities led to massive damage to the combustion chamber and surrounding components. This problem was solved by re-designing the fuel injector. In liquid jet engines, the droplet size and distribution can be used to attenuate the instabilities. Combustion instabilities are a major concern in ground-based gas turbine engines because of emissions. The tendency is to run lean, at an equivalence ratio less than 1, to reduce the combustion temperature and thus reduce the emissions; however, running the combustion lean makes it very susceptible to combustion instability. The Rayleigh Criterion is the basis for analysis of thermoacoustic combustion instability; it is evaluated using the Rayleigh Index over one cycle of instability, G = \frac{1}{T}\int_{T} q'(t)\,p'(t)\,dt, where q' is the heat release rate perturbation and p' is the pressure fluctuation. When the heat release oscillations are in phase with the pressure oscillations, the Rayleigh Index is positive and the magnitude of the thermoacoustic instability is maximised. On the other hand, if the Rayleigh Index is negative, then thermoacoustic damping occurs. The Rayleigh Criterion implies that thermoacoustic instability can be optimally controlled by having heat release oscillations 180 degrees out of phase with pressure oscillations at the same frequency. This minimizes the Rayleigh Index. See also Related concepts Air–fuel ratio Autoignition temperature Chemical looping combustion Deflagration Detonation Explosion Fire Flame Heterogeneous combustion Markstein number Phlogiston theory (historical) Spontaneous combustion Machines and equipment Boiler Bunsen burner External combustion engine Furnace Gas turbine Internal combustion engine Rocket engine Scientific and engineering societies International Flame Research Foundation The Combustion Institute Other List of light sources References Further reading Chemical reactions
https://en.wikipedia.org/wiki/Consciousness
Consciousness, at its simplest, is awareness of internal and external existence. However, its nature has led to millennia of analyses, explanations and debate by philosophers, theologians, and all of science. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination and volition. Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not. The disparate range of research, notions and speculations raises a curiosity about whether the right questions are being asked. Examples of the range of descriptions, definitions or explanations are: simple wakefulness, one's sense of selfhood or soul explored by "looking within"; being a metaphorical "stream" of contents, or being a mental state, mental event or mental process of the brain. Etymology In the late 20th century, philosophers like Hamlyn, Rorty, and Wilkes have disagreed with Kahn, Hardie and Modrak as to whether Aristotle even had a concept of consciousness. Aristotle does not use any single word or terminology to name the phenomenon; it is used only much later, especially by John Locke. Caston contends that for Aristotle, perceptual awareness was somewhat the same as what modern philosophers call consciousness. The origin of the modern concept of consciousness is often attributed to Locke's Essay Concerning Human Understanding, published in 1690. Locke defined consciousness as "the perception of what passes in a man's own mind". His essay influenced the 18th-century view of consciousness, and his definition appeared in Samuel Johnson's celebrated Dictionary (1755). "Consciousness" (French: conscience) is also defined in the 1753 volume of Diderot and d'Alembert's Encyclopédie, as "the opinion or internal feeling that we ourselves have from what we do". The earliest English language uses of "conscious" and "consciousness" date back, however, to the 1500s. The English word "conscious" originally derived from the Latin conscius (con- "together" and scio "to know"), but the Latin word did not have the same meaning as the English word—it meant "knowing with", in other words, "having joint or common knowledge with another". There were, however, many occurrences in Latin writings of the phrase conscius sibi, which translates literally as "knowing with oneself", or in other words "sharing knowledge with oneself about something". This phrase had the figurative meaning of "knowing that one knows", as the modern English word "conscious" does. In its earliest uses in the 1500s, the English word "conscious" retained the meaning of the Latin conscius. For example, Thomas Hobbes in Leviathan wrote: "Where two, or more men, know of one and the same fact, they are said to be Conscious of it one to another." The Latin phrase conscius sibi, whose meaning was more closely related to the current concept of consciousness, was rendered in English as "conscious to oneself" or "conscious unto oneself". For example, Archbishop Ussher wrote in 1613 of "being so conscious unto myself of my great weakness". Locke's definition from 1690 illustrates that a gradual shift in meaning had taken place. A related word was conscientia, which primarily means moral conscience. 
In the literal sense, "conscientia" means knowledge-with, that is, shared knowledge. The word first appears in Latin juridical texts by writers such as Cicero. Here, conscientia is the knowledge that a witness has of the deed of someone else. René Descartes (1596–1650) is generally taken to be the first philosopher to use conscientia in a way that does not fit this traditional meaning. Descartes used conscientia the way modern speakers would use "conscience". In Search after Truth (Amsterdam, 1701) he says "conscience or internal testimony" (conscientiâ, vel interno testimonio).

The problem of definition

The dictionary definitions of the word consciousness extend through several centuries and reflect a range of seemingly related meanings, with some differences that have been controversial, such as the distinction between 'inward awareness' and 'perception' of the physical world, or the distinction between 'conscious' and 'unconscious', or the notion of a "mental entity" or "mental activity" that is not physical. The common usage definitions of consciousness in Webster's Third New International Dictionary (1966 edition, Volume 1, page 482) are as follows:
- awareness or perception of an inward psychological or spiritual fact; intuitively perceived knowledge of something in one's inner self
- inward awareness of an external object, state, or fact
- concerned awareness; INTEREST, CONCERN—often used with an attributive noun [e.g. class consciousness]
- the state or activity that is characterized by sensation, emotion, volition, or thought; mind in the broadest possible sense; something in nature that is distinguished from the physical
- the totality in psychology of sensations, perceptions, ideas, attitudes, and feelings of which an individual or a group is aware at any given time or within a particular time span—compare STREAM OF CONSCIOUSNESS
- waking life (as that to which one returns after sleep, trance, fever) wherein all one's mental powers have returned . . .
- the part of mental life or psychic content in psychoanalysis that is immediately available to the ego—compare PRECONSCIOUS, UNCONSCIOUS
The Cambridge Dictionary defines consciousness as "the state of understanding and realizing something." The Oxford Living Dictionary defines consciousness as "The state of being aware of and responsive to one's surroundings.", "A person's awareness or perception of something." and "The fact of awareness by the mind of itself and the world." Philosophers have attempted to clarify technical distinctions by using a jargon of their own. The Routledge Encyclopedia of Philosophy in 1998 defines consciousness as follows: Many philosophers and scientists have been unhappy about the difficulty of producing a definition that does not involve circularity or fuzziness. In The Macmillan Dictionary of Psychology (1989 edition), Stuart Sutherland expressed a skeptical attitude more than a definition: A partisan definition such as Sutherland's can hugely affect researchers' assumptions and the direction of their work: Many philosophers have argued that consciousness is a unitary concept that is understood by the majority of people despite the difficulty philosophers have had defining it.
Others, though, have argued that the level of disagreement about the meaning of the word indicates that it either means different things to different people (for instance, the objective versus subjective aspects of consciousness), that it encompasses a variety of distinct meanings with no simple element in common, or that we should eliminate this concept from our understanding of the mind, a position known as consciousness semanticism. Inter-disciplinary perspectives Western philosophers since the time of Descartes and Locke have struggled to comprehend the nature of consciousness and how it fits into a larger picture of the world. These questions remain central to both continental and analytic philosophy, in phenomenology and the philosophy of mind, respectively. Consciousness has also become a significant topic of interdisciplinary research in cognitive science, involving fields such as psychology, linguistics, anthropology, neuropsychology and neuroscience. The primary focus is on understanding what it means biologically and psychologically for information to be present in consciousness—that is, on determining the neural and psychological correlates of consciousness. In medicine, consciousness is assessed by observing a patient's arousal and responsiveness, and can be seen as a continuum of states ranging from full alertness and comprehension, through disorientation, delirium, loss of meaningful communication, and finally loss of movement in response to painful stimuli. Issues of practical concern include how the presence of consciousness can be assessed in severely ill, comatose, or anesthetized people, and how to treat conditions in which consciousness is impaired or disrupted. The degree of consciousness is measured by standardized behavior observation scales such as the Glasgow Coma Scale. Philosophy of mind Most writers on the philosophy of consciousness have been concerned with defending a particular point of view, and have organized their material accordingly. For surveys, the most common approach is to follow a historical path by associating stances with the philosophers who are most strongly associated with them, for example, Descartes, Locke, Kant, etc. An alternative is to organize philosophical stances according to basic issues. Coherence of the concept Philosophers differ from non-philosophers in their intuitions about what consciousness is. While most people have a strong intuition for the existence of what they refer to as consciousness, skeptics argue that this intuition is false, either because the concept of consciousness is intrinsically incoherent, or because our intuitions about it are based in illusions. Gilbert Ryle, for example, argued that traditional understanding of consciousness depends on a Cartesian dualist outlook that improperly distinguishes between mind and body, or between mind and world. He proposed that we speak not of minds, bodies, and the world, but of individuals, or persons, acting in the world. Thus, by speaking of "consciousness" we end up misleading ourselves by thinking that there is any sort of thing as consciousness separated from behavioral and linguistic understandings. Types Ned Block argued that discussions on consciousness often failed to properly distinguish phenomenal (P-consciousness) from access (A-consciousness), though these terms had been used before Block. P-consciousness, according to Block, is raw experience: it is moving, colored forms, sounds, sensations, emotions and feelings with our bodies and responses at the center. 
These experiences, considered independently of any impact on behavior, are called qualia. A-consciousness, on the other hand, is the phenomenon whereby information in our minds is accessible for verbal report, reasoning, and the control of behavior. So, when we perceive, information about what we perceive is access conscious; when we introspect, information about our thoughts is access conscious; when we remember, information about the past is access conscious, and so on. Although some philosophers, such as Daniel Dennett, have disputed the validity of this distinction, others have broadly accepted it. David Chalmers has argued that A-consciousness can in principle be understood in mechanistic terms, but that understanding P-consciousness is much more challenging: he calls this the hard problem of consciousness. Some philosophers believe that Block's two types of consciousness are not the end of the story. William Lycan, for example, argued in his book Consciousness and Experience that at least eight clearly distinct types of consciousness can be identified (organism consciousness; control consciousness; consciousness of; state/event consciousness; reportability; introspective consciousness; subjective consciousness; self-consciousness)—and that even this list omits several more obscure forms. There is also debate over whether or not A-consciousness and P-consciousness always coexist or if they can exist separately. Although P-consciousness without A-consciousness is more widely accepted, there have been some hypothetical examples of A without P. Block, for instance, suggests the case of a "zombie" that is computationally identical to a person but without any subjectivity. However, he remains somewhat skeptical, concluding "I don't know whether there are any actual cases of A-consciousness without P-consciousness, but I hope I have illustrated their conceptual possibility."

Distinguishing consciousness from its contents

Sam Harris observes: "At the level of your experience, you are not a body of cells, organelles, and atoms; you are consciousness and its ever-changing contents". Seen in this way, consciousness is a subjectively experienced, ever-present field in which things (the contents of consciousness) come and go. Christopher Tricker argues that this field of consciousness is symbolized by the mythical bird that opens the Daoist classic the Zhuangzi. This bird's name is Of a Flock (peng 鵬), yet its back is countless thousands of miles across and its wings are like clouds arcing across the heavens. "Like Of a Flock, whose wings arc across the heavens, the wings of your consciousness span to the horizon. At the same time, the wings of every other being's consciousness span to the horizon. You are of a flock, one bird among kin."

Mind–body problem

Mental processes (such as consciousness) and physical processes (such as brain events) seem to be correlated; however, the specific nature of the connection is unknown. The first influential philosopher to discuss this question specifically was Descartes, and the answer he gave is known as Cartesian dualism. Descartes proposed that consciousness resides within an immaterial domain he called res cogitans (the realm of thought), in contrast to the domain of material things, which he called res extensa (the realm of extension). He suggested that the interaction between these two domains occurs inside the brain, perhaps in a small midline structure called the pineal gland.
Although it is widely accepted that Descartes explained the problem cogently, few later philosophers have been happy with his solution, and his ideas about the pineal gland have especially been ridiculed. However, no alternative solution has gained general acceptance. Proposed solutions can be divided broadly into two categories: dualist solutions that maintain Descartes's rigid distinction between the realm of consciousness and the realm of matter but give different answers for how the two realms relate to each other; and monist solutions that maintain that there is really only one realm of being, of which consciousness and matter are both aspects. Each of these categories itself contains numerous variants. The two main types of dualism are substance dualism (which holds that the mind is formed of a distinct type of substance not governed by the laws of physics) and property dualism (which holds that the laws of physics are universally valid but cannot be used to explain the mind). The three main types of monism are physicalism (which holds that the mind consists of matter organized in a particular way), idealism (which holds that only thought or experience truly exists, and matter is merely an illusion), and neutral monism (which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them). There are also, however, a large number of idiosyncratic theories that cannot cleanly be assigned to any of these schools of thought. Since the dawn of Newtonian science with its vision of simple mechanical principles governing the entire universe, some philosophers have been tempted by the idea that consciousness could be explained in purely physical terms. The first influential writer to propose such an idea explicitly was Julien Offray de La Mettrie, in his book Man a Machine (L'homme machine). His arguments, however, were very abstract. The most influential modern physical theories of consciousness are based on psychology and neuroscience. Theories proposed by neuroscientists such as Gerald Edelman and Antonio Damasio, and by philosophers such as Daniel Dennett, seek to explain consciousness in terms of neural events occurring within the brain. Many other neuroscientists, such as Christof Koch, have explored the neural basis of consciousness without attempting to frame all-encompassing global theories. At the same time, computer scientists working in the field of artificial intelligence have pursued the goal of creating digital computer programs that can simulate or embody consciousness. A few theoretical physicists have argued that classical physics is intrinsically incapable of explaining the holistic aspects of consciousness, but that quantum theory may provide the missing ingredients. Several theorists have therefore proposed quantum mind (QM) theories of consciousness. Notable theories falling into this category include the holonomic brain theory of Karl Pribram and David Bohm, and the Orch-OR theory formulated by Stuart Hameroff and Roger Penrose. Some of these QM theories offer descriptions of phenomenal consciousness, as well as QM interpretations of access consciousness. None of the quantum mechanical theories have been confirmed by experiment. Recent publications by G. Guerreshi, J. Cia, S. Popescu, and H. Briegel could falsify proposals such as those of Hameroff, which rely on quantum entanglement in protein. 
At the present time many scientists and philosophers consider the arguments for an important role of quantum phenomena to be unconvincing. Apart from the general question of the "hard problem" of consciousness (which is, roughly speaking, the question of how mental experience can arise from a physical basis), a more specialized question is how to square the subjective notion that we are in control of our decisions (at least in some small measure) with the customary view of causality that subsequent events are caused by prior events. The topic of free will is the philosophical and scientific examination of this conundrum. Problem of other minds Many philosophers consider experience to be the essence of consciousness, and believe that experience can only fully be known from the inside, subjectively. But if consciousness is subjective and not visible from the outside, why do the vast majority of people believe that other people are conscious, but rocks and trees are not? This is called the problem of other minds. It is particularly acute for people who believe in the possibility of philosophical zombies, that is, people who think it is possible in principle to have an entity that is physically indistinguishable from a human being and behaves like a human being in every way but nevertheless lacks consciousness. Related issues have also been studied extensively by Greg Littmann of the University of Illinois, and by Colin Allen (a professor at the University of Pittsburgh) regarding the literature and research studying artificial intelligence in androids. The most commonly given answer is that we attribute consciousness to other people because we see that they resemble us in appearance and behavior; we reason that if they look like us and act like us, they must be like us in other ways, including having experiences of the sort that we do. There are, however, a variety of problems with that explanation. For one thing, it seems to violate the principle of parsimony, by postulating an invisible entity that is not necessary to explain what we observe. Some philosophers, such as Daniel Dennett in a research paper titled "The Unimagined Preposterousness of Zombies", argue that people who give this explanation do not really understand what they are saying. More broadly, philosophers who do not accept the possibility of zombies generally believe that consciousness is reflected in behavior (including verbal behavior), and that we attribute consciousness on the basis of behavior. A more straightforward way of saying this is that we attribute experiences to people because of what they can do, including the fact that they can tell us about their experiences. Scientific study For many decades, consciousness as a research topic was avoided by the majority of mainstream scientists, because of a general feeling that a phenomenon defined in subjective terms could not properly be studied using objective experimental methods. In 1975 George Mandler published an influential psychological study which distinguished between slow, serial, and limited conscious processes and fast, parallel and extensive unconscious ones. The Science and Religion Forum 1984 annual conference, 'From Artificial Intelligence to Human Consciousness' identified the nature of consciousness as a matter for investigation; Donald Michie was a keynote speaker. 
Starting in the 1980s, an expanding community of neuroscientists and psychologists have associated themselves with a field called Consciousness Studies, giving rise to a stream of experimental work published in books, journals such as Consciousness and Cognition, Frontiers in Consciousness Research, Psyche, and the Journal of Consciousness Studies, along with regular conferences organized by groups such as the Association for the Scientific Study of Consciousness and the Society for Consciousness Studies. Modern medical and psychological investigations into consciousness are based on psychological experiments (including, for example, the investigation of priming effects using subliminal stimuli), and on case studies of alterations in consciousness produced by trauma, illness, or drugs. Broadly viewed, scientific approaches are based on two core concepts. The first identifies the content of consciousness with the experiences that are reported by human subjects; the second makes use of the concept of consciousness that has been developed by neurologists and other medical professionals who deal with patients whose behavior is impaired. In either case, the ultimate goals are to develop techniques for assessing consciousness objectively in humans as well as other animals, and to understand the neural and psychological mechanisms that underlie it. Measurement Experimental research on consciousness presents special difficulties, due to the lack of a universally accepted operational definition. In the majority of experiments that are specifically about consciousness, the subjects are human, and the criterion used is verbal report: in other words, subjects are asked to describe their experiences, and their descriptions are treated as observations of the contents of consciousness. For example, subjects who stare continuously at a Necker cube usually report that they experience it "flipping" between two 3D configurations, even though the stimulus itself remains the same. The objective is to understand the relationship between the conscious awareness of stimuli (as indicated by verbal report) and the effects the stimuli have on brain activity and behavior. In several paradigms, such as the technique of response priming, the behavior of subjects is clearly influenced by stimuli for which they report no awareness, and suitable experimental manipulations can lead to increasing priming effects despite decreasing prime identification (double dissociation). Verbal report is widely considered to be the most reliable indicator of consciousness, but it raises a number of issues. For one thing, if verbal reports are treated as observations, akin to observations in other branches of science, then the possibility arises that they may contain errors—but it is difficult to make sense of the idea that subjects could be wrong about their own experiences, and even more difficult to see how such an error could be detected. Daniel Dennett has argued for an approach he calls heterophenomenology, which means treating verbal reports as stories that may or may not be true, but his ideas about how to do this have not been widely adopted. Another issue with verbal report as a criterion is that it restricts the field of study to humans who have language: this approach cannot be used to study consciousness in other species, pre-linguistic children, or people with types of brain damage that impair language. 
As a third issue, philosophers who dispute the validity of the Turing test may feel that it is possible, at least in principle, for verbal report to be dissociated from consciousness entirely: a philosophical zombie may give detailed verbal reports of awareness in the absence of any genuine awareness. Although verbal report is in practice the "gold standard" for ascribing consciousness, it is not the only possible criterion. In medicine, consciousness is assessed as a combination of verbal behavior, arousal, brain activity and purposeful movement. The last three of these can be used as indicators of consciousness when verbal behavior is absent. The scientific literature regarding the neural bases of arousal and purposeful movement is very extensive. Their reliability as indicators of consciousness is disputed, however, due to numerous studies showing that alert human subjects can be induced to behave purposefully in a variety of ways in spite of reporting a complete lack of awareness. Studies of the neuroscience of free will have also shown that the experiences that people report when they behave purposefully sometimes do not correspond to their actual behaviors or to the patterns of electrical activity recorded from their brains. Another approach applies specifically to the study of self-awareness, that is, the ability to distinguish oneself from others. In the 1970s Gordon Gallup developed an operational test for self-awareness, known as the mirror test. The test examines whether animals are able to differentiate between seeing themselves in a mirror versus seeing other animals. The classic example involves placing a spot of coloring on the skin or fur near the individual's forehead and seeing if they attempt to remove it or at least touch the spot, thus indicating that they recognize that the individual they are seeing in the mirror is themselves. Humans (older than 18 months) and other great apes, bottlenose dolphins, orcas, pigeons, European magpies and elephants have all been observed to pass this test. Neural correlates A major part of the scientific literature on consciousness consists of studies that examine the relationship between the experiences reported by subjects and the activity that simultaneously takes place in their brains—that is, studies of the neural correlates of consciousness. The hope is to find that activity in a particular part of the brain, or a particular pattern of global brain activity, which will be strongly predictive of conscious awareness. Several brain imaging techniques, such as EEG and fMRI, have been used for physical measures of brain activity in these studies. Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christof von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience. Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance where the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations. 
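The gamma-band association mentioned above can be illustrated with a short sketch. The following Python snippet is an illustrative sketch only, not code from any of the cited studies; the sampling rate, the 30–80 Hz band edges, and the function names are assumptions. It band-pass filters a single EEG-like trace and reports its mean power in that band.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_power(signal, fs, low=30.0, high=80.0, order=4):
    """Mean power of `signal` within a frequency band (default: a typical gamma range)."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, signal)          # zero-phase band-pass filtering
    return np.mean(filtered ** 2)

# Synthetic example: a 40 Hz component buried in noise.
fs = 500.0                                     # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)
eeg = 0.5 * np.sin(2 * np.pi * 40 * t) + np.random.randn(t.size)
print(band_power(eeg, fs))                     # larger when gamma-range activity is present
```

In practice such a band-power estimate would be computed per channel and per trial and compared between conditions in which subjects do and do not report awareness of a stimulus.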
A number of studies have shown that activity in primary sensory areas of the brain is not sufficient to produce consciousness: it is possible for subjects to report a lack of awareness even when areas such as the primary visual cortex (V1) show clear electrical responses to a stimulus. Higher brain areas are seen as more promising, especially the prefrontal cortex, which is involved in a range of higher cognitive functions collectively known as executive functions. There is substantial evidence that a "top-down" flow of neural activity (i.e., activity propagating from the frontal cortex to sensory areas) is more predictive of conscious awareness than a "bottom-up" flow of activity. The prefrontal cortex is not the only candidate area, however: studies by Nikos Logothetis and his colleagues have shown, for example, that visually responsive neurons in parts of the temporal lobe reflect the visual perception in the situation when conflicting visual images are presented to different eyes (i.e., bistable percepts during binocular rivalry). Furthermore, top-down feedback from higher to lower visual brain areas may be weaker or absent in the peripheral visual field, as suggested by some experimental data and theoretical arguments; nevertheless humans can perceive visual inputs in the peripheral visual field arising from bottom-up V1 neural activities. Meanwhile, bottom-up V1 activities for the central visual fields can be vetoed, and thus made invisible to perception, by the top-down feedback, when these bottom-up signals are inconsistent with the brain's internal model of the visual world. Modulation of neural responses may correlate with phenomenal experiences. In contrast to the raw electrical responses that do not correlate with consciousness, the modulation of these responses by other stimuli correlates surprisingly well with an important aspect of consciousness: namely with the phenomenal experience of stimulus intensity (brightness, contrast). In the research group of Danko Nikolić it has been shown that some of the changes in the subjectively perceived brightness correlated with the modulation of firing rates while others correlated with the modulation of neural synchrony. An fMRI investigation suggested that these findings were strictly limited to the primary visual areas. This indicates that, in the primary visual areas, changes in firing rates and synchrony can be considered as neural correlates of qualia—at least for some type of qualia. In 2013, the perturbational complexity index (PCI) was proposed, a measure of the algorithmic complexity of the electrophysiological response of the cortex to transcranial magnetic stimulation. This measure was shown to be higher in individuals that are awake, in REM sleep or in a locked-in state than in those who are in deep sleep or in a vegetative state, making it potentially useful as a quantitative assessment of consciousness states. Assuming that not only humans but even some non-mammalian species are conscious, a number of evolutionary approaches to the problem of neural correlates of consciousness open up. For example, assuming that birds are conscious—a common assumption among neuroscientists and ethologists due to the extensive cognitive repertoire of birds—there are comparative neuroanatomical ways to validate some of the principal, currently competing, mammalian consciousness–brain theories. The rationale for such a comparative study is that the avian brain deviates structurally from the mammalian brain. So how similar are they? 
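The perturbational complexity index described above can be made more concrete with a toy sketch. The published PCI pipeline involves source modeling, statistical thresholding of the TMS-evoked response against a pre-stimulus baseline, and an entropy-normalized Lempel–Ziv (LZ76) complexity; the Python sketch below is only a simplified stand-in under those caveats, with a fixed threshold, an LZ78-style phrase count instead of LZ76, and a rough normalization.

```python
import numpy as np

def lz_phrase_count(bits: str) -> int:
    """Count phrases in a simple left-to-right Lempel-Ziv-style parse of a binary string.
    (A toy stand-in for the LZ76 complexity used in the published PCI pipeline.)"""
    seen, phrase, count = set(), "", 0
    for b in bits:
        phrase += b
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def toy_pci(responses: np.ndarray, threshold: float) -> float:
    """Binarize a (channels x time) response matrix and return a crudely normalized
    complexity value. The real PCI thresholds statistically against baseline and uses
    an entropy-based normalization; both steps are simplified here."""
    bits = "".join("1" if v else "0" for v in (np.abs(responses) > threshold).ravel())
    n = len(bits)
    return lz_phrase_count(bits) * np.log2(n) / n   # rough asymptotic normalization

rng = np.random.default_rng(0)
evoked = rng.normal(size=(16, 300))                 # placeholder TMS-evoked responses
print(toy_pci(evoked, threshold=1.0))
```

The intuition the sketch preserves is that spatially and temporally differentiated (hard-to-compress) responses yield higher values than stereotyped or silent ones.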
What homologs can be identified? The general conclusion from the study by Butler, et al., is that some of the major theories for the mammalian brain also appear to be valid for the avian brain. The structures assumed to be critical for consciousness in mammalian brains have homologous counterparts in avian brains. Thus the main portions of the theories of Crick and Koch, Edelman and Tononi, and Cotterill seem to be compatible with the assumption that birds are conscious. Edelman also differentiates between what he calls primary consciousness (which is a trait shared by humans and non-human animals) and higher-order consciousness as it appears in humans alone along with human language capacity. Certain aspects of the three theories, however, seem less easy to apply to the hypothesis of avian consciousness. For instance, the suggestion by Crick and Koch that layer 5 neurons of the mammalian brain have a special role, seems difficult to apply to the avian brain, since the avian homologs have a different morphology. Likewise, the theory of Eccles seems incompatible, since a structural homolog/analogue to the dendron has not been found in avian brains. The assumption of an avian consciousness also brings the reptilian brain into focus. The reason is the structural continuity between avian and reptilian brains, meaning that the phylogenetic origin of consciousness may be earlier than suggested by many leading neuroscientists. Joaquin Fuster of UCLA has advocated the position of the importance of the prefrontal cortex in humans, along with the areas of Wernicke and Broca, as being of particular importance to the development of human language capacities neuro-anatomically necessary for the emergence of higher-order consciousness in humans. A study in 2016 looked at lesions in specific areas of the brainstem that were associated with coma and vegetative states. A small region of the rostral dorsolateral pontine tegmentum in the brainstem was suggested to drive consciousness through functional connectivity with two cortical regions, the left ventral anterior insular cortex, and the pregenual anterior cingulate cortex. These three regions may work together as a triad to maintain consciousness. Models A wide range of empirical theories of consciousness have been proposed. Adrian Doerig and colleagues list 13 notable theories, while Anil Seth and Tim Bayne list 22 notable theories. Global workspace theory (GWT) is a cognitive architecture and theory of consciousness proposed by the cognitive psychologist Bernard Baars in 1988. Baars explains the theory with the metaphor of a theater, with conscious processes represented by an illuminated stage. This theater integrates inputs from a variety of unconscious and otherwise autonomous networks in the brain and then broadcasts them to unconscious networks (represented in the metaphor by a broad, unlit "audience"). The theory has since been expanded upon by other scientists including cognitive neuroscientist Stanislas Dehaene and Lionel Naccache. Integrated information theory (IIT) postulates that consciousness resides in the information being processed and arises once the information reaches a certain level of complexity. Proponents of this model suggest that it may provide a physical grounding for consciousness in neurons, as they provide the mechanism by which information is integrated. Orchestrated objective reduction (Orch OR) postulates that consciousness originates at the quantum level inside neurons. 
The mechanism is held to be a quantum process called objective reduction that is orchestrated by cellular structures called microtubules. However the details of the mechanism would go beyond current quantum theory. In 2011, Graziano and Kastner proposed the "attention schema" theory of awareness. In that theory, specific cortical areas, notably in the superior temporal sulcus and the temporo-parietal junction, are used to build the construct of awareness and attribute it to other people. The same cortical machinery is also used to attribute awareness to oneself. Damage to these cortical regions can lead to deficits in consciousness such as hemispatial neglect. In the attention schema theory, the value of explaining the feature of awareness and attributing it to a person is to gain a useful predictive model of that person's attentional processing. Attention is a style of information processing in which a brain focuses its resources on a limited set of interrelated signals. Awareness, in this theory, is a useful, simplified schema that represents attentional states. To be aware of X is explained by constructing a model of one's attentional focus on X. The entropic brain is a theory of conscious states informed by neuroimaging research with psychedelic drugs. The theory suggests that the brain in primary states such as rapid eye movement (REM) sleep, early psychosis and under the influence of psychedelic drugs, is in a disordered state; normal waking consciousness constrains some of this freedom and makes possible metacognitive functions such as internal self-administered reality testing and self-awareness. Criticism has included questioning whether the theory has been adequately tested. In 2017, work by David Rudrauf and colleagues, including Karl Friston, applied the active inference paradigm to consciousness, a model of how sensory data is integrated with priors in a process of projective transformation. The authors argue that, while their model identifies a key relationship between computation and phenomenology, it does not completely solve the hard problem of consciousness or completely close the explanatory gap. Biological function and evolution Opinions are divided as to where in biological evolution consciousness emerged and about whether or not consciousness has any survival value. Some argue that consciousness is a byproduct of evolution. It has been argued that consciousness emerged (i) exclusively with the first humans, (ii) exclusively with the first mammals, (iii) independently in mammals and birds, or (iv) with the first reptiles. Other authors date the origins of consciousness to the first animals with nervous systems or early vertebrates in the Cambrian over 500 million years ago. Donald Griffin suggests in his book Animal Minds a gradual evolution of consciousness. Each of these scenarios raises the question of the possible survival value of consciousness. Thomas Henry Huxley defends in an essay titled On the Hypothesis that Animals are Automata, and its History an epiphenomenalist theory of consciousness according to which consciousness is a causally inert effect of neural activity—"as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery". To this William James objects in his essay Are We Automata? 
by stating an evolutionary argument for mind-brain interaction implying that if the preservation and development of consciousness in biological evolution is a result of natural selection, it is plausible that consciousness has not only been influenced by neural processes, but has had a survival value itself; and it could only have had this if it had been efficacious. Karl Popper develops a similar evolutionary argument in the book The Self and Its Brain. Regarding the primary function of conscious processing, a recurring idea in recent theories is that phenomenal states somehow integrate neural activities and information-processing that would otherwise be independent. This has been called the integration consensus. Another example has been proposed by Gerald Edelman, called the dynamic core hypothesis, which puts emphasis on reentrant connections that reciprocally link areas of the brain in a massively parallel manner. Edelman also stresses the importance of the evolutionary emergence of higher-order consciousness in humans from the historically older trait of primary consciousness which humans share with non-human animals (see Neural correlates section above). These theories of integrative function present solutions to two classic problems associated with consciousness: differentiation and unity. They show how our conscious experience can discriminate between a virtually unlimited number of different possible scenes and details (differentiation) because it integrates those details from our sensory systems, while the integrative nature of consciousness in this view easily explains how our experience can seem unified as one whole despite all of these individual parts. However, it remains unspecified which kinds of information are integrated in a conscious manner and which kinds can be integrated without consciousness. Nor is it explained what specific causal role conscious integration plays, nor why the same functionality cannot be achieved without consciousness. Obviously not all kinds of information are capable of being disseminated consciously (e.g., neural activity related to vegetative functions, reflexes, unconscious motor programs, low-level perceptual analyses, etc.) and many kinds of information can be disseminated and combined with other kinds without consciousness, as in intersensory interactions such as the ventriloquism effect. Hence it remains unclear why any of it is conscious. For a review of the differences between conscious and unconscious integrations, see the article by Ezequiel Morsella. As noted earlier, even among writers who consider consciousness to be well-defined, there is widespread dispute about which animals other than humans can be said to possess it. Edelman has described this distinction as that of humans possessing higher-order consciousness while sharing the trait of primary consciousness with non-human animals (see previous paragraph). Thus, any examination of the evolution of consciousness is faced with great difficulties. Nevertheless, some writers have argued that consciousness can be viewed from the standpoint of evolutionary biology as an adaptation in the sense of a trait that increases fitness. In his article "Evolution of consciousness", John Eccles argued that special anatomical and physical properties of the mammalian cerebral cortex gave rise to consciousness ("[a] psychon ... linked to [a] dendron through quantum physics").
Bernard Baars proposed that once in place, this "recursive" circuitry may have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms. Peter Carruthers has put forth one such potential adaptive advantage gained by conscious creatures by suggesting that consciousness allows an individual to make distinctions between appearance and reality. This ability would enable a creature to recognize the likelihood that their perceptions are deceiving them (e.g. that water in the distance may be a mirage) and behave accordingly, and it could also facilitate the manipulation of others by recognizing how things appear to them for both cooperative and devious ends. Other philosophers, however, have suggested that consciousness would not be necessary for any functional advantage in evolutionary processes. No one has given a causal explanation, they argue, of why it would not be possible for a functionally equivalent non-conscious organism (i.e., a philosophical zombie) to achieve the very same survival advantages as a conscious organism. If evolutionary processes are blind to the difference between function F being performed by conscious organism O and non-conscious organism O*, it is unclear what adaptive advantage consciousness could provide. As a result, an exaptive explanation of consciousness has gained favor with some theorists who posit that consciousness did not evolve as an adaptation but was an exaptation arising as a consequence of other developments such as increases in brain size or cortical rearrangement. Consciousness in this sense has been compared to the blind spot in the retina, where it is not an adaptation of the retina, but instead just a by-product of the way the retinal axons were wired. Several scholars including Pinker, Chomsky, Edelman, and Luria have indicated the importance of the emergence of human language as an important regulative mechanism of learning and memory in the context of the development of higher-order consciousness (see Neural correlates section above).

Altered states

There are some brain states in which consciousness seems to be absent, including dreamless sleep or coma. There are also a variety of circumstances that can change the relationship between the mind and the world in less drastic ways, producing what are known as altered states of consciousness. Some altered states occur naturally; others can be produced by drugs or brain damage. Altered states can be accompanied by changes in thinking, disturbances in the sense of time, feelings of loss of control, changes in emotional expression, alterations in body image and changes in meaning or significance. The two most widely accepted altered states are sleep and dreaming. Although dream sleep and non-dream sleep appear very similar to an outside observer, each is associated with a distinct pattern of brain activity, metabolic activity, and eye movement; each is also associated with a distinct pattern of experience and cognition. During ordinary non-dream sleep, people who are awakened report only vague and sketchy thoughts, and their experiences do not cohere into a continuous narrative. During dream sleep, in contrast, people who are awakened report rich and detailed experiences in which events form a continuous progression, which may however be interrupted by bizarre or fantastic intrusions. Thought processes during the dream state frequently show a high level of irrationality.
Both dream and non-dream states are associated with severe disruption of memory: it usually disappears in seconds during the non-dream state, and in minutes after awakening from a dream unless actively refreshed. Research conducted on the effects of partial epileptic seizures on consciousness found that patients who have partial epileptic seizures experience altered states of consciousness. In partial epileptic seizures, consciousness is impaired or lost while some aspects of consciousness, often automated behaviors, remain intact. Studies found that when measuring the qualitative features during partial epileptic seizures, patients exhibited an increase in arousal and became absorbed in the experience of the seizure, followed by difficulty in focusing and shifting attention. A variety of psychoactive drugs, including alcohol, have notable effects on consciousness. These range from a simple dulling of awareness produced by sedatives, to increases in the intensity of sensory qualities produced by stimulants, cannabis, empathogens–entactogens such as MDMA ("Ecstasy"), or most notably by the class of drugs known as psychedelics. LSD, mescaline, psilocybin, dimethyltryptamine, and others in this group can produce major distortions of perception, including hallucinations; some users even describe their drug-induced experiences as mystical or spiritual in quality. The brain mechanisms underlying these effects are not as well understood as those induced by use of alcohol, but there is substantial evidence that alterations in the brain system that uses the chemical neurotransmitter serotonin play an essential role. There has been some research into physiological changes in yogis and people who practise various techniques of meditation. Some research with brain waves during meditation has reported differences between those corresponding to ordinary relaxation and those corresponding to meditation. It has been disputed, however, whether there is enough evidence to count these as physiologically distinct states of consciousness. The most extensive study of the characteristics of altered states of consciousness was made by psychologist Charles Tart in the 1960s and 1970s. Tart analyzed a state of consciousness as made up of a number of component processes, including exteroception (sensing the external world); interoception (sensing the body); input-processing (seeing meaning); emotions; memory; time sense; sense of identity; evaluation and cognitive processing; motor output; and interaction with the environment. Each of these, in his view, could be altered in multiple ways by drugs or other manipulations. The components that Tart identified have not, however, been validated by empirical studies. Research in this area has not yet reached firm conclusions, but a recent questionnaire-based study identified eleven significant factors contributing to drug-induced states of consciousness: experience of unity; spiritual experience; blissful state; insightfulness; disembodiment; impaired control and cognition; anxiety; complex imagery; elementary imagery; audio-visual synesthesia; and changed meaning of percepts. Medical aspects The medical approach to consciousness is scientifically oriented. It derives from a need to treat people whose brain function has been impaired as a result of disease, brain damage, toxins, or drugs. In medicine, conceptual distinctions are considered useful to the degree that they can help to guide treatments. 
The medical approach focuses mostly on the amount of consciousness a person has: in medicine, consciousness is assessed as a "level" ranging from coma and brain death at the low end, to full alertness and purposeful responsiveness at the high end. Consciousness is of concern to patients and physicians, especially neurologists and anesthesiologists. Patients may have disorders of consciousness or may need to be anesthetized for a surgical procedure. Physicians may perform consciousness-related interventions such as instructing the patient to sleep, administering general anesthesia, or inducing medical coma. Also, bioethicists may be concerned with the ethical implications of consciousness in medical cases of patients such as the Karen Ann Quinlan case, while neuroscientists may study patients with impaired consciousness in hopes of gaining information about how the brain works. Assessment In medicine, consciousness is examined using a set of procedures known as neuropsychological assessment. There are two commonly used methods for assessing the level of consciousness of a patient: a simple procedure that requires minimal training, and a more complex procedure that requires substantial expertise. The simple procedure begins by asking whether the patient is able to move and react to physical stimuli. If so, the next question is whether the patient can respond in a meaningful way to questions and commands. If so, the patient is asked for name, current location, and current day and time. A patient who can answer all of these questions is said to be "alert and oriented times four" (sometimes denoted "A&Ox4" on a medical chart), and is usually considered fully conscious. The more complex procedure is known as a neurological examination, and is usually carried out by a neurologist in a hospital setting. A formal neurological examination runs through a precisely delineated series of tests, beginning with tests for basic sensorimotor reflexes, and culminating with tests for sophisticated use of language. The outcome may be summarized using the Glasgow Coma Scale, which yields a number in the range 3–15, with a score of 3 to 8 indicating coma, and 15 indicating full consciousness. The Glasgow Coma Scale has three subscales, measuring the best motor response (ranging from "no motor response" to "obeys commands"), the best eye response (ranging from "no eye opening" to "eyes opening spontaneously") and the best verbal response (ranging from "no verbal response" to "fully oriented"). There is also a simpler pediatric version of the scale, for children too young to be able to use language. In 2013, an experimental procedure was developed to measure degrees of consciousness, the procedure involving stimulating the brain with a magnetic pulse, measuring resulting waves of electrical activity, and developing a consciousness score based on the complexity of the brain activity. Disorders Medical conditions that inhibit consciousness are considered disorders of consciousness. This category generally includes minimally conscious state and persistent vegetative state, but sometimes also includes the less severe locked-in syndrome and more severe chronic coma. Differential diagnosis of these disorders is an active area of biomedical research. Finally, brain death results in possible irreversible disruption of consciousness. 
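The Glasgow Coma Scale arithmetic described above can be shown in a few lines. The sketch below assumes the standard subscale ranges (eye opening 1–4, verbal 1–5, motor 1–6), which sum to the 3–15 total quoted in the text; the function name and the coma cut-off at 8 follow the description in this section rather than any clinical software.

```python
def glasgow_coma_scale(eye: int, verbal: int, motor: int) -> int:
    """Sum the three GCS subscales into a total between 3 and 15.
    Standard subscale ranges are assumed: eye 1-4, verbal 1-5, motor 1-6."""
    ranges = {"eye": (eye, 4), "verbal": (verbal, 5), "motor": (motor, 6)}
    for name, (score, top) in ranges.items():
        if not 1 <= score <= top:
            raise ValueError(f"{name} response must be between 1 and {top}")
    return eye + verbal + motor

total = glasgow_coma_scale(eye=2, verbal=2, motor=4)     # -> 8
print(total, "coma range" if total <= 8 else "above coma range")   # 3-8 is read as coma in the text
```

The simple "alert and oriented times four" screen described earlier is, by contrast, a pass/fail checklist rather than a summed score.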
While other conditions may cause a moderate deterioration (e.g., dementia and delirium) or transient interruption (e.g., grand mal and petit mal seizures) of consciousness, they are not included in this category. Medical experts increasingly view anosognosia as a disorder of consciousness. Anosognosia is a Greek-derived term meaning "unawareness of disease". This is a condition in which patients are disabled in some way, most commonly as a result of a stroke, but either misunderstand the nature of the problem or deny that there is anything wrong with them. The most frequently occurring form is seen in people who have experienced a stroke damaging the parietal lobe in the right hemisphere of the brain, giving rise to a syndrome known as hemispatial neglect, characterized by an inability to direct action or attention toward objects located to the left with respect to their bodies. Patients with hemispatial neglect are often paralyzed on the left side of the body, but sometimes deny being unable to move. When questioned about the obvious problem, the patient may avoid giving a direct answer, or may give an explanation that does not make sense. Patients with hemispatial neglect may also fail to recognize paralyzed parts of their bodies: one frequently mentioned case is of a man who repeatedly tried to throw his own paralyzed right leg out of the bed he was lying in, and when asked what he was doing, complained that somebody had put a dead leg into the bed with him. An even more striking type of anosognosia is Anton–Babinski syndrome, a rarely occurring condition in which patients become blind but claim to be able to see normally, and persist in this claim in spite of all evidence to the contrary. Outside human adults In children Of the eight types of consciousness in the Lycan classification, some are detectable in utero and others develop years after birth. Psychologist and educator William Foulkes studied children's dreams and concluded that prior to the shift in cognitive maturation that humans experience during ages five to seven, children lack the Lockean consciousness that Lycan had labeled "introspective consciousness" and that Foulkes labels "self-reflection." In a 2020 paper, Katherine Nelson and Robyn Fivush use "autobiographical consciousness" to label essentially the same faculty, and agree with Foulkes on the timing of this faculty's acquisition. Nelson and Fivush contend that "language is the tool by which humans create a new, uniquely human form of consciousness, namely, autobiographical consciousness." Julian Jaynes had staked out these positions decades earlier. Citing the developmental steps that lead the infant to autobiographical consciousness, Nelson and Fivush point to the acquisition of "theory of mind," calling theory of mind "necessary for autobiographical consciousness" and defining it as "understanding differences between one's own mind and others' minds in terms of beliefs, desires, emotions and thoughts." They write, "The hallmark of theory of mind, the understanding of false belief, occurs ... at five to six years of age." In animals The topic of animal consciousness is beset by a number of difficulties. It poses the problem of other minds in an especially severe form, because non-human animals, lacking the ability to express human language, cannot tell humans about their experiences. 
Also, it is difficult to reason objectively about the question, because a denial that an animal is conscious is often taken to imply that it does not feel, that its life has no value, and that harming it is not morally wrong. Descartes, for example, has sometimes been blamed for mistreatment of animals due to the fact that he believed only humans have a non-physical mind. Most people have a strong intuition that some animals, such as cats and dogs, are conscious, while others, such as insects, are not; but the sources of this intuition are not obvious, and are often based on personal interactions with pets and other animals they have observed. Philosophers who consider subjective experience the essence of consciousness also generally believe, as a correlate, that the existence and nature of animal consciousness can never rigorously be known. Thomas Nagel spelled out this point of view in an influential essay titled What Is it Like to Be a Bat?. He said that an organism is conscious "if and only if there is something that it is like to be that organism—something it is like for the organism"; and he argued that no matter how much we know about an animal's brain and behavior, we can never really put ourselves into the mind of the animal and experience its world in the way it does itself. Other thinkers, such as Douglas Hofstadter, dismiss this argument as incoherent. Several psychologists and ethologists have argued for the existence of animal consciousness by describing a range of behaviors that appear to show animals holding beliefs about things they cannot directly perceive—Donald Griffin's 2001 book Animal Minds reviews a substantial portion of the evidence. On July 7, 2012, eminent scientists from different branches of neuroscience gathered at the University of Cambridge to celebrate the Francis Crick Memorial Conference, which deals with consciousness in humans and pre-linguistic consciousness in nonhuman animals. After the conference, they signed, in the presence of Stephen Hawking, the 'Cambridge Declaration on Consciousness', which summarizes the most important findings of the survey: "We decided to reach a consensus and make a statement directed to the public that is not scientific. It's obvious to everyone in this room that animals have consciousness, but it is not obvious to the rest of the world. It is not obvious to the rest of the Western world or the Far East. It is not obvious to the society." "Convergent evidence indicates that non-human animals ..., including all mammals and birds, and other creatures, ... have the necessary neural substrates of consciousness and the capacity to exhibit intentional behaviors."

In artificial intelligence

The idea of an artifact made conscious is an ancient theme of mythology, appearing for example in the Greek myth of Pygmalion, who carved a statue that was magically brought to life, and in medieval Jewish stories of the Golem, a magically animated homunculus built of clay. However, the possibility of actually constructing a conscious machine was probably first discussed by Ada Lovelace, in a set of notes written in 1842 about the Analytical Engine invented by Charles Babbage, a precursor (never built) to modern electronic computers. Lovelace was essentially dismissive of the idea that a machine such as the Analytical Engine could think in a humanlike way. She wrote: One of the most influential contributions to this question was an essay written in 1950 by pioneering computer scientist Alan Turing, titled Computing Machinery and Intelligence.
Turing disavowed any interest in terminology, saying that even "Can machines think?" is too loaded with spurious connotations to be meaningful; but he proposed to replace all such questions with a specific operational test, which has become known as the Turing test. To pass the test, a computer must be able to imitate a human well enough to fool interrogators. In his essay Turing discussed a variety of possible objections, and presented a counterargument to each of them. The Turing test is commonly cited in discussions of artificial intelligence as a proposed criterion for machine consciousness; it has provoked a great deal of philosophical debate. For example, Daniel Dennett and Douglas Hofstadter argue that anything capable of passing the Turing test is necessarily conscious, while David Chalmers argues that a philosophical zombie could pass the test, yet fail to be conscious. A third group of scholars has argued that, with technological growth, once machines begin to display any substantial signs of human-like behavior, the dichotomy (of human consciousness compared to human-like consciousness) becomes passé and issues of machine autonomy begin to prevail, as already observed in nascent form within contemporary industry and technology. Jürgen Schmidhuber argues that consciousness is the result of compression. As an agent sees representations of itself recurring in the environment, the compression of these representations can be called consciousness. In a lively exchange over what has come to be referred to as "the Chinese room argument", John Searle sought to refute the claim of proponents of what he calls "strong artificial intelligence (AI)" that a computer program can be conscious, though he does agree with advocates of "weak AI" that computer programs can be formatted to "simulate" conscious states. His own view is that consciousness has subjective, first-person causal powers by being essentially intentional due to the way human brains function biologically; conscious persons can perform computations, but consciousness is not inherently computational the way computer programs are. To make a Turing machine that speaks Chinese, Searle imagines a room with one monolingual English speaker (Searle himself, in fact), a book that designates a combination of Chinese symbols to be output paired with Chinese symbol input, and boxes filled with Chinese symbols. In this case, the English speaker is acting as a computer and the rulebook as a program. Searle argues that with such a machine, he would be able to process the inputs to outputs perfectly without having any understanding of Chinese, nor having any idea what the questions and answers could possibly mean. If the experiment were done in English, since Searle knows English, he would be able to take questions and give answers without any algorithms for English questions, and he would be effectively aware of what was being said and the purposes it might serve. Searle would pass the Turing test of answering the questions in both languages, but he is only conscious of what he is doing when he speaks English. Another way of putting the argument is to say that computer programs can pass the Turing test for processing the syntax of a language, but that the syntax cannot lead to semantic meaning in the way strong AI advocates hoped. In the literature concerning artificial intelligence, Searle's essay has been second only to Turing's in the volume of debate it has generated.
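Searle's rulebook can be caricatured in a few lines of code, which is part of his point. The sketch below uses invented placeholder symbol pairs, not Searle's own examples; it maps input symbol strings to output symbol strings by lookup alone, producing replies without any representation of their meaning.

```python
# A minimal sketch of Searle's rulebook: purely syntactic lookup, no meaning attached.
# The symbol strings below are arbitrary placeholders, not examples from Searle.
RULEBOOK = {
    "你好吗": "我很好",        # input pattern -> prescribed output pattern
    "你是谁": "我是一个房间",
}

def chinese_room(symbols: str) -> str:
    """Return the output the rulebook pairs with the input, or a fixed fallback.
    Nothing here models understanding; it is string matching only."""
    return RULEBOOK.get(symbols, "请再说一遍")

print(chinese_room("你好吗"))
```

Whether scaling such a lookup (or any program) up to full conversational competence would ever amount to understanding is exactly what the argument disputes.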
Searle himself was vague about what extra ingredients it would take to make a machine conscious: all he proposed was that what was needed was "causal powers" of the sort that the brain has and that computers lack. But other thinkers sympathetic to his basic argument have suggested that the necessary (though perhaps still not sufficient) extra conditions may include the ability to pass not just the verbal version of the Turing test, but the robotic version, which requires grounding the robot's words in the robot's sensorimotor capacity to categorize and interact with the things in the world that its words are about, Turing-indistinguishably from a real person. Turing-scale robotics is an empirical branch of research on embodied cognition and situated cognition. In 2014, Victor Argonov has suggested a non-Turing test for machine consciousness based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, but not refute the existence of consciousness. A positive result proves that a machine is conscious but a negative result proves nothing. For example, absence of philosophical judgments may be caused by lack of the machine's intellect, not by absence of consciousness. Stream of consciousness William James is usually credited with popularizing the idea that human consciousness flows like a stream, in his Principles of Psychology of 1890. According to James, the "stream of thought" is governed by five characteristics: Every thought tends to be part of a personal consciousness. Within each personal consciousness thought is always changing. Within each personal consciousness thought is sensibly continuous. It always appears to deal with objects independent of itself. It is interested in some parts of these objects to the exclusion of others. A similar concept appears in Buddhist philosophy, expressed by the Sanskrit term Citta-saṃtāna, which is usually translated as mindstream or "mental continuum". Buddhist teachings describe that consciousness manifests moment to moment as sense impressions and mental phenomena that are continuously changing. The teachings list six triggers that can result in the generation of different mental events. These triggers are input from the five senses (seeing, hearing, smelling, tasting or touch sensations), or a thought (relating to the past, present or the future) that happen to arise in the mind. The mental events generated as a result of these triggers are: feelings, perceptions and intentions/behaviour. The moment-by-moment manifestation of the mind-stream is said to happen in every person all the time. It even happens in a scientist who analyzes various phenomena in the world, or analyzes the material body including the organ brain. The manifestation of the mindstream is also described as being influenced by physical laws, biological laws, psychological laws, volitional laws, and universal laws. The purpose of the Buddhist practice of mindfulness is to understand the inherent nature of the consciousness and its characteristics. 
Narrative form In the West, the primary impact of the idea has been on literature rather than science: "stream of consciousness as a narrative mode" means writing in a way that attempts to portray the moment-to-moment thoughts and experiences of a character. This technique perhaps had its beginnings in the monologues of Shakespeare's plays and reached its fullest development in the novels of James Joyce and Virginia Woolf, although it has also been used by many other noted writers. A frequently cited example is Molly Bloom's monologue at the close of Joyce's Ulysses, which renders her thoughts as an unbroken flow. Spiritual approaches To most philosophers, the word "consciousness" connotes the relationship between the mind and the world. To writers on spiritual or religious topics, it frequently connotes the relationship between the mind and God, or the relationship between the mind and deeper truths that are thought to be more fundamental than the physical world. The mystical psychiatrist Richard Maurice Bucke, author of the 1901 book Cosmic Consciousness: A Study in the Evolution of the Human Mind, distinguished between three types of consciousness: 'Simple Consciousness', awareness of the body, possessed by many animals; 'Self Consciousness', awareness of being aware, possessed only by humans; and 'Cosmic Consciousness', awareness of the life and order of the universe, possessed only by humans who are enlightened. Many more examples could be given, such as the various levels of spiritual consciousness presented by Prem Saran Satsangi and Stuart Hameroff. Another thorough account of the spiritual approach is Ken Wilber's 1977 book The Spectrum of Consciousness, a comparison of western and eastern ways of thinking about the mind. Wilber described consciousness as a spectrum with ordinary awareness at one end, and more profound types of awareness at higher levels. See also Chaitanya (consciousness): Pure consciousness in Hindu philosophy. Models of consciousness: Ideas for a scientific mechanism underlying consciousness. Plant perception (paranormal): A pseudoscientific theory. Sakshi (Witness): Pure awareness in Hindu philosophy. Vertiginous question: On the uniqueness of a person's consciousness. Reality References Further reading External links Cognitive neuroscience Cognitive psychology Concepts in epistemology Metaphysical properties Concepts in the philosophy of mind Concepts in the philosophy of science Emergence Mental processes Metaphysics of mind Neuropsychological assessment Ontology Phenomenology Theory of mind
https://en.wikipedia.org/wiki/Chlorine
Chlorine is a chemical element with the symbol Cl and atomic number 17. The second-lightest of the halogens, it appears between fluorine and bromine in the periodic table and its properties are mostly intermediate between them. Chlorine is a yellow-green gas at room temperature. It is an extremely reactive element and a strong oxidising agent: among the elements, it has the highest electron affinity and the third-highest electronegativity on the revised Pauling scale, behind only oxygen and fluorine. Chlorine played an important role in the experiments conducted by medieval alchemists, which commonly involved the heating of chloride salts like ammonium chloride (sal ammoniac) and sodium chloride (common salt), producing various chemical substances containing chlorine such as hydrogen chloride, mercury(II) chloride (corrosive sublimate), and hydrochloric acid (in the form of ). However, the nature of free chlorine gas as a separate substance was only recognised around 1630 by Jan Baptist van Helmont. Carl Wilhelm Scheele wrote a description of chlorine gas in 1774, supposing it to be an oxide of a new element. In 1809, chemists suggested that the gas might be a pure element, and this was confirmed by Sir Humphry Davy in 1810, who named it after the Ancient Greek (, "pale green") because of its colour. Because of its great reactivity, all chlorine in the Earth's crust is in the form of ionic chloride compounds, which includes table salt. It is the second-most abundant halogen (after fluorine) and twenty-first most abundant chemical element in Earth's crust. These crustal deposits are nevertheless dwarfed by the huge reserves of chloride in seawater. Elemental chlorine is commercially produced from brine by electrolysis, predominantly in the chlor-alkali process. The high oxidising potential of elemental chlorine led to the development of commercial bleaches and disinfectants, and a reagent for many processes in the chemical industry. Chlorine is used in the manufacture of a wide range of consumer products, about two-thirds of them organic chemicals such as polyvinyl chloride (PVC), many intermediates for the production of plastics, and other end products which do not contain the element. As a common disinfectant, elemental chlorine and chlorine-generating compounds are used more directly in swimming pools to keep them sanitary. Elemental chlorine at high concentration is extremely dangerous, and poisonous to most living organisms. As a chemical warfare agent, chlorine was first used in World War I as a poison gas weapon. In the form of chloride ions, chlorine is necessary to all known species of life. Other types of chlorine compounds are rare in living organisms, and artificially produced chlorinated organics range from inert to toxic. In the upper atmosphere, chlorine-containing organic molecules such as chlorofluorocarbons have been implicated in ozone depletion. Small quantities of elemental chlorine are generated by oxidation of chloride ions in neutrophils as part of an immune system response against bacteria. History The most common compound of chlorine, sodium chloride, has been known since ancient times; archaeologists have found evidence that rock salt was used as early as 3000 BC and brine as early as 6000 BC. 
Early discoveries Around 900, the authors of the Arabic writings attributed to Jabir ibn Hayyan (Latin: Geber) and the Persian physician and alchemist Abu Bakr al-Razi ( 865–925, Latin: Rhazes) were experimenting with sal ammoniac (ammonium chloride), which when it was distilled together with vitriol (hydrated sulfates of various metals) produced hydrogen chloride. However, it appears that in these early experiments with chloride salts, the gaseous products were discarded, and hydrogen chloride may have been produced many times before it was discovered that it can be put to chemical use. One of the first such uses was the synthesis of mercury(II) chloride (corrosive sublimate), whose production from the heating of mercury either with alum and ammonium chloride or with vitriol and sodium chloride was first described in the De aluminibus et salibus ("On Alums and Salts", an eleventh- or twelfth century Arabic text falsely attributed to Abu Bakr al-Razi and translated into Latin in the second half of the twelfth century by Gerard of Cremona, 1144–1187). Another important development was the discovery by pseudo-Geber (in the De inventione veritatis, "On the Discovery of Truth", after c. 1300) that by adding ammonium chloride to nitric acid, a strong solvent capable of dissolving gold (i.e., aqua regia) could be produced. Although aqua regia is an unstable mixture that continually gives off fumes containing free chlorine gas, this chlorine gas appears to have been ignored until c. 1630, when its nature as a separate gaseous substance was recognised by the Brabantian chemist and physician Jan Baptist van Helmont. Isolation The element was first studied in detail in 1774 by Swedish chemist Carl Wilhelm Scheele, and he is credited with the discovery. Scheele produced chlorine by reacting MnO2 (as the mineral pyrolusite) with HCl: 4 HCl + MnO2 → MnCl2 + 2 H2O + Cl2 Scheele observed several of the properties of chlorine: the bleaching effect on litmus, the deadly effect on insects, the yellow-green color, and the smell similar to aqua regia. He called it "dephlogisticated muriatic acid air" since it is a gas (then called "airs") and it came from hydrochloric acid (then known as "muriatic acid"). He failed to establish chlorine as an element. Common chemical theory at that time held that an acid is a compound that contains oxygen (remnants of this survive in the German and Dutch names of oxygen: sauerstoff or zuurstof, both translating into English as acid substance), so a number of chemists, including Claude Berthollet, suggested that Scheele's dephlogisticated muriatic acid air must be a combination of oxygen and the yet undiscovered element, muriaticum. In 1809, Joseph Louis Gay-Lussac and Louis-Jacques Thénard tried to decompose dephlogisticated muriatic acid air by reacting it with charcoal to release the free element muriaticum (and carbon dioxide). They did not succeed and published a report in which they considered the possibility that dephlogisticated muriatic acid air is an element, but were not convinced. In 1810, Sir Humphry Davy tried the same experiment again, and concluded that the substance was an element, and not a compound. He announced his results to the Royal Society on 15 November that year. At that time, he named this new element "chlorine", from the Greek word χλωρος (chlōros, "green-yellow"), in reference to its color. The name "halogen", meaning "salt producer", was originally used for chlorine in 1811 by Johann Salomo Christoph Schweigger. 
This term was later used as a generic term to describe all the elements in the chlorine family (fluorine, bromine, iodine), after a suggestion by Jöns Jakob Berzelius in 1826. In 1823, Michael Faraday liquefied chlorine for the first time, and demonstrated that what was then known as "solid chlorine" had a structure of chlorine hydrate (Cl2·H2O). Later uses Chlorine gas was first used by French chemist Claude Berthollet to bleach textiles in 1785. Modern bleaches resulted from further work by Berthollet, who first produced sodium hypochlorite in 1789 in his laboratory in the town of Javel (now part of Paris, France), by passing chlorine gas through a solution of sodium carbonate. The resulting liquid, known as "Eau de Javel" ("Javel water"), was a weak solution of sodium hypochlorite. This process was not very efficient, and alternative production methods were sought. Scottish chemist and industrialist Charles Tennant first produced a solution of calcium hypochlorite ("chlorinated lime"), then solid calcium hypochlorite (bleaching powder). These compounds produced low levels of elemental chlorine and could be more efficiently transported than sodium hypochlorite, which remained as dilute solutions because when purified to eliminate water, it became a dangerously powerful and unstable oxidizer. Near the end of the nineteenth century, E. S. Smith patented a method of sodium hypochlorite production involving electrolysis of brine to produce sodium hydroxide and chlorine gas, which then mixed to form sodium hypochlorite. This is known as the chloralkali process, first introduced on an industrial scale in 1892, and now the source of most elemental chlorine and sodium hydroxide. In 1884 Chemischen Fabrik Griesheim of Germany developed another chloralkali process which entered commercial production in 1888. Elemental chlorine solutions dissolved in chemically basic water (sodium and calcium hypochlorite) were first used as anti-putrefaction agents and disinfectants in the 1820s, in France, long before the establishment of the germ theory of disease. This practice was pioneered by Antoine-Germain Labarraque, who adapted Berthollet's "Javel water" bleach and other chlorine preparations. Elemental chlorine has since served a continuous function in topical antisepsis (wound irrigation solutions and the like) and public sanitation, particularly in swimming and drinking water. Chlorine gas was first used as a weapon on April 22, 1915 at the Second Battle of Ypres by the German Army. The effect on the allies was devastating because the existing gas masks were difficult to deploy and had not been broadly distributed. Properties Chlorine is the second halogen, being a nonmetal in group 17 of the periodic table. Its properties are thus similar to fluorine, bromine, and iodine, and are largely intermediate between those of the first two. Chlorine has the electron configuration [Ne]3s23p5, with the seven electrons in the third and outermost shell acting as its valence electrons. Like all halogens, it is thus one electron short of a full octet, and is hence a strong oxidising agent, reacting with many elements in order to complete its outer shell. Corresponding to periodic trends, it is intermediate in electronegativity between fluorine and bromine (F: 3.98, Cl: 3.16, Br: 2.96, I: 2.66), and is less reactive than fluorine and more reactive than bromine. It is also a weaker oxidising agent than fluorine, but a stronger one than bromine. 
Conversely, the chloride ion is a weaker reducing agent than bromide, but a stronger one than fluoride. It is intermediate in atomic radius between fluorine and bromine, and this leads to many of its atomic properties similarly continuing the trend from iodine to bromine upward, such as first ionisation energy, electron affinity, enthalpy of dissociation of the X2 molecule (X = Cl, Br, I), ionic radius, and X–X bond length. (Fluorine is anomalous due to its small size.) All four stable halogens experience intermolecular van der Waals forces of attraction, and their strength increases together with the number of electrons among all homonuclear diatomic halogen molecules. Thus, the melting and boiling points of chlorine are intermediate between those of fluorine and bromine: chlorine melts at −101.0 °C and boils at −34.0 °C. As a result of the increasing molecular weight of the halogens down the group, the density and heats of fusion and vaporisation of chlorine are again intermediate between those of bromine and fluorine, although all their heats of vaporisation are fairly low (leading to high volatility) thanks to their diatomic molecular structure. The halogens darken in colour as the group is descended: thus, while fluorine is a pale yellow gas, chlorine is distinctly yellow-green. This trend occurs because the wavelengths of visible light absorbed by the halogens increase down the group. Specifically, the colour of a halogen, such as chlorine, results from the electron transition between the highest occupied antibonding πg molecular orbital and the lowest vacant antibonding σu molecular orbital. The colour fades at low temperatures, so that solid chlorine at −195 °C is almost colourless. Like solid bromine and iodine, solid chlorine crystallises in the orthorhombic crystal system, in a layered lattice of Cl2 molecules. The Cl–Cl distance is 198 pm (close to the gaseous Cl–Cl distance of 199 pm) and the Cl···Cl distance between molecules is 332 pm within a layer and 382 pm between layers (compare the van der Waals radius of chlorine, 180 pm). This structure means that chlorine is a very poor conductor of electricity, and indeed its conductivity is so low as to be practically unmeasurable. Isotopes Chlorine has two stable isotopes, 35Cl and 37Cl. These are its only two natural isotopes occurring in quantity, with 35Cl making up 76% of natural chlorine and 37Cl making up the remaining 24%. Both are synthesised in stars in the oxygen-burning and silicon-burning processes. Both have nuclear spin 3/2+ and thus may be used for nuclear magnetic resonance, although the spin magnitude being greater than 1/2 results in non-spherical nuclear charge distribution and thus resonance broadening as a result of a nonzero nuclear quadrupole moment and resultant quadrupolar relaxation. The other chlorine isotopes are all radioactive, with half-lives too short to occur in nature primordially. Of these, the most commonly used in the laboratory are 36Cl (t1/2 = 3.0 × 10^5 y) and 38Cl (t1/2 = 37.2 min), which may be produced from the neutron activation of natural chlorine. The most stable chlorine radioisotope is 36Cl. The primary decay mode of isotopes lighter than 35Cl is electron capture to isotopes of sulfur; that of isotopes heavier than 37Cl is beta decay to isotopes of argon; and 36Cl may decay by either mode to stable 36S or 36Ar. 
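As a quick arithmetic check on the figures above, the sketch below recomputes chlorine's standard atomic weight from the quoted abundances and shows how a half-life converts into a surviving fraction. The isotopic masses (about 34.97 u and 36.97 u) and the accepted atomic weight of about 35.45 are standard values not stated in the text, and the rounded 76%/24% split makes the result approximate.

```python
# Approximate isotopic masses in atomic mass units (assumed standard values,
# not taken from the text above).
MASS_CL35, MASS_CL37 = 34.97, 36.97
ABUND_CL35, ABUND_CL37 = 0.76, 0.24   # rounded abundances quoted in the text

# Abundance-weighted mean of the two stable isotopes (~35.45 u)
atomic_weight = ABUND_CL35 * MASS_CL35 + ABUND_CL37 * MASS_CL37
print(f"approximate atomic weight of Cl: {atomic_weight:.2f} u")

def fraction_remaining(elapsed: float, half_life: float) -> float:
    """Fraction of a radioisotope surviving after `elapsed` time
    (same units as `half_life`)."""
    return 0.5 ** (elapsed / half_life)

# 38Cl (half-life 37.2 min) is nearly gone two hours after activation.
print(f"38Cl surviving after 120 min: {fraction_remaining(120, 37.2):.3f}")
```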
36Cl occurs in trace quantities in nature as a cosmogenic nuclide in a ratio of about (7–10) × 10^−13 to 1 with stable chlorine isotopes: it is produced in the atmosphere by spallation of 36Ar by interactions with cosmic ray protons. In the top meter of the lithosphere, 36Cl is generated primarily by thermal neutron activation of 35Cl and spallation of 39K and 40Ca. In the subsurface environment, muon capture by 40Ca becomes more important as a way to generate 36Cl. Chemistry and compounds Chlorine is intermediate in reactivity between fluorine and bromine, and is one of the most reactive elements. Chlorine is a weaker oxidising agent than fluorine but a stronger one than bromine or iodine. This can be seen from the standard electrode potentials of the X2/X− couples (F, +2.866 V; Cl, +1.395 V; Br, +1.087 V; I, +0.615 V; At, approximately +0.3 V). However, this trend is not shown in the bond energies because fluorine is singular due to its small size, low polarisability, and inability to show hypervalence. As another difference, chlorine has a significant chemistry in positive oxidation states while fluorine does not. Chlorination often leads to higher oxidation states than bromination or iodination but lower oxidation states than fluorination. Chlorine tends to react with compounds including M–M, M–H, or M–C bonds to form M–Cl bonds. Given that E°(O2/H2O) = +1.229 V, which is less than +1.395 V, it would be expected that chlorine should be able to oxidise water to oxygen and hydrochloric acid. However, the kinetics of this reaction are unfavorable, and there is also a bubble overpotential effect to consider, so that electrolysis of aqueous chloride solutions evolves chlorine gas and not oxygen gas, a fact that is very useful for the industrial production of chlorine. Hydrogen chloride The simplest chlorine compound is hydrogen chloride, HCl, a major chemical in industry as well as in the laboratory, both as a gas and dissolved in water as hydrochloric acid. It is often produced by burning hydrogen gas in chlorine gas, or as a byproduct of chlorinating hydrocarbons. Another approach is to treat sodium chloride with concentrated sulfuric acid to produce hydrochloric acid, also known as the "salt-cake" process: NaCl + H2SO4 → NaHSO4 + HCl NaCl + NaHSO4 → Na2SO4 + HCl In the laboratory, hydrogen chloride gas may be made by drying the acid with concentrated sulfuric acid. Deuterium chloride, DCl, may be produced by reacting benzoyl chloride with heavy water (D2O). At room temperature, hydrogen chloride is a colourless gas, like all the hydrogen halides apart from hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the larger electronegative chlorine atom; however, weak hydrogen bonding is present in solid crystalline hydrogen chloride at low temperatures, similar to the hydrogen fluoride structure, before disorder begins to prevail as the temperature is raised. Hydrochloric acid is a strong acid (pKa = −7) because the hydrogen bonds to chlorine are too weak to inhibit dissociation. The HCl/H2O system has many hydrates HCl·nH2O for n = 1, 2, 3, 4, and 6. Beyond a 1:1 mixture of HCl and H2O, the system separates completely into two separate liquid phases. Hydrochloric acid forms an azeotrope with boiling point 108.58 °C at 20.22 g HCl per 100 g solution; thus hydrochloric acid cannot be concentrated beyond this point by distillation. 
Unlike hydrogen fluoride, anhydrous liquid hydrogen chloride is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into H2Cl+ and HCl2− ions – the latter, in any case, are much less stable than the bifluoride ions (HF2−) due to the very weak hydrogen bonding between hydrogen and chlorine, though its salts with very large and weakly polarising cations such as Cs+ and NR4+ (R = Me, Et, Bun) may still be isolated. Anhydrous hydrogen chloride is a poor solvent, only able to dissolve small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides. It readily protonates electrophiles containing lone-pairs or π bonds. Solvolysis, ligand replacement reactions, and oxidations are well-characterised in hydrogen chloride solution: Ph3SnCl + HCl ⟶ Ph2SnCl2 + PhH (solvolysis) Ph3COH + 3 HCl ⟶ + H3O+Cl− (solvolysis) + BCl3 ⟶ + HCl (ligand replacement) PCl3 + Cl2 + HCl ⟶ (oxidation) Other binary chlorides Nearly all elements in the periodic table form binary chlorides. The exceptions are decidedly in the minority and stem in each case from one of three causes: extreme inertness and reluctance to participate in chemical reactions (the noble gases, with the exception of xenon in the highly unstable XeCl2 and XeCl4); extreme nuclear instability hampering chemical investigation before decay and transmutation (many of the heaviest elements beyond bismuth); and having an electronegativity higher than chlorine's (oxygen and fluorine) so that the resultant binary compounds are formally not chlorides but rather oxides or fluorides of chlorine. Even though nitrogen in NCl3 is bearing a negative charge, the compound is usually called nitrogen trichloride. Chlorination of metals with Cl2 usually leads to a higher oxidation state than bromination with Br2 when multiple oxidation states are available, such as in MoCl5 and MoBr3. Chlorides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydrochloric acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen chloride gas. These methods work best when the chloride product is stable to hydrolysis; otherwise, the possibilities include high-temperature oxidative chlorination of the element with chlorine or hydrogen chloride, high-temperature chlorination of a metal oxide or other halide by chlorine, a volatile metal chloride, carbon tetrachloride, or an organic chloride. For instance, zirconium dioxide reacts with chlorine at standard conditions to produce zirconium tetrachloride, and uranium trioxide reacts with hexachloropropene when heated under reflux to give uranium tetrachloride. The second example also involves a reduction in oxidation state, which can also be achieved by reducing a higher chloride using hydrogen or a metal as a reducing agent. This may also be achieved by thermal decomposition or disproportionation as follows: EuCl3 + H2 ⟶ EuCl2 + HCl ReCl5 → ReCl3 + Cl2 AuCl3 → AuCl + Cl2 Most metal chlorides with the metal in low oxidation states (+1 to +3) are ionic. Nonmetals tend to form covalent molecular chlorides, as do metals in high oxidation states from +3 and above. Both ionic and covalent chlorides are known for metals in oxidation state +3 (e.g. scandium chloride is mostly ionic, but aluminium chloride is not). 
Silver chloride is very insoluble in water and is thus often used as a qualitative test for chlorine. Polychlorine compounds Although dichlorine is a strong oxidising agent with a high first ionisation energy, it may be oxidised under extreme conditions to form the cation. This is very unstable and has only been characterised by its electronic band spectrum when produced in a low-pressure discharge tube. The yellow cation is more stable and may be produced as follows: This reaction is conducted in the oxidising solvent arsenic pentafluoride. The trichloride anion, , has also been characterised; it is analogous to triiodide. Chlorine fluorides The three fluorides of chlorine form a subset of the interhalogen compounds, all of which are diamagnetic. Some cationic and anionic derivatives are known, such as , , , and Cl2F+. Some pseudohalides of chlorine are also known, such as cyanogen chloride (ClCN, linear), chlorine cyanate (ClNCO), chlorine thiocyanate (ClSCN, unlike its oxygen counterpart), and chlorine azide (ClN3). Chlorine monofluoride (ClF) is extremely thermally stable, and is sold commercially in 500-gram steel lecture bottles. It is a colourless gas that melts at −155.6 °C and boils at −100.1 °C. It may be produced by the reaction of its elements at 225 °C, though it must then be separated and purified from chlorine trifluoride and its reactants. Its properties are mostly intermediate between those of chlorine and fluorine. It will react with many metals and nonmetals from room temperature and above, fluorinating them and liberating chlorine. It will also act as a chlorofluorinating agent, adding chlorine and fluorine across a multiple bond or by oxidation: for example, it will attack carbon monoxide to form carbonyl chlorofluoride, COFCl. It will react analogously with hexafluoroacetone, (CF3)2CO, with a potassium fluoride catalyst to produce heptafluoroisopropyl hypochlorite, (CF3)2CFOCl; with nitriles RCN to produce RCF2NCl2; and with the sulfur oxides SO2 and SO3 to produce ClSO2F and ClOSO2F respectively. It will also react exothermically with compounds containing –OH and –NH groups, such as water: H2O + 2 ClF ⟶ 2 HF + Cl2O Chlorine trifluoride (ClF3) is a volatile colourless molecular liquid which melts at −76.3 °C and boils at 11.8  °C. It may be formed by directly fluorinating gaseous chlorine or chlorine monofluoride at 200–300 °C. One of the most reactive chemical compounds known, the list of elements it sets on fire is diverse, containing hydrogen, potassium, phosphorus, arsenic, antimony, sulfur, selenium, tellurium, bromine, iodine, and powdered molybdenum, tungsten, rhodium, iridium, and iron. It will also ignite water, along with many substances which in ordinary circumstances would be considered chemically inert such as asbestos, concrete, glass, and sand. When heated, it will even corrode noble metals as palladium, platinum, and gold, and even the noble gases xenon and radon do not escape fluorination. An impermeable fluoride layer is formed by sodium, magnesium, aluminium, zinc, tin, and silver, which may be removed by heating. Nickel, copper, and steel containers are usually used due to their great resistance to attack by chlorine trifluoride, stemming from the formation of an unreactive layer of metal fluoride. Its reaction with hydrazine to form hydrogen fluoride, nitrogen, and chlorine gases was used in experimental rocket engine, but has problems largely stemming from its extreme hypergolicity resulting in ignition without any measurable delay. 
Today, it is mostly used in nuclear fuel processing, to oxidise uranium to uranium hexafluoride for its enrichment and to separate it from plutonium, as well as in the semiconductor industry, where it is used to clean chemical vapor deposition chambers. It can act as a fluoride ion donor or acceptor (Lewis base or acid), although it does not dissociate appreciably into ClF2+ and ClF4− ions. Chlorine pentafluoride (ClF5) is made on a large scale by direct fluorination of chlorine with excess fluorine gas at 350 °C and 250 atm, and on a small scale by reacting metal chlorides with fluorine gas at 100–300 °C. It melts at −103 °C and boils at −13.1 °C. It is a very strong fluorinating agent, although it is still not as effective as chlorine trifluoride. Only a few specific stoichiometric reactions have been characterised. Arsenic pentafluoride and antimony pentafluoride form ionic adducts of the form [ClF4]+[MF6]− (M = As, Sb) and water reacts vigorously as follows: 2 H2O + ClF5 ⟶ 4 HF + FClO2 The product, chloryl fluoride, is one of the five known chlorine oxide fluorides. These range from the thermally unstable FClO to the chemically unreactive perchloryl fluoride (FClO3), the other three being FClO2, F3ClO, and F3ClO2. All five behave similarly to the chlorine fluorides, both structurally and chemically, and may act as Lewis acids or bases by gaining or losing fluoride ions respectively or as very strong oxidising and fluorinating agents. Chlorine oxides The chlorine oxides are well-studied in spite of their instability (all of them are endothermic compounds). They are important because they are produced when chlorofluorocarbons undergo photolysis in the upper atmosphere and cause the destruction of the ozone layer. None of them can be made from directly reacting the elements. Dichlorine monoxide (Cl2O) is a brownish-yellow gas (red-brown when solid or liquid) which may be obtained by reacting chlorine gas with yellow mercury(II) oxide. It is very soluble in water, in which it is in equilibrium with hypochlorous acid (HOCl), of which it is the anhydride. It is thus an effective bleach and is mostly used to make hypochlorites. It explodes on heating or sparking or in the presence of ammonia gas. Chlorine dioxide (ClO2) was the first chlorine oxide to be discovered in 1811 by Humphry Davy. It is a yellow paramagnetic gas (deep-red as a solid or liquid), as expected from its having an odd number of electrons: it is stable towards dimerisation due to the delocalisation of the unpaired electron. It explodes above −40 °C as a liquid and under pressure as a gas and therefore must be made at low concentrations for wood-pulp bleaching and water treatment. It is usually prepared by reducing a chlorate as follows: ClO3− + Cl− + 2 H+ ⟶ ClO2 + ½ Cl2 + H2O Its production is thus intimately linked to the redox reactions of the chlorine oxoacids. It is a strong oxidising agent, reacting with sulfur, phosphorus, phosphorus halides, and potassium borohydride. It dissolves exothermically in water to form dark-green solutions that very slowly decompose in the dark. Crystalline clathrate hydrates ClO2·nH2O (n ≈ 6–10) separate out at low temperatures. However, in the presence of light, these solutions rapidly photodecompose to form a mixture of chloric and hydrochloric acids. Photolysis of individual ClO2 molecules results in the radicals ClO and ClOO, while at room temperature mostly chlorine, oxygen, and some ClO3 and Cl2O6 are produced. 
Cl2O3 is also produced when photolysing the solid at −78 °C: it is a dark brown solid that explodes below 0 °C. The ClO radical leads to the depletion of atmospheric ozone and is thus environmentally important as follows: Cl• + O3 ⟶ ClO• + O2 ClO• + O• ⟶ Cl• + O2 Chlorine perchlorate (ClOClO3) is a pale yellow liquid that is less stable than ClO2 and decomposes at room temperature to form chlorine, oxygen, and dichlorine hexoxide (Cl2O6). Chlorine perchlorate may also be considered a chlorine derivative of perchloric acid (HOClO3), similar to the thermally unstable chlorine derivatives of other oxoacids: examples include chlorine nitrate (ClONO2, vigorously reactive and explosive), and chlorine fluorosulfate (ClOSO2F, more stable but still moisture-sensitive and highly reactive). Dichlorine hexoxide is a dark-red liquid that freezes to form a solid which turns yellow at −180 °C: it is usually made by reaction of chlorine dioxide with oxygen. Despite attempts to rationalise it as the dimer of ClO3, it reacts more as though it were chloryl perchlorate, [ClO2]+[ClO4]−, which has been confirmed to be the correct structure of the solid. It hydrolyses in water to give a mixture of chloric and perchloric acids: the analogous reaction with anhydrous hydrogen fluoride does not proceed to completion. Dichlorine heptoxide (Cl2O7) is the anhydride of perchloric acid (HClO4) and can readily be obtained from it by dehydrating it with phosphoric acid at −10 °C and then distilling the product at −35 °C and 1 mmHg. It is a shock-sensitive, colourless oily liquid. It is the least reactive of the chlorine oxides, being the only one to not set organic materials on fire at room temperature. It may be dissolved in water to regenerate perchloric acid or in aqueous alkalis to regenerate perchlorates. However, it thermally decomposes explosively by breaking one of the central Cl–O bonds, producing the radicals ClO3 and ClO4 which immediately decompose to the elements through intermediate oxides. Chlorine oxoacids and oxyanions Chlorine forms four oxoacids: hypochlorous acid (HOCl), chlorous acid (HOClO), chloric acid (HOClO2), and perchloric acid (HOClO3). As can be seen from the relevant standard redox potentials, chlorine is much more stable towards disproportionation in acidic solutions than in alkaline solutions: Cl2 + H2O ⇌ HOCl + H+ + Cl− (Kac = 4.2 × 10^−4 mol^2 l^−2) Cl2 + 2 OH− ⇌ OCl− + H2O + Cl− (Kalk = 7.5 × 10^15 mol^−1 l) The hypochlorite ions also disproportionate further to produce chloride and chlorate (3 ClO− ⟶ 2 Cl− + ClO3−) but this reaction is quite slow at temperatures below 70 °C in spite of the very favourable equilibrium constant of 10^27. The chlorate ions may themselves disproportionate to form chloride and perchlorate (4 ClO3− ⟶ Cl− + 3 ClO4−) but this is still very slow even at 100 °C despite the very favourable equilibrium constant of 10^20. The rates of reaction for the chlorine oxyanions increase as the oxidation state of chlorine decreases. The strengths of the chlorine oxyacids increase very quickly as the oxidation state of chlorine increases due to the increasing delocalisation of charge over more and more oxygen atoms in their conjugate bases. Most of the chlorine oxoacids may be produced by exploiting these disproportionation reactions. Hypochlorous acid (HOCl) is highly reactive and quite unstable; its salts are mostly used for their bleaching and sterilising abilities. They are very strong oxidising agents, transferring an oxygen atom to most inorganic species. 
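The "very favourable" equilibrium constants quoted above can be put on an energy scale with the standard relation ΔG° = −RT ln K. A minimal sketch, assuming 25 °C and using the order-of-magnitude constants from the text:

```python
import math

R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # 25 degrees C in kelvin

def delta_g_kj(k_eq: float) -> float:
    """Standard Gibbs energy change (kJ/mol) implied by an equilibrium constant."""
    return -R * T * math.log(k_eq) / 1000.0

# Hypochlorite disproportionation, K ~ 10^27 (from the text)
print(f"3 ClO- -> 2 Cl- + ClO3- : dG ~ {delta_g_kj(1e27):.0f} kJ/mol")
# Chlorate disproportionation, K ~ 10^20 (from the text)
print(f"4 ClO3- -> Cl- + 3 ClO4- : dG ~ {delta_g_kj(1e20):.0f} kJ/mol")
```

Both come out strongly negative (on the order of −150 and −115 kJ/mol), which underlines that the slowness noted in the text is kinetic rather than thermodynamic.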
Chlorous acid (HOClO) is even more unstable and cannot be isolated or concentrated without decomposition: it is known from the decomposition of aqueous chlorine dioxide. However, sodium chlorite is a stable salt and is useful for bleaching and stripping textiles, as an oxidising agent, and as a source of chlorine dioxide. Chloric acid (HOClO2) is a strong acid that is quite stable in cold water up to 30% concentration, but on warming gives chlorine and chlorine dioxide. Evaporation under reduced pressure allows it to be concentrated further to about 40%, but then it decomposes to perchloric acid, chlorine, oxygen, water, and chlorine dioxide. Its most important salt is sodium chlorate, mostly used to make chlorine dioxide to bleach paper pulp. The decomposition of chlorate to chloride and oxygen is a common way to produce oxygen in the laboratory on a small scale. Chloride and chlorate may comproportionate to form chlorine as follows: ClO3− + 5 Cl− + 6 H+ ⟶ 3 Cl2 + 3 H2O Perchlorates and perchloric acid (HOClO3) are the most stable oxo-compounds of chlorine, in keeping with the fact that chlorine compounds are most stable when the chlorine atom is in its lowest (−1) or highest (+7) possible oxidation states. Perchloric acid and aqueous perchlorates are vigorous and sometimes violent oxidising agents when heated, in stark contrast to their mostly inactive nature at room temperature, a kinetic effect due to the high activation energies of these reactions. Perchlorates are made by electrolytically oxidising sodium chlorate, and perchloric acid is made by reacting anhydrous sodium perchlorate or barium perchlorate with concentrated hydrochloric acid, filtering away the chloride precipitated and distilling the filtrate to concentrate it. Anhydrous perchloric acid is a colourless mobile liquid that is sensitive to shock and that explodes on contact with most organic compounds, sets hydrogen iodide and thionyl chloride on fire, and even oxidises silver and gold. Although it is a weak ligand, weaker than water, a few compounds involving coordinated ClO4− are known. Organochlorine compounds Like the other carbon–halogen bonds, the C–Cl bond is a common functional group that forms part of core organic chemistry. Formally, compounds with this functional group may be considered organic derivatives of the chloride anion. Due to the difference of electronegativity between chlorine (3.16) and carbon (2.55), the carbon in a C–Cl bond is electron-deficient and thus electrophilic. Chlorination modifies the physical properties of hydrocarbons in several ways: chlorocarbons are typically denser than water due to the higher atomic weight of chlorine versus hydrogen, and aliphatic organochlorides are alkylating agents because chloride is a leaving group. Alkanes and aryl alkanes may be chlorinated under free-radical conditions, with UV light. However, the extent of chlorination is difficult to control: the reaction is not regioselective and often results in a mixture of various isomers with different degrees of chlorination, though this may be permissible if the products are easily separated. Aryl chlorides may be prepared by the Friedel-Crafts halogenation, using chlorine and a Lewis acid catalyst. The haloform reaction, using chlorine and sodium hydroxide, is also able to generate alkyl halides from methyl ketones, and related compounds. Chlorine adds to the multiple bonds on alkenes and alkynes as well, giving di- or tetrachloro compounds. 
However, due to the expense and reactivity of chlorine, organochlorine compounds are more commonly produced by using hydrogen chloride, or with chlorinating agents such as phosphorus pentachloride (PCl5) or thionyl chloride (SOCl2). The last is very convenient in the laboratory because all side products are gaseous and do not have to be distilled out. Many organochlorine compounds have been isolated from natural sources ranging from bacteria to humans. Chlorinated organic compounds are found in nearly every class of biomolecules including alkaloids, terpenes, amino acids, flavonoids, steroids, and fatty acids. Organochlorides, including dioxins, are produced in the high temperature environment of forest fires, and dioxins have been found in the preserved ashes of lightning-ignited fires that predate synthetic dioxins. In addition, a variety of simple chlorinated hydrocarbons including dichloromethane, chloroform, and carbon tetrachloride have been isolated from marine algae. A majority of the chloromethane in the environment is produced naturally by biological decomposition, forest fires, and volcanoes. Some types of organochlorides, though not all, have significant toxicity to plants or animals, including humans. Dioxins, produced when organic matter is burned in the presence of chlorine, and some insecticides, such as DDT, are persistent organic pollutants which pose dangers when they are released into the environment. For example, DDT, which was widely used to control insects in the mid 20th century, also accumulates in food chains, and causes reproductive problems (e.g., eggshell thinning) in certain bird species. Due to the ready homolytic fission of the C–Cl bond to create chlorine radicals in the upper atmosphere, chlorofluorocarbons have been phased out due to the harm they do to the ozone layer. Occurrence and production Chlorine is too reactive to occur as the free element in nature but is very abundant in the form of its chloride salts. It is the twenty-first most abundant element in Earth's crust and makes up 126 parts per million of it, through the large deposits of chloride minerals, especially sodium chloride, that have been evaporated from water bodies. All of these pale in comparison to the reserves of chloride ions in seawater: smaller amounts at higher concentrations occur in some inland seas and underground brine wells, such as the Great Salt Lake in Utah and the Dead Sea in Israel. Small batches of chlorine gas are prepared in the laboratory by combining hydrochloric acid and manganese dioxide, but the need rarely arises due to its ready availability. In industry, elemental chlorine is usually produced by the electrolysis of sodium chloride dissolved in water. This method, the chloralkali process industrialized in 1892, now provides most industrial chlorine gas. Along with chlorine, the method yields hydrogen gas and sodium hydroxide, which is the most valuable product. The process proceeds according to the following chemical equation: 2 NaCl + 2 H2O → Cl2 + H2 + 2 NaOH The electrolysis of chloride solutions all proceed according to the following equations: Cathode: 2 H2O + 2 e− → H2 + 2 OH− Anode: 2 Cl− → Cl2 + 2 e− In diaphragm cell electrolysis, an asbestos (or polymer-fiber) diaphragm separates a cathode and an anode, preventing the chlorine forming at the anode from re-mixing with the sodium hydroxide and the hydrogen formed at the cathode. 
The salt solution (brine) is continuously fed to the anode compartment and flows through the diaphragm to the cathode compartment, where the caustic alkali is produced and the brine is partially depleted. Diaphragm methods produce dilute and slightly impure alkali, but they are not burdened with the problem of mercury disposal and they are more energy efficient. Membrane cell electrolysis employs permeable membrane as an ion exchanger. Saturated sodium (or potassium) chloride solution is passed through the anode compartment, leaving at a lower concentration. This method also produces very pure sodium (or potassium) hydroxide but has the disadvantage of requiring very pure brine at high concentrations. In the Deacon process, hydrogen chloride recovered from the production of organochlorine compounds is recovered as chlorine. The process relies on oxidation using oxygen: 4 HCl + O2 → 2 Cl2 + 2 H2O The reaction requires a catalyst. As introduced by Deacon, early catalysts were based on copper. Commercial processes, such as the Mitsui MT-Chlorine Process, have switched to chromium and ruthenium-based catalysts. The chlorine produced is available in cylinders from sizes ranging from 450 g to 70 kg, as well as drums (865 kg), tank wagons (15 tonnes on roads; 27–90 tonnes by rail), and barges (600–1200 tonnes). Applications Sodium chloride is the most common chlorine compound, and is the main source of chlorine for the demand by the chemical industry. About 15000 chlorine-containing compounds are commercially traded, including such diverse compounds as chlorinated methane, ethanes, vinyl chloride, polyvinyl chloride (PVC), aluminium trichloride for catalysis, the chlorides of magnesium, titanium, zirconium, and hafnium which are the precursors for producing the pure form of those elements. Quantitatively, of all elemental chlorine produced, about 63% is used in the manufacture of organic compounds, and 18% in the manufacture of inorganic chlorine compounds. About 15,000 chlorine compounds are used commercially. The remaining 19% of chlorine produced is used for bleaches and disinfection products. The most significant of organic compounds in terms of production volume are 1,2-dichloroethane and vinyl chloride, intermediates in the production of PVC. Other particularly important organochlorines are methyl chloride, methylene chloride, chloroform, vinylidene chloride, trichloroethylene, perchloroethylene, allyl chloride, epichlorohydrin, chlorobenzene, dichlorobenzenes, and trichlorobenzenes. The major inorganic compounds include HCl, Cl2O, HOCl, NaClO3, chlorinated isocyanurates, AlCl3, SiCl4, SnCl4, PCl3, PCl5, POCl3, AsCl3, SbCl3, SbCl5, BiCl3, and ZnCl2. Sanitation, disinfection, and antisepsis Combating putrefaction In France (as elsewhere), animal intestines were processed to make musical instrument strings, Goldbeater's skin and other products. This was done in "gut factories" (boyauderies), and it was an odiferous and unhealthy process. In or about 1820, the Société d'encouragement pour l'industrie nationale offered a prize for the discovery of a method, chemical or mechanical, for separating the peritoneal membrane of animal intestines without putrefaction. The prize was won by Antoine-Germain Labarraque, a 44-year-old French chemist and pharmacist who had discovered that Berthollet's chlorinated bleaching solutions ("Eau de Javel") not only destroyed the smell of putrefaction of animal tissue decomposition, but also actually retarded the decomposition. 
Labarraque's research resulted in the use of chlorides and hypochlorites of lime (calcium hypochlorite) and of sodium (sodium hypochlorite) in the boyauderies. The same chemicals were found to be useful in the routine disinfection and deodorization of latrines, sewers, markets, abattoirs, anatomical theatres, and morgues. They were successful in hospitals, lazarets, prisons, infirmaries (both on land and at sea), magnaneries, stables, cattle-sheds, etc.; and they were beneficial during exhumations, embalming, outbreaks of epidemic disease, fever, and blackleg in cattle. Disinfection Labarraque's chlorinated lime and soda solutions have been advocated since 1828 to prevent infection (called "contagious infection", presumed to be transmitted by "miasmas"), and to treat putrefaction of existing wounds, including septic wounds. In his 1828 work, Labarraque recommended that doctors breathe chlorine, wash their hands in chlorinated lime, and even sprinkle chlorinated lime about the patients' beds in cases of "contagious infection". In 1828, the contagion of infections was well known, even though the agency of the microbe was not discovered until more than half a century later. During the Paris cholera outbreak of 1832, large quantities of so-called chloride of lime were used to disinfect the capital. This was not simply modern calcium chloride, but chlorine gas dissolved in lime-water (dilute calcium hydroxide) to form calcium hypochlorite (chlorinated lime). Labarraque's discovery helped to remove the terrible stench of decay from hospitals and dissecting rooms, and by doing so, effectively deodorised the Latin Quarter of Paris. These "putrid miasmas" were thought by many to cause the spread of "contagion" and "infection" – both words used before the germ theory of infection. Chloride of lime was used for destroying odors and "putrid matter". One source claims chloride of lime was used by Dr. John Snow to disinfect water from the cholera-contaminated well that was feeding the Broad Street pump in 1854 London, though three other reputable sources that describe that famous cholera epidemic do not mention the incident. One reference makes it clear that chloride of lime was used to disinfect the offal and filth in the streets surrounding the Broad Street pump – a common practice in mid-nineteenth century England. Semmelweis and experiments with antisepsis Perhaps the most famous application of Labarraque's chlorine and chemical base solutions was in 1847, when Ignaz Semmelweis used chlorine-water (chlorine dissolved in pure water, which was cheaper than chlorinated lime solutions) to disinfect the hands of Austrian doctors, which Semmelweis noticed still carried the stench of decomposition from the dissection rooms to the patient examination rooms. Long before the germ theory of disease, Semmelweis theorized that "cadaveric particles" were transmitting decay from fresh medical cadavers to living patients, and he used the well-known "Labarraque's solutions" as the only known method to remove the smell of decay and tissue decomposition (which he found that soap did not). The solutions proved to be far more effective antiseptics than soap (Semmelweis was also aware of their greater efficacy, but not the reason), and this resulted in Semmelweis's celebrated success in stopping the transmission of childbed fever ("puerperal fever") in the maternity wards of Vienna General Hospital in Austria in 1847. 
Much later, during World War I in 1916, a standardized and diluted modification of Labarraque's solution containing hypochlorite (0.5%) and boric acid as an acidic stabilizer was developed by Henry Drysdale Dakin (who gave full credit to Labarraque's prior work in this area). Called Dakin's solution, the method of wound irrigation with chlorinated solutions allowed antiseptic treatment of a wide variety of open wounds, long before the modern antibiotic era. A modified version of this solution continues to be employed in wound irrigation in modern times, where it remains effective against bacteria that are resistant to multiple antibiotics (see Century Pharmaceuticals). Public sanitation The first continuous application of chlorination to drinking U.S. water was installed in Jersey City, New Jersey, in 1908. By 1918, the US Department of Treasury called for all drinking water to be disinfected with chlorine. Chlorine is presently an important chemical for water purification (such as in water treatment plants), in disinfectants, and in bleach. Even small water supplies are now routinely chlorinated. Chlorine is usually used (in the form of hypochlorous acid) to kill bacteria and other microbes in drinking water supplies and public swimming pools. In most private swimming pools, chlorine itself is not used, but rather sodium hypochlorite, formed from chlorine and sodium hydroxide, or solid tablets of chlorinated isocyanurates. The drawback of using chlorine in swimming pools is that the chlorine reacts with the amino acids in proteins in human hair and skin. Contrary to popular belief, the distinctive "chlorine aroma" associated with swimming pools is not the result of elemental chlorine itself, but of chloramine, a chemical compound produced by the reaction of free dissolved chlorine with amines in organic substances including those in urine and sweat. As a disinfectant in water, chlorine is more than three times as effective against Escherichia coli as bromine, and more than six times as effective as iodine. Increasingly, monochloramine itself is being directly added to drinking water for purposes of disinfection, a process known as chloramination. It is often impractical to store and use poisonous chlorine gas for water treatment, so alternative methods of adding chlorine are used. These include hypochlorite solutions, which gradually release chlorine into the water, and compounds like sodium dichloro-s-triazinetrione (dihydrate or anhydrous), sometimes referred to as "dichlor", and trichloro-s-triazinetrione, sometimes referred to as "trichlor". These compounds are stable while solid and may be used in powdered, granular, or tablet form. When added in small amounts to pool water or industrial water systems, the chlorine atoms hydrolyze from the rest of the molecule, forming hypochlorous acid (HOCl), which acts as a general biocide, killing germs, microorganisms, algae, and so on. Use as a weapon World War I Chlorine gas, also known as bertholite, was first used as a weapon in World War I by Germany on April 22, 1915, in the Second Battle of Ypres. As described by the soldiers, it had the distinctive smell of a mixture of pepper and pineapple. It also tasted metallic and stung the back of the throat and chest. Chlorine reacts with water in the mucosa of the lungs to form hydrochloric acid, destructive to living tissue and potentially lethal. 
Human respiratory systems can be protected from chlorine gas by gas masks with activated charcoal or other filters, which makes chlorine gas much less lethal than other chemical weapons. It was pioneered by a German scientist later to be a Nobel laureate, Fritz Haber of the Kaiser Wilhelm Institute in Berlin, in collaboration with the German chemical conglomerate IG Farben, which developed methods for discharging chlorine gas against an entrenched enemy. After its first use, both sides in the conflict used chlorine as a chemical weapon, but it was soon replaced by the more deadly phosgene and mustard gas. Middle east Chlorine gas was also used during the Iraq War in Anbar Province in 2007, with insurgents packing truck bombs with mortar shells and chlorine tanks. The attacks killed two people from the explosives and sickened more than 350. Most of the deaths were caused by the force of the explosions rather than the effects of chlorine since the toxic gas is readily dispersed and diluted in the atmosphere by the blast. In some bombings, over a hundred civilians were hospitalized due to breathing difficulties. The Iraqi authorities tightened security for elemental chlorine, which is essential for providing safe drinking water to the population. On 23 October 2014, it was reported that the Islamic State of Iraq and the Levant had used chlorine gas in the town of Duluiyah, Iraq. Laboratory analysis of clothing and soil samples confirmed the use of chlorine gas against Kurdish Peshmerga Forces in a vehicle-borne improvised explosive device attack on 23 January 2015 at the Highway 47 Kiske Junction near Mosul. Another country in the middle east, Syria, has used chlorine as a chemical weapon delivered from barrel bombs and rockets. In 2016, the OPCW-UN Joint Investigative Mechanism concluded that the Syrian government used chlorine as a chemical weapon in three separate attacks. Later investigations from the OPCW's Investigation and Identification Team concluded that the Syrian Air Force was responsible for chlorine attacks in 2017 and 2018. Biological role The chloride anion is an essential nutrient for metabolism. Chlorine is needed for the production of hydrochloric acid in the stomach and in cellular pump functions. The main dietary source is table salt, or sodium chloride. Overly low or high concentrations of chloride in the blood are examples of electrolyte disturbances. Hypochloremia (having too little chloride) rarely occurs in the absence of other abnormalities. It is sometimes associated with hypoventilation. It can be associated with chronic respiratory acidosis. Hyperchloremia (having too much chloride) usually does not produce symptoms. When symptoms do occur, they tend to resemble those of hypernatremia (having too much sodium). Reduction in blood chloride leads to cerebral dehydration; symptoms are most often caused by rapid rehydration which results in cerebral edema. Hyperchloremia can affect oxygen transport. Hazards Chlorine is a toxic gas that attacks the respiratory system, eyes, and skin. Because it is denser than air, it tends to accumulate at the bottom of poorly ventilated spaces. Chlorine gas is a strong oxidizer, which may react with flammable materials. Chlorine is detectable with measuring devices in concentrations as low as 0.2 parts per million (ppm), and by smell at 3 ppm. Coughing and vomiting may occur at 30 ppm and lung damage at 60 ppm. About 1000 ppm can be fatal after a few deep breaths of the gas. 
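Gas-phase ppm figures like these can be translated into mass concentrations with the ideal-gas molar volume. A rough sketch, assuming 25 °C and 1 atm (where one mole of gas occupies about 24.45 L) and a Cl2 molar mass of about 70.9 g/mol; these constants are standard values, not taken from the text:

```python
MOLAR_MASS_CL2 = 70.9   # g/mol, approximate molar mass of Cl2
MOLAR_VOLUME = 24.45    # L/mol for an ideal gas at 25 degrees C and 1 atm

def ppm_to_mg_per_m3(ppm: float) -> float:
    """Convert a volume-based ppm concentration of Cl2 to mg per cubic metre."""
    # 1 ppm by volume = 1 mL of gas per m^3 = 1e-3 L/m^3 of Cl2
    return ppm * MOLAR_MASS_CL2 / MOLAR_VOLUME

for ppm in (0.2, 3, 30, 60, 1000):   # exposure thresholds quoted above
    print(f"{ppm:7.1f} ppm  ~ {ppm_to_mg_per_m3(ppm):8.1f} mg/m3")
```

On this basis 1 ppm corresponds to roughly 2.9 mg/m3, consistent with the "1 ppm, or 3 mg/m3" occupational limit cited below.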
The IDLH (immediately dangerous to life and health) concentration is 10 ppm. Breathing lower concentrations can aggravate the respiratory system and exposure to the gas can irritate the eyes. When chlorine is inhaled at concentrations greater than 30 ppm, it reacts with water within the lungs, producing hydrochloric acid (HCl) and hypochlorous acid (HOCl). When used at specified levels for water disinfection, the reaction of chlorine with water is not a major concern for human health. Other materials present in the water may generate disinfection by-products that are associated with negative effects on human health. In the United States, the Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit for elemental chlorine at 1 ppm, or 3 mg/m3. The National Institute for Occupational Safety and Health has designated a recommended exposure limit of 0.5 ppm over 15 minutes. In the home, accidents occur when hypochlorite bleach solutions come into contact with certain acidic drain-cleaners to produce chlorine gas. Hypochlorite bleach (a popular laundry additive) combined with ammonia (another popular laundry additive) produces chloramines, another toxic group of chemicals. Chlorine-induced cracking in structural materials Chlorine is widely used for purifying water, especially potable water supplies and water used in swimming pools. Several catastrophic collapses of swimming pool ceilings have occurred from chlorine-induced stress corrosion cracking of stainless steel suspension rods. Some polymers are also sensitive to attack, including acetal resin and polybutene. Both materials were used in hot and cold water domestic plumbing, and stress corrosion cracking caused widespread failures in the US in the 1980s and 1990s. Chlorine-iron fire The element iron can combine with chlorine at high temperatures in a strong exothermic reaction, creating a chlorine-iron fire. Chlorine-iron fires are a risk in chemical process plants, where much of the pipework that carries chlorine gas is made of steel. See also Chlorine cycle Chlorine gas poisoning Industrial gas Polymer degradation Reductive dechlorination References Explanatory notes General bibliography External links Chlorine at The Periodic Table of Videos (University of Nottingham) Agency for Toxic Substances and Disease Registry: Chlorine Electrolytic production Production and liquefaction of chlorine Chlorine Production Using Mercury, Environmental Considerations and Alternatives National Pollutant Inventory – Chlorine National Institute for Occupational Safety and Health – Chlorine Page Chlorine Institute – Trade association representing the chlorine industry Chlorine Online – the web portal of Eurochlor – the business association of the European chlor-alkali industry Chemical elements Diatomic nonmetals Gases with color Halogens Hazardous air pollutants Industrial gases Chemical hazards Oxidizing agents Pulmonary agents Reactive nonmetals Swimming pool equipment
https://en.wikipedia.org/wiki/Calcium
Calcium is a chemical element with the symbol Ca and atomic number 20. As an alkaline earth metal, calcium is a reactive metal that forms a dark oxide-nitride layer when exposed to air. Its physical and chemical properties are most similar to its heavier homologues strontium and barium. It is the fifth most abundant element in Earth's crust, and the third most abundant metal, after iron and aluminium. The most common calcium compound on Earth is calcium carbonate, found in limestone and the fossilised remnants of early sea life; gypsum, anhydrite, fluorite, and apatite are also sources of calcium. The name derives from Latin calx "lime", which was obtained from heating limestone. Some calcium compounds were known to the ancients, though their chemistry was unknown until the seventeenth century. Pure calcium was isolated in 1808 via electrolysis of its oxide by Humphry Davy, who named the element. Calcium compounds are widely used in many industries: in foods and pharmaceuticals for calcium supplementation, in the paper industry as bleaches, as components in cement and electrical insulators, and in the manufacture of soaps. On the other hand, the metal in pure form has few applications due to its high reactivity; still, in small quantities it is often used as an alloying component in steelmaking, and sometimes, as a calcium–lead alloy, in making automotive batteries. Calcium is the most abundant metal and the fifth-most abundant element in the human body. As electrolytes, calcium ions (Ca2+) play a vital role in the physiological and biochemical processes of organisms and cells: in signal transduction pathways where they act as a second messenger; in neurotransmitter release from neurons; in contraction of all muscle cell types; as cofactors in many enzymes; and in fertilization. Calcium ions outside cells are important for maintaining the potential difference across excitable cell membranes, protein synthesis, and bone formation. Characteristics Classification Calcium is a very ductile silvery metal (sometimes described as pale yellow) whose properties are very similar to the heavier elements in its group, strontium, barium, and radium. A calcium atom has twenty electrons, arranged in the electron configuration [Ar]4s2. Like the other elements placed in group 2 of the periodic table, calcium has two valence electrons in the outermost s-orbital, which are very easily lost in chemical reactions to form a dipositive ion with the stable electron configuration of a noble gas, in this case argon. Hence, calcium is almost always divalent in its compounds, which are usually ionic. Hypothetical univalent salts of calcium would be stable with respect to their elements, but not to disproportionation to the divalent salts and calcium metal, because the enthalpy of formation of MX2 is much higher than those of the hypothetical MX. This occurs because of the much greater lattice energy afforded by the more highly charged Ca2+ cation compared to the hypothetical Ca+ cation. Calcium, strontium, barium, and radium are always considered to be alkaline earth metals; the lighter beryllium and magnesium, also in group 2 of the periodic table, are often included as well. 
Nevertheless, beryllium and magnesium differ significantly from the other members of the group in their physical and chemical behaviour: they behave more like aluminium and zinc respectively and have some of the weaker metallic character of the post-transition metals, which is why the traditional definition of the term "alkaline earth metal" excludes them. Physical properties Calcium metal melts at 842 °C and boils at 1494 °C; these values are higher than those for magnesium and strontium, the neighbouring group 2 metals. It crystallises in the face-centered cubic arrangement like strontium; above 450 °C, it changes to an anisotropic hexagonal close-packed arrangement like magnesium. Its density of 1.55 g/cm3 is the lowest in its group. Calcium is harder than lead but can be cut with a knife with effort. While calcium is a poorer conductor of electricity than copper or aluminium by volume, it is a better conductor by mass than both due to its very low density. While calcium is infeasible as a conductor for most terrestrial applications as it reacts quickly with atmospheric oxygen, its use as such in space has been considered. Chemical properties The chemistry of calcium is that of a typical heavy alkaline earth metal. For example, calcium spontaneously reacts with water more quickly than magnesium and less quickly than strontium to produce calcium hydroxide and hydrogen gas. It also reacts with the oxygen and nitrogen in the air to form a mixture of calcium oxide and calcium nitride. When finely divided, it spontaneously burns in air to produce the nitride. In bulk, calcium is less reactive: it quickly forms a hydration coating in moist air, but below 30% relative humidity it may be stored indefinitely at room temperature. Besides the simple oxide CaO, the peroxide CaO2 can be made by direct oxidation of calcium metal under a high pressure of oxygen, and there is some evidence for a yellow superoxide Ca(O2)2. Calcium hydroxide, Ca(OH)2, is a strong base, though it is not as strong as the hydroxides of strontium, barium or the alkali metals. All four dihalides of calcium are known. Calcium carbonate (CaCO3) and calcium sulfate (CaSO4) are particularly abundant minerals. Like strontium and barium, as well as the alkali metals and the divalent lanthanides europium and ytterbium, calcium metal dissolves directly in liquid ammonia to give a dark blue solution. Due to the large size of the calcium ion (Ca2+), high coordination numbers are common, up to 24 in some intermetallic compounds such as CaZn13. Calcium is readily complexed by oxygen chelates such as EDTA and polyphosphates, which are useful in analytic chemistry and removing calcium ions from hard water. In the absence of steric hindrance, smaller group 2 cations tend to form stronger complexes, but when large polydentate macrocycles are involved the trend is reversed. Although calcium is in the same group as magnesium and organomagnesium compounds are very commonly used throughout chemistry, organocalcium compounds are not similarly widespread because they are more difficult to make and more reactive, although they have recently been investigated as possible catalysts. Organocalcium compounds tend to be more similar to organoytterbium compounds due to the similar ionic radii of Yb2+ (102 pm) and Ca2+ (100 pm). Most of these compounds can only be prepared at low temperatures; bulky ligands tend to favor stability. 
For example, calcium dicyclopentadienyl, Ca(C5H5)2, must be made by directly reacting calcium metal with mercurocene or cyclopentadiene itself; replacing the C5H5 ligand with the bulkier C5(CH3)5 ligand on the other hand increases the compound's solubility, volatility, and kinetic stability. Isotopes Natural calcium is a mixture of five stable isotopes (40Ca, 42Ca, 43Ca, 44Ca, and 46Ca) and one isotope with a half-life so long that it can be considered stable for all practical purposes (48Ca, with a half-life of about 4.3 × 1019 years). Calcium is the first (lightest) element to have six naturally occurring isotopes. By far the most common isotope of calcium in nature is 40Ca, which makes up 96.941% of all natural calcium. It is produced in the silicon-burning process from fusion of alpha particles and is the heaviest stable nuclide with equal proton and neutron numbers; its occurrence is also supplemented slowly by the decay of primordial 40K. Adding another alpha particle leads to unstable 44Ti, which quickly decays via two successive electron captures to stable 44Ca; this makes up 2.806% of all natural calcium and is the second-most common isotope. The other four natural isotopes, 42Ca, 43Ca, 46Ca, and 48Ca, are significantly rarer, each comprising less than 1% of all natural calcium. The four lighter isotopes are mainly products of the oxygen-burning and silicon-burning processes, leaving the two heavier ones to be produced via neutron capture processes. 46Ca is mostly produced in a "hot" s-process, as its formation requires a rather high neutron flux to allow short-lived 45Ca to capture a neutron. 48Ca is produced by electron capture in the r-process in type Ia supernovae, where high neutron excess and low enough entropy ensures its survival. 46Ca and 48Ca are the first "classically stable" nuclides with a six-neutron or eight-neutron excess respectively. Although extremely neutron-rich for such a light element, 48Ca is very stable because it is a doubly magic nucleus, having 20 protons and 28 neutrons arranged in closed shells. Its beta decay to 48Sc is very hindered because of the gross mismatch of nuclear spin: 48Ca has zero nuclear spin, being even–even, while 48Sc has spin 6+, so the decay is forbidden by the conservation of angular momentum. While two excited states of 48Sc are available for decay as well, they are also forbidden due to their high spins. As a result, when 48Ca does decay, it does so by double beta decay to 48Ti instead, being the lightest nuclide known to undergo double beta decay. The heavy isotope 46Ca can also theoretically undergo double beta decay to 46Ti as well, but this has never been observed. The lightest and most common isotope 40Ca is also doubly magic and could undergo double electron capture to 40Ar, but this has likewise never been observed. Calcium is the only element to have two primordial doubly magic isotopes. The experimental lower limits for the half-lives of 40Ca and 46Ca are 5.9 × 1021 years and 2.8 × 1015 years respectively. Apart from the practically stable 48Ca, the longest lived radioisotope of calcium is 41Ca. It decays by electron capture to stable 41K with a half-life of about a hundred thousand years. Its existence in the early Solar System as an extinct radionuclide has been inferred from excesses of 41K: traces of 41Ca also still exist today, as it is a cosmogenic nuclide, continuously reformed through neutron activation of natural 40Ca. Many other calcium radioisotopes are known, ranging from 35Ca to 60Ca. 
They are all much shorter-lived than 41Ca, the most stable among them being 45Ca (half-life 163 days) and 47Ca (half-life 4.54 days). The isotopes lighter than 42Ca usually undergo beta plus decay to isotopes of potassium, and those heavier than 44Ca usually undergo beta minus decay to isotopes of scandium, although near the nuclear drip lines, proton emission and neutron emission begin to be significant decay modes as well. Like other elements, a variety of processes alter the relative abundance of calcium isotopes. The best studied of these processes is the mass-dependent fractionation of calcium isotopes that accompanies the precipitation of calcium minerals such as calcite, aragonite and apatite from solution. Lighter isotopes are preferentially incorporated into these minerals, leaving the surrounding solution enriched in heavier isotopes at a magnitude of roughly 0.025% per atomic mass unit (amu) at room temperature. Mass-dependent differences in calcium isotope composition are conventionally expressed by the ratio of two isotopes (usually 44Ca/40Ca) in a sample compared to the same ratio in a standard reference material. 44Ca/40Ca varies by about 1% among common earth materials. History Calcium compounds were known for millennia, although their chemical makeup was not understood until the 17th century. Lime as a building material and as plaster for statues was used as far back as around 7000 BC. The first dated lime kiln dates back to 2500 BC and was found in Khafajah, Mesopotamia. At about the same time, dehydrated gypsum (CaSO4·2H2O) was being used in the Great Pyramid of Giza. This material would later be used for the plaster in the tomb of Tutankhamun. The ancient Romans instead used lime mortars made by heating limestone (CaCO3). The name "calcium" itself derives from the Latin word calx "lime". Vitruvius noted that the lime that resulted was lighter than the original limestone, attributing this to the boiling of the water. In 1755, Joseph Black proved that this was due to the loss of carbon dioxide, which as a gas had not been recognised by the ancient Romans. In 1789, Antoine Lavoisier suspected that lime might be an oxide of a fundamental chemical element. In his table of the elements, Lavoisier listed five "salifiable earths" (i.e., ores that could be made to react with acids to produce salts (salis = salt, in Latin): chaux (calcium oxide), magnésie (magnesia, magnesium oxide), baryte (barium sulfate), alumine (alumina, aluminium oxide), and silice (silica, silicon dioxide)). About these "elements", Lavoisier reasoned: Calcium, along with its congeners magnesium, strontium, and barium, was first isolated by Humphry Davy in 1808. Following the work of Jöns Jakob Berzelius and Magnus Martin af Pontin on electrolysis, Davy isolated calcium and magnesium by putting a mixture of the respective metal oxides with mercury(II) oxide on a platinum plate which was used as the anode, the cathode being a platinum wire partially submerged into mercury. Electrolysis then gave calcium–mercury and magnesium–mercury amalgams, and distilling off the mercury gave the metal. However, pure calcium cannot be prepared in bulk by this method and a workable commercial process for its production was not found until over a century later. Occurrence and production At 3%, calcium is the fifth most abundant element in the Earth's crust, and the third most abundant metal behind aluminium and iron. It is also the fourth most abundant element in the lunar highlands. 
Sedimentary calcium carbonate deposits pervade the Earth's surface as fossilized remains of past marine life; they occur in two forms, the rhombohedral calcite (more common) and the orthorhombic aragonite (forming in more temperate seas). Minerals of the first type include limestone, dolomite, marble, chalk, and Iceland spar; aragonite beds make up the Bahamas, the Florida Keys, and the Red Sea basins. Corals, sea shells, and pearls are mostly made up of calcium carbonate. Among the other important minerals of calcium are gypsum (CaSO4·2H2O), anhydrite (CaSO4), fluorite (CaF2), and apatite ([Ca5(PO4)3X], X = OH, Cl, or F). The major producers of calcium are China (about 10000 to 12000 tonnes per year), Russia (about 6000 to 8000 tonnes per year), and the United States (about 2000 to 4000 tonnes per year). Canada and France are also among the minor producers. In 2005, about 24000 tonnes of calcium were produced; about half of the world's extracted calcium is used by the United States, and about 80% of the output is consumed each year. In Russia and China, Davy's method of electrolysis is still used, but is instead applied to molten calcium chloride. Since calcium is less reactive than strontium or barium, the oxide–nitride coating that results in air is stable and lathe machining and other standard metallurgical techniques are suitable for calcium. In the United States and Canada, calcium is instead produced by reducing lime with aluminium at high temperatures. Geochemical cycling Calcium cycling provides a link between tectonics, climate, and the carbon cycle. In the simplest terms, uplift of mountains exposes calcium-bearing rocks such as some granites to chemical weathering and releases Ca2+ into surface water. These ions are transported to the ocean where they react with dissolved CO2 to form limestone (CaCO3), which in turn settles to the sea floor where it is incorporated into new rocks. Dissolved CO2, along with carbonate and bicarbonate ions, is termed "dissolved inorganic carbon" (DIC). The actual reaction is more complicated and involves the bicarbonate ion (HCO3−) that forms when CO2 reacts with water at seawater pH: Ca2+ + 2 HCO3− → CaCO3↓ + CO2 + H2O At seawater pH, most of the CO2 is immediately converted back into HCO3−. The reaction results in a net transport of one molecule of CO2 from the ocean/atmosphere into the lithosphere. The result is that each Ca2+ ion released by chemical weathering ultimately removes one CO2 molecule from the surficial system (atmosphere, ocean, soils and living organisms), storing it in carbonate rocks where it is likely to stay for hundreds of millions of years. The weathering of calcium from rocks thus scrubs CO2 from the ocean and atmosphere, exerting a strong long-term effect on climate. Uses The largest use of metallic calcium is in steelmaking, due to its strong chemical affinity for oxygen and sulfur. Its oxides and sulfides, once formed, give liquid lime aluminate and sulfide inclusions in steel which float out; on treatment, these inclusions disperse throughout the steel and become small and spherical, improving castability, cleanliness and general mechanical properties. Calcium is also used in maintenance-free automotive batteries, in which the use of 0.1% calcium–lead alloys instead of the usual antimony–lead alloys leads to lower water loss and lower self-discharging. Due to the risk of expansion and cracking, aluminium is sometimes also incorporated into these alloys.
These lead–calcium alloys are also used in casting, replacing lead–antimony alloys. Calcium is also used to strengthen aluminium alloys used for bearings, for the control of graphitic carbon in cast iron, and to remove bismuth impurities from lead. Calcium metal is found in some drain cleaners, where it functions to generate heat and calcium hydroxide that saponifies the fats and liquefies the proteins (for example, those in hair) that block drains. Besides metallurgy, the reactivity of calcium is exploited to remove nitrogen from high-purity argon gas and as a getter for oxygen and nitrogen. It is also used as a reducing agent in the production of chromium, zirconium, thorium, and uranium. It can also be used to store hydrogen gas, as it reacts with hydrogen to form solid calcium hydride, from which the hydrogen can easily be re-extracted. Calcium isotope fractionation during mineral formation has led to several applications of calcium isotopes. In particular, the 1997 observation by Skulan and DePaolo that calcium minerals are isotopically lighter than the solutions from which the minerals precipitate is the basis of analogous applications in medicine and in paleoceanography. In animals with skeletons mineralized with calcium, the calcium isotopic composition of soft tissues reflects the relative rate of formation and dissolution of skeletal mineral. In humans, changes in the calcium isotopic composition of urine have been shown to be related to changes in bone mineral balance. When the rate of bone formation exceeds the rate of bone resorption, the 44Ca/40Ca ratio in soft tissue rises and vice versa. Because of this relationship, calcium isotopic measurements of urine or blood may be useful in the early detection of metabolic bone diseases like osteoporosis. A similar system exists in seawater, where 44Ca/40Ca tends to rise when the rate of removal of Ca2+ by mineral precipitation exceeds the input of new calcium into the ocean. In 1997, Skulan and DePaolo presented the first evidence of change in seawater 44Ca/40Ca over geologic time, along with a theoretical explanation of these changes. More recent papers have confirmed this observation, demonstrating that seawater Ca2+ concentration is not constant, and that the ocean is never in a "steady state" with respect to calcium input and output. This has important climatological implications, as the marine calcium cycle is closely tied to the carbon cycle. Many calcium compounds are used in food, as pharmaceuticals, and in medicine, among others. For example, calcium and phosphorus are supplemented in foods through the addition of calcium lactate, calcium diphosphate, and tricalcium phosphate. The last is also used as a polishing agent in toothpaste and in antacids. Calcium lactobionate is a white powder that is used as a suspending agent for pharmaceuticals. In baking, calcium phosphate is used as a leavening agent. Calcium sulfite is used as a bleach in papermaking and as a disinfectant, calcium silicate is used as a reinforcing agent in rubber, and calcium acetate is a component of liming rosin and is used to make metallic soaps and synthetic resins. Calcium is on the World Health Organization's List of Essential Medicines. Food sources Foods rich in calcium include dairy products, such as yogurt and cheese, sardines, salmon, soy products, kale, and fortified breakfast cereals. Because of concerns for long-term adverse side effects, including calcification of arteries and kidney stones, both the U.S. 
Institute of Medicine (IOM) and the European Food Safety Authority (EFSA) set Tolerable Upper Intake Levels (ULs) for combined dietary and supplemental calcium. From the IOM, people of ages 9–18 years are not to exceed 3 g/day combined intake; for ages 19–50, not to exceed 2.5 g/day; for ages 51 and older, not to exceed 2 g/day. EFSA set the UL for all adults at 2.5 g/day, but decided the information for children and adolescents was not sufficient to determine ULs. Biological and pathological role Function Calcium is an essential element needed in large quantities. The Ca2+ ion acts as an electrolyte and is vital to the health of the muscular, circulatory, and digestive systems; is indispensable to the building of bone; and supports synthesis and function of blood cells. For example, it regulates the contraction of muscles, nerve conduction, and the clotting of blood. As a result, intra- and extracellular calcium levels are tightly regulated by the body. Calcium can play this role because the Ca2+ ion forms stable coordination complexes with many organic compounds, especially proteins; it also forms compounds with a wide range of solubilities, enabling the formation of the skeleton. Binding Calcium ions may be complexed by proteins through binding the carboxyl groups of glutamic acid or aspartic acid residues; through interacting with phosphorylated serine, tyrosine, or threonine residues; or by being chelated by γ-carboxylated amino acid residues. Trypsin, a digestive enzyme, uses the first method; osteocalcin, a bone matrix protein, uses the third. Some other bone matrix proteins such as osteopontin and bone sialoprotein use both the first and the second. Direct activation of enzymes by binding calcium is common; some other enzymes are activated by noncovalent association with direct calcium-binding enzymes. Calcium also binds to the phospholipid layer of the cell membrane, anchoring proteins associated with the cell surface. Solubility As an example of the wide range of solubility of calcium compounds, monocalcium phosphate is very soluble in water, 85% of extracellular calcium is as dicalcium phosphate with a solubility of 2.00 mM, and the hydroxyapatite of bones in an organic matrix is tricalcium phosphate with a solubility of 1000 μM. Nutrition Calcium is a common constituent of multivitamin dietary supplements, but the composition of calcium complexes in supplements may affect its bioavailability which varies by solubility of the salt involved: calcium citrate, malate, and lactate are highly bioavailable, while the oxalate is less. Other calcium preparations include calcium carbonate, calcium citrate malate, and calcium gluconate. The intestine absorbs about one-third of calcium eaten as the free ion, and plasma calcium level is then regulated by the kidneys. Hormonal regulation of bone formation and serum levels Parathyroid hormone and vitamin D promote the formation of bone by allowing and enhancing the deposition of calcium ions there, allowing rapid bone turnover without affecting bone mass or mineral content. When plasma calcium levels fall, cell surface receptors are activated and the secretion of parathyroid hormone occurs; it then proceeds to stimulate the entry of calcium into the plasma pool by taking it from targeted kidney, gut, and bone cells, with the bone-forming action of parathyroid hormone being antagonised by calcitonin, whose secretion increases with increasing plasma calcium levels. Abnormal serum levels Excess intake of calcium may cause hypercalcemia. 
However, because calcium is absorbed rather inefficiently by the intestines, high serum calcium is more likely caused by excessive secretion of parathyroid hormone (PTH) or possibly by excessive intake of vitamin D, both of which facilitate calcium absorption. All these conditions result in excess calcium salts being deposited in the heart, blood vessels, or kidneys. Symptoms include anorexia, nausea, vomiting, memory loss, confusion, muscle weakness, increased urination, dehydration, and metabolic bone disease. Chronic hypercalcaemia typically leads to calcification of soft tissue and its serious consequences: for example, calcification can cause loss of elasticity of vascular walls and disruption of laminar blood flow—and thence to plaque rupture and thrombosis. Conversely, inadequate calcium or vitamin D intakes may result in hypocalcemia, often caused also by inadequate secretion of parathyroid hormone or defective PTH receptors in cells. Symptoms include neuromuscular excitability, which potentially causes tetany and disruption of conductivity in cardiac tissue. Bone disease As calcium is required for bone development, many bone diseases can be traced to the organic matrix or the hydroxyapatite in molecular structure or organization of bone. Osteoporosis is a reduction in mineral content of bone per unit volume, and can be treated by supplementation of calcium, vitamin D, and bisphosphonates. Inadequate amounts of calcium, vitamin D, or phosphates can lead to softening of bones, called osteomalacia. Safety Metallic calcium Because calcium reacts exothermically with water and acids, calcium metal coming into contact with bodily moisture results in severe corrosive irritation. When swallowed, calcium metal has the same effect on the mouth, oesophagus, and stomach, and can be fatal. However, long-term exposure is not known to have distinct adverse effects. References Bibliography Chemical elements Alkaline earth metals Dietary minerals Dietary supplements Reducing agents Sodium channel blockers World Health Organization essential medicines Chemical elements with face-centered cubic structure
https://en.wikipedia.org/wiki/Chromium
Chromium is a chemical element with the symbol Cr and atomic number 24. It is the first element in group 6. It is a steely-grey, lustrous, hard, and brittle transition metal. Chromium metal is valued for its high corrosion resistance and hardness. A major development in steel production was the discovery that steel could be made highly resistant to corrosion and discoloration by adding metallic chromium to form stainless steel. Stainless steel and chrome plating (electroplating with chromium) together comprise 85% of the commercial use. Chromium is also greatly valued as a metal that is able to be highly polished while resisting tarnishing. Polished chromium reflects almost 70% of the visible spectrum, and almost 90% of infrared light. The name of the element is derived from the Greek word χρῶμα, chrōma, meaning color, because many chromium compounds are intensely colored. Industrial production of chromium proceeds from chromite ore (mostly FeCr2O4) to produce ferrochromium, an iron-chromium alloy, by means of aluminothermic or silicothermic reactions. Ferrochromium is then used to produce alloys such as stainless steel. Pure chromium metal is produced by a different process: roasting and leaching of chromite to separate it from iron, followed by reduction with carbon and then aluminium. In the United States, trivalent chromium (Cr(III)) ion is considered an essential nutrient in humans for insulin, sugar, and lipid metabolism. However, in 2014, the European Food Safety Authority, acting for the European Union, concluded that there was insufficient evidence for chromium to be recognized as essential. While chromium metal and Cr(III) ions are considered non-toxic, hexavalent chromium, Cr(VI), is toxic and carcinogenic. According to the European Chemicals Agency (ECHA), chromium trioxide that is used in industrial electroplating processes is a "substance of very high concern" (SVHC). Abandoned chromium production sites often require environmental cleanup. Physical properties Atomic Chromium is the fourth transition metal found on the periodic table, and has an electron configuration of [Ar] 3d5 4s1. It is also the first element in the periodic table whose ground-state electron configuration violates the Aufbau principle. This occurs again later in the periodic table with other elements and their electron configurations, such as copper, niobium, and molybdenum. This occurs because electrons in the same orbital repel each other due to their like charges. In the previous elements, the energetic cost of promoting an electron to the next higher energy level is too great to compensate for that released by lessening inter-electronic repulsion. However, in the 3d transition metals, the energy gap between the 3d and the next-higher 4s subshell is very small, and because the 3d subshell is more compact than the 4s subshell, inter-electron repulsion is smaller between 4s electrons than between 3d electrons. This lowers the energetic cost of promotion and increases the energy released by it, so that the promotion becomes energetically feasible and one or even two electrons are always promoted to the 4s subshell. (Similar promotions happen for every transition metal atom but one, palladium.) Chromium is the first element in the 3d series where the 3d electrons start to sink into the nucleus; they thus contribute less to metallic bonding, and hence the melting and boiling points and the enthalpy of atomisation of chromium are lower than those of the preceding element vanadium. 
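To make the Aufbau exception discussed above concrete, the short Python sketch below fills subshells in the textbook Madelung order and compares the naive prediction for chromium with its actual ground state. The subshell order and capacities are standard textbook values; the function itself is only an illustration, not a general-purpose configuration tool.

# Naive Aufbau (Madelung-order) filling. For chromium (Z = 24) it predicts
# ...4s2 3d4, whereas the observed ground state is [Ar] 3d5 4s1.
SUBSHELLS = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p"]  # truncated Madelung order
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

def aufbau_configuration(z):
    remaining = z
    parts = []
    for sub in SUBSHELLS:
        if remaining <= 0:
            break
        electrons = min(remaining, CAPACITY[sub[-1]])
        parts.append(sub + str(electrons))
        remaining -= electrons
    return " ".join(parts)

print(aufbau_configuration(24))   # 1s2 2s2 2p6 3s2 3p6 4s2 3d4 (naive prediction)
print("Observed ground state: [Ar] 3d5 4s1")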
Chromium(VI) is a strong oxidising agent in contrast to the molybdenum(VI) and tungsten(VI) oxides. Bulk Chromium is extremely hard, and is the third hardest element behind carbon (diamond) and boron. Its Mohs hardness is 8.5, which means that it can scratch samples of quartz and topaz, but can be scratched by corundum. Chromium is highly resistant to tarnishing, which makes it useful as a metal that preserves its outermost layer from corroding, unlike other metals such as copper, magnesium, and aluminium. Chromium has a melting point of 1907 °C (3465 °F), which is relatively low compared to the majority of transition metals. However, it still has the second highest melting point out of all the Period 4 elements, being topped by vanadium by 3 °C (5 °F) at 1910 °C (3470 °F). The boiling point of 2671 °C (4840 °F), however, is comparatively lower, as chromium has the fourth-lowest boiling point among the Period 4 transition metals, behind copper, manganese and zinc. The electrical resistivity of chromium at 20 °C is 125 nanoohm-meters. Chromium has a high specular reflection in comparison to other transition metals. In the infrared, at 425 μm, chromium has a maximum reflectance of about 72%, reducing to a minimum of 62% at 750 μm before rising again to 90% at 4000 μm. When chromium is used in stainless steel alloys and polished, the specular reflection decreases with the inclusion of additional metals, yet is still high in comparison with other alloys. Between 40% and 60% of the visible spectrum is reflected from polished stainless steel. Chromium's high reflectance, especially the roughly 90% in the infrared, can be attributed to its magnetic properties. Chromium has unique magnetic properties: it is the only elemental solid that shows antiferromagnetic ordering at room temperature and below. Above 38 °C, its magnetic ordering becomes paramagnetic. The antiferromagnetic ordering arises because the magnetic structure of the body-centered cubic lattice is incommensurate with the lattice periodicity; the magnetic moments at the cube corners and the cube centers are antiparallel but unequal. From here, the frequency-dependent relative permittivity of chromium, deriving from Maxwell's equations and chromium's antiferromagnetism, leaves chromium with a high infrared and visible light reflectance. Passivation Chromium metal left standing in air is passivated: it forms a thin, protective surface layer of oxide. This layer has a spinel structure a few atomic layers thick; it is very dense and inhibits the diffusion of oxygen into the underlying metal. In contrast, iron forms a more porous oxide through which oxygen can migrate, causing continued rusting. Passivation can be enhanced by short contact with oxidizing acids like nitric acid. Passivated chromium is stable against acids. Passivation can be removed with a strong reducing agent that destroys the protective oxide layer on the metal. Chromium metal treated in this way readily dissolves in weak acids. Chromium, unlike iron and nickel, does not suffer from hydrogen embrittlement. However, it does suffer from nitrogen embrittlement, reacting with nitrogen from air and forming brittle nitrides at the high temperatures necessary to work the metal parts.
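As a quick illustration of the resistivity figure quoted above, the sketch below estimates the sheet resistance of a thin chromium film. The resistivity of 125 nanoohm-meters comes from the text; the 0.5 µm film thickness is a hypothetical value chosen only for the example.

# Sheet resistance of a thin chromium film: R_sheet = resistivity / thickness.
RESISTIVITY_CR = 125e-9   # ohm·m at 20 °C (value quoted above)
THICKNESS = 0.5e-6        # m; hypothetical film thickness for illustration

sheet_resistance = RESISTIVITY_CR / THICKNESS
print(f"Sheet resistance of a 0.5 um chromium film: {sheet_resistance:.2f} ohms per square")
# about 0.25 ohms per square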
Isotopes Naturally occurring chromium is composed of four stable isotopes: 50Cr, 52Cr, 53Cr and 54Cr, with 52Cr being the most abundant (83.789% natural abundance). 50Cr is observationally stable, as it is theoretically capable of decaying to 50Ti via double electron capture with a half-life of no less than 1.3 × 1018 years. Twenty-five radioisotopes have been characterized, ranging from 42Cr to 70Cr; the most stable radioisotope is 51Cr with a half-life of 27.7 days. All of the remaining radioactive isotopes have half-lives that are less than 24 hours and the majority less than 1 minute. Chromium also has two metastable nuclear isomers. 53Cr is the radiogenic decay product of 53Mn (half-life 3.74 million years). Chromium isotopes are typically collocated (and compounded) with manganese isotopes. This circumstance is useful in isotope geology. Manganese-chromium isotope ratios reinforce the evidence from 26Al and 107Pd concerning the early history of the Solar System. Variations in 53Cr/52Cr and Mn/Cr ratios from several meteorites indicate an initial 53Mn/55Mn ratio that suggests Mn-Cr isotopic composition must result from in-situ decay of 53Mn in differentiated planetary bodies. Hence 53Cr provides additional evidence for nucleosynthetic processes immediately before coalescence of the Solar System. The isotopes of chromium range in atomic mass from 43 u (43Cr) to 67 u (67Cr). The primary decay mode before the most abundant stable isotope, 52Cr, is electron capture and the primary mode after is beta decay. 53Cr has been posited as a proxy for atmospheric oxygen concentration. Chemistry and compounds Chromium is a member of group 6 of the transition metals. The +3 and +6 states occur most commonly within chromium compounds, followed by +2; the +1, +4 and +5 oxidation states are rare but do nevertheless occasionally occur. Common oxidation states Chromium(0) Many Cr(0) complexes are known. Bis(benzene)chromium and chromium hexacarbonyl are highlights in organochromium chemistry. Chromium(II) Chromium(II) compounds are uncommon, in part because they readily oxidize to chromium(III) derivatives in air. Water-stable chromium(II) chloride can be made by reducing chromium(III) chloride with zinc. The resulting bright blue solution of chromium(II) chloride is stable at neutral pH. Some other notable chromium(II) compounds include chromium(II) oxide (CrO) and chromium(II) sulfate (CrSO4). Many chromium(II) carboxylates are known. The red chromium(II) acetate (Cr2(O2CCH3)4) is particularly well known; it features a Cr-Cr quadruple bond. Chromium(III) A large number of chromium(III) compounds are known, such as chromium(III) nitrate, chromium(III) acetate, and chromium(III) oxide. Chromium(III) can be obtained by dissolving elemental chromium in acids like hydrochloric acid or sulfuric acid, but it can also be formed through the reduction of chromium(VI) by cytochrome c7. The Cr3+ ion has a similar radius (63 pm) to Al3+ (radius 50 pm), and they can replace each other in some compounds, such as in chrome alum and alum. Chromium(III) tends to form octahedral complexes. Commercially available chromium(III) chloride hydrate is the dark green complex [CrCl2(H2O)4]Cl. Closely related compounds are the pale green [CrCl(H2O)5]Cl2 and violet [Cr(H2O)6]Cl3. If anhydrous violet chromium(III) chloride is dissolved in water, the violet solution turns green after some time as the chloride in the inner coordination sphere is replaced by water.
This kind of reaction is also observed with solutions of chrome alum and other water-soluble chromium(III) salts. A tetrahedral coordination of chromium(III) has been reported for the Cr-centered Keggin anion [α-CrW12O40]5–. Chromium(III) hydroxide (Cr(OH)3) is amphoteric, dissolving in acidic solutions to form [Cr(H2O)6]3+, and in basic solutions to form [Cr(OH)6]3−. It is dehydrated by heating to form the green chromium(III) oxide (Cr2O3), a stable oxide with a crystal structure identical to that of corundum. Chromium(VI) Chromium(VI) compounds are oxidants at low or neutral pH. Chromate (CrO42−) and dichromate (Cr2O72−) anions are the principal ions at this oxidation state. They exist in an equilibrium determined by pH: 2 [CrO4]2− + 2 H+ ⇌ [Cr2O7]2− + H2O Chromium(VI) oxyhalides are also known and include chromyl fluoride (CrO2F2) and chromyl chloride (CrO2Cl2). However, despite several erroneous claims, chromium hexafluoride (as well as all higher hexahalides) remains unknown, as of 2020. Sodium chromate is produced industrially by the oxidative roasting of chromite ore with sodium carbonate. The change in equilibrium is visible by a change from yellow (chromate) to orange (dichromate), such as when an acid is added to a neutral solution of potassium chromate. At yet lower pH values, further condensation to more complex oxyanions of chromium is possible. Both the chromate and dichromate anions are strong oxidizing reagents at low pH: Cr2O72− + 14 H3O+ + 6 e− → 2 Cr3+ + 21 H2O (ε0 = 1.33 V) They are, however, only moderately oxidizing at high pH: CrO42− + 4 H2O + 3 e− → Cr(OH)3 + 5 OH− (ε0 = −0.13 V) Chromium(VI) compounds in solution can be detected by adding an acidic hydrogen peroxide solution. The unstable dark blue chromium(VI) peroxide (CrO5) is formed, which can be stabilized as an ether adduct. Chromic acid has the hypothetical formula H2CrO4. It is a vaguely described chemical, despite many well-defined chromates and dichromates being known. The dark red chromium(VI) oxide CrO3, the acid anhydride of chromic acid, is sold industrially as "chromic acid". It can be produced by mixing sulfuric acid with dichromate and is a strong oxidizing agent. Other oxidation states Compounds of chromium(V) are rather rare; the +5 oxidation state is realized in only a few compounds but is an intermediate in many reactions involving oxidations by chromate. The only binary compound is the volatile chromium(V) fluoride (CrF5). This red solid has a melting point of 30 °C and a boiling point of 117 °C. It can be prepared by treating chromium metal with fluorine at 400 °C and 200 bar pressure. Peroxochromate(V) is another example of the +5 oxidation state. Potassium peroxochromate (K3[Cr(O2)4]) is made by reacting potassium chromate with hydrogen peroxide at low temperatures. This red-brown compound is stable at room temperature but decomposes spontaneously at 150–170 °C. Compounds of chromium(IV) are slightly more common than those of chromium(V). The tetrahalides, CrF4, CrCl4, and CrBr4, can be produced by treating the trihalides (CrX3) with the corresponding halogen at elevated temperatures. Such compounds are susceptible to disproportionation reactions and are not stable in water. Organic compounds containing chromium in the +4 state, such as chromium tetra-t-butoxide, are also known. Most chromium(I) compounds are obtained solely by oxidation of electron-rich, octahedral chromium(0) complexes. Other chromium(I) complexes contain cyclopentadienyl ligands. As verified by X-ray diffraction, a Cr-Cr quintuple bond (length 183.51(4) pm) has also been described.
Extremely bulky monodentate ligands stabilize this compound by shielding the quintuple bond from further reactions. Occurrence Chromium is the 21st most abundant element in Earth's crust with an average concentration of 100 ppm. Chromium compounds are found in the environment from the erosion of chromium-containing rocks, and can be redistributed by volcanic eruptions. Typical background concentrations of chromium in environmental media are: atmosphere <10 ng/m3; soil <500 mg/kg; vegetation <0.5 mg/kg; freshwater <10 μg/L; seawater <1 μg/L; sediment <80 mg/kg. Chromium is mined as chromite (FeCr2O4) ore. About two-fifths of the chromite ores and concentrates in the world are produced in South Africa, about a third in Kazakhstan, while India, Russia, and Turkey are also substantial producers. Untapped chromite deposits are plentiful, but geographically concentrated in Kazakhstan and southern Africa. Although rare, deposits of native chromium exist. The Udachnaya Pipe in Russia produces samples of the native metal. This mine is a kimberlite pipe, rich in diamonds, and the reducing environment helped produce both elemental chromium and diamonds. The relation between Cr(III) and Cr(VI) strongly depends on pH and oxidative properties of the location. In most cases, Cr(III) is the dominating species, but in some areas, the ground water can contain up to 39 µg/L of total chromium, of which 30 µg/L is Cr(VI). History Early applications Chromium minerals as pigments came to the attention of the west in the eighteenth century. On 26 July 1761, Johann Gottlob Lehmann found an orange-red mineral in the Beryozovskoye mines in the Ural Mountains which he named Siberian red lead. Though misidentified as a lead compound with selenium and iron components, the mineral was in fact crocoite with a formula of PbCrO4. In 1770, Peter Simon Pallas visited the same site as Lehmann and found a red lead mineral that was discovered to possess useful properties as a pigment in paints. After Pallas, the use of Siberian red lead as a paint pigment began to develop rapidly throughout the region. Crocoite would be the principal source of chromium in pigments until the discovery of chromite many years later. In 1794, Louis Nicolas Vauquelin received samples of crocoite ore. He produced chromium trioxide (CrO3) by mixing crocoite with hydrochloric acid. In 1797, Vauquelin discovered that he could isolate metallic chromium by heating the oxide in a charcoal oven, for which he is credited as the one who truly discovered the element. Vauquelin was also able to detect traces of chromium in precious gemstones, such as ruby and emerald. During the nineteenth century, chromium was primarily used not only as a component of paints, but in tanning salts as well. For quite some time, the crocoite found in Russia was the main source for such tanning materials. In 1827, a larger chromite deposit was discovered near Baltimore, United States, which quickly met the demand for tanning salts much more adequately than the crocoite that had been used previously. This made the United States the largest producer of chromium products until the year 1848, when larger deposits of chromite were uncovered near the city of Bursa, Turkey. With the development of metallurgy and chemical industries in the Western world, the need for chromium increased. Chromium is also famous for its reflective, metallic luster when polished. 
It is used as a protective and decorative coating on car parts, plumbing fixtures, furniture parts and many other items, usually applied by electroplating. Chromium was used for electroplating as early as 1848, but this use only became widespread with the development of an improved process in 1924. Production Approximately 28.8 million metric tons (Mt) of marketable chromite ore was produced in 2013, and converted into 7.5 Mt of ferrochromium. According to John F. Papp, writing for the USGS, "Ferrochromium is the leading end use of chromite ore, [and] stainless steel is the leading end use of ferrochromium." The largest producers of chromium ore in 2013 have been South Africa (48%), Kazakhstan (13%), Turkey (11%), and India (10%), with several other countries producing the rest of about 18% of the world production. The two main products of chromium ore refining are ferrochromium and metallic chromium. For those products the ore smelter process differs considerably. For the production of ferrochromium, the chromite ore (FeCr2O4) is reduced in large scale in electric arc furnace or in smaller smelters with either aluminium or silicon in an aluminothermic reaction. For the production of pure chromium, the iron must be separated from the chromium in a two step roasting and leaching process. The chromite ore is heated with a mixture of calcium carbonate and sodium carbonate in the presence of air. The chromium is oxidized to the hexavalent form, while the iron forms the stable Fe2O3. The subsequent leaching at higher elevated temperatures dissolves the chromates and leaves the insoluble iron oxide. The chromate is converted by sulfuric acid into the dichromate. 4 FeCr2O4 + 8 Na2CO3 + 7 O2 → 8 Na2CrO4 + 2 Fe2O3 + 8 CO2 2 Na2CrO4 + H2SO4 → Na2Cr2O7 + Na2SO4 + H2O The dichromate is converted to the chromium(III) oxide by reduction with carbon and then reduced in an aluminothermic reaction to chromium. Na2Cr2O7 + 2 C → Cr2O3 + Na2CO3 + CO Cr2O3 + 2 Al → Al2O3 + 2 Cr Applications The creation of metal alloys account for 85% of the available chromium's usage. The remainder of chromium is used in the chemical, refractory, and foundry industries. Metallurgy The strengthening effect of forming stable metal carbides at grain boundaries, and the strong increase in corrosion resistance made chromium an important alloying material for steel. High-speed tool steels contain between 3 and 5% chromium. Stainless steel, the primary corrosion-resistant metal alloy, is formed when chromium is introduced to iron in concentrations above 11%. For stainless steel's formation, ferrochromium is added to the molten iron. Also, nickel-based alloys have increased strength due to the formation of discrete, stable, metal, carbide particles at the grain boundaries. For example, Inconel 718 contains 18.6% chromium. Because of the excellent high-temperature properties of these nickel superalloys, they are used in jet engines and gas turbines in lieu of common structural materials. ASTM B163 relies on Chromium for condenser and heat-exchanger tubes, while castings with high strength at elevated temperatures that contain Chromium are standardised with ASTM A567. AISI type 332 is used where high temperature would normally cause carburization, oxidation or corrosion. Incoloy 800 "is capable of remaining stable and maintaining its austenitic structure even after long time exposures to high temperatures". Nichrome is used as resistance wire for heating elements in things like toasters and space heaters. 
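Returning to the roasting step described above, the sketch below uses the first balanced equation to estimate the ideal yield of sodium chromate from one tonne of chromite. The 4:8 stoichiometry is taken from the equation in the text; the molar masses are standard values, and the assumption of pure ore with complete conversion is an idealisation.

# Ideal yield of Na2CrO4 from the roasting reaction
#   4 FeCr2O4 + 8 Na2CO3 + 7 O2 → 8 Na2CrO4 + 2 Fe2O3 + 8 CO2
# assuming pure chromite and complete conversion.
M_FECR2O4 = 55.85 + 2 * 52.00 + 4 * 16.00   # about 223.9 g/mol
M_NA2CRO4 = 2 * 22.99 + 52.00 + 4 * 16.00   # about 162.0 g/mol

chromite_grams = 1.0e6                       # one tonne of chromite, idealised
mol_chromite = chromite_grams / M_FECR2O4
mol_chromate = mol_chromite * 8 / 4          # 8 mol Na2CrO4 per 4 mol FeCr2O4
tonnes_chromate = mol_chromate * M_NA2CRO4 / 1.0e6
print(f"Ideal Na2CrO4 yield: {tonnes_chromate:.2f} tonnes per tonne of chromite")
# roughly 1.45 tonnes of sodium chromate per tonne of pure chromite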
These uses make chromium a strategic material. Consequently, during World War II, U.S. road engineers were instructed to avoid chromium in yellow road paint, as it "may become a critical material during the emergency." The United States likewise considered chromium "essential for the German war industry" and made intense diplomatic efforts to keep it out of the hands of Nazi Germany. The high hardness and corrosion resistance of unalloyed chromium makes it a reliable metal for surface coating; it is still the most popular metal for sheet coating, with its above-average durability, compared to other coating metals. A layer of chromium is deposited on pretreated metallic surfaces by electroplating techniques. There are two deposition methods: thin, and thick. Thin deposition involves a layer of chromium below 1 µm thickness deposited by chrome plating, and is used for decorative surfaces. Thicker chromium layers are deposited if wear-resistant surfaces are needed. Both methods use acidic chromate or dichromate solutions. To prevent the energy-consuming change in oxidation state, the use of chromium(III) sulfate is under development; for most applications of chromium, the previously established process is used. In the chromate conversion coating process, the strong oxidative properties of chromates are used to deposit a protective oxide layer on metals like aluminium, zinc, and cadmium. This passivation and the self-healing properties of the chromate stored in the chromate conversion coating, which is able to migrate to local defects, are the benefits of this coating method. Because of environmental and health regulations on chromates, alternative coating methods are under development. Chromic acid anodizing (or Type I anodizing) of aluminium is another electrochemical process that does not lead to the deposition of chromium, but uses chromic acid as an electrolyte in the solution. During anodization, an oxide layer is formed on the aluminium. The use of chromic acid, instead of the normally used sulfuric acid, leads to a slight difference of these oxide layers. The high toxicity of Cr(VI) compounds, used in the established chromium electroplating process, and the strengthening of safety and environmental regulations demand a search for substitutes for chromium, or at least a change to less toxic chromium(III) compounds. Pigment The mineral crocoite (which is also lead chromate PbCrO4) was used as a yellow pigment shortly after its discovery. After a synthesis method became available starting from the more abundant chromite, chrome yellow was, together with cadmium yellow, one of the most used yellow pigments. The pigment does not photodegrade, but it tends to darken due to the formation of chromium(III) oxide. It has a strong color, and was used for school buses in the United States and for the postal services (for example, the Deutsche Post) in Europe. The use of chrome yellow has since declined due to environmental and safety concerns and was replaced by organic pigments or other alternatives that are free from lead and chromium. Other pigments that are based around chromium are, for example, the deep shade of red pigment chrome red, which is simply lead chromate with lead(II) hydroxide (PbCrO4·Pb(OH)2). A very important chromate pigment, which was used widely in metal primer formulations, was zinc chromate, now replaced by zinc phosphate. A wash primer was formulated to replace the dangerous practice of pre-treating aluminium aircraft bodies with a phosphoric acid solution. 
This used zinc tetroxychromate dispersed in a solution of polyvinyl butyral. An 8% solution of phosphoric acid in solvent was added just before application. It was found that an easily oxidized alcohol was an essential ingredient. A thin layer of about 10–15 µm was applied, which turned from yellow to dark green when it was cured. There is still a question as to the correct mechanism. Chrome green is a mixture of Prussian blue and chrome yellow, while the chrome oxide green is chromium(III) oxide. Chromium oxides are also used as a green pigment in the field of glassmaking and also as a glaze for ceramics. Green chromium oxide is extremely lightfast and as such is used in cladding coatings. It is also the main ingredient in infrared reflecting paints, used by the armed forces to paint vehicles and to give them the same infrared reflectance as green leaves. Other uses Chromium(III) ions present in corundum crystals (aluminium oxide) cause them to be colored red; when corundum appears as such, it is known as a ruby. If the corundum is lacking in chromium(III) ions, it is known as a sapphire. A red-colored artificial ruby may also be achieved by doping chromium(III) into artificial corundum crystals, thus making chromium a requirement for making synthetic rubies. Such a synthetic ruby crystal was the basis for the first laser, produced in 1960, which relied on stimulated emission of light from the chromium atoms in such a crystal. Ruby has a laser transition at 694.3 nanometers, in a deep red color. Because of their toxicity, chromium(VI) salts are used for the preservation of wood. For example, chromated copper arsenate (CCA) is used in timber treatment to protect wood from decay fungi, wood-attacking insects, including termites, and marine borers. The formulations contain chromium based on the oxide CrO3 between 35.3% and 65.5%. In the United States, 65,300 metric tons of CCA solution were used in 1996. Chromium(III) salts, especially chrome alum and chromium(III) sulfate, are used in the tanning of leather. The chromium(III) stabilizes the leather by cross linking the collagen fibers. Chromium tanned leather can contain between 4 and 5% of chromium, which is tightly bound to the proteins. Although the form of chromium used for tanning is not the toxic hexavalent variety, there remains interest in management of chromium in the tanning industry. Recovery and reuse, direct/indirect recycling, and "chrome-less" or "chrome-free" tanning are practiced to better manage chromium usage. The high heat resistivity and high melting point makes chromite and chromium(III) oxide a material for high temperature refractory applications, like blast furnaces, cement kilns, molds for the firing of bricks and as foundry sands for the casting of metals. In these applications, the refractory materials are made from mixtures of chromite and magnesite. The use is declining because of the environmental regulations due to the possibility of the formation of chromium(VI). Several chromium compounds are used as catalysts for processing hydrocarbons. For example, the Phillips catalyst, prepared from chromium oxides, is used for the production of about half the world's polyethylene. Fe-Cr mixed oxides are employed as high-temperature catalysts for the water gas shift reaction. Copper chromite is a useful hydrogenation catalyst. Chromates of metals are used in humistor. Uses of compounds Chromium(IV) oxide (CrO2) is a magnetic compound. 
Its ideal shape anisotropy, which imparts high coercivity and remanent magnetization, made it a compound superior to γ-Fe2O3. Chromium(IV) oxide is used to manufacture magnetic tape used in high-performance audio tape and standard audio cassettes. Chromium(III) oxide (Cr2O3) is a metal polish known as green rouge. Chromic acid is a powerful oxidizing agent and is a useful compound for cleaning laboratory glassware of any trace of organic compounds. It is prepared by dissolving potassium dichromate in concentrated sulfuric acid, which is then used to wash the apparatus. Sodium dichromate is sometimes used instead because of its higher solubility (200 g/L for sodium dichromate versus 50 g/L for potassium dichromate). The use of dichromate cleaning solutions is now phased out due to the high toxicity and environmental concerns. Modern cleaning solutions are highly effective and chromium free. Potassium dichromate is a chemical reagent, used as a titrating agent. Chromates are added to drilling muds to prevent corrosion of steel under wet conditions. Chrome alum is chromium(III) potassium sulfate and is used as a mordant (i.e., a fixing agent) for dyes in fabric and in tanning. Biological role The biologically beneficial effects of chromium(III) are debated. Chromium is accepted by the U.S. National Institutes of Health as a trace element for its roles in the action of insulin, a hormone that mediates the metabolism and storage of carbohydrate, fat, and protein. The mechanism of its actions in the body, however, has not been defined, leaving in question the essentiality of chromium. In contrast, hexavalent chromium (Cr(VI) or Cr6+) is highly toxic and mutagenic. Ingestion of chromium(VI) in water has been linked to stomach tumors, and it may also cause allergic contact dermatitis (ACD). "Chromium deficiency", involving a lack of Cr(III) in the body, or perhaps some complex of it, such as glucose tolerance factor, is controversial. Some studies suggest that the biologically active form of chromium(III) is transported in the body via an oligopeptide called low-molecular-weight chromium-binding substance (LMWCr), which might play a role in the insulin signaling pathway. The chromium content of common foods is generally low (1–13 micrograms per serving). The chromium content of food varies widely, due to differences in soil mineral content, growing season, plant cultivar, and contamination during processing. Chromium (and nickel) leach into food cooked in stainless steel, with the effect being largest when the cookware is new. Acidic foods that are cooked for many hours also exacerbate this effect. Dietary recommendations There is disagreement on chromium's status as an essential nutrient. Governmental departments from Australia, New Zealand, India, Japan, and the United States consider chromium essential, while the European Food Safety Authority (EFSA) of the European Union does not. The U.S. National Academy of Medicine (NAM) updated the Estimated Average Requirements (EARs) and the Recommended Dietary Allowances (RDAs) for chromium in 2001. For chromium, there was insufficient information to set EARs and RDAs, so its needs are described as estimates for Adequate Intakes (AIs). The current AI of chromium for women ages 14 through 50 is 25 μg/day, and the AI for women ages 50 and above is 20 μg/day. The AI for women who are pregnant is 30 μg/day, and for women who are lactating, the set AI is 45 μg/day. The AI for men ages 14 through 50 is 35 μg/day, and the AI for men ages 50 and above is 30 μg/day. 
For children ages 1 through 13, the AIs increase with age from 0.2 μg/day up to 25 μg/day. As for safety, the NAM sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when the evidence is sufficient. In the case of chromium, there is not yet enough information, hence no UL has been established. Collectively, the EARs, RDAs, AIs, and ULs are the parameters for the nutrition recommendation system known as Dietary Reference Intake (DRI). Australia and New Zealand consider chromium to be an essential nutrient, with an AI of 35 μg/day for men, 25 μg/day for women, 30 μg/day for women who are pregnant, and 45 μg/day for women who are lactating. A UL has not been set due to the lack of sufficient data. India considers chromium to be an essential nutrient, with an adult recommended intake of 33 μg/day. Japan also considers chromium to be an essential nutrient, with an AI of 10 μg/day for adults, including women who are pregnant or lactating. A UL has not been set. The EFSA of the European Union however, does not consider chromium to be an essential nutrient; chromium is the only mineral for which the United States and the European Union disagree. Labeling For U.S. food and dietary supplement labeling purposes, the amount of the substance in a serving is expressed as a percent of the Daily Value (%DV). For chromium labeling purposes, 100% of the Daily Value was 120 μg. As of May 27, 2016, the percentage of daily value was revised to 35 μg to bring the chromium intake into a consensus with the official Recommended Dietary Allowance. A table of the old and new adult daily values is provided at Reference Daily Intake. Food sources Food composition databases such as those maintained by the U.S. Department of Agriculture do not contain information on the chromium content of foods. A wide variety of animal and vegetable foods contain chromium. Content per serving is influenced by the chromium content of the soil in which the plants are grown, by foodstuffs fed to animals, and by processing methods, as chromium is leached into foods if processed or cooked in stainless steel equipment. One diet analysis study conducted in Mexico reported an average daily chromium intake of 30 micrograms. An estimated 31% of adults in the United States consume multi-vitamin/mineral dietary supplements, which often contain 25 to 60 micrograms of chromium. Supplementation Chromium is an ingredient in total parenteral nutrition (TPN), because deficiency can occur after months of intravenous feeding with chromium-free TPN. It is also added to nutritional products for preterm infants. Although the mechanism of action in biological roles for chromium is unclear, in the United States chromium-containing products are sold as non-prescription dietary supplements in amounts ranging from 50 to 1,000 μg. Lower amounts of chromium are also often incorporated into multi-vitamin/mineral supplements consumed by an estimated 31% of adults in the United States. Chemical compounds used in dietary supplements include chromium chloride, chromium citrate, chromium(III) picolinate, chromium(III) polynicotinate, and other chemical compositions. The benefit of supplements has not been proven. Approved and disapproved health claims In 2005, the U.S. 
Food and Drug Administration had approved a qualified health claim for chromium picolinate with a requirement for very specific label wording: "One small study suggests that chromium picolinate may reduce the risk of insulin resistance, and therefore possibly may reduce the risk of type 2 diabetes. FDA concludes, however, that the existence of such a relationship between chromium picolinate and either insulin resistance or type 2 diabetes is highly uncertain." At the same time, in answer to other parts of the petition, the FDA rejected claims for chromium picolinate and cardiovascular disease, retinopathy or kidney disease caused by abnormally high blood sugar levels. In 2010, chromium(III) picolinate was approved by Health Canada to be used in dietary supplements. Approved labeling statements include: a factor in the maintenance of good health, provides support for healthy glucose metabolism, helps the body to metabolize carbohydrates and helps the body to metabolize fats. The European Food Safety Authority (EFSA) approved claims in 2010 that chromium contributed to normal macronutrient metabolism and maintenance of normal blood glucose concentration, but rejected claims for maintenance or achievement of a normal body weight, or reduction of tiredness or fatigue. Given the evidence for chromium deficiency causing problems with glucose management in the context of intravenous nutrition products formulated without chromium, research interest turned to whether chromium supplementation would benefit people who have type 2 diabetes but are not chromium deficient. Looking at the results from four meta-analyses, one reported a statistically significant decrease in fasting plasma glucose levels (FPG) and a non-significant trend in lower hemoglobin A1C. A second reported the same, a third reported significant decreases for both measures, while a fourth reported no benefit for either. A review published in 2016 listed 53 randomized clinical trials that were included in one or more of six meta-analyses. It concluded that whereas there may be modest decreases in FPG and/or HbA1C that achieve statistical significance in some of these meta-analyses, few of the trials achieved decreases large enough to be expected to be relevant to clinical outcome. Two systematic reviews looked at chromium supplements as a mean of managing body weight in overweight and obese people. One, limited to chromium picolinate, a popular supplement ingredient, reported a statistically significant −1.1 kg (2.4 lb) weight loss in trials longer than 12 weeks. The other included all chromium compounds and reported a statistically significant −0.50 kg (1.1 lb) weight change. Change in percent body fat did not reach statistical significance. Authors of both reviews considered the clinical relevance of this modest weight loss as uncertain/unreliable. The European Food Safety Authority reviewed the literature and concluded that there was insufficient evidence to support a claim. Chromium is promoted as a sports performance dietary supplement, based on the theory that it potentiates insulin activity, with anticipated results of increased muscle mass, and faster recovery of glycogen storage during post-exercise recovery. A review of clinical trials reported that chromium supplementation did not improve exercise performance or increase muscle strength. 
The International Olympic Committee reviewed dietary supplements for high-performance athletes in 2018 and concluded there was no need to increase chromium intake for athletes, nor support for claims of losing body fat. Fresh-water fish Chromium is naturally present in the environment in trace amounts, but industrial use in rubber and stainless steel manufacturing, chrome plating, dyes for textiles, tanneries and other uses contaminates aquatic systems. In Bangladesh, rivers in or downstream from industrialized areas exhibit heavy metal contamination. Irrigation water standards for chromium are 0.1 mg/L, but some rivers contain more than five times that amount. The standard for fish for human consumption is less than 1 mg/kg, but many tested samples contained more than five times that amount. Chromium, especially hexavalent chromium, is highly toxic to fish because it is easily absorbed across the gills, readily enters blood circulation, crosses cell membranes and bioconcentrates up the food chain. In contrast, the toxicity of trivalent chromium is very low, attributed to poor membrane permeability and little biomagnification. Acute and chronic exposure to chromium(VI) affects fish behavior, physiology, reproduction and survival. Hyperactivity and erratic swimming have been reported in contaminated environments. Egg hatching and fingerling survival are affected. In adult fish there are reports of histopathological damage to liver, kidney, muscle, intestines, and gills. Mechanisms include mutagenic gene damage and disruptions of enzyme functions. There is evidence that fish may not require chromium, but benefit from a measured amount in diet. In one study, juvenile fish gained weight on a zero chromium diet, but the addition of 500 μg of chromium, in the form of chromium chloride or other supplement types, per kilogram of food (dry weight), increased weight gain. At 2,000 μg/kg the weight gain was no better than with the zero chromium diet, and there were increased DNA strand breaks. Precautions Water-insoluble chromium(III) compounds and chromium metal are not considered a health hazard, while the toxicity and carcinogenic properties of chromium(VI) have been known for a long time. Because of the specific transport mechanisms, only limited amounts of chromium(III) enter the cells. The acute oral toxicity of chromium(III) is low, ranging between 1900 and 3300 mg/kg. A 2008 review suggested that moderate uptake of chromium(III) through dietary supplements poses no genetic-toxic risk. In the US, the Occupational Safety and Health Administration (OSHA) has designated an air permissible exposure limit (PEL) in the workplace as a time-weighted average (TWA) of 1 mg/m3. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.5 mg/m3, time-weighted average. The IDLH (immediately dangerous to life and health) value is 250 mg/m3. Chromium(VI) toxicity The acute oral toxicity for chromium(VI) ranges between 50 and 150 mg/kg. In the body, chromium(VI) is reduced by several mechanisms to chromium(III) already in the blood before it enters the cells. The chromium(III) is excreted from the body, whereas the chromate ion is transferred into the cell by a transport mechanism, by which also sulfate and phosphate ions enter the cell. The acute toxicity of chromium(VI) is due to its strong oxidant properties. After it reaches the blood stream, it damages the kidneys, the liver and blood cells through oxidation reactions. Hemolysis and renal and liver failure result. 
Aggressive dialysis can be therapeutic. The carcinogenicity of chromate dust has been known for a long time, and in 1890 the first publication described the elevated cancer risk of workers in a chromate dye company. Three mechanisms have been proposed to describe the genotoxicity of chromium(VI). The first mechanism involves highly reactive hydroxyl radicals and other reactive radicals which are by-products of the reduction of chromium(VI) to chromium(III). The second process involves the direct binding of chromium(V), produced by reduction in the cell, and chromium(IV) compounds to the DNA. The last mechanism attributes the genotoxicity to the binding to the DNA of the end product of the chromium(III) reduction. Chromium salts (chromates) are also the cause of allergic reactions in some people. Chromates are often used to manufacture, amongst other things, leather products, paints, cement, mortar and anti-corrosives. Contact with products containing chromates can lead to allergic contact dermatitis and irritant dermatitis, resulting in ulceration of the skin, sometimes referred to as "chrome ulcers". This condition is often found in workers who have been exposed to strong chromate solutions in electroplating, tanning and chrome-producing manufacturers. Environmental issues Because chromium compounds were used in dyes, paints, and leather tanning compounds, these compounds are often found in soil and groundwater at active and abandoned industrial sites, needing environmental cleanup and remediation. Primer paint containing hexavalent chromium is still widely used for aerospace and automobile refinishing applications. In 2010, the Environmental Working Group studied the drinking water in 35 American cities in the first nationwide study. The study found measurable hexavalent chromium in the tap water of 31 of the cities sampled, with Norman, Oklahoma, at the top of the list; 25 cities had levels that exceeded California's proposed limit. The more toxic hexavalent chromium form can be reduced to the less soluble trivalent oxidation state in soils by organic matter, ferrous iron, sulfides, and other reducing agents, with the rates of such reduction being faster under more acidic conditions than under more alkaline ones (a representative balanced reaction is sketched at the end of this entry). In contrast, trivalent chromium can be oxidized to hexavalent chromium in soils by manganese oxides, such as Mn(III) and Mn(IV) compounds. Since the solubility and toxicity of chromium(VI) are greater than those of chromium(III), the oxidation-reduction conversions between the two oxidation states have implications for the movement and bioavailability of chromium in soils, groundwater, and plants. Notes References General bibliography External links ATSDR Case Studies in Environmental Medicine: Chromium Toxicity U.S. Department of Health and Human Services IARC Monograph "Chromium and Chromium compounds" It's Elemental – The Element Chromium The Merck Manual – Mineral Deficiency and Toxicity National Institute for Occupational Safety and Health – Chromium Page Chromium at The Periodic Table of Videos (University of Nottingham) Chemical elements Dietary minerals Native element minerals Chemical hazards Chemical elements with body-centered cubic structure
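A representative balanced reaction for the soil reduction described above (an illustrative example added here, not drawn from the article's sources): under acidic conditions, ferrous iron can reduce chromate to chromium(III) as 3 Fe2+ + CrO42− + 8 H+ → 3 Fe3+ + Cr3+ + 4 H2O. The eight protons consumed per chromate ion are consistent with the faster reduction observed under more acidic conditions, and the Cr3+ produced is the less soluble trivalent form mentioned in the text.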
https://en.wikipedia.org/wiki/Cadmium
Cadmium is a chemical element with the symbol Cd and atomic number 48. This soft, silvery-white metal is chemically similar to the two other stable metals in group 12, zinc and mercury. Like zinc, it demonstrates oxidation state +2 in most of its compounds, and like mercury, it has a lower melting point than the transition metals in groups 3 through 11. Cadmium and its congeners in group 12 are often not considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. The average concentration of cadmium in Earth's crust is between 0.1 and 0.5 parts per million (ppm). It was discovered in 1817 simultaneously by Stromeyer and Hermann, both in Germany, as an impurity in zinc carbonate. Cadmium occurs as a minor component in most zinc ores and is a byproduct of zinc production. Cadmium was used for a long time as a corrosion-resistant plating on steel, and cadmium compounds are used as red, orange, and yellow pigments, to color glass, and to stabilize plastic. Cadmium use is generally decreasing because it is toxic (it is specifically listed in the European Restriction of Hazardous Substances Directive) and nickel–cadmium batteries have been replaced with nickel–metal hydride and lithium-ion batteries. One of its few new uses is in cadmium telluride solar panels. Although cadmium has no known biological function in higher organisms, a cadmium-dependent carbonic anhydrase has been found in marine diatoms. Characteristics Physical properties Cadmium is a soft, malleable, ductile, silvery-white divalent metal. It is similar in many respects to zinc but forms complex compounds. Unlike most other metals, cadmium is resistant to corrosion and is used as a protective plate on other metals. As a bulk metal, cadmium is insoluble in water and is not flammable; however, in its powdered form it may burn and release toxic fumes. Chemical properties Although cadmium usually has an oxidation state of +2, it also exists in the +1 state. Cadmium and its congeners are not always considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. Cadmium burns in air to form brown amorphous cadmium oxide (CdO); the crystalline form of this compound is a dark red which changes color when heated, similar to zinc oxide. Hydrochloric acid, sulfuric acid, and nitric acid dissolve cadmium by forming cadmium chloride (CdCl2), cadmium sulfate (CdSO4), or cadmium nitrate (Cd(NO3)2). The oxidation state +1 can be produced by dissolving cadmium in a mixture of cadmium chloride and aluminium chloride, forming the Cd22+ cation, which is similar to the Hg22+ cation in mercury(I) chloride. Cd + CdCl2 + 2 AlCl3 → Cd2(AlCl4)2 The structures of many cadmium complexes with nucleobases, amino acids, and vitamins have been determined. Isotopes Naturally occurring cadmium is composed of eight isotopes. Two of them are radioactive, and three are expected to decay but have not measurably done so under laboratory conditions. The two natural radioactive isotopes are 113Cd (beta decay, half-life is ) and 116Cd (two-neutrino double beta decay, half-life is ). The other three are 106Cd, 108Cd (both double electron capture), and 114Cd (double beta decay); only lower limits on these half-lives have been determined. At least three isotopes – 110Cd, 111Cd, and 112Cd – are stable. 
Among the isotopes that do not occur naturally, the most long-lived are 109Cd with a half-life of 462.6 days, and 115Cd with a half-life of 53.46 hours. All of the remaining radioactive isotopes have half-lives of less than 2.5 hours, and the majority have half-lives of less than 5 minutes. Cadmium has 8 known meta states, with the most stable being 113mCd (t1⁄2 = 14.1 years), 115mCd (t1⁄2 = 44.6 days), and 117mCd (t1⁄2 = 3.36 hours). The known isotopes of cadmium range in atomic mass from 94.950 u (95Cd) to 131.946 u (132Cd). For isotopes lighter than 112 u, the primary decay mode is electron capture and the dominant decay product is element 47 (silver). Heavier isotopes decay mostly through beta emission producing element 49 (indium). One isotope of cadmium, 113Cd, absorbs neutrons with high selectivity: With very high probability, neutrons with energy below the cadmium cut-off will be absorbed; those higher than the cut-off will be transmitted. The cadmium cut-off is about 0.5 eV, and neutrons below that level are deemed slow neutrons, distinct from intermediate and fast neutrons. Cadmium is created via the s-process in low- to medium-mass stars with masses of 0.6 to 10 solar masses, over thousands of years. In that process, a silver atom captures a neutron and then undergoes beta decay. History Cadmium (Latin cadmia, Greek καδμεία meaning "calamine", a cadmium-bearing mixture of minerals that was named after the Greek mythological character Κάδμος, Cadmus, the founder of Thebes) was discovered in contaminated zinc compounds sold in pharmacies in Germany in 1817 by Friedrich Stromeyer. Karl Samuel Leberecht Hermann simultaneously investigated the discoloration in zinc oxide and found an impurity, first suspected to be arsenic, because of the yellow precipitate with hydrogen sulfide. Additionally Stromeyer discovered that one supplier sold zinc carbonate instead of zinc oxide. Stromeyer found the new element as an impurity in zinc carbonate (calamine), and, for 100 years, Germany remained the only important producer of the metal. The metal was named after the Latin word for calamine, because it was found in this zinc ore. Stromeyer noted that some impure samples of calamine changed color when heated but pure calamine did not. He was persistent in studying these results and eventually isolated cadmium metal by roasting and reducing the sulfide. The potential for cadmium yellow as pigment was recognized in the 1840s, but the lack of cadmium limited this application. Even though cadmium and its compounds are toxic in certain forms and concentrations, the British Pharmaceutical Codex from 1907 states that cadmium iodide was used as a medication to treat "enlarged joints, scrofulous glands, and chilblains". In 1907, the International Astronomical Union defined the international ångström in terms of a red cadmium spectral line (1 wavelength = 6438.46963 Å). This was adopted by the 7th General Conference on Weights and Measures in 1927. In 1960, the definitions of both the metre and ångström were changed to use krypton. After the industrial scale production of cadmium started in the 1930s and 1940s, the major application of cadmium was the coating of iron and steel to prevent corrosion; in 1944, 62% and in 1956, 59% of the cadmium in the United States was used for plating. In 1956, 24% of the cadmium in the United States was used for a second application in red, orange and yellow pigments from sulfides and selenides of cadmium. 
The stabilizing effect of cadmium chemicals like the carboxylates cadmium laurate and cadmium stearate on PVC led to an increased use of those compounds in the 1970s and 1980s. The demand for cadmium in pigments, coatings, stabilizers, and alloys declined as a result of environmental and health regulations in the 1980s and 1990s; in 2006, only 7% of total cadmium consumption was used for plating, and only 10% was used for pigments. At the same time, these decreases in consumption were compensated by a growing demand for cadmium for nickel–cadmium batteries, which accounted for 81% of the cadmium consumption in the United States in 2006. Occurrence Cadmium makes up about 0.1 ppm of Earth's crust. It is much rarer than zinc, which makes up about 65 ppm. No significant deposits of cadmium-containing ores are known. The only cadmium mineral of importance, greenockite (CdS), is nearly always associated with sphalerite (ZnS). This association is caused by geochemical similarity between zinc and cadmium, with no geological process likely to separate them. Thus, cadmium is produced mainly as a byproduct of mining, smelting, and refining sulfidic ores of zinc, and, to a lesser degree, lead and copper. Small amounts of cadmium, about 10% of consumption, are produced from secondary sources, mainly from dust generated by recycling iron and steel scrap. Production in the United States began in 1907, but wide use began after World War I. Metallic cadmium can be found in the Vilyuy River basin in Siberia. Rocks mined for phosphate fertilizers contain varying amounts of cadmium, resulting in a cadmium concentration of as much as 300 mg/kg in the fertilizers and a high cadmium content in agricultural soils. Coal can contain significant amounts of cadmium, which ends up mostly in coal fly ash. Cadmium in soil can be absorbed by crops such as rice and cocoa. The Chinese Ministry of Agriculture measured in 2002 that 28% of the rice it sampled had excess lead and 10% had excess cadmium above limits defined by law. Consumer Reports tested 28 brands of dark chocolate sold in the United States in 2022, and found cadmium in all of them, with 13 exceeding the California Maximum Allowable Dose level. Some plants such as willow trees and poplars have been found to clean both lead and cadmium from soil. Typical background concentrations of cadmium do not exceed 5 ng/m3 in the atmosphere; 2 mg/kg in soil; 1 μg/L in freshwater and 50 ng/L in seawater. Concentrations of cadmium above 10 μg/L may be stable in water having low total solute concentrations and pH, and can be difficult to remove by conventional water treatment processes. Production Cadmium is a common impurity in zinc ores, and it is most often isolated during the production of zinc. Some zinc ore concentrates from sulfidic zinc ores contain up to 1.4% cadmium. In the 1970s, the output of cadmium was per ton of zinc. Zinc sulfide ores are roasted in the presence of oxygen, converting the zinc sulfide to the oxide. Zinc metal is produced either by smelting the oxide with carbon or by electrolysis in sulfuric acid. Cadmium is isolated from the zinc metal by vacuum distillation if the zinc is smelted, or cadmium sulfate is precipitated from the electrolysis solution. The British Geological Survey reports that in 2001, China was the top producer of cadmium with almost one-sixth of the world's production, closely followed by South Korea and Japan. Applications Cadmium is a common component of electric batteries, pigments, coatings, and electroplating. 
Batteries In 2009, 86% of cadmium was used in batteries, predominantly in rechargeable nickel–cadmium batteries. Nickel–cadmium cells have a nominal cell potential of 1.2 V. The cell consists of a positive nickel hydroxide electrode and a negative cadmium electrode plate separated by an alkaline electrolyte (potassium hydroxide). The European Union put a limit on cadmium in electronics in 2004 of 0.01%, with some exceptions, and in 2006 reduced the limit on cadmium content to 0.002%. Another type of battery based on cadmium is the silver–cadmium battery. Electroplating Cadmium electroplating, consuming 6% of the global production, is used in the aircraft industry to reduce corrosion of steel components. This coating is passivated by chromate salts. A limitation of cadmium plating is hydrogen embrittlement of high-strength steels from the electroplating process. Therefore, steel parts heat-treated to tensile strength above 1300 MPa (200 ksi) should be coated by an alternative method (such as special low-embrittlement cadmium electroplating processes or physical vapor deposition). Titanium embrittlement from cadmium-plated tool residues resulted in banishment of those tools (and the implementation of routine tool testing to detect cadmium contamination) in the A-12/SR-71, U-2, and subsequent aircraft programs that use titanium. Nuclear fission Cadmium is used in the control rods of nuclear reactors, acting as a very effective neutron poison to control neutron flux in nuclear fission. When cadmium rods are inserted in the core of a nuclear reactor, cadmium absorbs neutrons, preventing them from creating additional fission events, thus controlling the amount of reactivity. The pressurized water reactor designed by Westinghouse Electric Company uses an alloy consisting of 80% silver, 15% indium, and 5% cadmium. Televisions QLED TVs have been starting to include cadmium in construction. Some companies have been looking to reduce the environmental impact of human exposure and pollution of the material in televisions during production. Anticancer drugs Complexes based on heavy metals have great potential for the treatment of a wide variety of cancers but their use is often limited due to toxic side effects. However, scientists are advancing in the field and new promising cadmium complex compounds with reduced toxicity have been discovered. Compounds Cadmium oxide was used in black and white television phosphors and in the blue and green phosphors of color television cathode ray tubes. Cadmium sulfide (CdS) is used as a photoconductive surface coating for photocopier drums. Various cadmium salts are used in paint pigments, with CdS as a yellow pigment being the most common. Cadmium selenide is a red pigment, commonly called cadmium red. To painters who work with the pigment, cadmium provides the most brilliant and durable yellows, oranges, and reds – so much so that during production, these colors are significantly toned down before they are ground with oils and binders or blended into watercolors, gouaches, acrylics, and other paint and pigment formulations. Because these pigments are potentially toxic, users should use a barrier cream on the hands to prevent absorption through the skin even though the amount of cadmium absorbed into the body through the skin is reported to be less than 1%. In PVC, cadmium was used as heat, light, and weathering stabilizers. Currently, cadmium stabilizers have been completely replaced with barium-zinc, calcium-zinc and organo-tin stabilizers. 
Cadmium is used in many kinds of solder and bearing alloys, because it has a low coefficient of friction and fatigue resistance. It is also found in some of the lowest-melting alloys, such as Wood's metal. Semiconductors Cadmium is an element in some semiconductor materials. Cadmium sulfide, cadmium selenide, and cadmium telluride are used in some photodetectors and solar cells. HgCdTe detectors are sensitive to mid-infrared light and used in some motion detectors. Laboratory uses Helium–cadmium lasers are a common source of blue or ultraviolet laser light. Lasers at wavelengths of 325, 354 and 442 nm are made using this gain medium; some models can switch between these wavelengths. They are notably used in fluorescence microscopy as well as various laboratory uses requiring laser light at these wavelengths. Cadmium selenide quantum dots emit bright luminescence under UV excitation (He–Cd laser, for example). The color of this luminescence can be green, yellow or red depending on the particle size. Colloidal solutions of those particles are used for imaging of biological tissues and solutions with a fluorescence microscope. In molecular biology, cadmium is used to block voltage-dependent calcium channels from fluxing calcium ions, as well as in hypoxia research to stimulate proteasome-dependent degradation of Hif-1α. Cadmium-selective sensors based on the fluorophore BODIPY have been developed for imaging and sensing of cadmium in cells. One powerful method for monitoring cadmium in aqueous environments involves electrochemistry. By employing a self-assembled monolayer one can obtain a cadmium selective electrode with a ppt-level sensitivity. Biological role and research Cadmium has no known function in higher organisms and is considered toxic. Cadmium is considered an environmental pollutant that causes health hazard to living organisms. Administration of cadmium to cells causes oxidative stress and increases the levels of antioxidants produced by cells to protect against macro molecular damage. However a cadmium-dependent carbonic anhydrase has been found in some marine diatoms. The diatoms live in environments with very low zinc concentrations and cadmium performs the function normally carried out by zinc in other anhydrases. This was discovered with X-ray absorption near edge structure (XANES) spectroscopy. Cadmium is preferentially absorbed in the kidneys of humans. Up to about 30 mg of cadmium is commonly inhaled throughout human childhood and adolescence. Cadmium is under research regarding its toxicity in humans, potentially elevating risks of cancer, cardiovascular disease, and osteoporosis. Environment The biogeochemistry of cadmium and its release to the environment has been the subject of review, as has the speciation of cadmium in the environment. Safety Individuals and organizations have been reviewing cadmium's bioinorganic aspects for its toxicity. The most dangerous form of occupational exposure to cadmium is inhalation of fine dust and fumes, or ingestion of highly soluble cadmium compounds. Inhalation of cadmium fumes can result initially in metal fume fever, but may progress to chemical pneumonitis, pulmonary edema, and death. Cadmium is also an environmental hazard. Human exposure is primarily from fossil fuel combustion, phosphate fertilizers, natural sources, iron and steel production, cement production and related activities, nonferrous metals production, and municipal solid waste incineration. Other sources of cadmium include bread, root crops, and vegetables. 
There have been a few instances of general population poisoning as the result of long-term exposure to cadmium in contaminated food and water. Research into an estrogen mimicry that may induce breast cancer is ongoing. In the decades leading up to World War II, mining operations contaminated the Jinzū River in Japan with cadmium and traces of other toxic metals. As a consequence, cadmium accumulated in the rice crops along the riverbanks downstream of the mines. Some members of the local agricultural communities consumed the contaminated rice and developed itai-itai disease and renal abnormalities, including proteinuria and glucosuria. The victims of this poisoning were almost exclusively post-menopausal women with low iron and low body stores of other minerals. Similar general population cadmium exposures in other parts of the world have not resulted in the same health problems because the populations maintained sufficient iron and other mineral levels. Thus, although cadmium is a major factor in the itai-itai disease in Japan, most researchers have concluded that it was one of several factors. Cadmium is one of six substances banned by the European Union's Restriction of Hazardous Substances (RoHS) directive, which regulates hazardous substances in electrical and electronic equipment, but allows for certain exemptions and exclusions from the scope of the law. The International Agency for Research on Cancer has classified cadmium and cadmium compounds as carcinogenic to humans. Although occupational exposure to cadmium is linked to lung and prostate cancer, there is still uncertainty about the carcinogenicity of cadmium in low environmental exposure. Recent data from epidemiological studies suggest that intake of cadmium through diet is associated with a higher risk of endometrial, breast, and prostate cancer as well as with osteoporosis in humans. A recent study has demonstrated that endometrial tissue is characterized by higher levels of cadmium in current and former female smokers. Cadmium exposure is associated with a large number of illnesses including kidney disease, early atherosclerosis, hypertension, and cardiovascular diseases. Although studies show a significant correlation between cadmium exposure and occurrence of disease in human populations, a molecular mechanism has not yet been identified. One hypothesis holds that cadmium is an endocrine disruptor, and some experimental studies have shown that it can interact with different hormonal signaling pathways. For example, cadmium can bind to the estrogen receptor alpha and affect signal transduction along the estrogen and MAPK signaling pathways at low doses. The tobacco plant absorbs and accumulates heavy metals such as cadmium from the surrounding soil into its leaves. Following tobacco smoke inhalation, these are readily absorbed into the body of users. Tobacco smoking is the most important single source of cadmium exposure in the general population. An estimated 10% of the cadmium content of a cigarette is inhaled through smoking. Absorption of cadmium through the lungs is more effective than through the gut. As much as 50% of the cadmium inhaled in cigarette smoke may be absorbed. On average, cadmium concentrations in the blood of smokers are 4 to 5 times greater than those of non-smokers and, in the kidney, 2–3 times greater than in non-smokers. Despite the high cadmium content in cigarette smoke, there seems to be little exposure to cadmium from passive smoking. 
In a non-smoking population, food is the greatest source of exposure. High quantities of cadmium can be found in crustaceans, mollusks, offal, frog legs, cocoa solids, bitter and semi-bitter chocolate, seaweed, fungi and algae products. However, grains, vegetables, and starchy roots and tubers are consumed in much greater quantity in the U.S., and are the source of the greatest dietary exposure there. Most plants bio-accumulate metal toxins such as cadmium and when composted to form organic fertilizers, yield a product that often can contain high amounts (e.g., over 0.5 mg) of metal toxins for every kilogram of fertilizer. Fertilizers made from animal dung (e.g., cow dung) or urban waste can contain similar amounts of cadmium. The cadmium added to the soil from fertilizers (rock phosphates or organic fertilizers) become bio-available and toxic only if the soil pH is low (i.e., acidic soils). Zinc, copper, calcium, and iron ions, and selenium with vitamin C are used to treat cadmium intoxication, though it is not easily reversed. Regulations Because of the adverse effects of cadmium on the environment and human health, the supply and use of cadmium is restricted in Europe under the REACH Regulation. The EFSA Panel on Contaminants in the Food Chain specifies that 2.5 μg/kg body weight is a tolerable weekly intake for humans. The Joint FAO/WHO Expert Committee on Food Additives has declared 7 μg/kg body weight to be the provisional tolerable weekly intake level. The state of California requires a food label to carry a warning about potential exposure to cadmium on products such as cocoa powder. The U.S. Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit (PEL) for cadmium at a time-weighted average (TWA) of 0.005 ppm. The National Institute for Occupational Safety and Health (NIOSH) has not set a recommended exposure limit (REL) and has designated cadmium as a known human carcinogen. The IDLH (immediately dangerous to life and health) level for cadmium is 9 mg/m3. In addition to mercury, the presence of cadmium in some batteries has led to the requirement of proper disposal (or recycling) of batteries. Product recalls In May 2006, a sale of the seats from Arsenal F.C.'s old stadium, Highbury in London, England was cancelled when the seats were discovered to contain trace amounts of cadmium. Reports of high levels of cadmium use in children's jewelry in 2010 led to a US Consumer Product Safety Commission investigation. The U.S. CPSC issued specific recall notices for cadmium content in jewelry sold by Claire's and Wal-Mart stores. In June 2010, McDonald's voluntarily recalled more than 12 million promotional Shrek Forever After 3D Collectible Drinking Glasses because of the cadmium levels in paint pigments on the glassware. The glasses were manufactured by Arc International, of Millville, New Jersey, USA. See also Red List building materials Toxic heavy metal References Further reading External links Cadmium at The Periodic Table of Videos (University of Nottingham) ATSDR Case Studies in Environmental Medicine: Cadmium Toxicity U.S. Department of Health and Human Services National Institute for Occupational Safety and Health – Cadmium Page NLM Hazardous Substances Databank – Cadmium, Elemental Chemical elements Transition metals Endocrine disruptors IARC Group 1 carcinogens Chemical hazards Soil contamination Testicular toxicants Native element minerals Chemical elements with hexagonal close-packed structure
https://en.wikipedia.org/wiki/Curium
Curium is a transuranic, radioactive chemical element with the symbol Cm and atomic number 96. This actinide element was named after eminent scientists Marie and Pierre Curie, both known for their research on radioactivity. Curium was first intentionally made by the team of Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso in 1944, using the cyclotron at Berkeley. They bombarded the newly discovered element plutonium (the isotope 239Pu) with alpha particles. This was then sent to the Metallurgical Laboratory at University of Chicago where a tiny sample of curium was eventually separated and identified. The discovery was kept secret until after the end of World War II. The news was released to the public in November 1945. Most curium is produced by bombarding uranium or plutonium with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains ~20 grams of curium. Curium is a hard, dense, silvery metal with a high melting and boiling point for an actinide. It is paramagnetic at ambient conditions, but becomes antiferromagnetic upon cooling, and other magnetic transitions are also seen in many curium compounds. In compounds, curium usually has valence +3 and sometimes +4; the +3 valence is predominant in solutions. Curium readily oxidizes, and its oxides are a dominant form of this element. It forms strongly fluorescent complexes with various organic compounds, but there is no evidence of its incorporation into bacteria and archaea. If it gets into the human body, curium accumulates in bones, lungs, and liver, where it promotes cancer. All known isotopes of curium are radioactive and have a small critical mass for a nuclear chain reaction. They mostly emit α-particles; radioisotope thermoelectric generators can use the heat from this process, but this is hindered by the rarity and high cost of curium. Curium is used in making heavier actinides and the 238Pu radionuclide for power sources in artificial cardiac pacemakers and RTGs for spacecraft. It served as the α-source in the alpha particle X-ray spectrometers of several space probes, including the Sojourner, Spirit, Opportunity, and Curiosity Mars rovers and the Philae lander on comet 67P/Churyumov–Gerasimenko, to analyze the composition and structure of the surface. History Though curium had likely been produced in previous nuclear experiments as well as the natural nuclear fission reactor at Oklo, Gabon, it was first intentionally synthesized, isolated and identified in 1944, at University of California, Berkeley, by Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso. In their experiments, they used a cyclotron. Curium was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory), University of Chicago. It was the third transuranium element to be discovered even though it is the fourth in the series – the lighter element americium was still unknown. The sample was prepared as follows: first plutonium nitrate solution was coated on a platinum foil of ~0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium(IV) oxide (PuO2) by annealing. Following cyclotron irradiation of the oxide, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid, and further separation was done by ion exchange to yield a certain isotope of curium. 
The separation of curium and americium was so painstaking that the Berkeley group initially called those elements pandemonium (from Greek for all demons or hell) and delirium (from Latin for madness). Curium-242 was made in July–August 1944 by bombarding 239Pu with α-particles to produce curium with the release of a neutron: 239Pu + 4He → 242Cm + n. Curium-242 was unambiguously identified by the characteristic energy of the α-particles emitted during the decay: 242Cm → 238Pu + 4He. The half-life of this alpha decay was first measured as 150 days and then corrected to 162.8 days. Another isotope 240Cm was produced in a similar reaction in March 1945: 239Pu + 4He → 240Cm + 3 n. The α-decay half-life of 240Cm was correctly determined as 26.7 days. The discovery of curium and americium in 1944 was closely related to the Manhattan Project, so the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children, the Quiz Kids, five days before the official presentation at an American Chemical Society meeting on November 11, 1945, when one listener asked if any new transuranic element beside plutonium and neptunium had been discovered during the war. The discovery of curium (242Cm and 240Cm), its production, and its compounds was later patented listing only Seaborg as the inventor. The element was named after Marie Curie and her husband Pierre Curie, who are known for discovering radium and for their work in radioactivity. It followed the example of gadolinium, a lanthanide element above curium in the periodic table, which was named after the explorer of rare-earth elements Johan Gadolin: "As the name for the element of atomic number 96 we should like to propose "curium", with symbol Cm. The evidence indicates that element 96 contains seven 5f electrons and is thus analogous to the element gadolinium, with its seven 4f electrons in the regular rare earth series. On this basis element 96 is named after the Curies in a manner analogous to the naming of gadolinium, in which the chemist Gadolin was honored." The first curium samples were barely visible, and were identified by their radioactivity. Louis Werner and Isadore Perlman made the first substantial sample of 30 µg curium-242 hydroxide at University of California, Berkeley in 1947 by bombarding americium-241 with neutrons. Macroscopic amounts of curium(III) fluoride were obtained in 1950 by W. W. T. Crane, J. C. Wallmann and B. B. Cunningham. Its magnetic susceptibility was very close to that of GdF3 providing the first experimental evidence for the +3 valence of curium in its compounds. Curium metal was produced only in 1951 by reduction of CmF3 with barium. Characteristics Physical A synthetic, radioactive element, curium is a hard, dense metal with a silvery-white appearance and physical and chemical properties resembling gadolinium. Its melting point of 1344 °C is significantly higher than that of the previous elements neptunium (637 °C), plutonium (639 °C) and americium (1176 °C). In comparison, gadolinium melts at 1312 °C. Curium boils at 3556 °C. With a density of 13.52 g/cm3, curium is lighter than neptunium (20.45 g/cm3) and plutonium (19.8 g/cm3), but heavier than most other metals. Of two crystalline forms of curium, α-Cm is more stable at ambient conditions. 
It has a hexagonal symmetry, space group P63/mmc, lattice parameters a = 365 pm and c = 1182 pm, and four formula units per unit cell. The crystal consists of double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum. At pressure >23 GPa, at room temperature, α-Cm becomes β-Cm, which has face-centered cubic symmetry, space group Fm3̄m and lattice constant a = 493 pm. On further compression to 43 GPa, curium becomes an orthorhombic γ-Cm structure similar to α-uranium, with no further transitions observed up to 52 GPa. These three curium phases are also called Cm I, II and III. Curium has peculiar magnetic properties. Its neighbor element americium shows no deviation from Curie-Weiss paramagnetism in the entire temperature range, but α-Cm transforms to an antiferromagnetic state upon cooling to 65–52 K, and β-Cm exhibits a ferrimagnetic transition at ~205 K. Curium pnictides show ferromagnetic transitions upon cooling: 244CmN and 244CmAs at 109 K, 248CmP at 73 K and 248CmSb at 162 K. The lanthanide analog of curium, gadolinium, and its pnictides, also show magnetic transitions upon cooling, but the transition character is somewhat different: Gd and GdN become ferromagnetic, and GdP, GdAs and GdSb show antiferromagnetic ordering. In accordance with magnetic data, electrical resistivity of curium increases with temperature – about twice between 4 and 60 K – and then is nearly constant up to room temperature. There is a significant increase in resistivity over time (~) due to self-damage of the crystal lattice by alpha decay. This makes uncertain the true resistivity of curium (~). Curium's resistivity is similar to that of gadolinium, and the actinides plutonium and neptunium, but significantly higher than that of americium, uranium, polonium and thorium. Under ultraviolet illumination, curium(III) ions show strong and stable yellow-orange fluorescence with a maximum in the range of 590–640 nm depending on their environment. The fluorescence originates from transitions between the first excited state 6D7/2 and the ground state 8S7/2. Analysis of this fluorescence allows monitoring of interactions between Cm(III) ions in organic and inorganic complexes. Chemical The curium ion in solution almost always has a +3 oxidation state, the most stable oxidation state for curium. A +4 oxidation state is seen mainly in a few solid phases, such as CmO2 and CmF4. Aqueous curium(IV) is only known in the presence of strong oxidizers such as potassium persulfate, and is easily reduced to curium(III) by radiolysis and even by water itself. Chemical behavior of curium is different from the actinides thorium and uranium, and is similar to americium and many lanthanides. In aqueous solution, the Cm3+ ion is colorless to pale green; the Cm4+ ion is pale yellow. The optical absorption of the Cm3+ ion contains three sharp peaks at 375.4, 381.2 and 396.5 nm, and their strength can be directly converted into the concentration of the ions. The +6 oxidation state has only been reported once in solution, in 1978, as the curyl ion (CmO22+): this was prepared from beta decay of americium-242 in the americium(V) ion AmO2+. Failure to get Cm(VI) from oxidation of Cm(III) and Cm(IV) may be due to the high Cm4+/Cm3+ ionization potential and the instability of Cm(V). Curium ions are hard Lewis acids and thus form their most stable complexes with hard bases. The bonding is mostly ionic, with a small covalent component. 
Curium in its complexes commonly exhibits a 9-fold coordination environment, with a tricapped trigonal prismatic molecular geometry. Isotopes About 19 radioisotopes and 7 nuclear isomers, 233Cm to 251Cm, are known; none are stable. The longest half-lives are 15.6 million years (247Cm) and 348,000 years (248Cm). Other long-lived ones are 245Cm (8500 years), 250Cm (8300 years) and 246Cm (4760 years). Curium-250 is unusual: it mostly (~86%) decays by spontaneous fission. The most commonly used isotopes are 242Cm and 244Cm with the half-lives 162.8 days and 18.1 years, respectively. All isotopes 242Cm-248Cm, and 250Cm, undergo a self-sustaining nuclear chain reaction and thus in principle can be a nuclear fuel in a reactor. As in most transuranic elements, nuclear fission cross section is especially high for the odd-mass curium isotopes 243Cm, 245Cm and 247Cm. These can be used in thermal-neutron reactors, whereas a mixture of curium isotopes is only suitable for fast breeder reactors since the even-mass isotopes are not fissile in a thermal reactor and accumulate as burn-up increases. The mixed-oxide (MOX) fuel, which is to be used in power reactors, should contain little or no curium because neutron activation of 248Cm will create californium. Californium is a strong neutron emitter, and would pollute the back end of the fuel cycle and increase the dose to reactor personnel. Hence, if minor actinides are to be used as fuel in a thermal neutron reactor, the curium should be excluded from the fuel or placed in special fuel rods where it is the only actinide present. The adjacent table lists the critical masses for curium isotopes for a sphere, without moderator or reflector. With a metal reflector (30 cm of steel), the critical masses of the odd isotopes are about 3–4 kg. When using water (thickness ~20–30 cm) as the reflector, the critical mass can be as small as 59 gram for 245Cm, 155 gram for 243Cm and 1550 gram for 247Cm. There is significant uncertainty in these critical mass values. While it is usually on the order of 20%, the values for 242Cm and 246Cm were listed as large as 371 kg and 70.1 kg, respectively, by some research groups. Curium is not currently used as nuclear fuel due to its low availability and high price. 245Cm and 247Cm have very small critical mass and so could be used in tactical nuclear weapons, but none are known to have been made. Curium-243 is not suitable for such, due to its short half-life and strong α emission, which would cause excessive heat. Curium-247 would be highly suitable due to its long half-life, which is 647 times longer than plutonium-239 (used in many existing nuclear weapons). Occurrence The longest-lived isotope, 247Cm, has half-life 15.6 million years; so any primordial curium, that is, present on Earth when it formed, should have decayed by now. Its past presence as an extinct radionuclide is detectable as an excess of its primordial, long-lived daughter 235U. Traces of curium may occur naturally in uranium minerals due to neutron capture and beta decay, though this has not been confirmed. Traces of 247Cm are also probably brought to Earth in cosmic rays, but again this has not been confirmed. Curium is made artificially in small amounts for research purposes. It also occurs as one of the waste products in spent nuclear fuel. Curium is present in nature in some areas used for nuclear weapons testing. 
Analysis of the debris at the test site of the United States' first thermonuclear weapon, Ivy Mike (1 November 1952, Enewetak Atoll), besides einsteinium, fermium, plutonium and americium also revealed isotopes of berkelium, californium and curium, in particular 245Cm, 246Cm and smaller quantities of 247Cm, 248Cm and 249Cm. Atmospheric curium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 4,000 times higher concentration of curium at the sandy soil particles than in water present in the soil pores. An even higher ratio of about 18,000 was measured in loam soils. The transuranium elements from americium to fermium, including curium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Curium, and other non-primordial actinides, have also been suspected to exist in the spectrum of Przybylski's Star. Synthesis Isotope preparation Curium is made in small amounts in nuclear reactors, and by now only kilograms of 242Cm and 244Cm have been accumulated, and grams or even milligrams for heavier isotopes. Hence the high price of curium, which has been quoted at 160–185 USD per milligram, with a more recent estimate at US$2,000/g for 242Cm and US$170/g for 244Cm. In nuclear reactors, curium is formed from 238U in a series of nuclear reactions. In the first chain, 238U captures a neutron and converts into 239U, which via β− decay transforms into 239Np and 239Pu. Further neutron capture followed by β−-decay gives americium (241Am), which further becomes 242Cm: 238U + n → 239U → (β−) 239Np → (β−) 239Pu; 239Pu + 2 n → 241Pu → (β−) 241Am; 241Am + n → 242Am → (β−) 242Cm. For research purposes, curium is obtained by irradiating not uranium but plutonium, which is available in large amounts from spent nuclear fuel. A much higher neutron flux is used for the irradiation, which results in a different reaction chain and formation of 244Cm: 239Pu + 4 n → 243Pu → (β−) 243Am; 243Am + n → 244Am → (β−) 244Cm. Curium-244 alpha decays to 240Pu, but it also absorbs neutrons, hence a small amount of heavier curium isotopes. Of those, 247Cm and 248Cm are popular in scientific research due to their long half-lives. But the production rate of 247Cm in thermal neutron reactors is low because it is prone to fission due to thermal neutrons. Synthesis of 250Cm by neutron capture is unlikely due to the short half-life of the intermediate 249Cm (64 min), which β− decays to the berkelium isotope 249Bk. The above cascade of (n,γ) reactions gives a mix of different curium isotopes. Their post-synthesis separation is cumbersome, so a selective synthesis is desired. Curium-248 is favored for research purposes due to its long half-life. The most efficient way to prepare this isotope is by α-decay of the californium isotope 252Cf, which is available in relatively large amounts due to its long half-life (2.65 years). About 35–50 mg of 248Cm is produced in this way per year. The associated reaction produces 248Cm with isotopic purity of 97%. Another isotope, 245Cm, can be obtained for research from α-decay of 249Cf; the latter isotope is produced in small amounts from β−-decay of 249Bk. Metal preparation Most synthesis routines yield a mix of actinide isotopes as oxides, from which a given isotope of curium needs to be separated. An example procedure could be to dissolve spent reactor fuel (e.g. MOX fuel) in nitric acid, and remove the bulk of the uranium and plutonium using a PUREX (Plutonium – URanium EXtraction) type extraction with tributyl phosphate in a hydrocarbon. 
The lanthanides and the remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. A curium compound is then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A bis-triazinyl bipyridine complex has recently been proposed as such a reagent, as it is highly selective for curium. Separation of curium from the very chemically similar americium can also be done by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperature. Both americium and curium are present in solutions mostly in the +3 valence state; americium is oxidized to soluble Am(IV) complexes, but curium stays unchanged and so can be isolated by repeated centrifugation. Metallic curium is obtained by reduction of its compounds. Initially, curium(III) fluoride was used for this purpose. The reaction was done in an environment free of water and oxygen, in an apparatus made of tantalum and tungsten, using elemental barium or lithium as reducing agents. Another possibility is reduction of curium(IV) oxide using a magnesium-zinc alloy in a melt of magnesium chloride and magnesium fluoride. Compounds and reactions Oxides Curium readily reacts with oxygen, forming mostly Cm2O3 and CmO2 oxides, but the divalent oxide CmO is also known. Black CmO2 can be obtained by burning curium oxalate (Cm2(C2O4)3), nitrate (Cm(NO3)3), or hydroxide in pure oxygen. Upon heating to 600–650 °C in vacuum (about 0.01 Pa), it transforms into the whitish Cm2O3: 4 CmO2 → 2 Cm2O3 + O2. Alternatively, Cm2O3 can be obtained by reducing CmO2 with molecular hydrogen: 2 CmO2 + H2 → Cm2O3 + H2O. Also, a number of ternary oxides of the type M(II)CmO3 are known, where M stands for a divalent metal, such as barium. Thermal oxidation of trace quantities of curium hydride (CmH2–3) has been reported to give a volatile form of CmO2 and the volatile trioxide CmO3, one of two known examples of the very rare +6 state for curium. Another observed species was reported to behave similarly to a supposed plutonium tetroxide and was tentatively characterized as CmO4, with curium in the extremely rare +8 state; but new experiments seem to indicate that CmO4 does not exist, and have cast doubt on the existence of PuO4 as well. Halides The colorless curium(III) fluoride (CmF3) can be made by adding fluoride ions into curium(III)-containing solutions. The brown tetravalent curium(IV) fluoride (CmF4), on the other hand, is only obtained by reacting curium(III) fluoride with molecular fluorine: 2 CmF3 + F2 → 2 CmF4. A series of ternary fluorides are known of the form A7Cm6F31 (A = alkali metal). The colorless curium(III) chloride (CmCl3) is made by reacting curium hydroxide (Cm(OH)3) with anhydrous hydrogen chloride gas. It can be further converted into other halides, such as curium(III) bromide (colorless to light green) and curium(III) iodide (colorless), by reacting it with the ammonium salt of the corresponding halide at temperatures of ~400–450 °C, for example: CmCl3 + 3 NH4I → CmI3 + 3 NH4Cl. Alternatively, one can heat curium oxide to ~600 °C with the corresponding acid (such as hydrobromic acid for curium bromide). Vapor phase hydrolysis of curium(III) chloride gives curium oxychloride: CmCl3 + H2O → CmOCl + 2 HCl. Chalcogenides and pnictides Sulfides, selenides and tellurides of curium have been obtained by treating curium with gaseous sulfur, selenium or tellurium in vacuum at elevated temperature. Curium pnictides of the type CmX are known for nitrogen, phosphorus, arsenic and antimony.
They can be prepared by reacting either curium(III) hydride (CmH3) or metallic curium with these elements at elevated temperature. Organocurium compounds and biological aspects Organometallic complexes analogous to uranocene are also known for other actinides, such as thorium, protactinium, neptunium, plutonium and americium. Molecular orbital theory predicts a stable "curocene" complex (η8-C8H8)2Cm, but it has not been reported experimentally yet. Formation of BTP-type complexes (BTP = 2,6-di(1,2,4-triazin-3-yl)pyridine) in solutions containing n-C3H7-BTP and Cm3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with curium and thus are useful for separating it from lanthanides and other actinides. Dissolved Cm3+ ions bind with many organic compounds, such as hydroxamic acid, urea, fluorescein and adenosine triphosphate. Many of these compounds are related to the biological activity of various microorganisms. The resulting complexes show strong yellow-orange emission under UV light excitation, which is convenient not only for their detection, but also for studying interactions between the Cm3+ ion and the ligands via changes in the half-life (on the order of 0.1 ms) and spectrum of the fluorescence. Curium has no biological significance. There are a few reports on biosorption of Cm3+ by bacteria and archaea, but no evidence for incorporation of curium into them. Applications Radionuclides Curium is one of the most radioactive isolable elements. Its two most common isotopes, 242Cm and 244Cm, are strong alpha emitters (energy 6 MeV); they have fairly short half-lives, 162.8 days and 18.1 years, and give as much as 120 W/g and 3 W/g of heat, respectively (a rough cross-check of these figures is sketched below). Therefore, curium can be used in its common oxide form in radioisotope thermoelectric generators like those in spacecraft. This application has been studied for the 244Cm isotope, while 242Cm was abandoned due to its prohibitive price, around 2000 USD/g. 243Cm, with a ~30-year half-life and a good energy yield of ~1.6 W/g, could be a suitable fuel, but it gives significant amounts of harmful gamma and beta rays from radioactive decay products. As an α-emitter, 244Cm needs much less radiation shielding, but it has a high spontaneous fission rate and thus emits substantial neutron and gamma radiation. Compared to a competing thermoelectric generator isotope such as 238Pu, 244Cm emits 500 times more neutrons, and its higher gamma emission requires lead shielding about 20 times thicker for a 1 kW source than that needed for 238Pu. Therefore, this use of curium is currently considered impractical. A more promising use of 242Cm is for making 238Pu, a better radioisotope for thermoelectric generators such as those in heart pacemakers. The alternate routes to 238Pu use the (n,γ) reaction of 237Np, or deuteron bombardment of uranium, though both reactions always produce 236Pu as an undesired by-product, which decays to 232U with strong gamma emission. Curium is a common starting material for making higher transuranic and superheavy elements. Thus, bombarding 248Cm with neon (22Ne), magnesium (26Mg), or calcium (48Ca) yields isotopes of seaborgium (265Sg), hassium (269Hs and 270Hs), and livermorium (292Lv, 293Lv, and possibly 294Lv). Californium was discovered when a microgram-sized target of curium-242 was irradiated with 35 MeV alpha particles using the cyclotron at Berkeley: 242Cm + 4He → 245Cf + n. Only about 5,000 atoms of californium were produced in this experiment.
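As a rough cross-check of the decay-heat figures quoted above for 242Cm and 244Cm (about 120 W/g and 3 W/g), the Python sketch below estimates the specific power of an alpha emitter from its half-life and molar mass, assuming each decay deposits roughly the 6 MeV alpha energy given in the text; this is a simplification for illustration only.

    import math

    AVOGADRO = 6.022e23     # atoms per mole
    MEV_TO_J = 1.602e-13    # joules per MeV
    YEAR_S = 3.156e7        # seconds per year

    def decay_heat_w_per_g(half_life_s, molar_mass_g, energy_mev=6.0):
        """Specific power (W/g) of an alpha emitter, assuming every decay
        deposits about energy_mev MeV locally (a simplification)."""
        decay_constant = math.log(2) / half_life_s       # per second
        atoms_per_gram = AVOGADRO / molar_mass_g
        return decay_constant * atoms_per_gram * energy_mev * MEV_TO_J

    print(decay_heat_w_per_g(162.8 * 86400, 242))   # 242Cm: ~118 W/g
    print(decay_heat_w_per_g(18.1 * YEAR_S, 244))   # 244Cm: ~2.9 W/g

Both results agree with the quoted figures to within the precision of the 6 MeV approximation.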
The odd-mass curium isotopes 243Cm, 245Cm, and 247Cm are all highly fissile and can release additional energy in a thermal spectrum nuclear reactor. All curium isotopes are fissionable in fast-neutron reactors. This is one of the motives for minor actinide separation and transmutation in the nuclear fuel cycle, helping to reduce the long-term radiotoxicity of used (spent) nuclear fuel. X-ray spectrometer The most practical application of 244Cm—though rather limited in total volume—is as an α-particle source in alpha particle X-ray spectrometers (APXS). These instruments were installed on the Sojourner, Mars, Mars 96, Mars Exploration Rovers and Philae comet lander, as well as the Mars Science Laboratory, to analyze the composition and structure of the rocks on the surface of the planet Mars. APXS was also used in the Surveyor 5–7 moon probes, but with a 242Cm source. An elaborate APXS setup has a sensor head containing six curium sources with a total decay rate of several tens of millicuries (roughly one gigabecquerel). The sources are collimated on a sample, and the energy spectra of the alpha particles and protons scattered from the sample are analyzed (proton analysis is done only in some spectrometers). These spectra contain quantitative information on all major elements in the sample except for hydrogen, helium and lithium. Safety Due to its radioactivity, curium and its compounds must be handled in appropriate labs under special arrangements. While curium itself mostly emits α-particles, which are absorbed by thin layers of common materials, some of its decay products emit significant fractions of beta and gamma rays, which require more elaborate protection. If consumed, curium is excreted within a few days and only 0.05% is absorbed in the blood. From there, ~45% goes to the liver, 45% to the bones, and the remaining 10% is excreted. In bone, curium accumulates on the inside of the interfaces to the bone marrow and does not significantly redistribute with time; its radiation destroys bone marrow and thus stops red blood cell creation. The biological half-life of curium is about 20 years in the liver and 50 years in the bones. Curium is absorbed in the body much more strongly via inhalation, and the allowed total dose of 244Cm in soluble form is 0.3 μCi. Intravenous injection of 242Cm- and 244Cm-containing solutions into rats increased the incidence of bone tumors, and inhalation promoted lung and liver cancer. Curium isotopes are inevitably present in spent nuclear fuel (about 20 g/tonne). The isotopes 245Cm–248Cm have half-lives of thousands of years or longer and must be removed to neutralize the fuel for disposal. Such a procedure involves several steps, where curium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure, nuclear transmutation, while well documented for other elements, is still being developed for curium.
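To give a sense of scale for the APXS sources described above, the sketch below converts the quoted activity of several tens of millicuries into becquerels and estimates the corresponding mass of 244Cm from its 18.1-year half-life; the choice of 30 mCi as a representative value is an assumption made only for illustration.

    import math

    CI_TO_BQ = 3.7e10       # becquerels per curie
    AVOGADRO = 6.022e23     # atoms per mole
    YEAR_S = 3.156e7        # seconds per year

    activity_bq = 30e-3 * CI_TO_BQ                  # 30 mCi (illustrative) ~ 1.1e9 Bq
    decay_constant = math.log(2) / (18.1 * YEAR_S)  # 244Cm decay constant, per second
    atoms = activity_bq / decay_constant            # N = A / lambda
    mass_mg = atoms / AVOGADRO * 244 * 1e3          # milligrams of 244Cm
    print(f"{activity_bq:.2e} Bq corresponds to about {mass_mg:.2f} mg of 244Cm")

So "several tens of millicuries" is indeed of the order of one gigabecquerel, and corresponds to well under a milligram of 244Cm distributed among the sources.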
https://en.wikipedia.org/wiki/Californium
Californium is a radioactive chemical element with the symbol Cf and atomic number 98. The element was first synthesized in 1950 at Lawrence Berkeley National Laboratory (then the University of California Radiation Laboratory), by bombarding curium with alpha particles (helium-4 ions). It is an actinide element, the sixth transuranium element to be synthesized, and has the second-highest atomic mass of all elements that have been produced in amounts large enough to see with the naked eye (after einsteinium). The element was named after the university and the U.S. state of California. Two crystalline forms exist for californium at normal pressure: one above and one below . A third form exists at high pressure. Californium slowly tarnishes in air at room temperature. Californium compounds are dominated by the +3 oxidation state. The most stable of californium's twenty known isotopes is californium-251, with a half-life of 898 years. This short half-life means the element is not found in significant quantities in the Earth's crust. 252Cf, with a half-life of about 2.645 years, is the most common isotope used and is produced at Oak Ridge National Laboratory in the United States and Research Institute of Atomic Reactors in Russia. Californium is one of the few transuranium elements with practical applications. Most of these applications exploit the property of certain isotopes of californium to emit neutrons. For example, californium can be used to help start up nuclear reactors, and it is employed as a source of neutrons when studying materials using neutron diffraction and neutron spectroscopy. Californium can also be used in nuclear synthesis of higher mass elements; oganesson (element 118) was synthesized by bombarding californium-249 atoms with calcium-48 ions. Users of californium must take into account radiological concerns and the element's ability to disrupt the formation of red blood cells by bioaccumulating in skeletal tissue. Characteristics Physical properties Californium is a silvery-white actinide metal with a melting point of and an estimated boiling point of . The pure metal is malleable and is easily cut with a razor blade. Californium metal starts to vaporize above when exposed to a vacuum. Below californium metal is either ferromagnetic or ferrimagnetic (it acts like a magnet), between 48 and 66 K it is antiferromagnetic (an intermediate state), and above it is paramagnetic (external magnetic fields can make it magnetic). It forms alloys with lanthanide metals but little is known about the resulting materials. The element has two crystalline forms at standard atmospheric pressure: a double-hexagonal close-packed form dubbed alpha (α) and a face-centered cubic form designated beta (β). The α form exists below 600–800 °C with a density of 15.10 g/cm3 and the β form exists above 600–800 °C with a density of 8.74 g/cm3. At 48 GPa of pressure the β form changes into an orthorhombic crystal system due to delocalization of the atom's 5f electrons, which frees them to bond. The bulk modulus of a material is a measure of its resistance to uniform pressure. Californium's bulk modulus is , which is similar to trivalent lanthanide metals but smaller than more familiar metals, such as aluminium (70 GPa). Chemical properties and compounds Californium exhibits oxidation states of 4, 3, or 2. It typically forms eight or nine bonds to surrounding atoms or ions. 
Its chemical properties are predicted to be similar to other primarily 3+ valence actinide elements and the element dysprosium, which is the lanthanide above californium in the periodic table. Compounds in the +4 oxidation state are strong oxidizing agents and those in the +2 state are strong reducing agents. The element slowly tarnishes in air at room temperature, with the rate increasing when moisture is added. Californium reacts when heated with hydrogen, nitrogen, or a chalcogen (oxygen family element); reactions with dry hydrogen and aqueous mineral acids are rapid. Californium is only water-soluble as the californium(III) cation. Attempts to reduce or oxidize the +3 ion in solution have failed. The element forms a water-soluble chloride, nitrate, perchlorate, and sulfate and is precipitated as a fluoride, oxalate, or hydroxide. Californium is the heaviest actinide to exhibit covalent properties, as is observed in the californium borate. Isotopes Twenty isotopes of californium are known (mass numbers ranging from 237 to 256); the most stable are 251Cf with a half-life of 898 years, 249Cf with 351 years, 250Cf with 13.08 years, and 252Cf with 2.645 years. All other isotopes have half-lives shorter than a year, and most of these have half-lives of less than 20 minutes. 249Cf is formed from the beta decay of berkelium-249, and most other californium isotopes are made by subjecting berkelium to intense neutron radiation in a nuclear reactor. Though californium-251 has the longest half-life, its production yield is only 10% due to its tendency to collect neutrons (high neutron capture) and its tendency to interact with other particles (high neutron cross section). Californium-252 is a very strong neutron emitter, which makes it extremely radioactive and harmful. 252Cf, 96.9% of the time, alpha decays to curium-248; the other 3.1% of decays are spontaneous fission. One microgram (μg) of 252Cf emits 2.3 million neutrons per second, an average of 3.7 neutrons per spontaneous fission. Most other isotopes of californium alpha decay to curium (atomic number 96). History Californium was first made at the University of California Radiation Laboratory, Berkeley, by the physics researchers Stanley Gerald Thompson, Kenneth Street Jr., Albert Ghiorso, and Glenn T. Seaborg, on or about February 9, 1950. It was the sixth transuranium element to be discovered; the team announced its discovery on March 17, 1950. To produce californium, a microgram-size target of curium-242 (242Cm) was bombarded with 35 MeV alpha particles (4He) in the cyclotron at Berkeley, which produced californium-245 (245Cf) plus one free neutron (n): 242Cm + 4He → 245Cf + n. To identify and separate out the element, ion exchange and adsorption methods were undertaken. Only about 5,000 atoms of californium were produced in this experiment, and these atoms had a half-life of 44 minutes. The discoverers named the new element after the university and the state. This was a break from the convention used for elements 95 to 97, which drew inspiration from how the elements directly above them in the periodic table were named. However, the element directly above element 98 in the periodic table, dysprosium, has a name that means "hard to get at", so the researchers decided to set aside the informal naming convention. They added that "the best we can do is to point out [that] ... searchers a century ago found it difficult to get to California".
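The neutron-emission figures quoted in the isotopes section above are mutually consistent, as the short Python check below shows. It uses only numbers given in the text: the 2.645-year half-life of 252Cf, its 3.1% spontaneous-fission branch, and the average of 3.7 neutrons per fission.

    import math

    AVOGADRO = 6.022e23     # atoms per mole
    YEAR_S = 3.156e7        # seconds per year

    atoms_per_ug = AVOGADRO / 252 * 1e-6              # atoms in one microgram of 252Cf
    decay_constant = math.log(2) / (2.645 * YEAR_S)   # per second
    decays_per_s = atoms_per_ug * decay_constant      # total decays per second
    neutrons_per_s = decays_per_s * 0.031 * 3.7       # SF branch times neutrons per fission
    print(f"{neutrons_per_s:.2e} neutrons per second per microgram")      # ~2.3e6
    print(f"{neutrons_per_s * 60:.2e} neutrons per minute per microgram") # ~1.4e8

The result reproduces both the 2.3 million neutrons per second quoted here and the figure of 139 million neutrons per microgram per minute cited in the applications section below.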
Weighable amounts of californium were first produced by the irradiation of plutonium targets at the Materials Testing Reactor at the National Reactor Testing Station in eastern Idaho; these findings were reported in 1954. The high spontaneous fission rate of californium-252 was observed in these samples. The first experiment with californium in concentrated form occurred in 1958. The isotopes 249Cf to 252Cf were isolated that same year from a sample of plutonium-239 that had been irradiated with neutrons in a nuclear reactor for five years. Two years later, in 1960, Burris Cunningham and James Wallman of the Lawrence Radiation Laboratory of the University of California created the first californium compounds—californium trichloride, californium(III) oxychloride, and californium oxide—by treating californium with steam and hydrochloric acid. The High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, started producing small batches of californium in the 1960s. By 1995, HFIR nominally produced of californium annually. Plutonium supplied by the United Kingdom to the United States under the 1958 US–UK Mutual Defence Agreement was used for making californium. The Atomic Energy Commission sold 252Cf to industrial and academic customers in the early 1970s for $10 per microgram, and 252Cf was shipped to customers each year from 1970 to 1990. Californium metal was first prepared in 1974 by Haire and Baybarz, who reduced californium(III) oxide with lanthanum metal to obtain microgram amounts of sub-micrometer-thick films. Occurrence Traces of californium can be found near facilities that use the element in mineral prospecting and in medical treatments. The element is fairly insoluble in water, but it adheres well to ordinary soil, and concentrations of it in the soil can be 500 times higher than in the water surrounding the soil particles. Nuclear fallout from atmospheric nuclear weapons testing prior to 1980 contributed a small amount of californium to the environment. Californium isotopes with mass numbers 249, 252, 253, and 254 have been observed in the radioactive dust collected from the air after a nuclear explosion. Californium is not a major radionuclide at United States Department of Energy legacy sites since it was not produced in large quantities. Californium was once believed to be produced in supernovae, as the decay of their light output matches the 60-day half-life of 254Cf. However, subsequent studies failed to demonstrate any californium spectra, and supernova light curves are now thought to follow the decay of nickel-56. The transuranium elements from americium to fermium, including californium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Spectral lines of californium, along with those of several other non-primordial elements, were detected in Przybylski's Star in 2008. Production Californium is produced in nuclear reactors and particle accelerators. Californium-250 is made by bombarding berkelium-249 (249Bk) with neutrons, forming berkelium-250 (250Bk) via neutron capture (n,γ), which in turn quickly beta decays (β−) to californium-250 (250Cf) in the following reaction: 249Bk (n,γ) 250Bk → 250Cf + β−. Bombardment of californium-250 with neutrons produces californium-251 and californium-252. Prolonged irradiation of americium, curium, and plutonium with neutrons produces milligram amounts of californium-252 and microgram amounts of californium-249.
As of 2006, curium isotopes 244 to 248 are irradiated by neutrons in special reactors to produce primarily californium-252 with lesser amounts of isotopes 249 to 255. Microgram quantities of californium-252 are available for commercial use through the U.S. Nuclear Regulatory Commission. Only two sites produce californium-252: the Oak Ridge National Laboratory in the United States, and the Research Institute of Atomic Reactors in Dimitrovgrad, Russia. As of 2003, the two sites produce 0.25 grams and 0.025 grams of californium-252 per year, respectively. Three californium isotopes with significant half-lives are produced, requiring a total of 15 neutron captures by uranium-238 without nuclear fission or alpha decay occurring during the process. Californium-253 is at the end of a production chain that starts with uranium-238, includes several isotopes of plutonium, americium, curium, berkelium, and the californium isotopes 249 to 253 (see diagram). Applications Californium-252 has a number of specialized uses as a strong neutron emitter; it produces 139 million neutrons per microgram per minute. This property makes it useful as a startup neutron source for some nuclear reactors and as a portable (non-reactor based) neutron source for neutron activation analysis to detect trace amounts of elements in samples. Neutrons from californium are used as a treatment of certain cervical and brain cancers where other radiation therapy is ineffective. It has been used in educational applications since 1969 when Georgia Institute of Technology got a loan of 119 μg of 252Cf from the Savannah River Site. It is also used with online elemental coal analyzers and bulk material analyzers in the coal and cement industries. Neutron penetration into materials makes californium useful in detection instruments such as fuel rod scanners; neutron radiography of aircraft and weapons components to detect corrosion, bad welds, cracks and trapped moisture; and in portable metal detectors. Neutron moisture gauges use 252Cf to find water and petroleum layers in oil wells, as a portable neutron source for gold and silver prospecting for on-the-spot analysis, and to detect ground water movement. The main uses of 252Cf in 1982 were, reactor start-up (48.3%), fuel rod scanning (25.3%), and activation analysis (19.4%). By 1994, most 252Cf was used in neutron radiography (77.4%), with fuel rod scanning (12.1%) and reactor start-up (6.9%) as important but secondary uses. In 2021, fast neutrons from 252Cf were used for wireless data transmission. 251Cf has a very small calculated critical mass of about , high lethality, and a relatively short period of toxic environmental irradiation. The low critical mass of californium led to some exaggerated claims about possible uses for the element. In October 2006, researchers announced that three atoms of oganesson (element 118) had been identified at Joint Institute for Nuclear Research in Dubna, Russia, from bombarding 249Cf with calcium-48, making it the heaviest element ever made. The target contained about 10 mg of 249Cf deposited on a titanium foil of 32 cm2 area. Californium has also been used to produce other transuranium elements; for example, lawrencium was first synthesized in 1961 by bombarding californium with boron nuclei. Precautions Californium that bioaccumulates in skeletal tissue releases radiation that disrupts the body's ability to form red blood cells. 
The element plays no natural biological role in any organism due to its intense radioactivity and low concentration in the environment. Californium can enter the body from ingesting contaminated food or drinks or by breathing air with suspended particles of the element. Once in the body, only 0.05% of the californium will reach the bloodstream. About 65% of that californium will be deposited in the skeleton, 25% in the liver, and the rest in other organs, or excreted, mainly in urine. Half of the californium deposited in the skeleton and the liver is gone in 50 and 20 years, respectively. Californium in the skeleton adheres to bone surfaces before slowly migrating throughout the bone. The element is most dangerous if taken into the body. In addition, californium-249 and californium-251 can cause tissue damage externally, through gamma ray emission. Ionizing radiation emitted by californium on bone and in the liver can cause cancer.
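A minimal single-compartment sketch, assuming simple exponential clearance with the biological half-lives quoted above (about 50 years for the skeleton and 20 years for the liver), illustrates how slowly deposited californium is eliminated; real biokinetics are more complicated, so this is an idealization only.

    # Fraction of deposited californium remaining after t years, assuming
    # simple exponential clearance (an idealization of the figures above).
    def fraction_remaining(t_years, biological_half_life_years):
        return 0.5 ** (t_years / biological_half_life_years)

    for t in (1, 10, 25, 50):
        skeleton = fraction_remaining(t, 50)   # skeletal half-life ~50 years
        liver = fraction_remaining(t, 20)      # hepatic half-life ~20 years
        print(f"after {t:2d} years: skeleton {skeleton:.2f}, liver {liver:.2f}")

Even a decade after intake, most of the skeletal burden remains on this simple model, which is consistent with the statement that the element is most dangerous once taken into the body.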
https://en.wikipedia.org/wiki/College
A college (Latin: collegium) is an educational institution or a constituent part of one. A college may be a degree-awarding tertiary educational institution, a part of a collegiate or federal university, an institution offering vocational education, a further education institution, or a secondary school. In most of the world, a college may be a high school or secondary school, a college of further education, a training institution that awards trade qualifications, a higher-education provider that does not have university status (often without its own degree-awarding powers), or a constituent part of a university. In the United States, a college may offer undergraduate programs – either as an independent institution or as the undergraduate program of a university – or it may be a residential college of a university or a community college, referring to (primarily public) higher education institutions that aim to provide affordable and accessible education, usually limited to two-year associate degrees. The word is generally also used as a synonym for a university in the US. Colleges in countries such as France, Belgium, and Switzerland provide secondary education. Etymology The word "college" is from the Latin verb lego, legere, legi, lectum, "to collect, gather together, pick", plus the preposition cum, "with", thus meaning "selected together". Thus "colleagues" are literally "persons who have been selected to work together". In ancient Rome a collegium was a "body, guild, corporation united in colleagueship; of magistrates, praetors, tribunes, priests, augurs; a political club or trade guild". Thus a college was a form of corporation or corporate body, an artificial legal person (body/corpus) with its own legal personality, with the capacity to enter into legal contracts, to sue and be sued. In mediaeval England there were colleges of priests, for example in chantry chapels; modern survivals include the Royal College of Surgeons in England (originally the Guild of Surgeons Within the City of London), the College of Arms in London (a body of heralds enforcing heraldic law), an electoral college (to elect representatives); all groups of persons "selected in common" to perform a specified function and appointed by a monarch, founder or other person in authority. As for the modern "college of education", it was a body created for that purpose, for example Eton College was founded in 1440 by letters patent of King Henry VI for the constitution of a college of Fellows, priests, clerks, choristers, poor scholars, and old poor men, with one master or governor, whose duty it shall be to instruct these scholars and any others who may resort thither from any part of England in the knowledge of letters, and especially of grammar, without payment". Overview Higher education Within higher education, the term can be used to refer to: A constituent part of a collegiate university, for example King's College, Cambridge, or of a federal university, for example King's College London. A liberal arts college, an independent institution of higher education focusing on undergraduate education, such as Williams College or Amherst College. A liberal arts division of a university whose undergraduate program does not otherwise follow a liberal arts model, such as the Yuanpei College at Peking University. An institute providing specialised training, such as a college of further education, for example Belfast Metropolitan College, a teacher training college, or an art college. 
A Catholic higher education institute which includes universities, colleges, and other institutions of higher education privately run by the Catholic Church, typically by religious institutes. Those tied to the Holy See are specifically called pontifical universities. In the United States, college is sometimes but rarely a synonym for a research university, such as Dartmouth College, one of the eight universities in the Ivy League. In the United States, the undergraduate college of a university which also confers graduate degrees, such as Yale College, the undergraduate college within Yale University. Further education A sixth form college or college of further education is an educational institution in England, Wales, Northern Ireland, Belize, the Caribbean, Malta, Norway, Brunei, and Southern Africa, among others, where students aged 16 to 19 typically study for advanced school-level qualifications, such as A-levels, BTEC, HND or its equivalent and the International Baccalaureate Diploma, or school-level qualifications such as GCSEs. In Singapore and India, this is known as a junior college. The municipal government of the city of Paris uses the phrase "sixth form college" as the English name for a lycée. Secondary education In some national education systems, secondary schools may be called "colleges" or have "college" as part of their title. In Australia the term "college" is applied to any private or independent (non-government) primary and, especially, secondary school as distinct from a state school. Melbourne Grammar School, Cranbrook School, Sydney and The King's School, Parramatta are considered colleges. There has also been a recent trend to rename or create government secondary schools as "colleges". In the state of Victoria, some state high schools are referred to as secondary colleges, although the pre-eminent government secondary school for boys in Melbourne is still named Melbourne High School. In Western Australia, South Australia and the Northern Territory, "college" is used in the name of all state high schools built since the late 1990s, and also some older ones. In New South Wales, some high schools, especially multi-campus schools resulting from mergers, are known as "secondary colleges". In Queensland some newer schools which accept primary and high school students are styled state college, but state schools offering only secondary education are called "State High School". In Tasmania and the Australian Capital Territory, "college" refers to the final two years of high school (years 11 and 12), and the institutions which provide this. In this context, "college" is a system independent of the other years of high school. Here, the expression is a shorter version of matriculation college. In a number of Canadian cities, many government-run secondary schools are called "collegiates" or "collegiate institutes" (C.I.), a complicated form of the word "college" which avoids the usual "post-secondary" connotation. This is because these secondary schools have traditionally focused on academic, rather than vocational, subjects and ability levels (for example, collegiates offered Latin while vocational schools offered technical courses). Some private secondary schools (such as Upper Canada College, Vancouver College) choose to use the word "college" in their names nevertheless. Some secondary schools elsewhere in the country, particularly ones within the separate school system, may also use the word "college" or "collegiate" in their names. 
In New Zealand the word "college" normally refers to a secondary school for ages 13 to 17 and "college" appears as part of the name especially of private or integrated schools. "Colleges" most frequently appear in the North Island, whereas "high schools" are more common in the South Island. In the Netherlands, "college" is equivalent to HBO (Higher professional education). It is oriented towards professional training with clear occupational outlook, unlike universities which are scientifically oriented. In South Africa, some secondary schools, especially private schools on the English public school model, have "college" in their title, including six of South Africa's Elite Seven high schools. A typical example of this category would be St John's College. Private schools that specialize in improving children's marks through intensive focus on examination needs are informally called "cram-colleges". In Sri Lanka the word "college" (known as Vidyalaya in Sinhala) normally refers to a secondary school, which usually signifies above the 5th standard. During the British colonial period a limited number of exclusive secondary schools were established based on English public school model (Royal College Colombo, S. Thomas' College, Mount Lavinia, Trinity College, Kandy) these along with several Catholic schools (St. Joseph's College, Colombo, St Anthony's College) traditionally carry their name as colleges. Following the start of free education in 1931 large group of central colleges were established to educate the rural masses. Since Sri Lanka gained Independence in 1948, many schools that have been established have been named as "college". Other As well as an educational institution, the term, in accordance with its etymology, may also refer to any formal group of colleagues set up under statute or regulation; often under a Royal Charter. Examples include an electoral college, the College of Arms, a college of canons, and the College of Cardinals. Other collegiate bodies include professional associations, particularly in medicine and allied professions. In the UK these include the Royal College of Nursing and the Royal College of Physicians. Examples in the United States include the American College of Physicians, the American College of Surgeons, and the American College of Dentists. An example in Australia is the Royal Australian College of General Practitioners. College by country The different ways in which the term "College" is used to describe educational institutions in various regions of the world is listed below: Americas Canada In Canadian English, the term "college" usually refers to a trades school, applied arts/science/technology/business/health school or community college. These are post-secondary institutions granting certificates, diplomas, associate degrees and (in some cases) bachelor's degrees. The French acronym specific to public institutions within Quebec's particular system of pre-university and technical education is CEGEP (Collège d'enseignement général et professionnel, "college of general and professional education"). They are collegiate-level institutions that a student typically enrols in if they wish to continue onto university in the Quebec education system, or to learn a trade. In Ontario and Alberta, there are also institutions that are designated university colleges, which only grant undergraduate degrees. This is to differentiate between universities, which have both undergraduate and graduate programs and those that do not. 
In Canada, there is a strong distinction between "college" and "university". In conversation, one specifically would say either "they are going to university" (i.e., studying for a three- or four-year degree at a university) or "they are going to college" (i.e., studying at a technical/career training institution). Usage in a university setting The term college also applies to distinct entities that formally act as an affiliated institution of the university, formally referred to as federated or affiliated colleges. A university may also formally include several constituent colleges, forming a collegiate university. Examples of collegiate universities in Canada include Trent University and the University of Toronto. These types of institutions act independently, maintaining their own endowments and properties. However, they remain either affiliated or federated with the overarching university, with the overarching university being the institution that formally grants the degrees. For example, Trinity College was once an independent institution, but later became federated with the University of Toronto. Several centralized universities in Canada have mimicked the collegiate university model, although constituent colleges in a centralized university remain under the authority of the central administration. Centralized universities that have adopted the collegiate model to a degree include the University of British Columbia, with Green College and St. John's College; and the Memorial University of Newfoundland, with Sir Wilfred Grenfell College. Occasionally, "college" refers to a subject-specific faculty within a university that, while distinct, is neither federated nor affiliated—College of Education, College of Medicine, College of Dentistry, College of Biological Science, among others. The Royal Military College of Canada is a military college which trains officers for the Canadian Armed Forces. The institution is a full-fledged university, with the authority to issue graduate degrees, although it continues to use the word college in its name. The institution's sister school, Royal Military College Saint-Jean, also uses the term college in its name, although its academic offering is akin to that of a CEGEP institution in Quebec. A number of post-secondary art schools in Canada formerly used the word college in their names, despite formally being universities. However, most of these institutions were renamed or re-branded in the early 21st century, omitting the word college from their names. Usage in secondary education The word college continues to be used in the names of public separate secondary schools in Ontario. A number of independent schools across Canada also use the word college in their names. Public secular school boards in Ontario also refer to their secondary schools as collegiate institutes. However, usage of the word collegiate institute varies between school boards. Collegiate institute is the predominant name for secondary schools in the Lakehead District School Board and the Toronto District School Board, although most school boards in Ontario use collegiate institute alongside high school and secondary school in the names of their institutions. Similarly, secondary schools in Regina and Saskatoon are referred to as collegiates.
The program features a Bachelor of Natural Sciences and Mathematics, a Bachelor of Social Science and a Bachelor of Arts and Humanities. It follows the same system as American universities: it combines majors and minors and, finally, it lets students continue to a higher degree at the same university once the program is completed. But in Chile, the term "college" is not usually used for tertiary education; it is used mainly in the names of some private bilingual schools, corresponding to levels 0, 1 and 2 of the ISCED 2011. Some examples are Santiago College and Saint George's College, among others. United States In the United States, there were 5,916 post-secondary institutions (universities and colleges); the number peaked at 7,253 in 2012–13 and has fallen every year since. A "college" in the US can refer to a constituent part of a university (which can be a residential college, the sub-division of the university offering undergraduate courses, or a school of the university offering particular specialized courses), an independent institution offering bachelor's-level courses, or an institution offering instruction in a particular professional, technical or vocational field. In popular usage, the word "college" is the generic term for any post-secondary undergraduate education. Americans "go to college" after high school, regardless of whether the specific institution is formally a college or a university. Some students choose to dual-enroll by taking college classes while still in high school. The word and its derivatives are the standard terms used to describe the institutions and experiences associated with American post-secondary undergraduate education. Students must pay for college before taking classes. Some borrow the money via loans, and some students fund their educations with cash, scholarships, grants, or some combination of these payment methods. In 2011, the state or federal government subsidized $8,000 to $100,000 for each undergraduate degree. For state-owned schools (called "public" universities), the subsidy was given to the college, with the student benefiting from lower tuition. The state subsidized on average 50% of public university tuition. Colleges vary in terms of size, degree, and length of stay. Two-year colleges, also known as junior or community colleges, usually offer an associate degree, and four-year colleges usually offer a bachelor's degree. Often, these are entirely undergraduate institutions, although some have graduate school programs. Four-year institutions in the U.S. that emphasize a liberal arts curriculum are known as liberal arts colleges. Until the 20th century, liberal arts, law, medicine, theology, and divinity were about the only forms of higher education available in the United States. These schools have traditionally emphasized instruction at the undergraduate level, although advanced research may still occur at these institutions. While there is no national standard in the United States, the term "university" primarily designates institutions that provide undergraduate and graduate education. A university typically has as its core and its largest internal division an undergraduate college teaching a liberal arts curriculum, also culminating in a bachelor's degree. What often distinguishes a university is having, in addition, one or more graduate schools engaged in both teaching graduate classes and in research. Often these would be called a School of Law or School of Medicine (but may also be called a college of law or a faculty of law).
An exception is Vincennes University, Indiana, which is styled and chartered as a "university" even though almost all of its academic programs lead only to two-year associate degrees. Some institutions, such as Dartmouth College and The College of William & Mary, have retained the term "college" in their names for historical reasons. In one unique case, Boston College and Boston University, the former located in Chestnut Hill, Massachusetts and the latter located in Boston, Massachusetts, are completely separate institutions. Usage of the terms varies among the states. In 1996, for example, Georgia changed all of its four-year institutions previously designated as colleges to universities, and all of its vocational technology schools to technical colleges. The terms "university" and "college" do not exhaust all possible titles for an American institution of higher education. Other options include "institute" (Worcester Polytechnic Institute and Massachusetts Institute of Technology), "academy" (United States Military Academy), "union" (Cooper Union), "conservatory" (New England Conservatory), and "school" (Juilliard School). In colloquial use, they are still referred to as "college" when referring to their undergraduate studies. The term college is also, as in the United Kingdom, used for a constituent semi-autonomous part of a larger university but generally organized on academic rather than residential lines. For example, at many institutions, the undergraduate portion of the university can be briefly referred to as the college (such as The College of the University of Chicago, Harvard College at Harvard, or Columbia College at Columbia) while at others, such as the University of California, Berkeley, "colleges" are collections of academic programs and other units that share some common characteristics, mission, or disciplinary focus (the "college of engineering", the "college of nursing", and so forth). There exist other variants for historical reasons, including some uses that exist because of mergers and acquisitions; for example, Duke University, which was called Trinity College until the 1920s, still calls its main undergraduate subdivision Trinity College of Arts and Sciences. Residential colleges Some American universities, such as Princeton, Rice, and Yale have established residential colleges (sometimes, as at Harvard, the first to establish such a system in the 1930s, known as houses) along the lines of Oxford or Cambridge. Unlike the Oxbridge colleges, but similarly to Durham, these residential colleges are not autonomous legal entities nor are they typically much involved in education itself, being primarily concerned with room, board, and social life. At the University of Michigan, University of California, San Diego and the University of California, Santa Cruz, each residential college teaches its own core writing courses and has its own distinctive set of graduation requirements. Many U.S. universities have placed increased emphasis on their residential colleges in recent years. This is exemplified by the creation of new colleges at Ivy League schools such as Yale University and Princeton University, and efforts to strengthen the contribution of the residential colleges to student education, including through a 2016 taskforce at Princeton on residential colleges. Origin of the U.S. usage The founders of the first institutions of higher education in the United States were graduates of the University of Oxford and the University of Cambridge. 
The small institutions they founded would not have seemed to them like universities – they were tiny and did not offer the higher degrees in medicine and theology. Furthermore, they were not composed of several small colleges. Instead, the new institutions felt like the Oxford and Cambridge colleges they were used to – small communities, housing and feeding their students, with instruction from residential tutors (as in the United Kingdom, described above). When the first students graduated, these "colleges" assumed the right to confer degrees upon them, usually with authority—for example, The College of William & Mary has a royal charter from the British monarchy allowing it to confer degrees while Dartmouth College has a charter permitting it to award degrees "as are usually granted in either of the universities, or any other college in our realm of Great Britain." The leaders of Harvard College (which granted America's first degrees in 1642) might have thought of their college as the first of many residential colleges that would grow up into a New Cambridge university. However, over time, few new colleges were founded there, and Harvard grew and added higher faculties. Eventually, it changed its title to university, but the term "college" had stuck and "colleges" have arisen across the United States. In U.S. usage, the word "college" not only embodies a particular type of school, but has historically been used to refer to the general concept of higher education when it is not necessary to specify a school, as in "going to college" or "college savings accounts" offered by banks. In a survey of more than 2,000 college students in 33 states and 156 different campuses, the U.S. Public Interest Research Group found the average student spends as much as $1,200 each year on textbooks and supplies alone. By comparison, the group says that's the equivalent of 39 percent of tuition and fees at a community college, and 14 percent of tuition and fees at a four-year public university. Morrill Land-Grant Act In addition to private colleges and universities, the U.S. also has a system of government funded, public universities. Many were founded under the Morrill Land-Grant Colleges Act of 1862. A movement had arisen to bring a form of more practical higher education to the masses, as "...many politicians and educators wanted to make it possible for all young Americans to receive some sort of advanced education." The Morrill Act "...made it possible for the new western states to establish colleges for the citizens." Its goal was to make higher education more easily accessible to the citizenry of the country, specifically to improve agricultural systems by providing training and scholarship in the production and sales of agricultural products, and to provide formal education in "...agriculture, home economics, mechanical arts, and other professions that seemed practical at the time." The act was eventually extended to allow all states that had remained with the Union during the American Civil War, and eventually all states, to establish such institutions. Most of the colleges established under the Morrill Act have since become full universities, and some are among the elite of the world. Benefits of college Selection of a four-year college as compared to a two-year junior college, even by marginal students such as those with a C+ grade average in high school and SAT scores in the mid 800s, increases the probability of graduation and confers substantial economic and social benefits. 
Asia Bangladesh In Bangladesh, educational institutions offering higher secondary (11th–12th grade) education are known as colleges. Hong Kong In Hong Kong, the term 'college' is used by tertiary institutions as either part of their names or to refer to a constituent part of the university, such as the colleges of the collegiate Chinese University of Hong Kong; or to a residence hall of a university, such as St. John's College, University of Hong Kong. Many older secondary schools have the term 'college' as part of their names. India The modern system of education was heavily influenced by the British starting in 1835. In India, the term "college" is commonly reserved for institutions that offer high school diplomas at year 12 ("Junior College", similar to American high schools), and those that offer the bachelor's degree; some colleges, however, offer programmes up to PhD level. Generally, colleges are located in different parts of a state and all of them are affiliated to a regional university. The colleges offer programmes leading to degrees of that university. Colleges may be either autonomous or non-autonomous. Autonomous colleges are empowered to establish their own syllabus, and conduct and assess their own examinations; in non-autonomous colleges, examinations are conducted by the university, at the same time for all colleges under its affiliation. There are several hundred universities and each university has affiliated colleges, often a large number. The first liberal arts and sciences college in India was "Cottayam College", also known as the "Syrian College", founded in Kerala in 1815. The first inter-linguistic residential educational institution in Asia was started at this college. At present it is a theological seminary, popularly known as the Orthodox Theological Seminary or Old Seminary. It was followed by CMS College, Kottayam, established in 1817, and Presidency College, Kolkata, also established in 1817 and initially known as Hindu College. The first college for the study of Christian theology and ecumenical enquiry was Serampore College (1818). The first missionary institution to impart Western-style education in India was the Scottish Church College, Calcutta (1830). The first commerce and economics college in India was Sydenham College, Mumbai (1913). In India a new term has been introduced: autonomous institutes and colleges. An autonomous college is still required to be affiliated to a certain university. These colleges can set their own admission procedures, examination syllabi, fee structures, etc. However, on completion of a course they cannot issue their own degrees or diplomas; the final degree or diploma is issued by the affiliated university. Also, some significant changes may pave the way under the NEP (National Education Policy 2020), which may affect the present guidelines for universities and colleges. Israel In Israel, any non-university higher-learning facility is called a college. Institutions accredited by the Council for Higher Education in Israel (CHE) to confer a bachelor's degree are called "academic colleges". These colleges (at least four as of 2012) may also offer master's degrees and act as research facilities. There are also over twenty teacher training colleges or seminaries, most of which may award only a Bachelor of Education (BEd) degree. Academic colleges: Any educational facility that has been approved to offer at least a bachelor's degree is entitled by the CHE to use the term academic college in its name.
Engineering academic college: Any academic facility that offers at least a bachelor's degree and in which most of the faculties provide an engineering degree and engineering license. Educational academic college: After an educational facility that had been approved for "teachers' seminar" status is then approved to provide a Bachelor of Education, its name is changed to include "educational academic college". Technical college: A "technical college" is an educational facility that is approved to provide a P.E. (הנדסאי, 14th class) degree or technician (טכנאי, 13th class) diploma and licenses. Training college: A "training college" is an educational facility that provides basic training allowing a person to receive a work permit in a field such as alternative medicine, cooking, art, mechanics, electrics and other professions. A trainee can receive the right to work in certain professions as an apprentice (j. mechanic, j. electrician, etc.). After working in the field for enough time, an apprentice can obtain a license to operate (mechanic, electrician). This type of facility is mostly used to provide basic training for low-tech jobs and for job seekers without any training, with such training provided through the nation's Employment Service (שירות התעסוקה). Macau Following the Portuguese usage, the term "college" (colégio) in Macau has traditionally been used in the names of private (non-governmental) pre-university educational institutions, which correspond to the form one to form six level tiers. Such schools are usually run by the Roman Catholic church or missionaries in Macau. Examples include Chan Sui Ki Perpetual Help College, Yuet Wah College, and Sacred Heart Canossian College. Philippines In the Philippines, colleges usually refer to institutions of learning that grant degrees but whose scholastic fields are not as diverse as those of a university (University of Santo Tomas, University of the Philippines, Ateneo de Manila University, De La Salle University, Far Eastern University, and AMA University), such as San Beda College, which specializes in law; AMA Computer College, whose campuses are spread all over the Philippines and which specializes in information and computing technologies; and the Mapúa Institute of Technology, which specializes in engineering. The term may also refer to component units within universities that do not grant degrees but rather facilitate the instruction of a particular field, such as a College of Science and College of Engineering, among many other colleges of the University of the Philippines. A state college may not have the word "college" in its name, but may have several component colleges, or departments. Thus, the Eulogio Amang Rodriguez Institute of Science and Technology is a state college by classification. Usually, the term "college" is also thought of as marking a hierarchical distinction from the term "university", and quite a number of colleges seek to be recognized as universities as a sign of improvement in academic standards (Colegio de San Juan de Letran, San Beda College) and an increase in the diversity of their offered degree programs (called "courses"). For private colleges, this may be done through a survey and evaluation by the Commission on Higher Education and accrediting organizations, as was the case with Urios College, which is now the Fr. Saturnino Urios University. For state colleges, it is usually done by legislation passed by the Congress or Senate.
In common usage, "going to college" simply means attending school for an undergraduate degree, whether it's from an institution recognized as a college or a university. When it comes to referring to the level of education, college is the term more used to be synonymous to tertiary or higher education. A student who is or has studied his/her undergraduate degree at either an institution with college or university in its name is considered to be going to or have gone to college. Singapore The term "college" in Singapore is generally only used for pre-university educational institutions called "Junior Colleges", which provide the final two years of secondary education (equivalent to sixth form in British terms or grades 11–12 in the American system). Since 1 January 2005, the term also refers to the three campuses of the Institute of Technical Education with the introduction of the "collegiate system", in which the three institutions are called ITE College East, ITE College Central, and ITE College West respectively. The term "university" is used to describe higher-education institutions offering locally conferred degrees. Institutions offering diplomas are called "polytechnics", while other institutions are often referred to as "institutes" and so forth. Sri Lanka There are several professional and vocational institutions that offer post-secondary education without granting degrees that are referred to as "colleges". This includes the Sri Lanka Law College, the many Technical Colleges and Teaching Colleges. Turkey In Turkey, the term "kolej" (college) refers to a private high school, typically preceded by one year of preparatory language education. Notable Turkish colleges include Robert College, Uskudar American Academy, American Collegiate Institute and Tarsus American College. Africa South Africa Although the term "college" is hardly used in any context at any university in South Africa, some non-university tertiary institutions call themselves colleges. These include teacher training colleges, business colleges and wildlife management colleges. See: List of universities in South Africa#Private colleges and universities; List of post secondary institutions in South Africa. Zimbabwe The term college is mainly used by private or independent secondary schools with Advanced Level (Upper 6th formers) and also Polytechnic Colleges which confer diplomas only. A student can complete secondary education (International General Certificate of Secondary Education, IGCSE) at 16 years and proceed straight to a poly-technical college or they can proceed to Advanced level (16 to 19 years) and obtain a General Certificate of Education (GCE) certificate which enables them to enroll at a university, provided they have good grades. Alternatively, with lower grades, the GCE certificate holders will have an added advantage over their GCSE counterparts if they choose to enroll at a polytechnical college. Some schools in Zimbabwe choose to offer the International Baccalaureate studies as an alternative to the IGCSE and GCE. Europe Greece Kollegio (in Greek Κολλέγιο) refers to the Centers of Post-Lyceum Education (in Greek Κέντρο Μεταλυκειακής Εκπαίδευσης, abbreviated as KEME), which are principally private and belong to the Greek post-secondary education system. Some of them have links to EU or US higher education institutions or accreditation organizations, such as the NEASC. Kollegio (or Kollegia in plural) may also refer to private non-tertiary schools, such as the Athens College. 
Ireland In Ireland the term "college" is normally used to describe an institution of tertiary education. University students often say they attend "college" rather than "university". Until 1989, no university provided teaching or research directly; they were formally offered by a constituent college of the university. There are number of secondary education institutions that traditionally used the word "college" in their names: these are either older, private schools (such as Belvedere College, Gonzaga College, Castleknock College, and St. Michael's College) or what were formerly a particular kind of secondary school. These secondary schools, formerly known as "technical colleges," were renamed "community colleges," but remain secondary schools. The country's only ancient university is the University of Dublin. Created during the reign of Elizabeth I, it is modelled on the collegiate universities of Cambridge and Oxford. However, only one constituent college was ever founded, hence the curious position of Trinity College Dublin today; although both are usually considered one and the same, the university and college are completely distinct corporate entities with separate and parallel governing structures. Among more modern foundations, the National University of Ireland, founded in 1908, consisted of constituent colleges and recognised colleges until 1997. The former are now referred to as constituent universities – institutions that are essentially universities in their own right. The National University can trace its existence back to 1850 and the creation of the Queen's University of Ireland and the creation of the Catholic University of Ireland in 1854. From 1880, the degree awarding roles of these two universities was taken over by the Royal University of Ireland, which remained until the creation of the National University in 1908 and Queen's University Belfast. The state's two new universities, Dublin City University and University of Limerick, were initially National Institute for Higher Education institutions. These institutions offered university level academic degrees and research from the start of their existence and were awarded university status in 1989 in recognition of this. Third level technical education in the state has been carried out in the Institutes of Technology, which were established from the 1970s as Regional Technical Colleges. These institutions have delegated authority which entitles them to give degrees and diplomas from Quality and Qualifications Ireland (QQI) in their own names. A number of private colleges exist such as Dublin Business School, providing undergraduate and postgraduate courses validated by QQI and in some cases by other universities. Other types of college include colleges of education, such as the Church of Ireland College of Education. These are specialist institutions, often linked to a university, which provide both undergraduate and postgraduate academic degrees for people who want to train as teachers. A number of state-funded further education colleges exist – which offer vocational education and training in a range of areas from business studies and information and communications technology to sports injury therapy. These courses are usually one, two or less often three years in duration and are validated by QQI at Levels 5 or 6, or for the BTEC Higher National Diploma award, which is a Level 6/7 qualification, validated by Edexcel. 
There are numerous private colleges (particularly in Dublin and Limerick) which offer both further and higher education qualifications. These degrees and diplomas are often certified by foreign universities/international awarding bodies and are aligned to the National Framework of Qualifications at Levels 6, 7 and 8. Netherlands In the Netherlands there are 3 main educational routes after high school. MBO (middle-level applied education), which is the equivalent of junior college. Designed to prepare students for either skilled trades and technical occupations and workers in support roles in professions such as engineering, accountancy, business administration, nursing, medicine, architecture, and criminology or for additional education at another college with more advanced academic material. HBO (higher professional education), which is the equivalent of college and has a professional orientation. After HBO (typically 4–6 years), pupils can enroll in a (professional) master's program (1–2 years) or enter the job market. The HBO is taught in vocational universities (hogescholen), of which there are over 40 in the Netherlands, each of which offers a broad variety of programs, with the exception of some that specialize in arts or agriculture. Note that the hogescholen are not allowed to name themselves university in Dutch. This also stretches to English and therefore HBO institutions are known as universities of applied sciences. WO (Scientific education), which is the equivalent to university level education and has an academic orientation. HBO graduates can be awarded two titles, which are Baccalaureus (bc.) and Ingenieur (ing.). At a WO institution, many more bachelor's and master's titles can be awarded. Bachelor's degrees: Bachelor of Arts (BA), Bachelor of Science (BSc) and Bachelor of Laws (LLB). Master's degrees: Master of Arts (MA), Master of Laws (LLM) and Master of Science (MSc). The PhD title is a research degree awarded upon completion and defense of a doctoral thesis. Portugal Presently in Portugal, the term colégio (college) is normally used as a generic reference to a private (non-government) school that provides from basic to secondary education. Many of the private schools include the term colégio in their name. Some special public schools – usually of the boarding school type – also include the term in their name, with a notable example being the Colégio Militar (Military College). The term colégio interno (literally "internal college") is used specifically as a generic reference to a boarding school. Until the 19th century, a colégio was usually a secondary or pre-university school, of public or religious nature, where the students usually lived together. A model for these colleges was the Royal College of Arts and Humanities, founded in Coimbra by King John III of Portugal in 1542. United Kingdom Secondary education and further education Further education (FE) colleges and sixth form colleges are institutions providing further education to students over 16. Some of these also provide higher education courses (see below). In the context of secondary education, 'college' is used in the names of some private schools, e.g. Eton College and Winchester College. Higher education In higher education, a college is normally a provider that does not hold university status, although it can also refer to a constituent part of a collegiate or federal university or a grouping of academic faculties or departments within a university. 
Traditionally the distinction between colleges and universities was that colleges did not award degrees while universities did, but this is no longer the case with NCG having gained taught degree awarding powers (the same as some universities) on behalf of its colleges, and many of the colleges of the University of London holding full degree awarding powers and being effectively universities. Most colleges, however, do not hold their own degree awarding powers and continue to offer higher education courses that are validated by universities or other institutions that can award degrees. In England, , over 60% of the higher education providers directly funded by HEFCE (208/340) are sixth-form or further education colleges, often termed colleges of further and higher education, along with 17 colleges of the University of London, one university college, 100 universities, and 14 other providers (six of which use 'college' in their name). Overall, this means over two-thirds of state-supported higher education providers in England are colleges of one form or another. Many private providers are also called colleges, e.g. the New College of the Humanities and St Patrick's College, London. Colleges within universities vary immensely in their responsibilities. The large constituent colleges of the University of London are effectively universities in their own right; colleges in some universities, including those of the University of the Arts London and smaller colleges of the University of London, run their own degree courses but do not award degrees; those at the University of Roehampton provide accommodation and pastoral care as well as delivering the teaching on university courses; those at Oxford and Cambridge deliver some teaching on university courses as well as providing accommodation and pastoral care; and those in Durham, Kent, Lancaster and York provide accommodation and pastoral care but do not normally participate in formal teaching. The legal status of these colleges also varies widely, with University of London colleges being independent corporations and recognised bodies, Oxbridge colleges, colleges of the University of the Highlands and Islands (UHI) and some Durham colleges being independent corporations and listed bodies, most Durham colleges being owned by the university but still listed bodies, and those of other collegiate universities not having formal recognition. When applying for undergraduate courses through UCAS, University of London colleges are treated as independent providers, colleges of Oxford, Cambridge, Durham and UHI are treated as locations within the universities that can be selected by specifying a 'campus code' in addition to selecting the university, and colleges of other universities are not recognised. The UHI and the University of Wales Trinity Saint David (UWTSD) both include further education colleges. However, while the UHI colleges integrate FE and HE provision, UWTSD maintains a separation between the university campuses (Lampeter, Carmarthen and Swansea) and the two colleges (Coleg Sir Gâr and Coleg Ceredigion; n.b. coleg is Welsh for college), which although part of the same group are treated as separate institutions rather than colleges within the university. A university college is an independent institution with the power to award taught degrees, but which has not been granted university status. 
University College is a protected title that can only be used with permission, although note that University College London, University College, Oxford and University College, Durham are colleges within their respective universities and not university colleges (in the case of UCL holding full degree awarding powers that set it above a university college), while University College Birmingham is a university in its own right and also not a university college. Oceania Australia In Australia a college may be an institution of tertiary education that is smaller than a university, run independently or as part of a university. Following a reform in the 1980s many of the formerly independent colleges now belong to a larger universities. Referring to parts of a university, there are residential colleges which provide residence for students, both undergraduate and postgraduate, called university colleges. These colleges often provide additional tutorial assistance, and some host theological study. Many colleges have strong traditions and rituals, so are a combination of dormitory style accommodation and fraternity or sorority culture. Most technical and further education institutions (TAFEs), which offer certificate and diploma vocational courses, are styled "TAFE colleges" or "Colleges of TAFE". In some places, such as Tasmania, college refers to a type of school for Year 11 and 12 students, e.g. Don College. New Zealand The constituent colleges of the former University of New Zealand (such as Canterbury University College) have become independent universities. Some halls of residence associated with New Zealand universities retain the name of "college", particularly at the University of Otago (which although brought under the umbrella of the University of New Zealand, already possessed university status and degree awarding powers). The institutions formerly known as "Teacher-training colleges" now style themselves "College of education". Some universities, such as the University of Canterbury, have divided their university into constituent administrative "Colleges" – the College of Arts containing departments that teach Arts, Humanities and Social Sciences, College of Science containing Science departments, and so on. This is largely modelled on the Cambridge model, discussed above. Like the United Kingdom some professional bodies in New Zealand style themselves as "colleges", for example, the Royal Australasian College of Surgeons, the Royal Australasian College of Physicians. In some parts of the country, secondary school is often referred to as college and the term is used interchangeably with high school. This sometimes confuses people from other parts of New Zealand. But in all parts of the country many secondary schools have "College" in their name, such as Rangitoto College, New Zealand's largest secondary. Notes References External links See also Community college Residential college University college Vocational university Madrasa Ashrama (stage) Educational stages Higher education Types of university or college Youth
https://en.wikipedia.org/wiki/Cryptanalysis
Cryptanalysis (from the Greek kryptós, "hidden", and analýein, "to analyze") refers to the process of analyzing information systems in order to understand hidden aspects of the systems. Cryptanalysis is used to breach cryptographic security systems and gain access to the contents of encrypted messages, even if the cryptographic key is unknown. In addition to mathematical analysis of cryptographic algorithms, cryptanalysis includes the study of side-channel attacks that do not target weaknesses in the cryptographic algorithms themselves, but instead exploit weaknesses in their implementation. Even though the goal has been the same, the methods and techniques of cryptanalysis have changed drastically through the history of cryptography, adapting to increasing cryptographic complexity, ranging from the pen-and-paper methods of the past, through machines like the British Bombes and Colossus computers at Bletchley Park in World War II, to the mathematically advanced computerized schemes of the present. Methods for breaking modern cryptosystems often involve solving carefully constructed problems in pure mathematics, the best-known being integer factorization. Overview In encryption, confidential information (called the "plaintext") is sent securely to a recipient by the sender first converting it into an unreadable form ("ciphertext") using an encryption algorithm. The ciphertext is sent through an insecure channel to the recipient. The recipient decrypts the ciphertext by applying an inverse decryption algorithm, recovering the plaintext. To decrypt the ciphertext, the recipient requires a secret knowledge from the sender, usually a string of letters, numbers, or bits, called a cryptographic key. The concept is that even if an unauthorized person gets access to the ciphertext during transmission, without the secret key they cannot convert it back to plaintext. Encryption has been used throughout history to send important military, diplomatic and commercial messages, and today is very widely used in computer networking to protect email and internet communication. The goal of cryptanalysis is for a third party, a cryptanalyst, to gain as much information as possible about the original ("plaintext"), attempting to "break" the encryption to read the ciphertext and learning the secret key so future messages can be decrypted and read. A mathematical technique to do this is called a cryptographic attack. Cryptographic attacks can be characterized in a number of ways: Amount of information available to the attacker Attacks can be classified based on what type of information the attacker has available. As a basic starting point it is normally assumed that, for the purposes of analysis, the general algorithm is known; this is Shannon's Maxim "the enemy knows the system" – in its turn, equivalent to Kerckhoffs' principle. This is a reasonable assumption in practice – throughout history, there are countless examples of secret algorithms falling into wider knowledge, variously through espionage, betrayal and reverse engineering. (And on occasion, ciphers have been broken through pure deduction; for example, the German Lorenz cipher and the Japanese Purple code, and a variety of classical schemes): Ciphertext-only: the cryptanalyst has access only to a collection of ciphertexts or codetexts. Known-plaintext: the attacker has a set of ciphertexts to which they know the corresponding plaintext. 
Chosen-plaintext (chosen-ciphertext): the attacker can obtain the ciphertexts (plaintexts) corresponding to an arbitrary set of plaintexts (ciphertexts) of their own choosing. Adaptive chosen-plaintext: like a chosen-plaintext attack, except the attacker can choose subsequent plaintexts based on information learned from previous encryptions, similarly to the adaptive chosen-ciphertext attack. Related-key attack: Like a chosen-plaintext attack, except the attacker can obtain ciphertexts encrypted under two different keys. The keys are unknown, but the relationship between them is known; for example, two keys that differ in one bit. Computational resources required Attacks can also be characterised by the resources they require. Those resources include: Time – the number of computation steps (e.g., test encryptions) which must be performed. Memory – the amount of storage required to perform the attack. Data – the quantity and type of plaintexts and ciphertexts required for a particular approach. It is sometimes difficult to predict these quantities precisely, especially when the attack is not practical to actually implement for testing. But academic cryptanalysts tend to provide at least the estimated order of magnitude of their attacks' difficulty, saying, for example, "SHA-1 collisions now 2^52." Bruce Schneier notes that even computationally impractical attacks can be considered breaks: "Breaking a cipher simply means finding a weakness in the cipher that can be exploited with a complexity less than brute force. Never mind that brute-force might require 2^128 encryptions; an attack requiring 2^110 encryptions would be considered a break ... simply put, a break can just be a certificational weakness: evidence that the cipher does not perform as advertised." Partial breaks The results of cryptanalysis can also vary in usefulness. Cryptographer Lars Knudsen (1998) classified various types of attack on block ciphers according to the amount and quality of secret information that was discovered: Total break – the attacker deduces the secret key. Global deduction – the attacker discovers a functionally equivalent algorithm for encryption and decryption, but without learning the key. Instance (local) deduction – the attacker discovers additional plaintexts (or ciphertexts) not previously known. Information deduction – the attacker gains some Shannon information about plaintexts (or ciphertexts) not previously known. Distinguishing algorithm – the attacker can distinguish the cipher from a random permutation. Academic attacks are often against weakened versions of a cryptosystem, such as a block cipher or hash function with some rounds removed. Many, but not all, attacks become exponentially more difficult to execute as rounds are added to a cryptosystem, so it is possible for the full cryptosystem to be strong even though reduced-round variants are weak. Nonetheless, partial breaks that come close to breaking the original cryptosystem may mean that a full break will follow; the successful attacks on DES, MD5, and SHA-1 were all preceded by attacks on weakened versions. In academic cryptography, a weakness or a break in a scheme is usually defined quite conservatively: it might require impractical amounts of time, memory, or known plaintexts.
It also might require the attacker be able to do things many real-world attackers can't: for example, the attacker may need to choose particular plaintexts to be encrypted or even to ask for plaintexts to be encrypted using several keys related to the secret key. Furthermore, it might only reveal a small amount of information, enough to prove the cryptosystem imperfect but too little to be useful to real-world attackers. Finally, an attack might only apply to a weakened version of cryptographic tools, like a reduced-round block cipher, as a step towards breaking the full system. History Cryptanalysis has coevolved together with cryptography, and the contest can be traced through the history of cryptography—new ciphers being designed to replace old broken designs, and new cryptanalytic techniques invented to crack the improved schemes. In practice, they are viewed as two sides of the same coin: secure cryptography requires design against possible cryptanalysis. Classical ciphers Although the actual word "cryptanalysis" is relatively recent (it was coined by William Friedman in 1920), methods for breaking codes and ciphers are much older. David Kahn notes in The Codebreakers that Arab scholars were the first people to systematically document cryptanalytic methods. The first known recorded explanation of cryptanalysis was given by Al-Kindi (c. 801–873, also known as "Alkindus" in Europe), a 9th-century Arab polymath, in Risalah fi Istikhraj al-Mu'amma (A Manuscript on Deciphering Cryptographic Messages). This treatise contains the first description of the method of frequency analysis. Al-Kindi is thus regarded as the first codebreaker in history. His breakthrough work was influenced by Al-Khalil (717–786), who wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels. Frequency analysis is the basic tool for breaking most classical ciphers. In natural languages, certain letters of the alphabet appear more often than others; in English, "E" is likely to be the most common letter in any sample of plaintext. Similarly, the digraph "TH" is the most likely pair of letters in English, and so on. Frequency analysis relies on a cipher failing to hide these statistics. For example, in a simple substitution cipher (where each letter is simply replaced with another), the most frequent letter in the ciphertext would be a likely candidate for "E". Frequency analysis of such a cipher is therefore relatively easy, provided that the ciphertext is long enough to give a reasonably representative count of the letters of the alphabet that it contains. Al-Kindi's invention of the frequency analysis technique for breaking monoalphabetic substitution ciphers was the most significant cryptanalytic advance until World War II. Al-Kindi's Risalah fi Istikhraj al-Mu'amma described the first cryptanalytic techniques, including some for polyalphabetic ciphers, cipher classification, Arabic phonetics and syntax, and most importantly, gave the first descriptions on frequency analysis. He also covered methods of encipherments, cryptanalysis of certain encipherments, and statistical analysis of letters and letter combinations in Arabic. An important contribution of Ibn Adlan (1187–1268) was on sample size for use of frequency analysis. In Europe, Italian scholar Giambattista della Porta (1535–1615) was the author of a seminal work on cryptanalysis, De Furtivis Literarum Notis. 
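The frequency-analysis procedure described above is simple enough to sketch in code. The following fragment is a minimal illustration (the sample ciphertext, a Caesar-style shift of a short English phrase, is invented for the example): it counts letter frequencies and pairs the most common ciphertext letters with the most common letters of English.

```python
# Minimal frequency-analysis sketch for a monoalphabetic substitution cipher.
from collections import Counter

# Letters of English ordered roughly from most to least frequent.
ENGLISH_ORDER = "ETAOINSHRDLCUMWFGYPBVKJXQZ"

def rank_letters(ciphertext):
    """Return ciphertext letters ordered from most to least frequent."""
    counts = Counter(c for c in ciphertext.upper() if c.isalpha())
    return [letter for letter, _ in counts.most_common()]

def first_guess(ciphertext):
    """Pair the most frequent ciphertext letters with the most frequent English letters."""
    return dict(zip(rank_letters(ciphertext), ENGLISH_ORDER))

sample = "XLMW MW E WIGVIX QIWWEKI"   # invented example ciphertext
print(first_guess(sample))
```

On a sample this short the statistics are unreliable – here the most frequent ciphertext letter actually stands for "S", not "E" – which is precisely the caveat above: frequency analysis only works when the ciphertext is long enough to give representative letter counts, after which the analyst refines the initial guesses using digraphs such as "TH" and trial decipherments.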
Successful cryptanalysis has undoubtedly influenced history; the ability to read the presumed-secret thoughts and plans of others can be a decisive advantage. For example, in England in 1587, Mary, Queen of Scots, was tried and executed for treason as a result of her involvement in three plots to assassinate Elizabeth I of England. The plans came to light after her coded correspondence with fellow conspirators was deciphered by Thomas Phelippes. In Europe during the 15th and 16th centuries, the idea of a polyalphabetic substitution cipher was developed, among others by the French diplomat Blaise de Vigenère (1523–96). For some three centuries, the Vigenère cipher, which uses a repeating key to select different encryption alphabets in rotation, was considered to be completely secure (le chiffre indéchiffrable—"the indecipherable cipher"). Nevertheless, Charles Babbage (1791–1871) and later, independently, Friedrich Kasiski (1805–81) succeeded in breaking this cipher. During World War I, inventors in several countries developed rotor cipher machines such as Arthur Scherbius' Enigma, in an attempt to minimise the repetition that had been exploited to break the Vigenère system. Ciphers from World War I and World War II In World War I, the breaking of the Zimmermann Telegram was instrumental in bringing the United States into the war. In World War II, the Allies benefitted enormously from their joint success in the cryptanalysis of the German ciphers – including the Enigma machine and the Lorenz cipher – and of Japanese ciphers, particularly 'Purple' and JN-25. 'Ultra' intelligence has been credited with everything from shortening the war in Europe by up to two years to determining its eventual result. The war in the Pacific was similarly helped by 'Magic' intelligence. Cryptanalysis of enemy messages played a significant part in the Allied victory in World War II. F. W. Winterbotham quoted the Western Supreme Allied Commander, Dwight D. Eisenhower, at the war's end as describing Ultra intelligence as having been "decisive" to Allied victory. Sir Harry Hinsley, official historian of British Intelligence in World War II, made a similar assessment about Ultra, saying that it shortened the war "by not less than two years and probably by four years"; moreover, he said that in the absence of Ultra, it is uncertain how the war would have ended. In practice, frequency analysis relies as much on linguistic knowledge as it does on statistics, but as ciphers became more complex, mathematics became more important in cryptanalysis. This change was particularly evident before and during World War II, when efforts to crack Axis ciphers required new levels of mathematical sophistication. Moreover, automation was first applied to cryptanalysis in that era with the Polish Bomba device, the British Bombe, the use of punched card equipment, and the Colossus computers – the first electronic digital computers to be controlled by a program. Indicator With reciprocal machine ciphers such as the Lorenz cipher and the Enigma machine used by Nazi Germany during World War II, each message had its own key. Usually, the transmitting operator informed the receiving operator of this message key by transmitting some plaintext and/or ciphertext before the enciphered message. This is termed the indicator, as it indicates to the receiving operator how to set his machine to decipher the message.
Poorly designed and implemented indicator systems allowed first Polish cryptographers and then the British cryptographers at Bletchley Park to break the Enigma cipher system. Similar poor indicator systems allowed the British to identify depths that led to the diagnosis of the Lorenz SZ40/42 cipher system, and the comprehensive breaking of its messages without the cryptanalysts seeing the cipher machine. Depth Sending two or more messages with the same key is an insecure process. To a cryptanalyst the messages are then said to be "in depth." This may be detected by the messages having the same indicator by which the sending operator informs the receiving operator about the key generator initial settings for the message. Generally, the cryptanalyst may benefit from lining up identical enciphering operations among a set of messages. For example, the Vernam cipher enciphers by bit-for-bit combining plaintext with a long key using the "exclusive or" operator, which is also known as "modulo-2 addition" (symbolized by ⊕ ): Plaintext ⊕ Key = Ciphertext Deciphering combines the same key bits with the ciphertext to reconstruct the plaintext: Ciphertext ⊕ Key = Plaintext (In modulo-2 arithmetic, addition is the same as subtraction.) When two such ciphertexts are aligned in depth, combining them eliminates the common key, leaving just a combination of the two plaintexts: Ciphertext1 ⊕ Ciphertext2 = Plaintext1 ⊕ Plaintext2 The individual plaintexts can then be worked out linguistically by trying probable words (or phrases), also known as "cribs," at various locations; a correct guess, when combined with the merged plaintext stream, produces intelligible text from the other plaintext component: (Plaintext1 ⊕ Plaintext2) ⊕ Plaintext1 = Plaintext2 The recovered fragment of the second plaintext can often be extended in one or both directions, and the extra characters can be combined with the merged plaintext stream to extend the first plaintext. Working back and forth between the two plaintexts, using the intelligibility criterion to check guesses, the analyst may recover much or all of the original plaintexts. (With only two plaintexts in depth, the analyst may not know which one corresponds to which ciphertext, but in practice this is not a large problem.) When a recovered plaintext is then combined with its ciphertext, the key is revealed: Plaintext1 ⊕ Ciphertext1 = Key Knowledge of a key then allows the analyst to read other messages encrypted with the same key, and knowledge of a set of related keys may allow cryptanalysts to diagnose the system used for constructing them. Development of modern cryptography Governments have long recognized the potential benefits of cryptanalysis for intelligence, both military and diplomatic, and established dedicated organizations devoted to breaking the codes and ciphers of other nations, for example, GCHQ and the NSA, organizations which are still very active today. Even though computation was used to great effect in the cryptanalysis of the Lorenz cipher and other systems during World War II, it also made possible new methods of cryptography orders of magnitude more complex than ever before. Taken as a whole, modern cryptography has become much more impervious to cryptanalysis than the pen-and-paper systems of the past, and now seems to have the upper hand against pure cryptanalysis. 
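Returning briefly to the depth technique described in the previous section, a short sketch shows how it works in practice. Everything in it – the two messages, the stand-in keystream and the crib – is invented purely for illustration, but it follows the equations given earlier: XORing two ciphertexts in depth cancels the key, and a probable word slid along the combined stream reveals text from the other message wherever the guess is correct.

```python
# Two Vernam-style ciphertexts "in depth" (encrypted with the same keystream).
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes(range(13, 13 + 32))        # stand-in keystream (illustrative only)
p1 = b"MEET ME AT DAWN BY THE BRIDGE"
p2 = b"SEND MORE AMMUNITION TO NORTH"

c1 = xor_bytes(p1, keystream)
c2 = xor_bytes(p2, keystream)

combined = xor_bytes(c1, c2)                 # Ciphertext1 XOR Ciphertext2
assert combined == xor_bytes(p1, p2)         # ... equals Plaintext1 XOR Plaintext2: the key has cancelled

crib = b"AMMUNITION"                         # probable word guessed by the analyst
for offset in range(len(combined) - len(crib) + 1):
    fragment = xor_bytes(combined[offset:offset + len(crib)], crib)
    if all(32 <= ch < 127 for ch in fragment):   # printable bytes suggest a plausible hit
        print(offset, fragment)
```

At the correct offset the printed fragment is readable text from the first message; XORing a fully recovered plaintext with its ciphertext would then yield the keystream itself, exactly as in the final equation above.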
The historian David Kahn notes: Kahn goes on to mention increased opportunities for interception, bugging, side channel attacks, and quantum computers as replacements for the traditional means of cryptanalysis. In 2010, former NSA technical director Brian Snow said that both academic and government cryptographers are "moving very slowly forward in a mature field." However, any postmortems for cryptanalysis may be premature. While the effectiveness of cryptanalytic methods employed by intelligence agencies remains unknown, many serious attacks against both academic and practical cryptographic primitives have been published in the modern era of computer cryptography: The block cipher Madryga, proposed in 1984 but not widely used, was found to be susceptible to ciphertext-only attacks in 1998. FEAL-4, proposed as a replacement for the DES standard encryption algorithm but not widely used, was demolished by a spate of attacks from the academic community, many of which are entirely practical. The A5/1, A5/2, CMEA, and DECT systems used in mobile and wireless phone technology can all be broken in hours, minutes or even in real-time using widely available computing equipment. Brute-force keyspace search has broken some real-world ciphers and applications, including single-DES (see EFF DES cracker), 40-bit "export-strength" cryptography, and the DVD Content Scrambling System. In 2001, Wired Equivalent Privacy (WEP), a protocol used to secure Wi-Fi wireless networks, was shown to be breakable in practice because of a weakness in the RC4 cipher and aspects of the WEP design that made related-key attacks practical. WEP was later replaced by Wi-Fi Protected Access. In 2008, researchers conducted a proof-of-concept break of SSL using weaknesses in the MD5 hash function and certificate issuer practices that made it possible to exploit collision attacks on hash functions. The certificate issuers involved changed their practices to prevent the attack from being repeated. Thus, while the best modern ciphers may be far more resistant to cryptanalysis than the Enigma, cryptanalysis and the broader field of information security remain quite active. Symmetric ciphers Boomerang attack Brute-force attack Davies' attack Differential cryptanalysis Impossible differential cryptanalysis Improbable differential cryptanalysis Integral cryptanalysis Linear cryptanalysis Meet-in-the-middle attack Mod-n cryptanalysis Related-key attack Sandwich attack Slide attack XSL attack Asymmetric ciphers Asymmetric cryptography (or public-key cryptography) is cryptography that relies on using two (mathematically related) keys; one private, and one public. Such ciphers invariably rely on "hard" mathematical problems as the basis of their security, so an obvious point of attack is to develop methods for solving the problem. The security of two-key cryptography depends on mathematical questions in a way that single-key cryptography generally does not, and conversely links cryptanalysis to wider mathematical research in a new way. Asymmetric schemes are designed around the (conjectured) difficulty of solving various mathematical problems. If an improved algorithm can be found to solve the problem, then the system is weakened. For example, the security of the Diffie–Hellman key exchange scheme depends on the difficulty of calculating the discrete logarithm. In 1983, Don Coppersmith found a faster way to find discrete logarithms (in certain groups), and thereby requiring cryptographers to use larger groups (or different types of groups). 
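A toy example makes the point about the discrete logarithm concrete. The parameters below are deliberately tiny and invented for the illustration; the whole reason real deployments use groups hundreds of digits long is so that the exhaustive-search step at the end becomes infeasible.

```python
# Toy Diffie-Hellman exchange over a deliberately small group (illustration only).
p, g = 2087, 5           # small prime modulus and base, chosen for the example

a, b = 731, 1459         # the two parties' secret exponents (made up)
A = pow(g, a, p)         # public values exchanged over the open channel
B = pow(g, b, p)

shared_1 = pow(B, a, p)  # each side combines the other's public value with its own secret
shared_2 = pow(A, b, p)
assert shared_1 == shared_2

# The eavesdropper's task is the discrete-logarithm problem: given g, p and A, find a.
# In this tiny group a brute-force search solves it instantly; a faster algorithm for
# large groups would weaken the whole scheme, which is the point made above.
x = next(x for x in range(1, p) if pow(g, x, p) == A)
assert pow(B, x, p) == shared_1          # the recovered exponent exposes the shared secret
print("recovered exponent equivalent to a:", x)
```

The same structure underlies the following paragraph's discussion of RSA, where the "hard problem" is factoring the public modulus rather than taking a discrete logarithm.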
RSA's security depends (in part) upon the difficulty of integer factorization – a breakthrough in factoring would impact the security of RSA. In 1980, one could factor a difficult 50-digit number at a cost of 10^12 elementary computer operations. By 1984 the state of the art in factoring algorithms had advanced to a point where a 75-digit number could be factored in 10^12 operations. Advances in computing technology also meant that the operations could be performed much faster. Moore's law predicts that computer speeds will continue to increase. Factoring techniques may continue to do so as well, but will most likely depend on mathematical insight and creativity, neither of which has ever been successfully predicted. 150-digit numbers of the kind once used in RSA have been factored. The effort was greater than above, but was not unreasonable on fast modern computers. By the start of the 21st century, 150-digit numbers were no longer considered a large enough key size for RSA. Numbers with several hundred digits were still considered too hard to factor in 2005, though methods will probably continue to improve over time, requiring key sizes to keep pace or other methods such as elliptic curve cryptography to be used. Another distinguishing feature of asymmetric schemes is that, unlike attacks on symmetric cryptosystems, any cryptanalysis has the opportunity to make use of knowledge gained from the public key. Attacking cryptographic hash systems Birthday attack Hash function security summary Rainbow table Side-channel attacks Black-bag cryptanalysis Man-in-the-middle attack Power analysis Replay attack Rubber-hose cryptanalysis Timing analysis Quantum computing applications for cryptanalysis Quantum computers, which are still in the early phases of research, have potential use in cryptanalysis. For example, Shor's algorithm could factor large numbers in polynomial time, in effect breaking some commonly used forms of public-key encryption. By using Grover's algorithm on a quantum computer, brute-force key search can be made quadratically faster. However, this could be countered by doubling the key length.
Lectures on Data Security 1998: 105–126 Abraham Sinkov, Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America, 1966. Christopher Swenson, Modern Cryptanalysis: Techniques for Advanced Code Breaking, Friedman, William F., Military Cryptanalysis, Part I, Friedman, William F., Military Cryptanalysis, Part II, Friedman, William F., Military Cryptanalysis, Part III, Simpler Varieties of Aperiodic Substitution Systems, Friedman, William F., Military Cryptanalysis, Part IV, Transposition and Fractionating Systems, Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part I, Volume 1, Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part I, Volume 2, Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part II, Volume 1, Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part II, Volume 2, Transcript of a lecture given by Prof. Tutte at the University of Waterloo Further reading External links Basic Cryptanalysis (files contain 5 line header, that has to be removed first) Distributed Computing Projects List of tools for cryptanalysis on modern cryptography Simon Singh's crypto corner The National Museum of Computing UltraAnvil tool for attacking simple substitution ciphers How Alan Turing Cracked The Enigma Code Imperial War Museums Cryptographic attacks Applied mathematics Arab inventions
https://en.wikipedia.org/wiki/Castrato
A castrato (Italian; plural: castrati) is a male singer who underwent castration before puberty in order to retain a singing voice equivalent to that of a soprano, mezzo-soprano, or contralto. The voice can also occur in one who, due to an endocrinological condition, never reaches sexual maturity. Castration before puberty (or in its early stages) prevents the larynx from being transformed by the normal physiological events of puberty. As a result, the vocal range of prepubescence (shared by both sexes) is largely retained, and the voice develops into adulthood in a unique way. Prepubescent castration for this purpose diminished greatly in the late 18th century. Methods of castration used to halt the onset of puberty varied. They typically involved using opium to medically induce a coma, then submerging the boy in an ice or milk bath, where the procedure was carried out by severing the vas deferens (similar to a vasectomy), twisting the testicles until they atrophied, or removing them completely by surgical cutting (complete removal, however, was not a commonly used technique). The procedure was usually done to boys around the age of 8–10; recovery from the procedure took around two weeks. The means by which future singers were prepared could lead to premature death. To prevent the child from experiencing the intense pain of castration, many were inadvertently administered lethal doses of opium or some other narcotic, or were killed by overlong compression of the carotid artery in the neck (intended to render them unconscious during the castration procedure). The geographical locations where these procedures took place are not specifically known. During the 18th century itself, the music historian Charles Burney was sent from pillar to post in search of places where the operation was carried out: I enquired throughout Italy at what place boys were chiefly qualified for singing by castration, but could get no certain intelligence. I was told at Milan that it was at Venice; at Venice that it was at Bologna; but at Bologna the fact was denied, and I was referred to Florence; from Florence to Rome, and from Rome I was sent to Naples ... it is said that there are shops in Naples with this inscription: 'QUI SI CASTRANO RAGAZZI' ("Here boys are castrated"); but I was utterly unable to see or hear of any such shops during my residence in that city. As the castrato's body grew, his lack of testosterone meant that his epiphyses (bone-joints) did not harden in the normal manner. Thus the limbs of the castrati often grew unusually long, as did their ribs. This, combined with intensive training, gave them unrivalled lung-power and breath capacity. Operating through small, child-sized vocal cords, their voices were also extraordinarily flexible, and quite different from the equivalent adult female voice. Their vocal range was higher than that of the uncastrated adult male. Listening to the only surviving recordings of a castrato (see below), one can hear that the lower part of the voice sounds like a "super-high" tenor, with a more falsetto-like upper register above that. Castrati were rarely referred to as such: in the 18th century, the euphemism musico (plural musici) was much more generally used, although it usually carried derogatory implications; another synonym was evirato, literally meaning "emasculated". Eunuch is a more general term since, historically, many eunuchs were castrated after puberty and thus the castration had no impact on their voices.
History Castration as a means of subjugation, enslavement or other punishment has a very long history, dating back to ancient Sumer. In a Western context, eunuch singers are known to have existed from the early Byzantine Empire. In Constantinople around 400 AD, the empress Aelia Eudoxia had a eunuch choir-master, Brison, who may have established the use of castrati in Byzantine choirs, though whether Brison himself was a singer and whether he had colleagues who were eunuch singers is not certain. By the 9th century, eunuch singers were well-known (not least in the choir of Hagia Sophia) and remained so until the sack of Constantinople by the Western forces of the Fourth Crusade in 1204. Their fate from then until their reappearance in Italy more than three hundred years later is not clear. It seems likely that the Spanish tradition of soprano falsettists may have hidden castrati. Much of Spain was under Muslim rulers during the Middle Ages, and castration had a history going back to the ancient Near East. Stereotypically, eunuchs served as harem guards, but they were also valued as high-level political appointees since they could not start a dynasty which would threaten the ruler. European classical tradition Castrati first appeared in Italy in the mid-16th century, though at first the terms describing them were not always clear. The phrase soprano maschio (male soprano), which could also mean falsettist, occurs in the Due Dialoghi della Musica (Two dialogues upon music) of Luigi Dentice, an Oratorian priest, published in Rome in 1553. On 9 November 1555 Cardinal Ippolito II d'Este (famed as the builder of the Villa d'Este at Tivoli), wrote to Guglielmo Gonzaga, Duke of Mantua (1538–1587), that he has heard that the Duke was interested in his cantoretti (little singers) and offered to send him two, so that he could choose one for his own service. This is a rare term but probably does equate to castrato. The Cardinal's nephew, Alfonso II d'Este, Duke of Ferrara, was another early enthusiast, inquiring about castrati in 1556. There were certainly castrati in the Sistine Chapel choir in 1558, although not described as such: on 27 April of that year, Hernando Bustamante, a Spaniard from Palencia, was admitted (the first castrati so termed who joined the Sistine choir were Pietro Paolo Folignato and Girolamo Rossini, admitted in 1599). Surprisingly, considering the later French distaste for castrati, they certainly existed in France at this time also, being known of in Paris, Orléans, Picardy and Normandy, though they were not abundant: the King of France himself had difficulty in obtaining them. By 1574, there were castrati in the Ducal court chapel at Munich, where the Kapellmeister (music director) was the famous Orlando di Lasso. In 1589, by the bull Cum pro nostro pastorali munere, Pope Sixtus V re-organised the choir of St Peter's, Rome specifically to include castrati. Thus the castrati came to supplant both boys (whose voices broke after only a few years) and falsettists (whose voices were weaker and less reliable) from the top line in such choirs. Women were banned by the Pauline dictum mulieres in ecclesiis taceant ("let women keep silent in the churches"; see I Corinthians, ch. 14, v. 34). The Italian castrati were often rumored to have unusually long lives, but a 1993 study found that their lifespans were average. Opera Although the castrato (or musico) predates opera, there is some evidence that castrati had parts in the earliest operas. 
In the first performance of Monteverdi's Orfeo (1607), for example, they played subsidiary roles, including Speranza and (possibly) that of Euridice. Although female roles were performed by castrati in some of the papal states, this was increasingly rare; by 1680, they had supplanted "normal" male voices in lead roles, and retained their position as primo uomo for about a hundred years; an Italian opera not featuring at least one renowned castrato in a lead part would be doomed to fail. Because of the popularity of Italian opera throughout 18th-century Europe (except France), singers such as Ferri, Farinelli, Senesino and Pacchierotti became the first operatic superstars, earning enormous fees and hysterical public adulation. The strictly hierarchical organisation of opera seria favoured their high voices as symbols of heroic virtue, though they were frequently mocked for their strange appearance and bad acting. In his 1755 Reflections upon theatrical expression in tragedy, Roger Pickering wrote: Farinelli drew every Body to the Haymarket. What a Pipe! What Modulation! What Extasy to the Ear! But, Heavens! What Clumsiness! What Stupidity! What Offence to the Eye! Reader, if of the City, thou mayest probably have seen in the Fields of Islington or Mile-End or, If thou art in the environs of St James', thou must have observed in the Park with what Ease and Agility a cow, heavy with calf, has rose up at the command of the Milk-woman's foot: thus from the mossy bank sprang the DIVINE FARINELLI.The training of the boys was rigorous. The regimen of one singing school in Rome (c. 1700) consisted of one hour of singing difficult and awkward pieces, one hour practising trills, one hour practising ornamented passaggi, one hour of singing exercises in their teacher's presence and in front of a mirror so as to avoid unnecessary movement of the body or facial grimaces, and one hour of literary study; all this, moreover, before lunch. After, half an hour would be devoted to musical theory, another to writing counterpoint, an hour copying down the same from dictation, and another hour of literary study. During the remainder of the day, the young castrati had to find time to practice their harpsichord playing, and to compose vocal music, either sacred or secular depending on their inclination. This demanding schedule meant that, if sufficiently talented, they were able to make a debut in their mid-teens with a perfect technique and a voice of a flexibility and power no woman or ordinary male singer could match. In the 1720s and 1730s, at the height of the craze for these voices, it has been estimated that upwards of 4,000 boys were castrated annually in the service of art. Many came from poor homes and were castrated by their parents in the hope that their child might be successful and lift them from poverty (this was the case with Senesino). There are, though, records of some young boys asking to be operated on to preserve their voices (e.g. Caffarelli, who was from a wealthy family: his grandmother gave him the income from two vineyards to pay for his studies). Caffarelli was also typical of many castrati in being famous for tantrums on and off-stage, and for amorous adventures with noble ladies. Some, as described by Casanova, preferred gentlemen (noble or otherwise). 
Only a small percentage of boys castrated to preserve their voices had successful careers on the operatic stage; the better "also-rans" sang in cathedral or church choirs, but because of their marked appearance and the ban on their marrying, there was little room for them in society outside a musical context. The castrati came in for a great amount of scurrilous and unkind abuse, and as their fame increased, so did the hatred of them. They were often castigated as malign creatures who lured men into homosexuality. There were homosexual castrati, as Casanova's accounts of 18th-century Italy bear witness. He mentions meeting an abbé whom he took for a girl in disguise, only later discovering that "she" was a famous castrato. In Rome in 1762 he attended a performance at which the prima donna was a castrato, "the favourite pathic" of Cardinal Borghese, who dined every evening with his protector. From his behaviour on stage "it was obvious that he hoped to inspire the love of those who liked him as a man, and probably would not have done so as a woman". Decline By the late 18th century, changes in operatic taste and social attitudes spelled the end for castrati. They lingered on past the end of the ancien régime (which their style of opera parallels), and two of their number, Pacchierotti and Crescentini, performed before Napoleon. The last great operatic castrato was Giovanni Battista Velluti (1781–1861), who performed the last operatic castrato role ever written: Armando in Il crociato in Egitto by Meyerbeer (Venice, 1824). Soon after this they were replaced definitively as the first men of the operatic stage by a new breed of heroic tenor, as first incarnated by the Frenchman Gilbert-Louis Duprez, the earliest so-called "king of the high Cs". His successors have included such singers as Enrico Tamberlik, Jean de Reszke, Francesco Tamagno, Enrico Caruso, Giovanni Martinelli, Beniamino Gigli, Jussi Björling, Franco Corelli and Luciano Pavarotti, among others. After the unification of Italy in 1861, "eviration" was officially made illegal (the new Italian state had adopted the previous penal code of the Kingdom of Sardinia which expressly forbade the practice). In 1878, Pope Leo XIII prohibited the hiring of new castrati by the church: only in the Sistine Chapel and in other papal basilicas in Rome did a few castrati linger. A group photo of the Sistine Choir taken in 1898 shows that by then only six remained (plus the Direttore Perpetuo, the fine soprano castrato Domenico Mustafà), and in 1902 a ruling was extracted from Pope Leo that no further castrati should be admitted. The official end to the castrati came on St. Cecilia's Day, 22 November 1903, when the new pope, Pius X, issued his motu proprio, Tra le Sollecitudini ('Amongst the Cares'), which contained this instruction: "Whenever ... it is desirable to employ the high voices of sopranos and contraltos, these parts must be taken by boys, according to the most ancient usage of the Church." The last Sistine castrato to survive was Alessandro Moreschi, the only castrato to have made solo recordings. While an interesting historical record, these discs of his give us only a glimpse of the castrato voice – although he had been renowned as "The Angel of Rome" at the beginning of his career, some would say he was past his prime when the recordings were made in 1902 and 1904 and he never attempted to sing opera. 
Domenico Salvatori, a castrato who was contemporary with Moreschi, made some ensemble recordings with him but has no surviving solo recordings. The recording technology of the day was not of modern high quality. Salvatori died in 1909; Moreschi retired officially in March 1913, and died in 1922. The Catholic Church's involvement in the castrato phenomenon has long been controversial, and there have recently been calls for it to issue an official apology for its role. As early as 1748, Pope Benedict XIV tried to ban castrati from churches, but such was their popularity at the time that he realised that doing so might result in a drastic decline in church attendance. The rumours of another castrato sequestered in the Vatican for the personal delectation of the Pontiff until as recently as 1959 have been proven false. The singer in question was a pupil of Moreschi's, Domenico Mancini, such a successful imitator of his teacher's voice that even Lorenzo Perosi, Direttore Perpetuo of the Sistine Choir from 1898 to 1956 and a strenuous opponent of the practice of castrato singers, thought he was a castrato. Mancini was in fact a moderately skilful falsettist and professional double bass player. Modern castrati and similar voices A male can retain his child voice if it never changes during puberty. The retained voice can be the treble voice shared by both sexes in childhood and is the same as a boy soprano voice. But as evidence shows, many castratos, such as Senesino and Caffarelli, were actually altos (mezzo-soprano) – not sopranos. So-called "natural" or "endocrinological castrati" are born with hormonal anomalies, such as Klinefelter's syndrome and Kallmann's syndrome, or have undergone unusual physical or medical events during their early lives that reproduce the vocal effects of castration without being castrated. Jimmy Scott, Radu Marian and Javier Medina are examples of this type of high male voice via endocrinological diseases. Michael Maniaci is somewhat different, in that he has no hormonal or other anomalies, but claims that his voice did not "break" in the usual manner, leaving him still able to sing in the soprano register. Other uncastrated male adults sing soprano, generally using some form of falsetto but in a much higher range than most countertenors. Examples are Aris Christofellis, Jörg Waschinski, and Ghio Nannini. However, it is believed the castrati possessed more of a tenorial chest register (the aria "Navigante che non spera" in Leonardo Vinci's opera Il Medo, written for Farinelli, requires notes down to C3, 131 Hz). Similar low-voiced singing can be heard from the jazz vocalist Jimmy Scott, whose range matches approximately that used by female blues singers. High-pitched singer Jordan Smith has demonstrated having more of a tenorial chest register. Actor Chris Colfer has stated in interviews that when his voice began to change at puberty, he sang in a high voice "constantly" in an effort to retain his range. Actor and singer Alex Newell has soprano range. Voice actor Walter Tetley may or may not have been a castrato; Bill Scott, a co-worker of Tetley's during their later work in television, once half-jokingly quipped that Tetley's mother "had him fixed" to protect the child star's voice-acting career. Tetley did never personally divulge the exact reason for his condition, which left him with the voice of a preteen boy for his entire adult life. 
Botanist George Washington Carver was noted for his high voice, believed to be the result of pertussis and croup infections in his childhood that stunted his growth. Notable castrati Loreto Vittori (1604–1670) Baldassare Ferri (1610–1680) Atto Melani (1626–1714) Giovanni Grossi ("Siface") (1653–1697) Pier Francesco Tosi (1654–1732) Francesco Ceccarelli (1752–1814) Nicolo Grimaldi ("Nicolini") (1673–1732) Gaetano Berenstadt (1687–1734) Carlo Mannelli (1640–1697) Antonio Bernacchi (1685–1756) Francesco Bernardi ("Senesino") (1686–1758) Valentino Urbani ("Valentini") (1690–1722) Francesco Paolo Masullo (1679–1733) Giacinto Fontana ("Farfallino") (1692–1739) Giuseppe Aprile (1731–1813) Giovanni Carestini ("Cusanino") (c. 1704–c. 1760) Carlo Broschi ("Farinelli") (1705–1782) Domenico Annibali ("Domenichino") (1705–1779) Gaetano Majorano ("Caffarelli") (1710–1783) Francesco Soto de Langa (1534–1619) Felice Salimbeni (1712–1752) Gioacchino Conti ("Gizziello") (1714–1761) Giovanni Battista Mancini (1714–1800) Giovanni Manzuoli (1720–1782) Gaetano Guadagni (1725–1792) Giusto Fernando Tenducci (c. 1736–1790) Giuseppe Millico ("Il Muscovita") (1737–1802) Angelo Maria Monticelli (1710–1764) Gasparo Pacchierotti (1740–1821) Venanzio Rauzzini (1746–1810) Luigi Marchesi ("Marchesini") (1754–1829) Vincenzo dal Prato (1756–1828) Girolamo Crescentini (1762–1848) Francesco Antonio Pistocchi (1659–1726) Giovanni Battista "Giambattista" Velluti (1781–1861) Domenico Mustafà (1829–1912) Giovanni Cesari (1843–1904) Domenico Salvatori (1855–1909) Alessandro Moreschi (1858–1922) See also Cry to Heaven The Alteration Farinelli (film) Sarrasine Eunuch Comprachicos References Bibliography External links All you would like to know about Castrati Castrados por amor al arte Recordings: Antonio Maria Bononcini's Vorrei pupille belle, sung by Radu Marian 1904 Recording of Alessandro Moreschi singing Bach/Gounod Ave Maria Javier Medina Avila, including an audio sample (Riccardo Broschi: Ombra fedele anch'io) Voice types Opera history Italian opera terminology Obsolete occupations Androgyny
https://en.wikipedia.org/wiki/Confucius
Kong Fuzi, more usually Kongzi, commonly Latinized as Confucius, was a Chinese philosopher of the Spring and Autumn period who is traditionally considered the paragon of Chinese sages. Confucius's teachings and philosophy underpin East Asian culture and society, and remain influential across China and East Asia to this day. His philosophical teachings, called Confucianism, emphasized personal and governmental morality, correctness of social relationships, justice, kindness, and sincerity, as well as an emphasis on a ruler's duty to their subjects. Confucius considered himself a transmitter for the values of earlier periods which he claimed had been abandoned in his time. The time immediately following Confucius's life saw a rich diversity of thought, and was a formative period in China's intellectual history. His ideas gained in prominence during the Warring States period, but experienced a setback immediately following the Qin conquest. Under Emperor Wu of Han, Confucius's ideas received official sanction, with affiliated works becoming required reading for one of the career paths to officialdom. During the Tang and Song dynasties, Confucianism developed into a system known in the West as Neo-Confucianism, and later as New Confucianism. Confucianism became part of the Chinese social fabric and way of life. Confucius is traditionally credited with having authored or edited many of the Chinese classic texts, including all of the Five Classics, but modern scholars are cautious of attributing specific assertions to Confucius himself. At least some of the texts and philosophy he taught were already ancient. Aphorisms concerning his teachings were compiled in the Analects, but only many years after his death. Confucius's principles have commonality with Chinese tradition and belief. With filial piety, he championed strong family loyalty, ancestor veneration, and respect of elders by their children and of husbands by their wives, recommending family as a basis for ideal government. He espoused the Silver Rule, "Do not do unto others what you do not want done to yourself." Name The name "Confucius" is a Latinized form of the Mandarin Chinese Kong Fuzi ("Master Kong"), and was coined in the late 16th century by the early Jesuit missionaries to China. Confucius's clan name was Kong and his given name was Qiu. His "courtesy name", a capping name (guan) given at his coming of age ceremony, and by which he would have been known to all but his older family members, was Zhongni, the "Zhòng" indicating that he was the second son in his family. Life Early life It is thought that Confucius was born on September 28, 551 BCE, in Zou (in modern Shandong province). The area was notionally controlled by the kings of Zhou but effectively independent under the local lords of Lu, who ruled from the nearby city of Qufu. His father Kong He (or Shuliang He) was an elderly commandant of the local Lu garrison. His ancestry traced back through the dukes of Song to the Shang dynasty which had preceded the Zhou. Traditional accounts of Confucius's life relate that Kong He's grandfather had moved the family from Song to Lu. Not all modern scholars accept Confucius's descent from Song nobility. Kong He died when Confucius was three years old, and Confucius was raised by his mother Yan Zhengzai in poverty. His mother later died at less than 40 years of age. At age 19, he married Lady Qiguan, and a year later the couple had their first child, their son Kong Li.
Qiguan and Confucius later had two daughters together, one of whom is thought to have died as a child; the other was named Kong Jiao. Confucius was educated at schools for commoners, where he studied and learned the Six Arts. Confucius was born into the class of shi, between the aristocracy and the common people. He is said to have worked in various government jobs during his early 20s, and as a bookkeeper and a caretaker of sheep and horses, using the proceeds to give his mother a proper burial. When his mother died, Confucius (aged 23) is said to have mourned for three years, as was the tradition. Political career In Confucius's time, the state of Lu was headed by a ruling ducal house. Under the duke were three aristocratic families, whose heads bore the title of viscount and held hereditary positions in the Lu bureaucracy. The Ji family held the position "Minister over the Masses", which was also the "Prime Minister"; the Meng family held the position "Minister of Works"; and the Shu family held the position "Minister of War". In the winter of , Yang Hu—a retainer of the Ji family—rose up in rebellion and seized power from the Ji family. However, by the summer of , the three hereditary families had succeeded in expelling Yang Hu from Lu. By then, Confucius had built up a considerable reputation through his teachings, while the families came to see the value of proper conduct and righteousness, so they could achieve loyalty to a legitimate government. Thus, that year, Confucius came to be appointed to the minor position of governor of a town. Eventually, he rose to the position of Minister of Crime. Confucius desired to return the authority of the state to the duke by dismantling the fortifications of the city—strongholds belonging to the three families. This way, he could establish a centralized government. However, Confucius relied solely on diplomacy as he had no military authority himself. In , Hou Fan—the governor of Hou—revolted against his lord of the Shu family. Although the Meng and Shu families unsuccessfully besieged Hou, a loyalist official rose up with the people of Hou and forced Hou Fan to flee to the Qi state. The situation may have been in Confucius's favor, as this likely made it possible for Confucius and his disciples to convince the aristocratic families to dismantle the fortifications of their cities. Eventually, after a year and a half, Confucius and his disciples succeeded in convincing the Shu family to raze the walls of Hou, the Ji family to raze the walls of Bi, and the Meng family to raze the walls of Cheng. First, the Shu family led an army towards their city Hou and tore down its walls in . Soon thereafter, Gongshan Furao (also known as Gongshan Buniu), a retainer of the Ji family, revolted and took control of the forces at Bi. He immediately launched an attack and entered the capital Lu. Earlier, Gongshan had approached Confucius to join him, which Confucius considered, as he wanted the opportunity to put his principles into practice, but he gave up on the idea in the end. Confucius disapproved of the use of violent revolution on principle, even though the Ji family had dominated the Lu state by force for generations and had exiled the previous duke. Creel (1949) states that, unlike the rebel Yang Hu before him, Gongshan may have sought to destroy the three hereditary families and restore the power of the duke.
However, Dubs (1946) is of the view that Gongshan was encouraged by Viscount Ji Huan to invade the Lu capital in an attempt to avoid dismantling the Bi fortified walls. Whatever the situation may have been, Gongshan was considered an upright man who continued to defend the state of Lu, even after he was forced to flee. During the revolt by Gongshan, Zhong You had managed to keep the duke and the three viscounts together at the court. Zhong You was one of the disciples of Confucius, and Confucius had arranged for him to be given the position of governor by the Ji family. When Confucius heard of the raid, he requested that Viscount Ji Huan allow the duke and his court to retreat to a stronghold on his palace grounds. Thereafter, the heads of the three families and the duke retreated to the Ji's palace complex and ascended the Wuzi Terrace. Confucius ordered two officers to lead an assault against the rebels. At least one of the two officers was a retainer of the Ji family, but they were unable to refuse the orders while in the presence of the duke, viscounts, and court. The rebels were pursued and defeated at Gu. Immediately after the revolt was defeated, the Ji family razed the Bi city walls to the ground. The attackers had retreated after realizing that they would have to become rebels against the state and their lord. Through Confucius's actions, the Bi officials had inadvertently revolted against their own lord, thus forcing Viscount Ji Huan's hand in having to dismantle the walls of Bi (as it could have harbored such rebels) or confess to instigating the event by going against proper conduct and righteousness as an official. Dubs (1949) suggests that the incident brought to light Confucius's foresight, practical political ability, and insight into human character. When it was time to dismantle the city walls of the Meng family, the governor was reluctant to have his city walls torn down and convinced the head of the Meng family not to do so. The Zuozhuan recalls that the governor advised against razing the walls to the ground, as he said that it would make Cheng vulnerable to the Qi state and cause the destruction of the Meng family. Even though Viscount Meng Yi gave his word not to interfere with an attempt, he went back on his earlier promise to dismantle the walls. Later in , Duke Ding personally went with an army to lay siege to Cheng in an attempt to raze its walls to the ground, but he did not succeed. Thus, Confucius could not achieve the idealistic reforms that he wanted, including restoration of the legitimate rule of the duke. He had made powerful enemies within the state, especially with Viscount Ji Huan, due to his successes so far. According to accounts in the Zuozhuan and Shiji, Confucius departed his homeland in after his support for the failed attempt to dismantle the fortified city walls of the powerful Ji, Meng, and Shu families. He left the state of Lu without resigning, remaining in self-exile and unable to return as long as Viscount Ji Huan was alive. Exile The Shiji stated that the neighboring Qi state was worried that Lu was becoming too powerful while Confucius was involved in the government of the Lu state. According to this account, Qi decided to sabotage Lu's reforms by sending 100 good horses and 80 beautiful dancing girls to the duke of Lu. The duke indulged himself in pleasure and did not attend to official duties for three days.
Confucius was disappointed and resolved to leave Lu and seek better opportunities, yet to leave at once would expose the misbehavior of the duke and therefore bring public humiliation to the ruler Confucius was serving. Confucius therefore waited for the duke to make a lesser mistake. Soon after, the duke neglected to send to Confucius a portion of the sacrificial meat that was his due according to custom, and Confucius seized upon this pretext to leave both his post and the Lu state. After Confucius's resignation, he travelled around the principality states of north-east and central China including Wey, Song, Zheng, Cao, Chu, Qi, Chen, and Cai (and a failed attempt to go to Jin). At the courts of these states, he expounded his political beliefs but did not see them implemented. Return home According to the Zuozhuan, Confucius returned home to his native Lu when he was 68, after he was invited to do so by Ji Kangzi, the chief minister of Lu. The Analects depict him spending his last years teaching 72 or 77 disciples and transmitting the old wisdom via a set of texts called the Five Classics. During his return, Confucius sometimes acted as an advisor to several government officials in Lu, including Ji Kangzi, on matters including governance and crime. Burdened by the loss of both his son and his favorite disciples, he died at the age of 71 or 72 from natural causes. Confucius was buried in Kong Lin cemetery which lies in the historical part of Qufu in the Shandong Province. The original tomb erected there in memory of Confucius on the bank of the Sishui River had the shape of an axe. In addition, it has a raised brick platform at the front of the memorial for offerings such as sandalwood incense and fruit. Philosophy In the Analects, Confucius presents himself as a "transmitter who invented nothing". He puts the greatest emphasis on the importance of study, and it is the Chinese character for study () that opens the text. Far from trying to build a systematic or formalist theory, he wanted his disciples to master and internalize older classics, so that their deep thought and thorough study would allow them to relate the moral problems of the present to past political events (as recorded in the Annals) or the past expressions of commoners' feelings and noblemen's reflections (as in the poems of the Book of Odes). Although some Chinese people follow Confucianism in a religious manner, many argue that its values are secular and that it is less a religion than a secular morality. Proponents of religious Confucianism argue that despite the secular nature of Confucianism's teachings, it is based on a worldview that is religious. Confucianism discusses elements of the afterlife and views concerning Heaven, but it is relatively unconcerned with some spiritual matters often considered essential to religious thought, such as the nature of souls. Ethics One of the deepest teachings of Confucius may have been the superiority of personal exemplification over explicit rules of behavior. His moral teachings emphasized self-cultivation, emulation of moral exemplars, and the attainment of skilled judgment rather than knowledge of rules. Confucian ethics may, therefore, be considered a type of virtue ethics. His teachings rarely rely on reasoned argument, and ethical ideals and methods are conveyed indirectly, through allusion, innuendo, and even tautology. His teachings require examination and context to be understood. 
A good example is found in this famous anecdote: when the stables burned down, Confucius, on returning from court, asked, "Was anyone hurt?" He did not ask about the horses. By not asking about the horses, Confucius demonstrates that the sage values human beings over property (which animals seem to represent in this example); readers are led to reflect on whether their own response would follow Confucius's and to pursue self-improvement if it would not. One of his teachings was a variant of the Golden Rule, sometimes called the "Silver Rule" owing to its negative form: "Do not do unto others what you do not want done to yourself." Often overlooked in Confucian ethics are the virtues to the self: sincerity and the cultivation of knowledge. Virtuous action towards others begins with virtuous and sincere thought, which begins with knowledge. A virtuous disposition without knowledge is susceptible to corruption, and virtuous action without sincerity is not true righteousness. Cultivating knowledge and sincerity is also important for one's own sake; the superior person loves learning for the sake of learning and righteousness for the sake of righteousness. The Confucian theory of ethics as exemplified in lǐ is based on three important conceptual aspects of life: (a) ceremonies associated with sacrifice to ancestors and deities of various types, (b) social and political institutions, and (c) the etiquette of daily behavior. Some believed that lǐ originated from the heavens, but Confucius stressed the development of lǐ through the actions of sage leaders in human history. His discussions of lǐ seem to redefine the term to refer to all actions committed by a person to build the ideal society, rather than those conforming with canonical standards of ceremony. In the early Confucian tradition, lǐ was doing the proper thing at the proper time; balancing between maintaining existing norms to perpetuate an ethical social fabric, and violating them in order to accomplish ethical good. Training in the lǐ of past sages cultivates virtues in people that include ethical judgment about when lǐ must be adapted in light of situational contexts. In Confucianism, the concept of lǐ is closely related to yì, which is based upon the idea of reciprocity. Yì can be translated as righteousness, though it may mean what is ethically best to do in a certain context. The term contrasts with action done out of self-interest. While pursuing one's own self-interest is not necessarily bad, one would be a better, more righteous person if one's life was based upon following a path designed to enhance the greater good. Thus an outcome of yì is doing the right thing for the right reason. Just as action according to lǐ should be adapted to conform to the aspiration of adhering to yì, so yì is linked to the core value of rén. Rén consists of five basic virtues: seriousness, generosity, sincerity, diligence, and kindness. Rén is the virtue of perfectly fulfilling one's responsibilities toward others, most often translated as "benevolence", "humaneness", or "empathy"; translator Arthur Waley calls it "Goodness" (with a capital G), and other translations that have been put forth include "authoritativeness" and "selflessness". Confucius's moral system was based upon empathy and understanding others, rather than divinely ordained rules. To develop one's spontaneous responses of rén so that these could guide action intuitively was even better than living by the rules of yì. Confucius asserts that virtue is a mean between extremes. For example, the properly generous person gives the right amount – not too much and not too little. Politics Confucius's political thought is based upon his ethical thought.
He argued that the best government is one that rules through "rites" (lǐ) and people's natural morality, and not by using bribery and coercion. He explained that this is one of the most important analects: "If the people be led by laws, and uniformity sought to be given them by punishments, they will try to avoid the punishment, but have no sense of shame. If they be led by virtue, and uniformity sought to be given them by the rules of propriety, they will have the sense of the shame, and moreover will become good." (Analects 2.3, tr. Legge). This "sense of shame" is an internalisation of duty, where the punishment precedes the evil action, instead of following it in the form of laws as in Legalism. Confucius looked nostalgically upon earlier days, and urged the Chinese, particularly those with political power, to model themselves on earlier examples. In times of division, chaos, and endless wars between feudal states, he wanted to restore the Mandate of Heaven that could unify the "world" ("all under Heaven") and bestow peace and prosperity on the people. Because his vision of personal and social perfections was framed as a revival of the ordered society of earlier times, Confucius is often considered a great proponent of conservatism, but a closer look at what he proposes often shows that he used (and perhaps twisted) past institutions and rites to push a new political agenda of his own: a revival of a unified royal state, whose rulers would succeed to power on the basis of their moral merits instead of lineage. These would be rulers devoted to their people, striving for personal and social perfection, and such a ruler would spread his own virtues to the people instead of imposing proper behavior with laws and rules. While Confucius supported the idea of government ruling by a virtuous king, his ideas contained a number of elements to limit the power of rulers. He argued for representing truth in language, and honesty was of paramount importance. Even in facial expression, truth must always be represented. Confucius believed that if a ruler leads correctly, by his own actions, orders are unnecessary, because others will follow the proper actions of their ruler. In discussing the relationship between a king and his subject (or a father and his son), he underlined the need to give due respect to superiors. This demanded that subordinates must advise their superiors if the superiors are considered to be taking a course of action that is wrong. Confucius believed in ruling by example: if you lead correctly, orders by force or punishment are not necessary. Music and poetry Confucius heavily promoted the use of music with rituals, or the order of rites. The scholar Li Zehou argued that Confucianism is based on the idea of rites. Rites serve as the starting point for each individual, and these sacred social functions allow each person's human nature to be harmonious with reality. Given this, Confucius believed that "music is the harmonization of heaven and earth; the rites is the order of heaven and earth". Thus the application of music in rites creates the order that makes it possible for society to prosper. The Confucian approach to music was heavily inspired by the Shijing and the Classic of Music, which was said to be the sixth Confucian classic until it was lost during the Han Dynasty. The Shijing serves as one of the current Confucian classics and is a book of poetry that contains a diverse variety of poems as well as folk songs.
Confucius is traditionally credited with compiling these classics within his school. In the Analects, Confucius described the importance of the art in the development of society: Legacy Confucius's teachings were later turned into an elaborate set of rules and practices by his numerous disciples and followers, who organized his teachings into the Analects. Confucius's disciples and his only grandson, Zisi, continued his philosophical school after his death. These efforts spread Confucian ideals to students who then became officials in many of the royal courts in China, thereby giving Confucianism the first wide-scale test of its dogma. Two of Confucius's most famous later followers emphasized radically different aspects of his teachings. In the centuries after his death, Mencius and Xunzi both composed important teachings elaborating in different ways on the fundamental ideas associated with Confucius. Mencius articulated the innate goodness in human beings as a source of the ethical intuitions that guide people towards rén, yì, and lǐ, while Xunzi underscored the realistic and materialistic aspects of Confucian thought, stressing that morality was inculcated in society through tradition and in individuals through training. In time, their writings, together with the Analects and other core texts, came to constitute the philosophical corpus of Confucianism. This realignment in Confucian thought was parallel to the development of Legalism, which saw filial piety as self-interest and not a useful tool for a ruler to create an effective state. A disagreement between these two political philosophies came to a head in when the Qin state conquered all of China. Li Si, Prime Minister of the Qin dynasty, convinced Qin Shi Huang to abandon the Confucians' recommendation of awarding fiefs akin to those of the Zhou Dynasty before them, which he saw as being contrary to the Legalist idea of centralizing the state around the ruler. When the Confucian advisers pressed their point, Li Si had many Confucian scholars killed and their books burned—considered a huge blow to the philosophy and Chinese scholarship. Under the succeeding Han and Tang dynasties, Confucian ideas gained even more widespread prominence. Under Wudi, the works attributed to Confucius were made the official imperial philosophy and required reading for civil service examinations, a requirement that continued nearly unbroken until the end of the 19th century. As Mohism lost support by the time of the Han, the main philosophical contenders were Legalism, which Confucian thought somewhat absorbed, the teachings of Laozi, whose focus on more spiritual ideas kept it from direct conflict with Confucianism, and the new Buddhist religion, which gained acceptance during the Southern and Northern Dynasties era. Both Confucian ideas and Confucian-trained officials were relied upon in the Ming Dynasty and even the Yuan Dynasty, although Kublai Khan distrusted handing over provincial control to them. During the Song dynasty, the scholar Zhu Xi added ideas from Daoism and Buddhism into Confucianism. In his life, Zhu Xi was largely ignored, but not long after his death, his ideas became the new orthodox view of what Confucian texts actually meant. Modern historians view Zhu Xi as having created something rather different and call his way of thinking Neo-Confucianism. Neo-Confucianism held sway in China, Japan, Korea, and Vietnam until the 19th century.
The works of Confucius were first translated into European languages by Jesuit missionaries in the 16th century during the late Ming dynasty. The first known effort was by Michele Ruggieri, who returned to Italy in 1588 and carried on his translations while residing in Salerno. Matteo Ricci started to report on the thoughts of Confucius, and a team of Jesuits—Prospero Intorcetta, Philippe Couplet, and two others—published a translation of several Confucian works and an overview of Chinese history in Paris in 1687. François Noël, after failing to persuade Clement XI that Chinese veneration of ancestors and Confucius did not constitute idolatry, completed the Confucian canon at Prague in 1711, with more scholarly treatments of the other works and the first translation of the collected works of Mencius. It is thought that such works had considerable influence on European thinkers of the period, particularly among the Deists and other philosophical groups of the Enlightenment who were interested in the integration of the system of morality of Confucius into Western civilization. In the modern era, Confucian movements, such as New Confucianism, still exist, but during the Cultural Revolution, Confucianism was frequently attacked by leading figures in the Chinese Communist Party. This was partially a continuation of the condemnations of Confucianism by intellectuals and activists in the early 20th century, who saw it as a cause of the ethnocentric close-mindedness and the refusal of the Qing Dynasty to modernize that led to the tragedies that befell China in the 19th century. Confucius's works are studied by scholars in many other Asian countries, particularly those in the Chinese cultural sphere, such as Korea, Japan, and Vietnam. Many of those countries still hold the traditional memorial ceremony every year. Among Tibetans, Confucius is often worshipped as a holy king and master of magic, divination and astrology. Tibetan Buddhists see him as learning divination from the Buddha Manjushri (and that knowledge subsequently reaching Tibet through Princess Wencheng), while Bon practitioners see him as being a reincarnation of Tonpa Shenrab Miwoche, the legendary founder of Bon. The Ahmadiyya Muslim Community believes Confucius was a Divine Prophet of God, as were Lao-Tzu and other eminent Chinese personages. According to the Siddhar tradition of Tamil Nadu, Confucius is one of the 18 esteemed Siddhars of yore, and is better known as Kalangi Nathar or Kamalamuni. The Thyagaraja Temple in Thiruvarur, Tamil Nadu, is home to his Jeeva Samadhi. In modern times, Asteroid 7853, "Confucius", was named after the Chinese thinker. Disciples Confucius began teaching after he turned 30, and taught more than 3,000 students in his life, about 70 of whom were considered outstanding. His disciples and the early Confucian community they formed became the most influential intellectual force in the Warring States period. The Han dynasty historian Sima Qian dedicated a chapter in his Records of the Grand Historian to the biographies of Confucius's disciples, accounting for the influence they exerted in their time and afterward. Sima Qian recorded the names of 77 disciples in his collective biography, while Kongzi Jiayu, another early source, records 76, not completely overlapping. The two sources together yield the names of 96 disciples. Twenty-two of them are mentioned in the Analects, while the Mencius records 24.
Confucius did not charge any tuition, and only requested a symbolic gift of a bundle of dried meat from any prospective student. According to his disciple Zigong, his master treated students like doctors treated patients and did not turn anybody away. Most of them came from Lu, Confucius's home state, with 43 recorded, but he accepted students from all over China, with six from the state of Wey (such as Zigong), three from Qin, two each from Chen and Qi, and one each from Cai, Chu, and Song. Confucius considered his students' personal background irrelevant, and accepted noblemen, commoners, and even former criminals such as Yan Zhuoju and Gongye Chang. His disciples from richer families would pay a sum commensurate with their wealth which was considered a ritual donation. Confucius's favorite disciple was Yan Hui, most probably one of the most impoverished of them all. Sima Niu, in contrast to Yan Hui, was from a hereditary noble family hailing from the Song state. Under Confucius's teachings, the disciples became well learned in the principles and methods of government. He often engaged in discussion and debate with his students and gave high importance to their studies in history, poetry, and ritual. Confucius advocated loyalty to principle rather than to individual acumen, in which reform was to be achieved by persuasion rather than violence. Even though Confucius denounced them for their practices, the aristocracy was likely attracted to the idea of having trustworthy officials who were studied in morals as the circumstances of the time made it desirable. In fact, the disciple Zilu even died defending his ruler in Wey. Yang Hu, who was a subordinate of the Ji family, had dominated the Lu government from 505 to 502 and even attempted a coup, which narrowly failed. As a likely consequence, it was after this that the first disciples of Confucius were appointed to government positions. A few of Confucius's disciples went on to attain official positions of some importance, some of which were arranged by Confucius. By the time Confucius was 50 years old, the Ji family had consolidated their power in the Lu state over the ruling ducal house. Even though the Ji family had practices with which Confucius disagreed and disapproved, they nonetheless gave Confucius's disciples many opportunities for employment. Confucius continued to remind his disciples to stay true to their principles and renounced those who did not, all the while being openly critical of the Ji family. In the West The influence of Confucius has been observed on multiple Western thinkers, including Niels Bohr, Benjamin Franklin, Allen Ginsberg, Thomas Jefferson, Gottfried Wilhelm Leibniz, Robert Cummings Neville, Alexander Pope, Ezra Pound, François Quesnay, Friedrich Schiller, Voltaire, and Christian Wolff. Visual portraits No contemporary painting or sculpture of Confucius survives, and it was only during the Han Dynasty that he was portrayed visually. Carvings often depict his legendary meeting with Laozi. Since that time there have been many portraits of Confucius as the ideal philosopher. An early verbal portrayal of Confucius is found in the chapter "External Things" () of the book Zhuangzi (), finished in about 3rd BCE, long after Confucius's death. The oldest known portrait of Confucius has been unearthed in the tomb of the Han dynasty ruler Marquis of Haihun (died ). The picture was painted on the wooden frame to a polished bronze mirror. 
In former times, it was customary to have a portrait in Confucius Temples; however, during the reign of Hongwu Emperor (Taizu) of the Ming dynasty, it was decided that the only proper portrait of Confucius should be in the temple in his home town, Qufu in Shandong. In other temples, Confucius is represented by a memorial tablet. In 2006, the China Confucius Foundation commissioned a standard portrait of Confucius based on the Tang dynasty portrait by Wu Daozi. The South Wall Frieze in the courtroom of the Supreme Court of the United States depicts Confucius as a teacher of harmony, learning, and virtue. Fictional portrayals There have been two film adaptations of Confucius' life: the 1940 film Confucius starring Tang Huaiqiu, and the 2010 film Confucius starring Chow Yun-fat. Memorials Soon after Confucius's death, Qufu, his home town, became a place of devotion and remembrance. The Han dynasty Records of the Grand Historian records that it had already become a place of pilgrimage for ministers. It is still a major destination for cultural tourism, and many people visit his grave and the surrounding temples. In Sinic cultures, there are many temples where representations of the Buddha, Laozi, and Confucius are found together. There are also many temples dedicated to him, which have been used for Confucian ceremonies. Followers of Confucianism have a tradition of holding spectacular memorial ceremonies of Confucius () every year, using ceremonies that supposedly derived from Zhou Li () as recorded by Confucius, on the date of Confucius's birth. In the 20th century, this tradition was interrupted for several decades in mainland China, where the official stance of the Communist Party and the State was that Confucius and Confucianism represented reactionary feudalist beliefs which held that the subservience of the people to the aristocracy is a part of the natural order. All such ceremonies and rites were therefore banned. Only after the 1990s did the ceremony resume. As it is now considered a veneration of Chinese history and tradition, even Communist Party members may be found in attendance. In Taiwan, where the Nationalist Party (Kuomintang) strongly promoted Confucian beliefs in ethics and behavior, the tradition of the memorial ceremony of Confucius () is supported by the government and has continued without interruption. While not a national holiday, it does appear on all printed calendars, much as Father's Day or Christmas Day do in the Western world. In South Korea, a grand-scale memorial ceremony called Seokjeon Daeje is held twice a year on Confucius's birthday and the anniversary of his death, at Confucian academies across the country and Sungkyunkwan in Seoul. Descendants Confucius's descendants were repeatedly identified and honored by successive imperial governments with titles of nobility and official posts. They were honored with the rank of a marquis 35 times since Gaozu of the Han dynasty, and they were promoted to the rank of duke 42 times from the Tang dynasty to the Qing dynasty. Emperor Xuanzong of Tang first bestowed the title of "Duke Wenxuan" on Kong Suizhi of the 35th generation. In 1055, Emperor Renzong of Song first bestowed the title of "Duke Yansheng" on Kong Zongyuan of the 46th generation. During the Southern Song dynasty, the Duke Yansheng Kong Duanyou fled south with the Song Emperor to Quzhou in Zhejiang, while the newly established Jin dynasty (1115–1234) in the north appointed Kong Duanyou's brother Kong Duancao who remained in Qufu as Duke Yansheng. 
From that time up until the Yuan dynasty, there were two Duke Yanshengs, one in the north in Qufu and the other in the south at Quzhou. An invitation to come back to Qufu was extended to the southern Duke Yansheng Kong Zhu by the Yuan-dynasty Emperor Kublai Khan. The title was taken away from the southern branch after Kong Zhu rejected the invitation, so the northern branch of the family kept the title of Duke Yansheng. The southern branch remained in Quzhou, where they live to this day. Confucius's descendants in Quzhou alone number 30,000. The Hanlin Academy rank of Wujing boshi 五經博士 was awarded to the southern branch at Quzhou by a Ming Emperor, while the northern branch at Qufu held the title Duke Yansheng. The leader of the southern branch was 孔祥楷 Kong Xiangkai. In 1351, during the reign of Emperor Toghon Temür of the Yuan dynasty, 54th-generation Kong Shao moved from China to Korea during the Goryeo Dynasty, and was received courteously by Princess Noguk (the Mongolian-born queen consort of the future king Gongmin). After being naturalized as a subject of Goryeo, he changed the hanja of his name from "昭" to "紹" (both pronounced so in Korean), married a Korean woman and fathered a son (Gong Yeo, 1329–1397), thereby establishing the Changwon Gong clan, whose ancestral seat was located in Changwon, South Gyeongsang Province. In 1794, during the reign of King Jeongjo, the clan changed its name to the Gokbu Gong clan in honor of Confucius's birthplace Qufu. Famous descendants include actors such as Gong Yoo (real name Gong Ji-cheol (공지철)) and Gong Hyo-jin (공효진); and artists such as male idol group B1A4 member Gongchan (real name Gong Chan-sik (공찬식)), singer-songwriter Minzy (real name Gong Min-ji (공민지)), as well as her great aunt, the traditional folk dancer (공옥진). Despite repeated dynastic change in China, the title of Duke Yansheng was bestowed upon successive generations of descendants until it was abolished by the Nationalist government in 1935. The last holder of the title, Kung Te-cheng of the 77th generation, was appointed Sacrificial Official to Confucius. Kung Te-cheng died in October 2008; his son, Kung Wei-yi, the 78th lineal descendant, had predeceased him in 1989. Kung Te-cheng's grandson, Kung Tsui-chang, the 79th lineal descendant, was born in 1975; his great-grandson, Kung Yu-jen, the 80th lineal descendant, was born in Taipei on January 1, 2006. Te-cheng's sister, Kong Demao, lives in mainland China and has written a book about her experiences growing up at the family estate in Qufu. Another sister, Kong Deqi, died as a young woman. Many descendants of Confucius still live in Qufu today. A descendant of Confucius, H. H. Kung, was the Premier of the Republic of China. One of his sons (孔令傑) married Debra Paget, who gave birth to Gregory Kung. Confucius's family, the Kongs, have the longest recorded extant pedigree in the world today. The father-to-son family tree, now in its 83rd generation, has been recorded since the death of Confucius. According to the Confucius Genealogy Compilation Committee (CGCC), he has two million known and registered descendants, and there are an estimated three million in all. Of these, several tens of thousands live outside of China. In the 14th century, a Kong descendant went to Korea, where an estimated 34,000 descendants of Confucius live today. One of the main lineages fled from the Kong ancestral home in Qufu during the Chinese Civil War in the 1940s and eventually settled in Taiwan.
There are also branches of the Kong family who have converted to Islam after marrying Muslim women, in Dachuan in Gansu province in the 1800s, and in 1715 in Xuanwei in Yunnan province. Many of the Muslim Confucius descendants are descended from the marriage of Ma Jiaga (), a Muslim woman, and Kong Yanrong (), 59th generation descendant of Confucius in the year 1480, and are found among the Hui and Dongxiang peoples. The new genealogy includes the Muslims. Kong Dejun () is a prominent Islamic scholar and Arabist from Qinghai province and a 77th generation descendant of Confucius. Because of the huge interest in the Confucius family tree, there was a project in China to test the DNA of known family members of the collateral branches in mainland China. Among other things, this would allow scientists to identify a common Y chromosome in male descendants of Confucius. If the descent were truly unbroken, father-to-son, since Confucius's lifetime, the males in the family would all have the same Y chromosome as their direct male ancestor, with slight mutations due to the passage of time. The aim of the genetic test was to help members of collateral branches in China who lost their genealogical records to prove their descent. However, in 2009, many of the collateral branches decided not to agree to DNA testing. Bryan Sykes, professor of genetics at Oxford University, understands this decision: "The Confucius family tree has an enormous cultural significance ... It's not just a scientific question." The DNA testing was originally proposed to add new members, many of whose family record books were lost during 20th century upheavals, to the Confucian family tree. The main branch of the family which fled to Taiwan was never involved in the proposed DNA test at all. In 2013, a DNA test performed on multiple different families who claimed descent from Confucius found that they shared the same Y chromosome as reported by Fudan University. The fifth and most recent edition of the Confucius genealogy was printed by the CGCC. It was unveiled in a ceremony at Qufu on September 24, 2009. Women are now included for the first time. References Citations Bibliography . Further reading See and for extensive bibliographies Clements, Jonathan (2008). Confucius: A Biography. Stroud, Gloucestershire, England: Sutton Publishing. . Confucius (1997). Lun yu, (in English The Analects of Confucius). Translation and notes by Simon Leys. New York: W.W. Norton. . Confucius (2003). Confucius: Analects – With Selections from Traditional Commentaries. Translated by E. Slingerland. Indianapolis: Hackett Publishing. (Original work published c. ) . Creel, Herrlee Glessner (1949). Confucius and the Chinese Way. New York: Harper. Csikszentmihalyi, M. (2005). "Confucianism: An Overview". In Encyclopedia of Religion (Vol. C, pp. 1890–1905). Detroit: MacMillan Reference . Sterckx, Roel. Chinese Thought. From Confucius to Cook Ding. London: Penguin, 2019. Van Norden, B.W., ed. (2001). Confucius and the Analects: New Essays. New York: Oxford University Press. . External links Multilingual web site on Confucius and the Analects The Dao of Kongzi, introduction to the thought of Confucius. Confucian Analects (Project Gutenberg release of James Legge's Translation) Core philosophical passages in the Analects of Confucius. 
551 BC births 479 BC deaths 6th-century BC historians 6th-century BC Chinese philosophers 6th-century BC Chinese writers 5th-century BC historians 5th-century BC Chinese philosophers 5th-century BC Chinese writers Aphorists Chinese educational theorists Chinese ethicists Chinese logicians Chinese political philosophers Classical humanists Confucianism Deified Chinese people Education theory Educators from Shandong Founders of religions Gokbu Gong clan Guqin players Historians from Shandong People from Qufu Politicians from Jining Philosophers from Lu (state) Philosophers from Shandong Philosophers of culture Philosophers of education Philosophers of law Social philosophers Writers from Jining Zhou dynasty writers Zhou dynasty philosophers Zhou dynasty government officials 5th-century BC Chinese musicians 6th-century BC Chinese musicians 6th-century BC religious leaders 5th-century BC religious leaders
https://en.wikipedia.org/wiki/Cryptozoology
Cryptozoology is a pseudoscience and subculture that searches for and studies unknown, legendary, or extinct animals whose present existence is disputed or unsubstantiated, particularly those popular in folklore, such as Bigfoot, the Loch Ness Monster, Yeti, the chupacabra, the Jersey Devil, or the Mokele-mbembe. Cryptozoologists refer to these entities as cryptids, a term coined by the subculture. Because it does not follow the scientific method, cryptozoology is considered a pseudoscience by mainstream science: it is neither a branch of zoology nor of folklore studies. It was originally founded in the 1950s by zoologists Bernard Heuvelmans and Ivan T. Sanderson. Scholars have noted that the subculture rejected mainstream approaches from an early date, and that adherents often express hostility to mainstream science. Scholars have studied cryptozoologists and their influence (including cryptozoology's association with Young Earth creationism), noted parallels in cryptozoology and other pseudosciences such as ghost hunting and ufology, and highlighted uncritical media propagation of cryptozoologist claims. Terminology, history, and approach As a field, cryptozoology originates from the works of Bernard Heuvelmans, a Belgian zoologist, and Ivan T. Sanderson, a Scottish zoologist. Notably, Heuvelmans published On the Track of Unknown Animals (French Sur la Piste des Bêtes Ignorées) in 1955, a landmark work among cryptozoologists that was followed by numerous other like works. Similarly, Sanderson published a series of books that contributed to the developing hallmarks of cryptozoology, including Abominable Snowmen: Legend Come to Life (1961). Heuvelmans himself traced cryptozoology to the work of Anthonie Cornelis Oudemans, who theorized that a large unidentified species of seal was responsible for sea serpent reports. The term cryptozoology dates from 1959 or before—Heuvelmans attributes the coinage of the term cryptozoology 'the study of hidden animals' (from Ancient Greek: κρυπτός, kryptós "hidden, secret"; Ancient Greek ζῷον, zōion "animal", and λόγος, logos, i.e. "knowledge, study") to Sanderson. Following cryptozoology, the term cryptid was coined in 1983 by cryptozoologist J. E. Wall in the summer issue of the International Society of Cryptozoology newsletter. According to Wall, "[It has been] suggested that new terms be coined to replace sensational and often misleading terms like 'monster'. My suggestion is 'cryptid', meaning a living thing having the quality of being hidden or unknown ... describing those creatures which are (or may be) subjects of cryptozoological investigation." The Oxford English Dictionary defines the noun cryptid as "an animal whose existence or survival to the present day is disputed or unsubstantiated; any animal of interest to a cryptozoologist". While used by most cryptozoologists, the term cryptid is not used by academic zoologists. In a textbook aimed at undergraduates, academics Caleb W. Lack and Jacques Rousseau note that the subculture's focus on what it deems to be "cryptids" is a pseudoscientific extension of older belief in monsters and other similar entities from the folkloric record, yet with a "new, more scientific-sounding name: cryptids". While biologists regularly identify new species, cryptozoologists often focus on creatures from the folkloric record. Most famously, these include the Loch Ness Monster, Bigfoot, the chupacabra, as well as other "imposing beasts that could be labeled as monsters".
In their search for these entities, cryptozoologists may employ devices such as motion-sensitive cameras, night-vision equipment, and audio-recording equipment. While there have been attempts to codify cryptozoological approaches, unlike in biology, zoology, botany, and other academic disciplines, "there are no accepted, uniform, or successful methods for pursuing cryptids". Some scholars have identified precursors to modern cryptozoology in certain medieval approaches to the folkloric record, and the psychology behind the cryptozoology approach has been the subject of academic study. Few cryptozoologists have a formal science education, and fewer still have a science background directly relevant to cryptozoology. Adherents often misrepresent the academic backgrounds of cryptozoologists. According to writer Daniel Loxton and paleontologist Donald Prothero, "[c]ryptozoologists have often promoted 'Professor Roy Mackal, PhD.' as one of their leading figures and one of the few with a legitimate doctorate in biology. What is rarely mentioned, however, is that he had no training that would qualify him to undertake competent research on exotic animals. This raises the specter of 'credential mongering', by which an individual or organization feints a person's graduate degree as proof of expertise, even though his or her training is not specifically relevant to the field under consideration." Besides Heuvelmans, Sanderson, and Mackal, other notable cryptozoologists with academic backgrounds include Grover Krantz, Karl Shuker, and Richard Greenwell. Historically, notable cryptozoologists such as Sanderson and Krantz have often identified instances featuring "irrefutable evidence", only for the evidence to be revealed as the product of a hoax. This may occur during a closer examination by experts or upon confession of the hoaxer. Expeditions Cryptozoologists have often led expeditions to find evidence of cryptids, with few results. Bigfoot researcher René Dahinden led an unsuccessful expedition into caves to find evidence of sasquatch. In 2018, Lensgrave Adam Christoffer Knuth led an expedition to Lake Tele in the Congo to find the mokele-mbembe. While they didn't find any evidence for the cryptid, they did find a new species of green algae. Mark van Roosmalen, a Dutch-Brazilian primatologist, is one of the few biologists who have discovered new animal species and who consider their work cryptozoology. Young Earth creationism A subset of cryptozoology promotes the pseudoscience of Young Earth creationism, rejecting conventional science in favor of a Biblical interpretation and promoting concepts such as "living dinosaurs". Science writer Sharon A. Hill observes that the Young Earth creationist segment of cryptozoology is "well-funded and able to conduct expeditions with a goal of finding a living dinosaur that they think would invalidate evolution". Anthropologist Jeb J. Card says that "[c]reationists have embraced cryptozoology and some cryptozoological expeditions are funded by and conducted by creationists hoping to disprove evolution." In a 2013 interview, paleontologist Donald Prothero notes an uptick in creationist cryptozoologists. He observes that "[p]eople who actively search for Loch Ness monsters or Mokele Mbembe do it entirely as creationist ministers. They think that if they found a dinosaur in the Congo it would overturn all of evolution. It wouldn't. It would just be a late-occurring dinosaur, but that's their mistaken notion of evolution."
Citing a 2013 exhibit at the Petersburg, Kentucky-based Creation Museum (an institution broadly dedicated to Young Earth creationism) which claimed that dragons were once biological creatures who walked the earth alongside humanity, religious studies academic Justin Mullis notes that "[c]ryptozoology has a long and curious history with Young Earth Creationism, with this new exhibit being just one of the most recent examples". Academic Paul Thomas analyzes the influence of and connections between cryptozoology and Young Earth creationism in his 2020 study of the Creation Museum and the creationist theme park Ark Encounter. Thomas comments that, "while the Creation Museum and the Ark Encounter are flirting with pseudoarchaeology, coquettishly whispering pseudoarchaeological rhetoric, they are each fully in bed with cryptozoology" and observes that "[y]oung-earth creationists and cryptozoologists make natural bed fellows. As with pseudoarchaeology, both young-earth creationists and cryptozoologists bristle at the rejection of mainstream secular science and lament a seeming conspiracy to prevent serious consideration of their claims." Lack of critical media coverage Media outlets have often uncritically disseminated information from cryptozoologist sources, including newspapers that repeat false claims made by cryptozoologists or television shows that feature cryptozoologists as monster hunters (such as the popular and purportedly nonfiction American television show MonsterQuest, which aired from 2007 to 2010). Media coverage of purported "cryptids" often fails to provide more likely explanations, further propagating claims made by cryptozoologists. Reception and pseudoscience There is a broad consensus among academics that cryptozoology is a pseudoscience. The subculture is regularly criticized for reliance on anecdotal information and because, in the course of investigating animals that most scientists believe are unlikely to have existed, cryptozoologists do not follow the scientific method. No academic course of study or university degree program grants the status of cryptozoologist, and the subculture is primarily the domain of individuals without training in the natural sciences. Anthropologist Jeb J. Card summarizes cryptozoology in a survey of pseudoscience and pseudoarchaeology: Card notes that "cryptozoologists often show their disdain and even hatred for professional scientists, including those who enthusiastically participated in cryptozoology", which he traces back to Heuvelmans's early "rage against critics of cryptozoology". He finds parallels between cryptozoology and other pseudosciences, such as ghost hunting and ufology, and compares the approach of cryptozoologists to colonial big-game hunters, and to aspects of European imperialism. According to Card, "[m]ost cryptids are framed as the subject of indigenous legends typically collected in the heyday of comparative folklore, though such legends may be heavily modified or worse. Cryptozoology's complicated mix of sympathy, interest, and appropriation of indigenous culture (or non-indigenous construction of it) is also found in New Age circles and dubious "Indian burial grounds" and other legends [...] invoked in hauntings such as the "Amityville" hoax [...]". In a 2011 foreword for The American Biology Teacher, then National Association of Biology Teachers president Dan Ward uses cryptozoology as an example of "technological pseudoscience" that may confuse students about the scientific method. Ward says that "Cryptozoology [...]
is not valid science or even science at all. It is monster hunting." Historian of science Brian Regal includes an entry for cryptozoology in his Pseudoscience: A Critical Encyclopedia (2009). Regal says that "as an intellectual endeavor, cryptozoology has been studied as much as cryptozoologists have sought hidden animals". In a 1992 issue of Folklore, folklorist Véronique Campion-Vincent says: Campion-Vincent says that "four currents can be distinguished in the study of mysterious animal appearances": "Forteans" ("compiler[s] of anomalies" such as via publications like the Fortean Times), "occultists" (which she describes as related to "Forteans"), "folklorists", and "cryptozoologists". Regarding cryptozoologists, Campion-Vincent says that "this movement seems to deserve the appellation of parascience, like parapsychology: the same corpus is reviewed; many scientists participate, but for those who have an official status of university professor or researcher, the participation is a private hobby". In her Encyclopedia of American Folklore, academic Linda Watts says that "folklore concerning unreal animals or beings, sometimes called monsters, is a popular field of inquiry" and describes cryptozoology as an example of "American narrative traditions" that "feature many monsters". In his analysis of cryptozoology, folklorist Peter Dendle says that "cryptozoology devotees consciously position themselves in defiance of mainstream science" and that: In a paper published in 2013, Dendle refers to cryptozoologists as "contemporary monster hunters" that "keep alive a sense of wonder in a world that has been very thoroughly charted, mapped, and tracked, and that is largely available for close scrutiny on Google Earth and satellite imaging" and that "on the whole the devotion of substantial resources for this pursuit betrays a lack of awareness of the basis for scholarly consensus (largely ignoring, for instance, evidence of evolutionary biology and the fossil record)." According to historian Mike Dash, few scientists doubt there are thousands of unknown animals, particularly invertebrates, awaiting discovery; however, cryptozoologists are largely uninterested in researching and cataloging newly discovered species of ants or beetles, instead focusing their efforts towards "more elusive" creatures that have often defied decades of work aimed at confirming their existence. Paleontologist George Gaylord Simpson (1984) lists cryptozoology among examples of human gullibility, along with creationism: Paleontologist Donald Prothero (2007) cites cryptozoology as an example of pseudoscience and categorizes it, along with Holocaust denial and UFO abductions claims, as aspects of American culture that are "clearly baloney". In Scientifical Americans: The Culture of Amateur Paranormal Researchers (2017), Hill surveys the field and discusses aspects of the subculture, noting internal attempts at creating more scientific approaches and the involvement of Young Earth creationists and a prevalence of hoaxes. She concludes that many cryptozoologists are "passionate and sincere in their belief that mystery animals exist. As such, they give deference to every report of a sighting, often without critical questioning. As with the ghost seekers, cryptozoologists are convinced that they will be the ones to solve the mystery and make history. With the lure of mystery and money undermining diligent and ethical research, the field of cryptozoology has serious credibility problems." 
Cryptobotany Cryptobotany is a sub-discipline of cryptozoology researching the possible existence of plant cryptids. According to British cryptozoologist Karl Shuker's 2003 book The Beasts That Hide From Man there are unconfirmed reports, primarily from Latin America, of still-undiscovered species of large carnivorous plants. Organizations There have been several organizations, of varying types, dedicated or related to cryptozoology. These include: International Fortean Organization – a network of professional Fortean researchers and writers based in the United States International Society of Cryptozoology – an American organisation that existed from 1982 to 1998 Kosmopoisk – a Russian organisation whose interests include cryptozoology and Ufology The Centre for Fortean Zoology- an English organization centered around hunting for unknown animals Museums and exhibitions The zoological and cryptozoological collection and archive of Bernard Heuvelmans is held at the Musée Cantonal de Zoologie in Lausanne and consists of around "1,000 books, 25,000 files, 25,000 photographs, correspondence, and artifacts". In 2006, the Bates College Museum of Art held the "Cryptozoology: Out of Time Place Scale" exhibition, which compared cryptozoological creatures with recently extinct animals like the thylacine and extant taxa like the coelacanth, once thought long extinct (living fossils). The following year, the American Museum of Natural History put on a mixed exhibition of imaginary and extinct animals, including the elephant bird Aepyornis maximus and the great ape Gigantopithecus blacki, under the name "Mythic Creatures: Dragons, Unicorns and Mermaids". In 2003 cryptozoologist Loren Coleman opened the International Cryptozoology Museum in Portland, Maine. The museum houses more than 3000 cryptozoology related artifacts. See also Ethnozoology Fearsome critters, fabulous beasts that were said to inhabit the timberlands of North America Folk belief List of cryptids, a list of cryptids notable within cryptozoology List of cryptozoologists, a list of notable cryptozoologists Scientific skepticism References Sources Bartholomew, Robert E. 2012. The Untold Story of Champ: A Social History of America's Loch Ness Monster. State University of New York Press. Campion-Vincent, Véronique. 1992. "Appearances of Beasts and Mystery-cats in France". Folklore 103.2 (1992): 160–183. Card, Jeb J. 2016. "Steampunk Inquiry: A Comparative Vivisection of Discovery Pseudoscience" in Card, Jeb J. and Anderson, David S. Lost City, Found Pyramid: Understanding Alternative Archaeologies and Pseudoscientific Practices, pp. 24–25. University of Alabama Press. Church, Jill M. (2009). Cryptozoology. In H. James Birx. Encyclopedia of Time: Science, Philosophy, Theology & Culture, Volume 1. SAGE Publications. pp. 251–252. Dash, Mike. 2000. Borderlands: The Ultimate Exploration of the Unknown. Overlook Press. Dendle, Peter. 2006. "Cryptozoology in the Medieval and Modern Worlds". Folklore, Vol. 117, No. 2 (Aug., 2006), pp. 190–206. Taylor & Francis. Dendle, Peter. 2013. "Monsters and the Twenty-First Century" in The Ashgate Research Companion to Monsters and the Monstrous. Ashgate Publishing. Hill, Sharon A. 2017. Scientifical Americans: The Culture of Amateur Paranormal Researchers. McFarland. Lack, Caleb W. and Jacques Rousseau. 2016. Critical Thinking, Science, and Pseudoscience: Why We Can't Trust Our Brains. Springer. Lee, Jeffrey A. 2000. The Scientific Endeavor: A Primer on Scientific Principles and Practice. Benjamin Cummings. 
Loxton, Daniel and Donald Prothero. 2013. Abominable Science: Origins of the Yeti, Nessie, and other Famous Cryptids. Columbia University Press. Mullis, Justin. 2019. "Cryptofiction! Science Fiction and the Rise of Cryptozoology" in Caterine, Darryl & John W. Morehead (ed.). 2019. The Paranormal and Popular Culture: A Postmodern Religious Landscape, pp. 240–252. Routledge. . Mullis, Justin. 2021. "Thomas Jefferson: The First Cryptozoologist?". In Joseph P. Laycock & Natasha L. Mikles (eds). Religion, Culture, and the Monstrous: Of Gods and Monsters, pp. 185–197. Lexington Books. Nagel, Brian. 2009. Pseudoscience: A Critical Encyclopedia. ABC-CLIO. Paxton, C.G.M. 2011. "Putting the 'ology' into cryptozoology." Biofortean Notes. Vol. 7, pp. 7–20, 310. Prothero, Donald R. 2007. Evolution: What the Fossils Say and Why It Matters. Columbia University Press. Radford, Benjamin. 2014. "Bigfoot at 50: Evaluating a Half-Century of Bigfoot Evidence" in Farha, Bryan (ed.). Pseudoscience and Deception: The Smoke and Mirrors of Paranormal Claims. University Press of America. Regal, Brian. 2011a. "Cryptozoology" in McCormick, Charlie T. and Kim Kennedy (ed.). Folklore: An Encyclopedia of Beliefs, Customs, Tales, Music, and Art, pp. 326–329. 2nd edition. ABC-CLIO. . Regal, Brian. 2011b. Sasquatch: Crackpots, Eggheads, and Cryptozoology. Springer. . Roesch, Ben S & John L. Moore. (2002). Cryptozoology. In Michael Shermer (ed.). The Skeptic Encyclopedia of Pseudoscience: Volume One. ABC-CLIO. pp. 71–78. Shea, Rachel Hartigan. 2013. "The Science Behind Bigfoot and Other Monsters".National Geographic, September 9, 2013. Online. Shermer, Michael. 2003. "Show Me the Body" in Scientific American, issue 288 (5), p. 27. Online. Simpson, George Gaylord (1984). "Mammals and Cryptozoology". Proceedings of the American Philosophical Society. Vol. 128, No. 1 (Mar. 30, 1984), pp. 1–19. American Philosophical Society. Thomas, Paul. 2020. Storytelling the Bible at the Creation Museum, Ark Encounter, and Museum of the Bible. Bloomsbury Publishing. Uscinski, Joseph. 2020. Conspiracy Theories: A Primer. Rowman & Littlefield Publishers. Wall, J. E. 1983. The ISC Newsletter, vol. 2, issue 10, p. 10. International Society of Cryptozoology. Ward, Daniel. 2011. "From the President". The American Biology Teacher, 73.8 (2011): 440–440. Watts, Linda S. 2007. Encyclopedia of American Folklore. Facts on File. External links Forteana Pseudoscience Subcultures Young Earth creationism Zoology
https://en.wikipedia.org/wiki/Caesium
Caesium (IUPAC spelling; cesium in American English) is a chemical element with the symbol Cs and atomic number 55. It is a soft, silvery-golden alkali metal with a melting point of , which makes it one of only five elemental metals that are liquid at or near room temperature. Caesium has physical and chemical properties similar to those of rubidium and potassium. It is pyrophoric and reacts with water even at . It is the least electronegative element, with a value of 0.79 on the Pauling scale. It has only one stable isotope, caesium-133. Caesium is mined mostly from pollucite. Caesium-137, a fission product, is extracted from waste produced by nuclear reactors. It has the largest atomic radius of all elements whose radii have been measured or calculated, at about 260 picometers. The German chemist Robert Bunsen and physicist Gustav Kirchhoff discovered caesium in 1860 by the newly developed method of flame spectroscopy. The first small-scale applications for caesium were as a "getter" in vacuum tubes and in photoelectric cells. In 1967, acting on Einstein's proof that the speed of light is the most-constant dimension in the universe, the International System of Units used two specific wave counts from an emission spectrum of caesium-133 to co-define the second and the metre. Since then, caesium has been widely used in highly accurate atomic clocks. Since the 1990s, the largest application of the element has been as caesium formate for drilling fluids, but it has a range of applications in the production of electricity, in electronics, and in chemistry. The radioactive isotope caesium-137 has a half-life of about 30 years and is used in medical applications, industrial gauges, and hydrology. Nonradioactive caesium compounds are only mildly toxic, but the pure metal's tendency to react explosively with water means that caesium is considered a hazardous material, and the radioisotopes present a significant health and environmental hazard. Characteristics Physical properties Of all elements that are solid at room temperature, caesium is the softest: it has a hardness of 0.2 Mohs. It is a very ductile, pale metal, which darkens in the presence of trace amounts of oxygen. When in the presence of mineral oil (where it is best kept during transport), it loses its metallic lustre and takes on a duller, grey appearance. It has a melting point of , making it one of the few elemental metals that are liquid near room temperature. Mercury is the only stable elemental metal with a known melting point lower than caesium. In addition, the metal has a rather low boiling point, , the lowest of all metals other than mercury. Its compounds burn with a blue or violet colour. Caesium forms alloys with the other alkali metals, gold, and mercury (amalgams). At temperatures below , it does not alloy with cobalt, iron, molybdenum, nickel, platinum, tantalum, or tungsten. It forms well-defined intermetallic compounds with antimony, gallium, indium, and thorium, which are photosensitive. It mixes with all the other alkali metals (except lithium); the alloy with a molar distribution of 41% caesium, 47% potassium, and 12% sodium has the lowest melting point of any known metal alloy, at . A few amalgams have been studied: is black with a purple metallic lustre, while CsHg is golden-coloured, also with a metallic lustre. The golden colour of caesium comes from the decreasing frequency of light required to excite electrons of the alkali metals as the group is descended. 
For lithium through rubidium this frequency is in the ultraviolet, but for caesium it enters the blue–violet end of the spectrum; in other words, the plasmonic frequency of the alkali metals becomes lower from lithium to caesium. Thus caesium transmits and partially absorbs violet light preferentially while other colours (having lower frequency) are reflected; hence it appears yellowish. Chemical properties Caesium metal is highly reactive and pyrophoric. It ignites spontaneously in air, and reacts explosively with water even at low temperatures, more so than the other alkali metals. It reacts with ice at temperatures as low as . Because of this high reactivity, caesium metal is classified as a hazardous material. It is stored and shipped in dry, saturated hydrocarbons such as mineral oil. It can be handled only under inert gas, such as argon. However, a caesium-water explosion is often less powerful than a sodium-water explosion with a similar amount of sodium. This is because caesium explodes instantly upon contact with water, leaving little time for hydrogen to accumulate. Caesium can be stored in vacuum-sealed borosilicate glass ampoules. In quantities of more than about , caesium is shipped in hermetically sealed, stainless steel containers. The chemistry of caesium is similar to that of other alkali metals, in particular rubidium, the element above caesium in the periodic table. As expected for an alkali metal, the only common oxidation state is +1. Some slight differences arise from the fact that it has a higher atomic mass and is more electropositive than other (nonradioactive) alkali metals. Caesium is the most electropositive chemical element. The caesium ion is also larger and less "hard" than those of the lighter alkali metals. Compounds Most caesium compounds contain the element as the cation , which binds ionically to a wide variety of anions. One noteworthy exception is the caeside anion (), and others are the several suboxides (see section on oxides below). More recently, caesium is predicted to behave as a p-block element and capable of forming higher fluorides with higher oxidation states (i.e., CsFn with n > 1) under high pressure. This prediction needs to be validated by further experiments. Salts of Cs+ are usually colourless unless the anion itself is coloured. Many of the simple salts are hygroscopic, but less so than the corresponding salts of lighter alkali metals. The phosphate, acetate, carbonate, halides, oxide, nitrate, and sulfate salts are water-soluble. Its double salts are often less soluble, and the low solubility of caesium aluminium sulfate is exploited in refining Cs from ores. The double salts with antimony (such as ), bismuth, cadmium, copper, iron, and lead are also poorly soluble. Caesium hydroxide (CsOH) is hygroscopic and strongly basic. It rapidly etches the surface of semiconductors such as silicon. CsOH has been previously regarded by chemists as the "strongest base", reflecting the relatively weak attraction between the large Cs+ ion and OH−; it is indeed the strongest Arrhenius base; however, a number of compounds such as n-butyllithium, sodium amide, sodium hydride, caesium hydride, etc., which cannot be dissolved in water as reacting violently with it but rather only used in some anhydrous polar aprotic solvents, are far more basic on the basis of the Brønsted–Lowry acid–base theory. A stoichiometric mixture of caesium and gold will react to form yellow caesium auride (Cs+Au−) upon heating. The auride anion here behaves as a pseudohalogen. 
The compound reacts violently with water, yielding caesium hydroxide, metallic gold, and hydrogen gas; in liquid ammonia it can be reacted with a caesium-specific ion exchange resin to produce tetramethylammonium auride. The analogous platinum compound, red caesium platinide (), contains the platinide ion that behaves as a . Complexes Like all metal cations, Cs+ forms complexes with Lewis bases in solution. Because of its large size, Cs+ usually adopts coordination numbers greater than 6, the number typical for the smaller alkali metal cations. This difference is apparent in the 8-coordination of CsCl. This high coordination number and softness (tendency to form covalent bonds) are properties exploited in separating Cs+ from other cations in the remediation of nuclear wastes, where 137Cs+ must be separated from large amounts of nonradioactive K+. Halides Caesium fluoride (CsF) is a hygroscopic white solid that is widely used in organofluorine chemistry as a source of fluoride anions. Caesium fluoride has the halite structure, which means that the Cs+ and F− pack in a cubic closest packed array as do Na+ and Cl− in sodium chloride. Notably, caesium and fluorine have the lowest and highest electronegativities, respectively, among all the known elements. Caesium chloride (CsCl) crystallizes in the simple cubic crystal system. Also called the "caesium chloride structure", this structural motif is composed of a primitive cubic lattice with a two-atom basis, each with an eightfold coordination; the chloride atoms lie upon the lattice points at the edges of the cube, while the caesium atoms lie in the holes in the centre of the cubes. This structure is shared with CsBr and CsI, and many other compounds that do not contain Cs. In contrast, most other alkaline halides have the sodium chloride (NaCl) structure. The CsCl structure is preferred because Cs+ has an ionic radius of 174 pm and 181 pm. Oxides More so than the other alkali metals, caesium forms numerous binary compounds with oxygen. When caesium burns in air, the superoxide is the main product. The "normal" caesium oxide () forms yellow-orange hexagonal crystals, and is the only oxide of the anti- type. It vaporizes at , and decomposes to caesium metal and the peroxide at temperatures above . In addition to the superoxide and the ozonide , several brightly coloured suboxides have also been studied. These include , , , (dark-green), CsO, , as well as . The latter may be heated in a vacuum to generate . Binary compounds with sulfur, selenium, and tellurium also exist. Isotopes Caesium has 40 known isotopes, ranging in mass number (i.e. number of nucleons in the nucleus) from 112 to 151. Several of these are synthesized from lighter elements by the slow neutron capture process (S-process) inside old stars and by the R-process in supernova explosions. The only stable caesium isotope is 133Cs, with 78 neutrons. Although it has a large nuclear spin (+), nuclear magnetic resonance studies can use this isotope at a resonating frequency of 11.7 MHz. The radioactive 135Cs has a very long half-life of about 2.3 million years, the longest of all radioactive isotopes of caesium. 137Cs and 134Cs have half-lives of 30 and two years, respectively. 137Cs decomposes to a short-lived 137mBa by beta decay, and then to nonradioactive barium, while 134Cs transforms into 134Ba directly. 
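A worked example makes these half-lives concrete. The following minimal Python sketch applies the standard decay relation N(t) = N0 · (1/2)^(t/T½), using the rounded half-life values quoted above (30 years for 137Cs, about 2 years for 134Cs); the function name is illustrative.

```python
# Minimal decay sketch using the rounded half-lives quoted in the text:
# about 30 years for Cs-137 and about 2 years for Cs-134.

def remaining_fraction(years_elapsed: float, half_life_years: float) -> float:
    """Fraction of the original nuclide left after `years_elapsed` years."""
    return 0.5 ** (years_elapsed / half_life_years)


if __name__ == "__main__":
    for nuclide, half_life in (("Cs-137", 30.0), ("Cs-134", 2.0)):
        for years in (1, 10, 30, 100):
            frac = remaining_fraction(years, half_life)
            print(f"{nuclide}: {frac:8.4%} remaining after {years:3d} years")
```

Running it shows, for example, that about half of a 137Cs source remains after 30 years, while a 134Cs source has fallen to a few thousandths of a percent over the same period.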
The isotopes with mass numbers of 129, 131, 132 and 136, have half-lives between a day and two weeks, while most of the other isotopes have half-lives from a few seconds to fractions of a second. At least 21 metastable nuclear isomers exist. Other than 134mCs (with a half-life of just under 3 hours), all are very unstable and decay with half-lives of a few minutes or less. The isotope 135Cs is one of the long-lived fission products of uranium produced in nuclear reactors. However, this fission product yield is reduced in most reactors because the predecessor, 135Xe, is a potent neutron poison and frequently transmutes to stable 136Xe before it can decay to 135Cs. The beta decay from 137Cs to 137mBa results in gamma radiation as the 137mBa relaxes to ground state 137Ba, with the emitted photons having an energy of 0.6617 MeV. 137Cs and 90Sr are the principal medium-lived products of nuclear fission, and the prime sources of radioactivity from spent nuclear fuel after several years of cooling, lasting several hundred years. Those two isotopes are the largest source of residual radioactivity in the area of the Chernobyl disaster. Because of the low capture rate, disposing of 137Cs through neutron capture is not feasible and the only current solution is to allow it to decay over time. Almost all caesium produced from nuclear fission comes from the beta decay of originally more neutron-rich fission products, passing through various isotopes of iodine and xenon. Because iodine and xenon are volatile and can diffuse through nuclear fuel or air, radioactive caesium is often created far from the original site of fission. With nuclear weapons testing in the 1950s through the 1980s, 137Cs was released into the atmosphere and returned to the surface of the earth as a component of radioactive fallout. It is a ready marker of the movement of soil and sediment from those times. Occurrence Caesium is a relatively rare element, estimated to average 3 parts per million in the Earth's crust. It is the 45th most abundant element and the 36th among the metals. Nevertheless, it is more abundant than such elements as antimony, cadmium, tin, and tungsten, and two orders of magnitude more abundant than mercury and silver; it is 3.3% as abundant as rubidium, with which it is closely associated, chemically. Due to its large ionic radius, caesium is one of the "incompatible elements". During magma crystallization, caesium is concentrated in the liquid phase and crystallizes last. Therefore, the largest deposits of caesium are zone pegmatite ore bodies formed by this enrichment process. Because caesium does not substitute for potassium as readily as rubidium does, the alkali evaporite minerals sylvite (KCl) and carnallite () may contain only 0.002% caesium. Consequently, caesium is found in few minerals. Percentage amounts of caesium may be found in beryl () and avogadrite (), up to 15 wt% Cs2O in the closely related mineral pezzottaite (), up to 8.4 wt% Cs2O in the rare mineral londonite (), and less in the more widespread rhodizite. The only economically important ore for caesium is pollucite , which is found in a few places around the world in zoned pegmatites, associated with the more commercially important lithium minerals, lepidolite and petalite. Within the pegmatites, the large grain size and the strong separation of the minerals results in high-grade ore for mining. 
The world's most significant and richest known source of caesium is the Tanco Mine at Bernic Lake in Manitoba, Canada, estimated to contain 350,000 metric tons of pollucite ore, representing more than two-thirds of the world's reserve base. Although the stoichiometric content of caesium in pollucite is 42.6%, pure pollucite samples from this deposit contain only about 34% caesium, while the average content is 24 wt%. Commercial pollucite contains more than 19% caesium. The Bikita pegmatite deposit in Zimbabwe is mined for its petalite, but it also contains a significant amount of pollucite. Another notable source of pollucite is in the Karibib Desert, Namibia. At the present rate of world mine production of 5 to 10 metric tons per year, reserves will last for thousands of years. Production Mining and refining pollucite ore is a selective process and is conducted on a smaller scale than for most other metals. The ore is crushed, hand-sorted, but not usually concentrated, and then ground. Caesium is then extracted from pollucite primarily by three methods: acid digestion, alkaline decomposition, and direct reduction. In the acid digestion, the silicate pollucite rock is dissolved with strong acids, such as hydrochloric (HCl), sulfuric (), hydrobromic (HBr), or hydrofluoric (HF) acids. With hydrochloric acid, a mixture of soluble chlorides is produced, and the insoluble chloride double salts of caesium are precipitated as caesium antimony chloride (), caesium iodine chloride (), or caesium hexachlorocerate (). After separation, the pure precipitated double salt is decomposed, and pure CsCl is precipitated by evaporating the water. The sulfuric acid method yields the insoluble double salt directly as caesium alum (). The aluminium sulfate component is converted to insoluble aluminium oxide by roasting the alum with carbon, and the resulting product is leached with water to yield a solution. Roasting pollucite with calcium carbonate and calcium chloride yields insoluble calcium silicates and soluble caesium chloride. Leaching with water or dilute ammonia () yields a dilute chloride (CsCl) solution. This solution can be evaporated to produce caesium chloride or transformed into caesium alum or caesium carbonate. Though not commercially feasible, the ore can be directly reduced with potassium, sodium, or calcium in vacuum to produce caesium metal directly. Most of the mined caesium (as salts) is directly converted into caesium formate (HCOO−Cs+) for applications such as oil drilling. To supply the developing market, Cabot Corporation built a production plant in 1997 at the Tanco mine near Bernic Lake in Manitoba, with a capacity of per year of caesium formate solution. The primary smaller-scale commercial compounds of caesium are caesium chloride and nitrate. Alternatively, caesium metal may be obtained from the purified compounds derived from the ore. Caesium chloride and the other caesium halides can be reduced at with calcium or barium, and caesium metal distilled from the result. In the same way, the aluminate, carbonate, or hydroxide may be reduced by magnesium. The metal can also be isolated by electrolysis of fused caesium cyanide (CsCN). Exceptionally pure and gas-free caesium can be produced by thermal decomposition of caesium azide , which can be produced from aqueous caesium sulfate and barium azide. In vacuum applications, caesium dichromate can be reacted with zirconium to produce pure caesium metal without other gaseous products. 
Cs2Cr2O7 + 2 Zr → 2 Cs + 2 ZrO2 + Cr2O3
The price of 99.8% pure caesium (metal basis) in 2009 was about , but the compounds are significantly cheaper. History In 1860, Robert Bunsen and Gustav Kirchhoff discovered caesium in the mineral water from Dürkheim, Germany. Because of the bright blue lines in the emission spectrum, they derived the name from the Latin word caesius, meaning 'sky blue'. Caesium was the first element to be discovered with a spectroscope, which had been invented by Bunsen and Kirchhoff only a year previously. To obtain a pure sample of caesium, 44,000 litres of mineral water had to be evaporated down to a concentrated salt solution. The alkaline earth metals were precipitated either as sulfates or oxalates, leaving the alkali metals in the solution. After conversion to the nitrates and extraction with ethanol, a sodium-free mixture was obtained. From this mixture, the lithium was precipitated by ammonium carbonate. Potassium, rubidium, and caesium form insoluble salts with chloroplatinic acid, but these salts show a slight difference in solubility in hot water, and the less-soluble caesium and rubidium hexachloroplatinates were obtained by fractional crystallization. After reduction of the hexachloroplatinate with hydrogen, caesium and rubidium were separated by the difference in solubility of their carbonates in alcohol. The process yielded only small quantities of rubidium chloride and caesium chloride from the initial 44,000 litres of mineral water. From the caesium chloride, the two scientists estimated the atomic weight of the new element at 123.35 (compared to the currently accepted one of 132.9). They tried to generate elemental caesium by electrolysis of molten caesium chloride, but instead of a metal, they obtained a blue homogeneous substance which "neither under the naked eye nor under the microscope showed the slightest trace of metallic substance"; as a result, they assigned it as a subchloride. In reality, the product was probably a colloidal mixture of the metal and caesium chloride. The electrolysis of the aqueous solution of chloride with a mercury cathode produced a caesium amalgam which readily decomposed under the aqueous conditions. The pure metal was eventually isolated by the Swedish chemist Carl Setterberg while working on his doctorate with Kekulé and Bunsen. In 1882, he produced caesium metal by electrolysing caesium cyanide, avoiding the problems with the chloride. Historically, the most important use for caesium has been in research and development, primarily in chemical and electrical fields. Very few applications existed for caesium until the 1920s, when it came into use in radio vacuum tubes, where it had two functions: as a getter, it removed excess oxygen after manufacture, and as a coating on the heated cathode, it increased the electrical conductivity. Caesium was not recognized as a high-performance industrial metal until the 1950s. Applications for nonradioactive caesium included photoelectric cells, photomultiplier tubes, optical components of infrared spectrophotometers, catalysts for several organic reactions, crystals for scintillation counters, and in magnetohydrodynamic power generators. Caesium is also used as a source of positive ions in secondary ion mass spectrometry (SIMS). Since 1967, the International System of Units has based the primary unit of time, the second, on the properties of caesium. 
The International System of Units (SI) defines the second as the duration of 9,192,631,770 cycles at the microwave frequency of the spectral line corresponding to the transition between two hyperfine energy levels of the ground state of caesium-133. The 13th General Conference on Weights and Measures of 1967 defined a second as: "the duration of 9,192,631,770 cycles of microwave light absorbed or emitted by the hyperfine transition of caesium-133 atoms in their ground state undisturbed by external fields". Applications Petroleum exploration The largest present-day use of nonradioactive caesium is in caesium formate drilling fluids for the extractive oil industry. Aqueous solutions of caesium formate (HCOO−Cs+)—made by reacting caesium hydroxide with formic acid—were developed in the mid-1990s for use as oil well drilling and completion fluids. The function of a drilling fluid is to lubricate drill bits, to bring rock cuttings to the surface, and to maintain pressure on the formation during drilling of the well. Completion fluids assist the emplacement of control hardware after drilling but prior to production by maintaining the pressure. The high density of the caesium formate brine (up to 2.3 g/cm3, or 19.2 pounds per gallon), coupled with the relatively benign nature of most caesium compounds, reduces the requirement for toxic high-density suspended solids in the drilling fluid—a significant technological, engineering and environmental advantage. Unlike the components of many other heavy liquids, caesium formate is relatively environment-friendly. Caesium formate brine can be blended with potassium and sodium formates to decrease the density of the fluids to that of water (1.0 g/cm3, or 8.3 pounds per gallon). Furthermore, it is biodegradable and may be recycled, which is important in view of its high cost (about $4,000 per barrel in 2001). Alkali formates are safe to handle and do not damage the producing formation or downhole metals as corrosive alternative, high-density brines (such as zinc bromide solutions) sometimes do; they also require less cleanup and reduce disposal costs. Atomic clocks Caesium-based atomic clocks use the electromagnetic transitions in the hyperfine structure of caesium-133 atoms as a reference point. The first accurate caesium clock was built by Louis Essen in 1955 at the National Physical Laboratory in the UK. Caesium clocks have improved over the past half-century and are regarded as "the most accurate realization of a unit that mankind has yet achieved." These clocks measure frequency with an error of 2 to 3 parts in 1014, which corresponds to an accuracy of 2 nanoseconds per day, or one second in 1.4 million years. The latest versions are more accurate than 1 part in 1015, about 1 second in 20 million years. The caesium standard is the primary standard for standards-compliant time and frequency measurements. Caesium clocks regulate the timing of cell phone networks and the Internet. Definition of the second The second, symbol s, is the SI unit of time. The BIPM restated its definition at its 26th conference in 2018: "[The second] is defined by taking the fixed numerical value of the caesium frequency , the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be when expressed in the unit Hz, which is equal to s−1." Electric power and electronics Caesium vapour thermionic generators are low-power devices that convert heat energy to electrical energy. 
In the two-electrode vacuum tube converter, caesium neutralizes the space charge near the cathode and enhances the current flow. Caesium is also important for its photoemissive properties, converting light to electron flow. It is used in photoelectric cells because caesium-based cathodes, such as the intermetallic compound , have a low threshold voltage for emission of electrons. The range of photoemissive devices using caesium include optical character recognition devices, photomultiplier tubes, and video camera tubes. Nevertheless, germanium, rubidium, selenium, silicon, tellurium, and several other elements can be substituted for caesium in photosensitive materials. Caesium iodide (CsI), bromide (CsBr) and caesium fluoride (CsF) crystals are employed for scintillators in scintillation counters widely used in mineral exploration and particle physics research to detect gamma and X-ray radiation. Being a heavy element, caesium provides good stopping power with better detection. Caesium compounds may provide a faster response (CsF) and be less hygroscopic (CsI). Caesium vapour is used in many common magnetometers. The element is used as an internal standard in spectrophotometry. Like other alkali metals, caesium has a great affinity for oxygen and is used as a "getter" in vacuum tubes. Other uses of the metal include high-energy lasers, vapour glow lamps, and vapour rectifiers. Centrifugation fluids The high density of the caesium ion makes solutions of caesium chloride, caesium sulfate, and caesium trifluoroacetate () useful in molecular biology for density gradient ultracentrifugation. This technology is used primarily in the isolation of viral particles, subcellular organelles and fractions, and nucleic acids from biological samples. Chemical and medical use Relatively few chemical applications use caesium. Doping with caesium compounds enhances the effectiveness of several metal-ion catalysts for chemical synthesis, such as acrylic acid, anthraquinone, ethylene oxide, methanol, phthalic anhydride, styrene, methyl methacrylate monomers, and various olefins. It is also used in the catalytic conversion of sulfur dioxide into sulfur trioxide in the production of sulfuric acid. Caesium fluoride enjoys a niche use in organic chemistry as a base and as an anhydrous source of fluoride ion. Caesium salts sometimes replace potassium or sodium salts in organic synthesis, such as cyclization, esterification, and polymerization. Caesium has also been used in thermoluminescent radiation dosimetry (TLD): When exposed to radiation, it acquires crystal defects that, when heated, revert with emission of light proportionate to the received dose. Thus, measuring the light pulse with a photomultiplier tube can allow the accumulated radiation dose to be quantified. Nuclear and isotope applications Caesium-137 is a radioisotope commonly used as a gamma-emitter in industrial applications. Its advantages include a half-life of roughly 30 years, its availability from the nuclear fuel cycle, and having 137Ba as a stable end product. The high water solubility is a disadvantage which makes it incompatible with large pool irradiators for food and medical supplies. It has been used in agriculture, cancer treatment, and the sterilization of food, sewage sludge, and surgical equipment. 
Radioactive isotopes of caesium in radiation devices were used in the medical field to treat certain types of cancer, but emergence of better alternatives and the use of water-soluble caesium chloride in the sources, which could create wide-ranging contamination, gradually put some of these caesium sources out of use. Caesium-137 has been employed in a variety of industrial measurement gauges, including moisture, density, levelling, and thickness gauges. It has also been used in well logging devices for measuring the electron density of the rock formations, which is analogous to the bulk density of the formations. Caesium-137 has been used in hydrologic studies analogous to those with tritium. As a daughter product of fission bomb testing from the 1950s through the mid-1980s, caesium-137 was released into the atmosphere, where it was absorbed readily into solution. Known year-to-year variation within that period allows correlation with soil and sediment layers. Caesium-134, and to a lesser extent caesium-135, have also been used in hydrology to measure the caesium output by the nuclear power industry. While they are less prevalent than either caesium-133 or caesium-137, these bellwether isotopes are produced solely from anthropogenic sources. Other uses Caesium and mercury were used as a propellant in early ion engines designed for spacecraft propulsion on very long interplanetary or extraplanetary missions. The fuel was ionized by contact with a charged tungsten electrode. But corrosion by caesium on spacecraft components has pushed development in the direction of inert gas propellants, such as xenon, which are easier to handle in ground-based tests and do less potential damage to the spacecraft. Xenon was used in the experimental spacecraft Deep Space 1 launched in 1998. Nevertheless, field-emission electric propulsion thrusters that accelerate liquid metal ions such as caesium have been built. Caesium nitrate is used as an oxidizer and pyrotechnic colorant to burn silicon in infrared flares, such as the LUU-19 flare, because it emits much of its light in the near infrared spectrum. Caesium compounds may have been used as fuel additives to reduce the radar signature of exhaust plumes in the Lockheed A-12 CIA reconnaissance aircraft. Caesium and rubidium have been added as a carbonate to glass because they reduce electrical conductivity and improve stability and durability of fibre optics and night vision devices. Caesium fluoride or caesium aluminium fluoride are used in fluxes formulated for brazing aluminium alloys that contain magnesium. Magnetohydrodynamic (MHD) power-generating systems were researched, but failed to gain widespread acceptance. Caesium metal has also been considered as the working fluid in high-temperature Rankine cycle turboelectric generators. Caesium salts have been evaluated as antishock reagents following the administration of arsenical drugs. Because of their effect on heart rhythms, however, they are less likely to be used than potassium or rubidium salts. They have also been used to treat epilepsy. Caesium-133 can be laser cooled and used to probe fundamental and technological problems in quantum physics. It has a particularly convenient Feshbach spectrum to enable studies of ultracold atoms requiring tunable interactions. Health and safety hazards Nonradioactive caesium compounds are only mildly toxic, and nonradioactive caesium is not a significant environmental hazard. 
Because biochemical processes can confuse and substitute caesium with potassium, excess caesium can lead to hypokalemia, arrhythmia, and acute cardiac arrest, but such amounts would not ordinarily be encountered in natural sources. The median lethal dose (LD50) for caesium chloride in mice is 2.3 g per kilogram, which is comparable to the LD50 values of potassium chloride and sodium chloride. The principal use of nonradioactive caesium is as caesium formate in petroleum drilling fluids because it is much less toxic than alternatives, though it is more costly. Caesium metal is one of the most reactive elements and is highly explosive in the presence of water. The hydrogen gas produced by the reaction is heated by the thermal energy released at the same time, causing ignition and a violent explosion. This can occur with other alkali metals, but caesium is so potent that this explosive reaction can be triggered even by cold water. It is highly pyrophoric: the autoignition temperature of caesium is , and it ignites explosively in air to form caesium hydroxide and various oxides. Caesium hydroxide is a very strong base, and will rapidly corrode glass. The isotopes 134 and 137 are present in the biosphere in small amounts from human activities, differing by location. Radiocaesium does not accumulate in the body as readily as other fission products (such as radioiodine and radiostrontium). About 10% of absorbed radiocaesium washes out of the body relatively quickly in sweat and urine. The remaining 90% has a biological half-life between 50 and 150 days. Radiocaesium follows potassium and tends to accumulate in plant tissues, including fruits and vegetables. Plants vary widely in the absorption of caesium, sometimes displaying great resistance to it. It is also well-documented that mushrooms from contaminated forests accumulate radiocaesium (caesium-137) in the fungal sporocarps. Accumulation of caesium-137 in lakes has been a great concern after the Chernobyl disaster. Experiments with dogs showed that a single dose of 3.8 millicuries (140 MBq, 4.1 μg of caesium-137) per kilogram is lethal within three weeks; smaller amounts may cause infertility and cancer. The International Atomic Energy Agency and other sources have warned that radioactive materials, such as caesium-137, could be used in radiological dispersion devices, or "dirty bombs". See also Acerinox accident, a caesium-137 contamination accident in 1998 Goiânia accident, a major radioactive contamination incident in 1987 involving caesium-137 Kramatorsk radiological accident, a 137Cs lost-source incident between 1980 and 1989 Notes References External links Caesium or Cesium at The Periodic Table of Videos (University of Nottingham) View the reaction of Caesium (most reactive metal in the periodic table) with Fluorine (most reactive non-metal) courtesy of The Royal Institution. Alkali metals Chemical elements with body-centered cubic structure Chemical elements Glycine receptor agonists Reducing agents Articles containing video clips
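As a rough arithmetic check on the atomic clock figures quoted in the applications section (a fractional error of 2 to 3 parts in 10^14, described as about 2 nanoseconds per day or one second in 1.4 million years), the following minimal Python sketch converts a fractional frequency error into drift per day and into the time needed to accumulate one second of error; the variable names are illustrative.

```python
# Convert a fractional clock error into drift per day and years per second of error.
SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

for fractional_error in (2e-14, 3e-14):
    drift_ns_per_day = fractional_error * SECONDS_PER_DAY * 1e9
    years_for_one_second = 1.0 / fractional_error / SECONDS_PER_YEAR
    print(f"fractional error {fractional_error:.0e}: "
          f"{drift_ns_per_day:.1f} ns of drift per day, "
          f"one second of error in {years_for_one_second / 1e6:.1f} million years")
```

The output brackets the quoted values: roughly 1.7 to 2.6 ns of drift per day and one second of accumulated error in about 1.1 to 1.6 million years, consistent with the 2 ns per day and 1.4 million year figures in the text.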
https://en.wikipedia.org/wiki/Century
A century is a period of 100 years. Centuries are numbered ordinally in English and many other languages. The word century comes from the Latin centum, meaning one hundred. Century is sometimes abbreviated as c. A centennial or centenary is a hundredth anniversary, or a celebration of this, typically the remembrance of an event which took place a hundred years earlier. Start and end of centuries Although a century can mean any arbitrary period of 100 years, there are two viewpoints on the nature of standard centuries. One is based on strict construction, while the other is based on popular perception. According to the strict construction, the 1st century AD began with AD 1 and ended with AD 100, the 2nd century spanned the years 101 to 200, and the same pattern continues onward. In this model, the n-th century starts with the year that ends with "01" and ends with the year that ends with "00"; for example, the 20th century comprises the years 1901 to 2000 in strict usage. In popular perception and practice, centuries are structured by grouping years based on sharing the 'hundreds' digit(s). In this model, the n-th century starts with the year that ends in "00" and ends with the year ending in "99"; for example, the years 1900 to 1999, in popular culture, constitute the 20th century. (This is similar to the grouping of "0-to-9 decades", which share the 'tens' digit.) Both conventions are illustrated in the short sketch at the end of this entry. To facilitate calendrical calculations by computer, the astronomical year numbering and ISO 8601 systems both contain a year zero, with the astronomical year 0 corresponding to the year 1 BCE, the astronomical year −1 corresponding to 2 BCE, and so on. Alternative naming systems Informally, years may be referred to in groups based on the hundreds part of the year. In this system, the years 1900–1999 are referred to as the nineteen hundreds (1900s). Aside from English usage, this system is used in Swedish, Danish, Norwegian, Icelandic, Finnish and Hungarian; the corresponding Swedish, Danish, Norwegian, Finnish and Hungarian terms refer unambiguously to the years 1900–1999. Italian also has a similar system, but it only expresses the hundreds and omits the word for 'thousand'. This system mainly functions from the 11th to the 20th century: Quattrocento (that is, 'the four hundred', the 15th century), Cinquecento (that is, 'the five hundred', the 16th century). These terms are often used in other languages when referring to the history of Italy. Similar dating units in other calendar systems While the century has been commonly used in the West, other cultures and calendars have utilized differently sized groups of years in a similar manner. The Hindu calendar, in particular, summarizes its years into groups of 60, while the Aztec calendar considers groups of 52. See also Age of Discovery Ancient history Before Christ Common Era Decade List of decades, centuries, and millennia Lustrum Middle Ages Millennium Modern era Saeculum Year Notes References Bibliography The Battle of the Centuries, Ruth Freitag, U.S. Government Printing Office. Available from the Superintendent of Documents, P.O. Box 371954, Pittsburgh, PA 15250-7954. Cite stock no. 030-001-00153-9. Retrieved 3 March 2019. 100 (number) Units of time
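The following minimal Python sketch encodes the two century conventions described under "Start and end of centuries", together with the astronomical year numbering used by ISO 8601; the function names are illustrative, and only years in the Common Era are handled by the century functions.

```python
# Two ways of assigning a year to a century, plus BCE-to-astronomical conversion.

def century_strict(year: int) -> int:
    """Strict construction: years 1-100 form the 1st century, 1901-2000 the 20th."""
    return (year + 99) // 100

def century_popular(year: int) -> int:
    """Popular usage: years sharing the 'hundreds' digits, e.g. 1900-1999 = 20th."""
    return year // 100 + 1

def astronomical_year(year_bce: int) -> int:
    """Astronomical numbering (with a year zero): 1 BCE -> 0, 2 BCE -> -1, ..."""
    return 1 - year_bce

if __name__ == "__main__":
    for y in (1900, 1901, 1999, 2000, 2001):
        print(y, century_strict(y), century_popular(y))
    print("2 BCE ->", astronomical_year(2))
```

For the year 2000, for example, the strict convention returns the 20th century while the popular convention returns the 21st; every year from 1901 to 1999 falls in the 20th century under both conventions.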
https://en.wikipedia.org/wiki/Carabiner
A carabiner or karabiner (), often shortened to biner or to crab, colloquially known as (climbing) clip, is a specialized type of shackle, a metal loop with a spring-loaded gate used to quickly and reversibly connect components, most notably in safety-critical systems. The word is a shortened form of Karabinerhaken (or also short Karabiner), a German phrase for a "carbine rifle hook" used by a carbine rifleman, or carabinier, to attach his carbine to a belt or bandolier. Use Carabiners are widely used in rope-intensive activities such as climbing, fall arrest systems, arboriculture, caving, sailing, hot-air ballooning, rope rescue, construction, industrial rope work, window cleaning, whitewater rescue, and acrobatics. They are predominantly made from both steel and aluminium. Those used in sports tend to be of a lighter weight than those used in commercial applications and rope rescue. Often referred to as carabiner-style or as mini-carabiners, carabiner keyrings and other light-use clips of similar style and design have also become popular. Most are stamped with a "not for climbing" or similar warning due to a common lack of load-testing and safety standards in manufacturing. While any metal link with a spring-loaded gate is technically a carabiner, the strict usage among the climbing community specifically refers only to devices manufactured and tested for load-bearing in safety-critical systems like rock and mountain climbing, typically rated to 20 kN or more. Carabiners on hot-air balloons are used to connect the envelope to the basket and are rated at 2.5, 3, or 4 tonnes. Load-bearing screw-gate carabiners are used to connect the diver's umbilical to the surface supplied diver's harness. They are usually rated for a safe working load of 5 kN or more (equivalent to a weight in excess of approximately 500 kg). Types Shape Carabiners come in four characteristic shapes: Oval: Symmetric. Most basic and utilitarian. Smooth regular curves are gentle on equipment and allow easy repositioning of loads. Their greatest disadvantage is that a load is shared equally on both the strong solid spine and the weaker gated axis. D: Asymmetric shape transfers the majority of the load on to the spine, the carabiner's strongest axis. Offset-D: Variant of a D with a greater asymmetry, allowing for a wider gate opening. Pear/HMS: Wider and rounder shape at the top than offset-D's, and typically larger. Used for belaying with a munter hitch, and with some types of belay device. The largest HMS carabiners can also be used for rappelling with a munter hitch (the size is needed to accommodate the hitch with two strands of rope). These are usually the heaviest carabiners. Locking mechanisms Carabiners fall into three broad locking categories: non-locking, manual locking, and auto locking. Non-locking Non-locking carabiners (or snap-links) have a sprung swinging gate that accepts a rope, webbing sling, or other hardware. Rock climbers frequently connect two non-locking carabiners with a short length of webbing to create a quickdraw (an extender). Two gate types are common: Solid gate: The more traditional carabiner design, incorporating a solid metal gate with separate pin and spring mechanisms. Most modern carabiners feature a 'key-lock nose shape and gate opening, which is less prone to snagging than traditional notch and pin design. Most locking carabiners are based on the solid gate design. Wire gate: A single piece of bent spring-steel wire forms the gate. 
Wire gate carabiners are significantly lighter than solid gates, with roughly the same strength. Wire gates are less prone to icing up than solid gates, an advantage in Alpine mountaineering and ice climbing. The reduced gate mass makes their wire bales less prone to "gate flutter", a dangerous condition created when the carabiner suddenly impacts rock or other hard surfaces during a fall, and the gate opens momentarily due to momentum (and both lowers the breaking strength of the carabiner when open, and potentially allows the rope to escape). Simple wiregate designs feature a notch that can snag objects (similar to original solid gate designs), but newer designs feature a shroud or guide wires around the "hooked" part of the carabiner nose to prevent snagging. Both solid and wire gate carabiners can be either "straight gate" or "bent gate". Bent-gate carabiners are easier to clip a rope into using only one hand, and so are often used for the rope-end carabiner of quickdraws and alpine draws used for lead climbing. Locking Locking carabiners have the same general shape as non-locking carabiners, but have an additional mechanism securing the gate to prevent unintentional opening during use. These mechanisms may be either threaded sleeves ("screw-lock"), spring-loaded sleeves ("twist-lock"), magnetic levers ("Magnetron"), other spring loaded unlocking levers or opposing double spring loaded gates ("twin-gate"). Manual Screw-lock (or screw gate): Have a threaded sleeve over the gate which must be engaged and disengaged manually. They have fewer moving parts than spring-loaded mechanisms, are less prone to malfunctioning due to contamination or component fatigue, and are easier to employ one-handed. They, however, require more total effort and are more time-consuming than pull-lock, twist-lock or lever-lock. Auto-locking Twist-lock, push-lock, twist-and-push-lock: Have a security sleeve over the gate which must be manually rotated and/or pulled to disengage, but which springs automatically to locked position upon release. They offer the advantage of re-engaging without additional user input, but being spring-loaded are prone to both spring fatigue and their more complex mechanisms becoming balky from dirt, ice, or other contamination. They are also difficult to open one-handed and with gloves on, and sometimes jam, getting stuck after being tightened under load, and being very hard to undo once the load is removed. Multiple-levers: Having at least two spring loaded levers that are each operated with one hand. Magnetic: Have two small levers with embedded magnets on either side of the locking gate which must be pushed towards each other or pinched simultaneously to unlock. Upon release the levers pull shut and into the locked position against a small steel insert in the carabiner nose. With the gate open the magnets in the two levers repel each other so they do not lock or stick together, which might prevent the gate from closing properly. Advantages are very easy one-handed operation, re-engaging without additional user input and few mechanical parts that can fail. Double-Gate: Have two opposed overlapping gates at the opening which prevent a rope or anchor from inadvertently passing through the gate in either direction. Gates may only be opened by pushing outwards from in between towards either direction. The carabiner can therefore be opened by splitting the gates with a fingertip, allowing easy one hand operation. 
The likelihood of a rope under tension to split the gates is therefore practically none. The lack of a rotating lock prevents a rolling knot, such as the Munter hitch, from unlocking the gate and passing through, giving a measure of inherent safety in use and reducing mechanical complexity. Certification Europe Recreation: Carabiners sold for use in climbing in Europe must conform to standard EN 12275:1998 "Mountaineering equipment – Connectors – Safety requirements and test methods", which governs testing protocols, rated strengths, and markings. A breaking strength of at least 20 kN (20,000 newtons = approximately 2040 kilograms of force which is significantly more than the weight of a small car) with the gate closed and 7 kN with the gate open is the standard for most climbing applications, although requirements vary depending on the activity. Carabiners are marked on the side with single letters showing their intended area of use, for example, K (via ferrata), B (base), and H (for belaying with an Italian or Munter hitch). Industry: Carabiners used for access in commercial and industrial environments within Europe must comply with EN 362:2004 "Personal protective equipment against falls from a height. Connectors." The minimum gate closed breaking strength of a carabiner conforming with EN 362:2004 is nominally the same as that of EN 12275:1998 at around 20 kN. Carabiners complying with both EN 12275:1998 and EN 362:2004 are available. United States Climbing and mountaineering: Minimum breaking strength (MBS) requirements and calculations for climbing and mountaineering carabiners in the USA are set out in ASTM Standard F1774. This standard calls for a MBS of 20kN on the long axis, and 7kN on the short axis (cross load). Rescue: Carabiners used for rescue are addressed in ASTM F1956. This document addresses two classifications of carabiners, light use and heavy-duty. Light use carabiners are the most widely used, and are commonly found in applications including technical rope rescue, mountain rescue, cave rescue, cliff rescue, military, SWAT, and even by some non-NFPA fire departments. ASTM requirements for light use carabiners are 27 kN MBS on the long axis, 7kN on the short axis. Requirements for the lesser-used heavy duty rescue carabiners are 40kN MBS long axis, 10.68kN short axis. Fire rescue: Minimum breaking strength requirements and calculations for rescue carabiners used by NFPA compliant agencies are set out in National Fire Protection Association standard 1983-2012 edition Fire Service Life Safety Rope and Equipment. The standard defines two classes of rescue carabiners. Technical use rescue carabiners are required to have minimum breaking strengths of 27 kN gate closed, 7 kN gate open and 7 kN minor axis. General use rescue carabiners are required to have minimum breaking strengths of 40 kN gate closed, 11 kN gate open and 11 kN minor axis. Testing procedures for rescue carabiners are set out in ASTM International standard F 1956 Standard Specification of Rescue Carabiners. Fall protection: Carabiners used for fall protection in US industry are classified as "connectors" and are required to meet Occupational Safety and Health Administration standard 1910.66 App C Personal Fall Arrest System which specifies "drop forged, pressed or formed steel, or made of equivalent materials" and a minimum breaking strength of . 
American National Standards Institute/American Society of Safety Engineers standard ANSI Z359.1-2007 Safety Requirement for Personal Fall Arrest Systems, Subsystems and Components, section 3.2.1.4 (for snap hooks and carabiners) is a voluntary consensus standard. This standard requires that all connectors/carabiners support a specified minimum breaking strength (MBS) and feature an auto-locking gate mechanism that must itself meet a specified minimum breaking strength (MBS). See also Maillon Lobster clasp Rock-climbing equipment Glossary of climbing terms References Climbing equipment Caving equipment German inventions Mountaineering equipment Fasteners
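The minimum breaking strengths quoted in the certification section are spread across several standards. The following minimal Python sketch gathers those figures (in kilonewtons, as given above) and converts each into the approximate static mass whose weight it equals, consistent with the note above that 20 kN corresponds to roughly 2040 kilograms of force; the dictionary layout and helper name are illustrative.

```python
# Minimum breaking strength figures (kN) quoted in the certification section,
# converted to the approximate static mass whose weight equals each force.

STANDARD_GRAVITY = 9.80665  # m/s^2

REQUIREMENTS_KN = {
    "EN 12275 (climbing)":       {"major axis": 20, "gate open": 7},
    "ASTM F1774 (climbing)":     {"major axis": 20, "minor axis": 7},
    "ASTM F1956 (light rescue)": {"major axis": 27, "minor axis": 7},
    "ASTM F1956 (heavy rescue)": {"major axis": 40, "minor axis": 10.68},
    "NFPA 1983 (technical use)": {"major axis": 27, "gate open": 7, "minor axis": 7},
    "NFPA 1983 (general use)":   {"major axis": 40, "gate open": 11, "minor axis": 11},
}

def kn_to_kgf(kilonewtons: float) -> float:
    """Approximate static mass (kg) whose weight equals the given force."""
    return kilonewtons * 1000 / STANDARD_GRAVITY

for standard, loads in REQUIREMENTS_KN.items():
    summary = ", ".join(f"{axis}: {kn} kN (~{kn_to_kgf(kn):,.0f} kg)"
                        for axis, kn in loads.items())
    print(f"{standard}: {summary}")
```

With standard gravity, 20 kN comes out at about 2,039 kg and the 40 kN general-use rescue rating at roughly 4,080 kg.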
https://en.wikipedia.org/wiki/Chalcogen
The chalcogens (ore forming) ( ) are the chemical elements in group 16 of the periodic table. This group is also known as the oxygen family. Group 16 consists of the elements oxygen (O), sulfur (S), selenium (Se), tellurium (Te), and the radioactive elements polonium (Po) and livermorium (Lv). Often, oxygen is treated separately from the other chalcogens, sometimes even excluded from the scope of the term "chalcogen" altogether, due to its very different chemical behavior from sulfur, selenium, tellurium, and polonium. The word "chalcogen" is derived from a combination of the Greek word () principally meaning copper (the term was also used for bronze/brass, any metal in the poetic sense, ore or coin), and the Latinized Greek word , meaning born or produced. Sulfur has been known since antiquity, and oxygen was recognized as an element in the 18th century. Selenium, tellurium and polonium were discovered in the 19th century, and livermorium in 2000. All of the chalcogens have six valence electrons, leaving them two electrons short of a full outer shell. Their most common oxidation states are −2, +2, +4, and +6. They have relatively low atomic radii, especially the lighter ones. Lighter chalcogens are typically nontoxic in their elemental form, and are often critical to life, while the heavier chalcogens are typically toxic. All of the naturally occurring chalcogens have some role in biological functions, either as a nutrient or a toxin. Selenium is an important nutrient (among others as a building block of selenocysteine) but is also commonly toxic. Tellurium often has unpleasant effects (although some organisms can use it), and polonium (especially the isotope polonium-210) is always harmful as a result of its radioactivity. Sulfur has more than 20 allotropes, oxygen has nine, selenium has at least eight, polonium has two, and only one crystal structure of tellurium has so far been discovered. There are numerous organic chalcogen compounds. Not counting oxygen, organic sulfur compounds are generally the most common, followed by organic selenium compounds and organic tellurium compounds. This trend also occurs with chalcogen pnictides and compounds containing chalcogens and carbon group elements. Oxygen is generally obtained by separation of air into nitrogen and oxygen. Sulfur is extracted from oil and natural gas. Selenium and tellurium are produced as byproducts of copper refining. Polonium is most available in naturally occurring actinide-containing materials. Livermorium has been synthesized in particle accelerators. The primary use of elemental oxygen is in steelmaking. Sulfur is mostly converted into sulfuric acid, which is heavily used in the chemical industry. Selenium's most common application is glassmaking. Tellurium compounds are mostly used in optical disks, electronic devices, and solar cells. Some of polonium's applications are due to its radioactivity. Properties Atomic and physical Chalcogens show similar patterns in electron configuration, especially in the outermost shells, where they all have the same number of valence electrons, resulting in similar trends in chemical behavior: All chalcogens have six valence electrons. All of the solid, stable chalcogens are soft and do not conduct heat well. Electronegativity decreases towards the chalcogens with higher atomic numbers. Density, melting and boiling points, and atomic and ionic radii tend to increase towards the chalcogens with higher atomic numbers. 
Isotopes Out of the six known chalcogens, one (oxygen) has an atomic number equal to a nuclear magic number, which means that its atomic nuclei tend to have increased stability towards radioactive decay. Oxygen has three stable isotopes, and 14 unstable ones. Sulfur has four stable isotopes, 20 radioactive ones, and one isomer. Selenium has six observationally stable or nearly stable isotopes, 26 radioactive isotopes, and 9 isomers. Tellurium has eight stable or nearly stable isotopes, 31 unstable ones, and 17 isomers. Polonium has 42 isotopes, none of which are stable. It has an additional 28 isomers. In addition to the stable isotopes, some radioactive chalcogen isotopes occur in nature, because they are decay products (such as 210Po), because they are primordial (such as 82Se), because of cosmic ray spallation, or because of nuclear fission of uranium. Livermorium isotopes 290Lv through 293Lv have been discovered; the most stable livermorium isotope is 293Lv, which has a half-life of 0.061 seconds. Among the lighter chalcogens (oxygen and sulfur), the most neutron-poor isotopes undergo proton emission, the moderately neutron-poor isotopes undergo electron capture or β+ decay, the moderately neutron-rich isotopes undergo β− decay, and the most neutron-rich isotopes undergo neutron emission. The middle chalcogens (selenium and tellurium) have similar decay tendencies as the lighter chalcogens, but no proton-emitting isotopes have been observed, and some of the most neutron-deficient isotopes of tellurium undergo alpha decay. Polonium isotopes tend to decay via alpha or beta decay. Isotopes with nonzero nuclear spins are more abundant in nature among the chalcogens selenium and tellurium than they are with sulfur. Allotropes Oxygen's most common allotrope is diatomic oxygen, or O2, a reactive paramagnetic molecule that is ubiquitous to aerobic organisms and has a blue color in its liquid state. Another allotrope is O3, or ozone, which is three oxygen atoms bonded together in a bent formation. There is also an allotrope called tetraoxygen, or O4, and six allotropes of solid oxygen including "red oxygen", which has the formula O8. Sulfur has over 20 known allotropes, which is more than any other element except carbon. The most common allotropes are in the form of eight-atom rings, but other molecular allotropes that contain as few as two atoms or as many as 20 are known. Other notable sulfur allotropes include rhombic sulfur and monoclinic sulfur. Rhombic sulfur is the more stable of the two allotropes. Monoclinic sulfur takes the form of long needles and is formed when liquid sulfur is cooled to slightly below its melting point. The atoms in liquid sulfur are generally in the form of long chains, but above 190 °C, the chains begin to break down. If liquid sulfur above 190 °C is frozen very rapidly, the resulting sulfur is amorphous or "plastic" sulfur. Gaseous sulfur is a mixture of diatomic sulfur (S2) and 8-atom rings. Selenium has at least eight distinct allotropes. The gray allotrope, commonly referred to as the "metallic" allotrope, despite not being a metal, is stable and has a hexagonal crystal structure. The gray allotrope of selenium is soft, with a Mohs hardness of 2, and brittle. Four other allotropes of selenium are metastable. These include two monoclinic red allotropes and two amorphous allotropes, one of which is red and one of which is black. The red allotrope converts to the black allotrope in the presence of heat.
The gray allotrope of selenium is made from spirals of selenium atoms, while one of the red allotropes is made of stacks of selenium rings (Se8). Tellurium is not known to have any allotropes, although its typical form is hexagonal. Polonium has two allotropes, which are known as α-polonium and β-polonium. α-polonium has a cubic crystal structure and converts to the rhombohedral β-polonium at 36 °C. The chalcogens have varying crystal structures. Oxygen's crystal structure is monoclinic, sulfur's is orthorhombic, selenium and tellurium have the hexagonal crystal structure, while polonium has a cubic crystal structure. Chemical Oxygen, sulfur, and selenium are nonmetals, and tellurium is a metalloid, meaning that its chemical properties are between those of a metal and those of a nonmetal. It is not certain whether polonium is a metal or a metalloid. Some sources refer to polonium as a metalloid, although it has some metallic properties. Also, some allotropes of selenium display characteristics of a metalloid, even though selenium is usually considered a nonmetal. Even though oxygen is a chalcogen, its chemical properties are different from those of other chalcogens. One reason for this is that the heavier chalcogens have vacant d-orbitals. Oxygen's electronegativity is also much higher than those of the other chalcogens. This makes oxygen's electric polarizability several times lower than those of the other chalcogens. For covalent bonding a chalcogen may accept two electrons according to the octet rule, leaving two lone pairs. When an atom forms two single bonds, the bonds form an angle between 90° and 120°. In 1+ cations, a chalcogen forms three molecular orbitals arranged in a trigonal pyramidal fashion and one lone pair. Double bonds are also common in chalcogen compounds, for example in chalcogenates (see below). The oxidation number of the most common chalcogen compounds with positive metals is −2. However, the tendency for chalcogens to form compounds in the −2 state decreases towards the heavier chalcogens. Other oxidation numbers, such as −1 in pyrite and peroxide, do occur. The highest formal oxidation number is +6. This oxidation number is found in sulfates, selenates, tellurates, polonates, and their corresponding acids, such as sulfuric acid. Oxygen is the most electronegative element except for fluorine, and forms compounds with almost all of the chemical elements, including some of the noble gases. It commonly bonds with many metals and metalloids to form oxides, including iron oxide, titanium oxide, and silicon oxide. Oxygen's most common oxidation state is −2, and the oxidation state −1 is also relatively common. With hydrogen it forms water and hydrogen peroxide. Organic oxygen compounds are ubiquitous in organic chemistry. Sulfur's oxidation states are −2, +2, +4, and +6. Sulfur-containing analogs of oxygen compounds often have the prefix thio-. Sulfur's chemistry is similar to oxygen's in many ways. One difference is that sulfur-sulfur double bonds are far weaker than oxygen-oxygen double bonds, but sulfur-sulfur single bonds are stronger than oxygen-oxygen single bonds. Organic sulfur compounds such as thiols have a strong specific smell, and a few are utilized by some organisms. Selenium's oxidation states are −2, +4, and +6. Selenium, like most chalcogens, bonds with oxygen. There are some organic selenium compounds, such as selenoproteins. Tellurium's oxidation states are −2, +2, +4, and +6.
Tellurium forms the oxides tellurium monoxide, tellurium dioxide, and tellurium trioxide. Polonium's oxidation states are +2 and +4. There are many acids containing chalcogens, including sulfuric acid, sulfurous acid, selenic acid, and telluric acid. All hydrogen chalcogenides are toxic except for water. Oxygen ions often come in the forms of oxide ions, peroxide ions, and hydroxide ions. Sulfur ions generally come in the form of sulfides, bisulfides, sulfites, sulfates, and thiosulfates. Selenium ions usually come in the form of selenides, selenites, and selenates. Tellurium ions often come in the form of tellurates. Molecules containing metal bonded to chalcogens are common as minerals. For example, pyrite (FeS2) is an iron ore, and the rare mineral calaverite is a ditelluride of gold. Although all group 16 elements of the periodic table, including oxygen, can be defined as chalcogens, oxygen and oxides are usually distinguished from chalcogens and chalcogenides. The term chalcogenide is more commonly reserved for sulfides, selenides, and tellurides, rather than for oxides. Except for polonium, the chalcogens are all fairly similar to each other chemically. They all form X2− ions when reacting with electropositive metals. Sulfide minerals and analogous compounds produce gases upon reaction with oxygen. Compounds With halogens Chalcogens also form compounds with halogens known as chalcohalides, or chalcogen halides. The majority of simple chalcogen halides are well-known and widely used as chemical reagents. However, more complicated chalcogen halides, such as sulfenyl, sulfonyl, and sulfuryl halides, are less well known to science. Out of the compounds consisting purely of chalcogens and halogens, there are a total of 13 chalcogen fluorides, nine chalcogen chlorides, eight chalcogen bromides, and six chalcogen iodides that are known. The heavier chalcogen halides often have significant molecular interactions. Sulfur fluorides with low valences are fairly unstable and little is known about their properties. However, sulfur fluorides with high valences, such as sulfur hexafluoride, are stable and well-known. Sulfur tetrafluoride is also a well-known sulfur fluoride. Certain selenium fluorides, such as selenium difluoride, have been produced in small amounts. The crystal structures of both selenium tetrafluoride and tellurium tetrafluoride are known. Chalcogen chlorides and bromides have also been explored. In particular, selenium dichloride and sulfur dichloride can react to form organic selenium compounds. Dichalcogen dihalides, such as Se2Cl2, are also known to exist. There are also mixed chalcogen-halogen compounds. These include SeSX, with X being chlorine or bromine. Such compounds can form in mixtures of sulfur dichloride and selenium halides. These compounds have been fairly recently structurally characterized, as of 2008. In general, diselenium and disulfur chlorides and bromides are useful chemical reagents. Chalcogen halides with attached metal atoms are soluble in organic solutions. One example of such a compound is . Unlike selenium chlorides and bromides, selenium iodides have not been isolated, as of 2008, although it is likely that they occur in solution. Diselenium diiodide, however, does occur in equilibrium with selenium atoms and iodine molecules. Some tellurium halides with low valences form polymers when in the solid state.
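The formal oxidation numbers quoted above can be checked with simple electron bookkeeping. As a worked example (ordinary arithmetic, not an addition from the source), assigning +1 to each hydrogen and −2 to each oxygen in sulfuric acid, H2SO4, requires 2(+1) + x + 4(−2) = 0, so x = +6 for sulfur, the highest formal oxidation state and the one found in sulfates. Applying the same bookkeeping to pyrite, FeS2, with iron taken as +2, gives each sulfur an oxidation number of −1, the exceptional value noted above.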
These tellurium halides can be synthesized by the reduction of pure tellurium with superhydride and reacting the resulting product with tellurium tetrahalides. Ditellurium dihalides tend to get less stable as the halides become lower in atomic number and atomic mass. Tellurium also forms iodides with even fewer iodine atoms than diiodides. These include TeI and Te2I. These compounds have extended structures in the solid state. Halogens and chalcogens can also form halochalcogenate anions. Organic Alcohols, phenols and other similar compounds contain oxygen. However, in thiols, selenols and tellurols; sulfur, selenium, and tellurium replace oxygen. Thiols are better known than selenols or tellurols. Aside from alcohols, thiols are the most stable chalcogenols and tellurols are the least stable, being unstable in heat or light. Other organic chalcogen compounds include thioethers, selenoethers and telluroethers. Some of these, such as dimethyl sulfide, diethyl sulfide, and dipropyl sulfide are commercially available. Selenoethers are in the form of R2Se or RSeR. Telluroethers such as dimethyl telluride are typically prepared in the same way as thioethers and selenoethers. Organic chalcogen compounds, especially organic sulfur compounds, have the tendency to smell unpleasant. Dimethyl telluride also smells unpleasant, and selenophenol is renowned for its "metaphysical stench". There are also thioketones, selenoketones, and telluroketones. Out of these, thioketones are the most well-studied with 80% of chalcogenoketones papers being about them. Selenoketones make up 16% of such papers and telluroketones make up 4% of them. Thioketones have well-studied non-linear electric and photophysical properties. Selenoketones are less stable than thioketones and telluroketones are less stable than selenoketones. Telluroketones have the highest level of polarity of chalcogenoketones. With metals There is a very large number of metal chalcogenides. There are also ternary compounds containing alkali metals and transition metals. Highly metal-rich metal chalcogenides, such as Lu7Te and Lu8Te have domains of the metal's crystal lattice containing chalcogen atoms. While these compounds do exist, analogous chemicals that contain lanthanum, praseodymium, gadolinium, holmium, terbium, or ytterbium have not been discovered, as of 2008. The boron group metals aluminum, gallium, and indium also form bonds to chalcogens. The Ti3+ ion forms chalcogenide dimers such as TiTl5Se8. Metal chalcogenide dimers also occur as lower tellurides, such as Zr5Te6. Elemental chalcogens react with certain lanthanide compounds to form lanthanide clusters rich in chalcogens. Uranium(IV) chalcogenol compounds also exist. There are also transition metal chalcogenols which have potential to serve as catalysts and stabilize nanoparticles. With pnictogens Compounds with chalcogen-phosphorus bonds have been explored for more than 200 years. These compounds include unsophisticated phosphorus chalcogenides as well as large molecules with biological roles and phosphorus-chalcogen compounds with metal clusters. These compounds have numerous applications, including organo-phosphate insecticides, strike-anywhere matches and quantum dots. A total of 130,000 compounds with at least one phosphorus-sulfur bond, 6000 compounds with at least one phosphorus-selenium bond, and 350 compounds with at least one phosphorus-tellurium bond have been discovered. 
The decrease in the number of chalcogen-phosphorus compounds further down the periodic table is due to diminishing bond strength. Such compounds tend to have at least one phosphorus atom in the center, surrounded by four chalcogens and side chains. However, some phosphorus-chalcogen compounds also contain hydrogen (such as secondary phosphine chalcogenides) or nitrogen (such as dichalcogenoimidodiphosphates). Phosphorus selenides are typically harder to handle than phosphorus sulfides, and compounds in the form PxTey have not been discovered. Chalcogens also bond with other pnictogens, such as arsenic, antimony, and bismuth. Heavier chalcogen pnictides tend to form ribbon-like polymers instead of individual molecules. Chemical formulas of these compounds include Bi2S3 and Sb2Se3. Ternary chalcogen pnictides are also known. Examples of these include P4O6Se and P3SbS3. Salts containing chalcogens and pnictogens also exist. Almost all chalcogen pnictide salts are in the form of [PnxE4x]3−, where Pn is a pnictogen and E is a chalcogen. Tertiary phosphines can react with chalcogens to form compounds in the form of R3PE, where E is a chalcogen. When E is sulfur, these compounds are relatively stable, but they are less so when E is selenium or tellurium. Similarly, secondary phosphines can react with chalcogens to form secondary phosphine chalcogenides. However, these compounds are in a state of equilibrium with chalcogenophosphinous acid. Secondary phosphine chalcogenides are weak acids. Binary compounds consisting of antimony or arsenic and a chalcogen also exist. These compounds tend to be colorful and can be created by a reaction of the constituent elements at elevated temperatures. Other Chalcogens form single bonds and double bonds with carbon group elements other than carbon, such as silicon, germanium, and tin. Such compounds typically form from a reaction of carbon group halides and chalcogenol salts or chalcogenol bases. Cyclic compounds with chalcogens, carbon group elements, and boron atoms exist, and occur from the reaction of boron dichalcogenates and carbon group metal halides. Compounds in the form of M-E, where M is silicon, germanium, or tin, and E is sulfur, selenium, or tellurium, have been discovered. These form when carbon group hydrides react or when heavier versions of carbenes react. Sulfur and tellurium can bond with organic compounds containing both silicon and phosphorus. All of the chalcogens form hydrides. In some cases this occurs with chalcogens bonding with two hydrogen atoms. However, tellurium hydride and polonium hydride are both volatile and highly labile. Also, oxygen can bond to hydrogen in a 1:1 ratio as in hydrogen peroxide, but this compound is unstable. Chalcogens form a number of interchalcogens. For instance, sulfur forms the toxic sulfur dioxide and sulfur trioxide. Tellurium also forms oxides. There are some chalcogen sulfides as well. These include selenium sulfide, an ingredient in some shampoos. Since 1990, a number of borides with chalcogens bonded to them have been detected. The chalcogens in these compounds are mostly sulfur, although some do contain selenium instead. One such chalcogen boride consists of two molecules of dimethyl sulfide attached to a boron-hydrogen molecule. Other important boron-chalcogen compounds include macropolyhedral systems. Such compounds tend to feature sulfur as the chalcogen. There are also chalcogen borides with two, three, or four chalcogens.
Many of these contain sulfur but some, such as Na2B2Se7 contain selenium instead. History Early discoveries Sulfur has been known since ancient times and is mentioned in the Bible fifteen times. It was known to the ancient Greeks and commonly mined by the ancient Romans. It was also historically used as a component of Greek fire. In the Middle Ages, it was a key part of alchemical experiments. In the 1700s and 1800s, scientists Joseph Louis Gay-Lussac and Louis-Jacques Thénard proved sulfur to be a chemical element. Early attempts to separate oxygen from air were hampered by the fact that air was thought of as a single element up to the 17th and 18th centuries. Robert Hooke, Mikhail Lomonosov, Ole Borch, and Pierre Bayden all successfully created oxygen, but did not realize it at the time. Oxygen was discovered by Joseph Priestley in 1774 when he focused sunlight on a sample of mercuric oxide and collected the resulting gas. Carl Wilhelm Scheele had also created oxygen in 1771 by the same method, but Scheele did not publish his results until 1777. Tellurium was first discovered in 1783 by Franz Joseph Müller von Reichenstein. He discovered tellurium in a sample of what is now known as calaverite. Müller assumed at first that the sample was pure antimony, but tests he ran on the sample did not agree with this. Muller then guessed that the sample was bismuth sulfide, but tests confirmed that the sample was not that. For some years, Muller pondered the problem. Eventually he realized that the sample was gold bonded with an unknown element. In 1796, Müller sent part of the sample to the German chemist Martin Klaproth, who purified the undiscovered element. Klaproth decided to call the element tellurium after the Latin word for earth. Selenium was discovered in 1817 by Jöns Jacob Berzelius. Berzelius noticed a reddish-brown sediment at a sulfuric acid manufacturing plant. The sample was thought to contain arsenic. Berzelius initially thought that the sediment contained tellurium, but came to realize that it also contained a new element, which he named selenium after the Greek moon goddess Selene. Periodic table placing Three of the chalcogens (sulfur, selenium, and tellurium) were part of the discovery of periodicity, as they are among a series of triads of elements in the same group that were noted by Johann Wolfgang Döbereiner as having similar properties. Around 1865 John Newlands produced a series of papers where he listed the elements in order of increasing atomic weight and similar physical and chemical properties that recurred at intervals of eight; he likened such periodicity to the octaves of music. His version included a "group b" consisting of oxygen, sulfur, selenium, tellurium, and osmium. After 1869, Dmitri Mendeleev proposed his periodic table placing oxygen at the top of "group VI" above sulfur, selenium, and tellurium. Chromium, molybdenum, tungsten, and uranium were sometimes included in this group, but they would be later rearranged as part of group VIB; uranium would later be moved to the actinide series. Oxygen, along with sulfur, selenium, tellurium, and later polonium would be grouped in group VIA, until the group's name was changed to group 16 in 1988. Modern discoveries In the late 19th century, Marie Curie and Pierre Curie discovered that a sample of pitchblende was emitting four times as much radioactivity as could be explained by the presence of uranium alone. 
The Curies gathered several tons of pitchblende and refined it for several months until they had a pure sample of polonium. The discovery officially took place in 1898. Prior to the invention of particle accelerators, the only way to produce polonium was to extract it over several months from uranium ore. The first attempt at creating livermorium was from 1976 to 1977 at the LBNL, whose researchers bombarded curium-248 with calcium-48 but were not successful. After several failed attempts in 1977, 1998, and 1999 by research groups in Russia, Germany, and the US, livermorium was created successfully in 2000 at the Joint Institute for Nuclear Research by bombarding curium-248 atoms with calcium-48 atoms. The element was known as ununhexium until it was officially named livermorium in 2012. Names and etymology In the 19th century, Jöns Jacob Berzelius suggested calling the elements in group 16 "amphigens", as the elements in the group formed amphid salts (salts of oxyacids, formerly regarded as composed of two oxides, an acidic and a basic oxide). The term received some use in the early 1800s but is now obsolete. The name chalcogen comes from a Greek word literally meaning "copper" and a Greek word meaning born, gender, or kindle. It was first used in 1932 by Wilhelm Biltz's group at Leibniz University Hannover, where it was proposed by Werner Fischer. The word "chalcogen" gained popularity in Germany during the 1930s because the term was analogous to "halogen". Although the literal meanings of the modern Greek words imply that chalcogen means "copper-former", this is misleading because the chalcogens have nothing to do with copper in particular. "Ore-former" has been suggested as a better translation, as the vast majority of metal ores are chalcogenides and the word in ancient Greek was associated with metals and metal-bearing rock in general; copper, and its alloy bronze, was one of the first metals to be used by humans. Oxygen's name comes from the Greek words oxy genes, meaning "acid-forming". Sulfur's name comes from either a Latin word or a Sanskrit word; both are ancient words for sulfur. Selenium is named after the Greek goddess of the moon, Selene, to match the previously-discovered element tellurium, whose name comes from the Latin word for earth. Polonium is named after Marie Curie's country of birth, Poland. Livermorium is named for the Lawrence Livermore National Laboratory. Occurrence The four lightest chalcogens (oxygen, sulfur, selenium, and tellurium) are all primordial elements on Earth. Sulfur and oxygen occur as constituents of copper ores and selenium and tellurium occur in small traces in such ores. Polonium forms naturally from the decay of other elements, even though it is not primordial. Livermorium does not occur naturally at all. Oxygen makes up 21% of the atmosphere by weight, 89% of water by weight, 46% of the Earth's crust by weight, and 65% of the human body. Oxygen also occurs in many minerals, being found in all oxide minerals and hydroxide minerals, and in numerous other mineral groups. Stars of at least eight times the mass of the Sun also produce oxygen in their cores via nuclear fusion. Oxygen is the third-most abundant element in the universe, making up 1% of the universe by weight. Sulfur makes up 0.035% of the Earth's crust by weight, making it the 17th most abundant element there, and it makes up 0.25% of the human body. It is a major component of soil. Sulfur makes up 870 parts per million of seawater and about 1 part per billion of the atmosphere.
Sulfur can be found in elemental form or in the form of sulfide minerals, sulfate minerals, or sulfosalt minerals. Stars of at least 12 times the mass of the Sun produce sulfur in their cores via nuclear fusion. Sulfur is the tenth most abundant element in the universe, making up 500 parts per million of the universe by weight. Selenium makes up 0.05 parts per million of the Earth's crust by weight. This makes it the 67th most abundant element in the Earth's crust. Selenium makes up on average 5 parts per million of the soils. Seawater contains around 200 parts per trillion of selenium. The atmosphere contains 1 nanogram of selenium per cubic meter. There are mineral groups known as selenates and selenites, but there are not many minerals in these groups. Selenium is not produced directly by nuclear fusion. Selenium makes up 30 parts per billion of the universe by weight. There are only 5 parts per billion of tellurium in the Earth's crust and 15 parts per billion of tellurium in seawater. Tellurium is one of the eight or nine least abundant elements in the Earth's crust. There are a few dozen tellurate minerals and telluride minerals, and tellurium occurs in some minerals with gold, such as sylvanite and calaverite. Tellurium makes up 9 parts per billion of the universe by weight. Polonium only occurs in trace amounts on Earth, via radioactive decay of uranium and thorium. It is present in uranium ores in concentrations of 100 micrograms per metric ton. Very minute amounts of polonium exist in the soil and thus in most food, and thus in the human body. The Earth's crust contains less than 1 part per billion of polonium, making it one of the ten rarest metals on Earth. Livermorium is always produced artificially in particle accelerators. Even when it is produced, only a small number of atoms are synthesized at a time. Chalcophile elements Chalcophile elements are those that remain on or close to the surface because they combine readily with chalcogens other than oxygen, forming compounds which do not sink into the core. Chalcophile ("chalcogen-loving") elements in this context are those metals and heavier nonmetals that have a low affinity for oxygen and prefer to bond with the heavier chalcogen sulfur as sulfides. Because sulfide minerals are much denser than the silicate minerals formed by lithophile elements, chalcophile elements separated below the lithophiles at the time of the first crystallisation of the Earth's crust. This has led to their depletion in the Earth's crust relative to their solar abundances, though this depletion has not reached the levels found with siderophile elements. Production Approximately 100 million metric tons of oxygen are produced yearly. Oxygen is most commonly produced by fractional distillation, in which air is cooled to a liquid, then warmed, allowing all the components of air except for oxygen to turn to gases and escape. Fractionally distilling air several times can produce 99.5% pure oxygen. Another method with which oxygen is produced is to send a stream of dry, clean air through a bed of molecular sieves made of zeolite, which absorbs the nitrogen in the air, leaving 90 to 93% pure oxygen. Sulfur can be mined in its elemental form, although this method is no longer as popular as it used to be. In 1865 a large deposit of elemental sulfur was discovered in the U.S. states of Louisiana and Texas, but it was difficult to extract at the time. 
In the 1890s, Herman Frasch came up with the solution of liquefying the sulfur with superheated steam and pumping the sulfur up to the surface. These days sulfur is instead more often extracted from oil, natural gas, and tar. The world production of selenium is around 1500 metric tons per year, out of which roughly 10% is recycled. Japan is the largest producer, producing 800 metric tons of selenium per year. Other large producers include Belgium (300 metric tons per year), the United States (over 200 metric tons per year), Sweden (130 metric tons per year), and Russia (100 metric tons per year). Selenium can be extracted from the waste from the process of electrolytically refining copper. Another method of producing selenium is to farm selenium-gathering plants such as milk vetch. This method could produce three kilograms of selenium per acre, but is not commonly practiced. Tellurium is mostly produced as a by-product of the processing of copper. Tellurium can also be refined by electrolytic reduction of sodium telluride. The world production of tellurium is between 150 and 200 metric tons per year. The United States is one of the largest producers of tellurium, producing around 50 metric tons per year. Peru, Japan, and Canada are also large producers of tellurium. Until the creation of nuclear reactors, all polonium had to be extracted from uranium ore. In modern times, most isotopes of polonium are produced by bombarding bismuth with neutrons. Polonium can also be produced by high neutron fluxes in nuclear reactors. Approximately 100 grams of polonium are produced yearly. All the polonium produced for commercial purposes is made in the Ozersk nuclear reactor in Russia. From there, it is taken to Samara, Russia for purification, and from there to St. Petersburg for distribution. The United States is the largest consumer of polonium. All livermorium is produced artificially in particle accelerators. The first successful production of livermorium was achieved by bombarding curium-248 atoms with calcium-48 atoms. As of 2011, roughly 25 atoms of livermorium had been synthesized. Applications Metabolism is the most important source and use of oxygen. Minor industrial uses include Steelmaking (55% of all purified oxygen produced), the chemical industry (25% of all purified oxygen), medical use, water treatment (as oxygen kills some types of bacteria), rocket fuel (in liquid form), and metal cutting. Most sulfur produced is transformed into sulfur dioxide, which is further transformed into sulfuric acid, a very common industrial chemical. Other common uses include being a key ingredient of gunpowder and Greek fire, and being used to change soil pH. Sulfur is also mixed into rubber to vulcanize it. Sulfur is used in some types of concrete and fireworks. 60% of all sulfuric acid produced is used to generate phosphoric acid. Sulfur is used as a pesticide (specifically as an acaricide and fungicide) on "orchard, ornamental, vegetable, grain, and other crops." Around 40% of all selenium produced goes to glassmaking. 30% of all selenium produced goes to metallurgy, including manganese production. 15% of all selenium produced goes to agriculture. Electronics such as photovoltaic materials claim 10% of all selenium produced. Pigments account for 5% of all selenium produced. Historically, machines such as photocopiers and light meters used one-third of all selenium produced, but this application is in steady decline. 
Tellurium suboxide, a mixture of tellurium and tellurium dioxide, is used in the rewritable data layer of some CD-RW disks and DVD-RW disks. Bismuth telluride is also used in many microelectronic devices, such as photoreceptors. Tellurium is sometimes used as an alternative to sulfur in vulcanized rubber. Cadmium telluride is used as a high-efficiency material in solar panels. Some of polonium's applications relate to the element's radioactivity. For instance, polonium is used as an alpha-particle generator for research. Polonium alloyed with beryllium provides an efficient neutron source. Polonium is also used in nuclear batteries. Most polonium is used in antistatic devices. Livermorium does not have any uses whatsoever due to its extreme rarity and short half-life. Organochalcogen compounds are involved in semiconductor processing. These compounds also feature in ligand chemistry and biochemistry. One application of chalcogens themselves is to manipulate redox couples in supramolecular chemistry (chemistry involving non-covalent bond interactions). This leads on to applications such as crystal packing, assembly of large molecules, and biological recognition of patterns. The secondary bonding interactions of the larger chalcogens, selenium and tellurium, can create organic solvent-holding acetylene nanotubes. Chalcogen interactions are useful for conformational analysis and stereoelectronic effects, among other things. Chalcogenides with through bonds also have applications. For instance, divalent sulfur can stabilize carbanions, cationic centers, and radicals. Chalcogens can confer upon ligands (such as DCTO) properties such as being able to transform Cu(II) to Cu(I). Studying chalcogen interactions gives access to radical cations, which are used in mainstream synthetic chemistry. Metallic redox centers of biological importance are tunable by interactions of ligands containing chalcogens, such as methionine and selenocysteine. Also, chalcogen through-bonds can provide insight about the process of electron transfer. Biological role Oxygen is needed by almost all organisms for the purpose of generating ATP. It is also a key component of most other biological compounds, such as water, amino acids and DNA. Human blood contains a large amount of oxygen. Human bones contain 28% oxygen. Human tissue contains 16% oxygen. A typical 70-kilogram human contains 43 kilograms of oxygen, mostly in the form of water. All animals need significant amounts of sulfur. Some amino acids, such as cysteine and methionine, contain sulfur. Plant roots take up sulfate ions from the soil and reduce them to sulfide ions. Metalloproteins also use sulfur to attach to useful metal atoms in the body and sulfur similarly attaches itself to poisonous metal atoms like cadmium to haul them to the safety of the liver. On average, humans consume 900 milligrams of sulfur each day. Sulfur compounds, such as those found in skunk spray, often have strong odors. All animals and some plants need trace amounts of selenium, but only for some specialized enzymes. Humans consume on average between 6 and 200 micrograms of selenium per day. Mushrooms and Brazil nuts are especially noted for their high selenium content. Selenium in foods is most commonly found in the form of amino acids such as selenocysteine and selenomethionine. Selenium can protect against heavy metal poisoning. Tellurium is not known to be needed for animal life, although a few fungi can incorporate it in compounds in place of selenium.
Microorganisms also absorb tellurium and emit dimethyl telluride. Most tellurium in the blood stream is excreted slowly in urine, but some is converted to dimethyl telluride and released through the lungs. On average, humans ingest about 600 micrograms of tellurium daily. Plants can take up some tellurium from the soil. Onions and garlic have been found to contain as much as 300 parts per million of tellurium in dry weight. Polonium has no biological role, and is highly toxic on account of being radioactive. Toxicity Oxygen is generally nontoxic, but oxygen toxicity has been reported when it is used in high concentrations. In both elemental gaseous form and as a component of water, it is vital to almost all life on Earth. Despite this, liquid oxygen is highly dangerous. Even gaseous oxygen is dangerous in excess. For instance, sports divers have occasionally drowned from convulsions caused by breathing pure oxygen at too great a depth underwater. Oxygen is also toxic to some bacteria. Ozone, an allotrope of oxygen, is toxic to most life. It can cause lesions in the respiratory tract. Sulfur is generally nontoxic and is even a vital nutrient for humans. However, in its elemental form it can cause redness in the eyes and skin, a burning sensation and a cough if inhaled, a burning sensation and diarrhoea and/or catharsis if ingested, and can irritate the mucous membranes. An excess of sulfur can be toxic for cows because microbes in the rumens of cows produce toxic hydrogen sulfide upon reaction with sulfur. Many sulfur compounds, such as hydrogen sulfide (H2S) and sulfur dioxide (SO2), are highly toxic. Selenium is a trace nutrient required by humans on the order of tens or hundreds of micrograms per day. A dose of over 450 micrograms can be toxic, resulting in bad breath and body odor. Extended, low-level exposure, which can occur in some industries, results in weight loss, anemia, and dermatitis. In many cases of selenium poisoning, selenous acid is formed in the body. Hydrogen selenide (H2Se) is highly toxic. Exposure to tellurium can produce unpleasant side effects. As little as 10 micrograms of tellurium per cubic meter of air can cause notoriously unpleasant breath, described as smelling like rotten garlic. Acute tellurium poisoning can cause vomiting, gut inflammation, internal bleeding, and respiratory failure. Extended, low-level exposure to tellurium causes tiredness and indigestion. Sodium tellurite (Na2TeO3) is lethal in amounts of around 2 grams. Polonium is dangerous as an alpha particle emitter. If ingested, polonium-210 is a million times as toxic as hydrogen cyanide by weight; it has been used as a murder weapon in the past, most famously to kill Alexander Litvinenko. Polonium poisoning can cause nausea, vomiting, anorexia, and lymphopenia. It can also damage hair follicles and white blood cells. Polonium-210 is only dangerous if ingested or inhaled because its alpha particle emissions cannot penetrate human skin. Polonium-209 is also toxic, and can cause leukemia. Amphid salts Amphid salts was a name given by Jöns Jacob Berzelius in the 19th century for chemical salts derived from the 16th group of the periodic table which included oxygen, sulfur, selenium, and tellurium. The term received some use in the early 1800s but is now obsolete. The current term in use for the 16th group is chalcogens. See also Chalcogenide Gold chalcogenides Halogen Interchalcogen Pnictogen References External links Periodic table Groups (periodic table)
https://en.wikipedia.org/wiki/Cyanide
In chemistry, a cyanide is a chemical compound that contains a C≡N functional group. This group, known as the cyano group, consists of a carbon atom triple-bonded to a nitrogen atom. In inorganic cyanides, the cyanide group is present as the cyanide anion, CN−. This anion is extremely poisonous. Soluble salts such as sodium cyanide (NaCN) and potassium cyanide (KCN) are highly toxic. Hydrocyanic acid, also known as hydrogen cyanide, or HCN, is a highly volatile liquid that is produced on a large scale industrially. It is obtained by acidification of cyanide salts. Organic cyanides are usually called nitriles. In nitriles, the cyano group is linked by a single covalent bond to carbon. For example, in acetonitrile (CH3CN), the cyanide group is bonded to methyl (CH3). Although nitriles generally do not release cyanide ions, the cyanohydrins do and are thus toxic. Bonding The cyanide ion is isoelectronic with carbon monoxide and with molecular nitrogen N≡N. A triple bond exists between C and N. The negative charge is concentrated on the carbon atom. Occurrence In nature Cyanides are produced by certain bacteria, fungi, and algae. Cyanide is an antifeedant in a number of plants. Cyanides are found in substantial amounts in certain seeds and fruit stones, e.g., those of bitter almonds, apricots, apples, and peaches. Chemical compounds that can release cyanide are known as cyanogenic compounds. In plants, cyanides are usually bound to sugar molecules in the form of cyanogenic glycosides and defend the plant against herbivores. Cassava roots (also called manioc), an important potato-like food grown in tropical countries (and the base from which tapioca is made), also contain cyanogenic glycosides. The Madagascar bamboo Cathariostachys madagascariensis produces cyanide as a deterrent to grazing. In response, the golden bamboo lemur, which eats the bamboo, has developed a high tolerance to cyanide. The hydrogenase enzymes contain cyanide ligands attached to iron in their active sites. The biosynthesis of cyanide in the NiFe hydrogenases proceeds from carbamoyl phosphate, which converts to cysteinyl thiocyanate, the CN− donor. Interstellar medium The cyanide radical •CN has been identified in interstellar space. Cyanogen, (CN)2, is used to measure the temperature of interstellar gas clouds. Pyrolysis and combustion product Hydrogen cyanide is produced by the combustion or pyrolysis of certain materials under oxygen-deficient conditions. For example, it can be detected in the exhaust of internal combustion engines and tobacco smoke. Certain plastics, especially those derived from acrylonitrile, release hydrogen cyanide when heated or burnt. Organic derivatives In IUPAC nomenclature, organic compounds that have a C≡N functional group are called nitriles. An example of a nitrile is acetonitrile, CH3CN. Nitriles usually do not release cyanide ions. A functional group with a hydroxyl and cyanide bonded to the same carbon atom is called a cyanohydrin. Unlike nitriles, cyanohydrins do release poisonous hydrogen cyanide. Reactions Protonation Cyanide is basic. The pKa of hydrogen cyanide is 9.21. Thus, addition of acids stronger than hydrogen cyanide to solutions of cyanide salts releases hydrogen cyanide. Hydrolysis Cyanide is unstable in water, but the reaction is slow until about 170 °C. It undergoes hydrolysis to give ammonia and formate, which are far less toxic than cyanide: CN− + 2 H2O -> NH3 + HCO2−. Cyanide hydrolase is an enzyme that catalyzes this reaction.
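The pKa quoted under Protonation above fixes how much free hydrogen cyanide a cyanide solution releases at a given pH. As a rough worked example (assuming ideal dilute-solution behaviour and using only the pKa of 9.21 given above), the ratio [HCN]/[CN−] = 10^(9.21 − pH), so at pH 7 the ratio is about 10^2.2, roughly 160; well over 99% of the dissolved cyanide is then present as HCN, whereas strongly alkaline solutions (pH above about 11) keep most of it as the CN− anion. This is the quantitative reason, noted later in this article, that alkaline cyanide solutions are safer to handle than acidified ones.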
Alkylation Because of the cyanide anion's high nucleophilicity, cyano groups are readily introduced into organic molecules by displacement of a halide group (e.g., the chloride on methyl chloride). In general, organic cyanides are called nitriles. In organic synthesis, cyanide is a C-1 synthon; i.e., it can be used to lengthen a carbon chain by one, while retaining the ability to be functionalized. Redox The cyanide ion is a reductant and is oxidized by strong oxidizing agents such as molecular chlorine (Cl2), hypochlorite (ClO−), and hydrogen peroxide (H2O2). These oxidizers are used to destroy cyanides in effluents from gold mining. Metal complexation The cyanide anion reacts with transition metals to form M-CN bonds. This reaction is the basis of cyanide's toxicity. The high affinities of metals for this anion can be attributed to its negative charge, compactness, and ability to engage in π-bonding. Among the most important cyanide coordination compounds are potassium ferrocyanide and the pigment Prussian blue, which are both essentially nontoxic due to the tight binding of the cyanides to a central iron atom. Prussian blue was first made accidentally around 1706, by heating substances containing iron, carbon, and nitrogen; other cyanides were made subsequently (and named after it). Among its many uses, Prussian blue gives the blue color to blueprints, bluing, and cyanotypes. Manufacture The principal process used to manufacture cyanides is the Andrussow process in which gaseous hydrogen cyanide is produced from methane and ammonia in the presence of oxygen and a platinum catalyst. Sodium cyanide, the precursor to most cyanides, is produced by treating hydrogen cyanide with sodium hydroxide: HCN + NaOH -> NaCN + H2O. Toxicity Many cyanides are highly toxic. The cyanide anion is an inhibitor of the enzyme cytochrome c oxidase (also known as aa3), the fourth complex of the electron transport chain found in the inner membrane of the mitochondria of eukaryotic cells. It attaches to the iron within this protein. The binding of cyanide to this enzyme prevents transport of electrons from cytochrome c to oxygen. As a result, the electron transport chain is disrupted, meaning that the cell can no longer aerobically produce ATP for energy. Tissues that depend highly on aerobic respiration, such as the central nervous system and the heart, are particularly affected. This is an example of histotoxic hypoxia. The most hazardous compound is hydrogen cyanide, which is a gas and kills by inhalation. For this reason, an air respirator supplied by an external oxygen source must be worn when working with hydrogen cyanide. Hydrogen cyanide is produced by adding acid to a solution containing a cyanide salt. Alkaline solutions of cyanide are safer to use because they do not evolve hydrogen cyanide gas. Hydrogen cyanide may be produced in the combustion of polyurethanes; for this reason, polyurethanes are not recommended for use in domestic and aircraft furniture. Oral ingestion of a small quantity of solid cyanide or a cyanide solution of as little as 200 mg, or exposure to airborne cyanide of 270 ppm, is sufficient to cause death within minutes. Organic nitriles do not readily release cyanide ions, and so have low toxicities. By contrast, compounds such as trimethylsilyl cyanide readily release HCN or the cyanide ion upon contact with water. Antidote Hydroxocobalamin reacts with cyanide to form cyanocobalamin, which can be safely eliminated by the kidneys.
This method has the advantage of avoiding the formation of methemoglobin (see below). This antidote kit is sold under the brand name Cyanokit and was approved by the U.S. FDA in 2006. An older cyanide antidote kit included administration of three substances: amyl nitrite pearls (administered by inhalation), sodium nitrite, and sodium thiosulfate. The goal of the antidote was to generate a large pool of ferric iron (Fe3+) to compete for cyanide with cytochrome a3 (so that cyanide will bind to the antidote rather than the enzyme). The nitrites oxidize hemoglobin to methemoglobin, which competes with cytochrome oxidase for the cyanide ion. Cyanmethemoglobin is formed and the cytochrome oxidase enzyme is restored. The major mechanism to remove the cyanide from the body is by enzymatic conversion to thiocyanate by the mitochondrial enzyme rhodanese. Thiocyanate is a relatively non-toxic molecule and is excreted by the kidneys. To accelerate this detoxification, sodium thiosulfate is administered to provide a sulfur donor for rhodanese, needed in order to produce thiocyanate. Sensitivity Minimum risk levels (MRLs) may not protect against delayed health effects or health effects acquired following repeated sublethal exposure, such as hypersensitivity, asthma, or bronchitis. MRLs may be revised after sufficient data accumulates. Applications Mining Cyanide is mainly produced for the mining of silver and gold: it helps dissolve these metals, allowing separation from the other solids. In the cyanide process, finely ground high-grade ore is mixed with the cyanide (at a ratio of about 1:500 parts NaCN to ore); low-grade ores are stacked into heaps and sprayed with a cyanide solution (at a ratio of about 1:1000 parts NaCN to ore). The precious metals are complexed by the cyanide anions to form soluble derivatives, e.g., [Ag(CN)2]− (dicyanoargentate(I)) and [Au(CN)2]− (dicyanoaurate(I)). Silver is less "noble" than gold and often occurs as the sulfide, in which case redox is not invoked (no O2 is required). Instead, a displacement reaction occurs: Ag2S + 4 NaCN + H2O -> 2 Na[Ag(CN)2] + NaSH + NaOH 4 Au + 8 NaCN + O2 + 2 H2O -> 4 Na[Au(CN)2] + 4 NaOH The "pregnant liquor" containing these ions is separated from the solids, which are discarded to a tailing pond or spent heap, the recoverable gold having been removed. The metal is recovered from the "pregnant solution" by reduction with zinc dust or by adsorption onto activated carbon. This process can result in environmental and health problems. A number of environmental disasters have followed the overflow of tailing ponds at gold mines. Cyanide contamination of waterways has resulted in numerous cases of human and aquatic species mortality. Aqueous cyanide is hydrolyzed rapidly, especially in sunlight. It can mobilize some heavy metals such as mercury if present. Gold can also be associated with arsenopyrite (FeAsS), which is similar to iron pyrite (fool's gold), wherein half of the sulfur atoms are replaced by arsenic. Gold-containing arsenopyrite ores are similarly reactive toward inorganic cyanide. Industrial organic chemistry The second major application of alkali metal cyanides (after mining) is in the production of CN-containing compounds, usually nitriles. Acyl cyanides are produced from acyl chlorides and cyanide. Cyanogen, cyanogen chloride, and the trimer cyanuric chloride are derived from alkali metal cyanides. Medical uses The cyanide compound sodium nitroprusside is used in clinical chemistry to measure urine ketone bodies, mainly as a follow-up for diabetic patients.
On occasion, it is used in emergency medical situations to produce a rapid decrease in blood pressure in humans; it is also used as a vasodilator in vascular research. The cobalt in artificial vitamin B12 contains a cyanide ligand as an artifact of the purification process; this must be removed by the body before the vitamin molecule can be activated for biochemical use. During World War I, a copper cyanide compound was briefly used by Japanese physicians for the treatment of tuberculosis and leprosy. Illegal fishing and poaching Cyanides are illegally used to capture live fish near coral reefs for the aquarium and seafood markets. The practice is controversial, dangerous, and damaging but is driven by the lucrative exotic fish market. Poachers in Africa have been known to use cyanide to poison waterholes, to kill elephants for their ivory. Pest control M44 cyanide devices are used in the United States to kill coyotes and other canids. Cyanide is also used for pest control in New Zealand, particularly for possums, an introduced marsupial that threatens the conservation of native species and spreads tuberculosis amongst cattle. Possums can become bait shy but the use of pellets containing the cyanide reduces bait shyness. Cyanide has been known to kill native birds, including the endangered kiwi. Cyanide is also effective for controlling the dama wallaby, another introduced marsupial pest in New Zealand. A licence is required to store, handle and use cyanide in New Zealand. Cyanides are used as insecticides for fumigating ships. Cyanide salts are used for killing ants, and have in some places been used as rat poison (the less toxic poison arsenic is more common). Niche uses Potassium ferrocyanide is used to achieve a blue color on cast bronze sculptures during the final finishing stage of the sculpture. On its own, it will produce a very dark shade of blue and is often mixed with other chemicals to achieve the desired tint and hue. It is applied using a torch and paint brush while wearing the standard safety equipment used for any patina application: rubber gloves, safety glasses, and a respirator. The actual amount of cyanide in the mixture varies according to the recipes used by each foundry. Cyanide is also used in jewelry-making and certain kinds of photography such as sepia toning. Although usually thought to be toxic, cyanide and cyanohydrins increase germination in various plant species. Human poisoning Deliberate cyanide poisoning of humans has occurred many times throughout history. Common salts such as sodium cyanide are involatile but water-soluble, so are poisonous by ingestion. Hydrogen cyanide is a gas, making it more indiscriminately dangerous, however it is lighter than air and rapidly disperses up into the atmosphere, which makes it ineffective as a chemical weapon. Poisoning by hydrogen cyanide is more effective in an enclosed space, such as a gas chamber. Most significantly, hydrogen cyanide released from pellets of Zyklon-B was used extensively in the extermination camps of the Holocaust. Food additive Because of the high stability of their complexation with iron, ferrocyanides (Sodium ferrocyanide E535, Potassium ferrocyanide E536, and Calcium ferrocyanide E538) do not decompose to lethal levels in the human body and are used in the food industry as, e.g., an anticaking agent in table salt. Chemical tests for cyanide Cyanide is quantified by potentiometric titration, a method widely used in gold mining. It can also be determined by titration with silver ion. 
Some analyses begin with an air-purge of an acidified boiling solution, sweeping the vapors into a basic absorber solution. The cyanide salt absorbed in the basic solution is then analyzed. Qualitative tests Because of the notorious toxicity of cyanide, many methods have been investigated. Benzidine gives a blue coloration in the presence of ferricyanide. Iron(II) sulfate added to a solution of cyanide, such as the filtrate from the sodium fusion test, gives prussian blue. A solution of para-benzoquinone in DMSO reacts with inorganic cyanide to form a cyanophenol, which is fluorescent. Illumination with a UV light gives a green/blue glow if the test is positive. References External links ATSDR medical management guidelines for cyanide poisoning (US) HSE recommendations for first aid treatment of cyanide poisoning (UK) Hydrogen cyanide and cyanides (CICAD 61) IPCS/CEC Evaluation of antidotes for poisoning by cyanides National Pollutant Inventory – Cyanide compounds fact sheet Eating apple seeds is safe despite the small amount of cyanide Toxicological Profile for Cyanide, U.S. Department of Health and Human Services, July 2006 Safety data (French) Institut national de recherche et de sécurité (1997). "Cyanure d'hydrogène et solutions aqueuses". Fiche toxicologique n° 4, Paris: INRS, 5 pp. (PDF file, ) Institut national de recherche et de sécurité (1997). "Cyanure de sodium. Cyanure de potassium". Fiche toxicologique n° 111, Paris: INRS, 6 pp. (PDF file, ) Anions Blood agents Nitrogen(−III) compounds Toxicology
https://en.wikipedia.org/wiki/Catalysis
Catalysis is the increase in rate of a chemical reaction due to an added substance known as a catalyst. Catalysts are not consumed by the reaction and remain unchanged after it. If the reaction is rapid and the catalyst recycles quickly, very small amounts of catalyst often suffice; mixing, surface area, and temperature are important factors in reaction rate. Catalysts generally react with one or more reactants to form intermediates that subsequently give the final reaction product, in the process of regenerating the catalyst. The rate increase occurs because the catalyst allows the reaction to occur by an alternative mechanism which may be much faster than the non-catalyzed mechanism. However, the non-catalyzed mechanism does remain possible, so that the total rate (catalyzed plus non-catalyzed) can only increase in the presence of the catalyst and never decrease. Catalysis may be classified as either homogeneous, whose components are dispersed in the same phase (usually gaseous or liquid) as the reactant, or heterogeneous, whose components are not in the same phase. Enzymes and other biocatalysts are often considered as a third category. Catalysis is ubiquitous in chemical industry of all kinds. Estimates are that 90% of all commercially produced chemical products involve catalysts at some stage in the process of their manufacture. The term "catalyst" is derived from the Greek kataluein, meaning "loosen" or "untie". The concept of catalysis was invented by chemist Elizabeth Fulhame, based on her novel work in oxidation-reduction experiments. General principles Example An illustrative example is the effect of catalysts in speeding the decomposition of hydrogen peroxide into water and oxygen: 2 H2O2 → 2 H2O + O2 This reaction proceeds because the reaction products are more stable than the starting compound, but this decomposition is so slow that hydrogen peroxide solutions are commercially available. In the presence of a catalyst such as manganese dioxide this reaction proceeds much more rapidly. This effect is readily seen by the effervescence of oxygen. The catalyst is not consumed in the reaction, and may be recovered unchanged and re-used indefinitely. Accordingly, manganese dioxide is said to catalyze this reaction. In living organisms, this reaction is catalyzed by enzymes (proteins that serve as catalysts) such as catalase. Units The SI derived unit for measuring the catalytic activity of a catalyst is the katal, which is quantified in moles per second. The productivity of a catalyst can be described by the turnover number (or TON) and the catalytic activity by the turnover frequency (TOF), which is the TON per time unit. The biochemical equivalent is the enzyme unit. For more information on the efficiency of enzymatic catalysis, see the article on enzymes. Catalytic reaction mechanisms In general, chemical reactions occur faster in the presence of a catalyst because the catalyst provides an alternative reaction mechanism (reaction pathway) having a lower activation energy than the non-catalyzed mechanism. In catalyzed mechanisms, the catalyst is regenerated. As a simple example occurring in the gas phase, the reaction 2 SO2 + O2 → 2 SO3 can be catalyzed by adding nitric oxide. The reaction occurs in two steps: 2 NO + O2 → 2 NO2 (rate-determining) NO2 + SO2 → NO + SO3 (fast) The NO catalyst is regenerated. The overall rate is the rate of the slow step, v = 2k1[NO]^2[O2].
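The rate law quoted above follows from treating the first step as rate-determining; the short derivation that follows is a standard kinetic argument added for illustration, not part of the source text. Each occurrence of the slow step 2 NO + O2 → 2 NO2 consumes one O2 at the rate k1[NO]^2[O2] and produces two NO2 molecules, and each NO2 is converted rapidly to SO3 in the fast step, so SO3 appears at twice the rate of the slow step: v = d[SO3]/dt = 2k1[NO]^2[O2]. Note that the concentration of the catalyst NO enters the rate law even though NO is regenerated and does not appear in the overall equation 2 SO2 + O2 → 2 SO3.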
An example of heterogeneous catalysis is the reaction of oxygen and hydrogen on the surface of titanium dioxide (TiO2, or titania) to produce water. Scanning tunneling microscopy showed that the molecules undergo adsorption and dissociation. The dissociated, surface-bound O and H atoms diffuse together. The intermediate reaction states are: HO2, H2O2, then H3O2, and the reaction product (water molecule dimers), after which the water molecule desorbs from the catalyst surface. Reaction energetics Catalysts enable pathways that differ from the uncatalyzed reactions. These pathways have a lower activation energy. Consequently, more molecular collisions have the energy needed to reach the transition state. Hence, catalysts can enable reactions that would otherwise be blocked or slowed by a kinetic barrier. The catalyst may increase the reaction rate or selectivity, or enable the reaction at lower temperatures. This effect can be illustrated with an energy profile diagram. In the catalyzed elementary reaction, catalysts do not change the extent of a reaction: they have no effect on the chemical equilibrium of a reaction. The ratio of the forward and the reverse reaction rates is unaffected (see also thermodynamics). The second law of thermodynamics describes why a catalyst does not change the chemical equilibrium of a reaction. Suppose there were such a catalyst that shifted an equilibrium. Introducing the catalyst to the system would result in a reaction moving to the new equilibrium, producing energy. Production of energy is a necessary result since reactions are spontaneous only if Gibbs free energy is produced, and if there is no energy barrier, there is no need for a catalyst. Then, removing the catalyst would also result in a reaction, producing energy; i.e. the addition and its reverse process, removal, would both produce energy. Thus, a catalyst that could change the equilibrium would be a perpetual motion machine, a contradiction to the laws of thermodynamics. Thus, catalysts do not alter the equilibrium constant. (A catalyst can however change the equilibrium concentrations by reacting in a subsequent step. It is then consumed as the reaction proceeds, and thus it is also a reactant. Illustrative is the base-catalyzed hydrolysis of esters, where the produced carboxylic acid immediately reacts with the base catalyst and thus the reaction equilibrium is shifted towards hydrolysis.) The catalyst stabilizes the transition state more than it stabilizes the starting material. It decreases the kinetic barrier by decreasing the difference in energy between starting material and the transition state. It does not change the energy difference between starting materials and products (thermodynamic barrier), or the available energy (this is provided by the environment as heat or light). Related concepts Some so-called catalysts are really precatalysts. Precatalysts convert to catalysts in the reaction. For example, Wilkinson's catalyst RhCl(PPh3)3 loses one triphenylphosphine ligand before entering the true catalytic cycle. Precatalysts are easier to store but are easily activated in situ. Because of this preactivation step, many catalytic reactions involve an induction period. In cooperative catalysis, chemical species that improve catalytic activity are called cocatalysts or promoters. In tandem catalysis, two or more different catalysts are coupled in a one-pot reaction.
In autocatalysis, the catalyst is a product of the overall reaction, in contrast to all other types of catalysis considered in this article. The simplest example of autocatalysis is a reaction of type A + B → 2 B, in one or in several steps. The overall reaction is just A → B, so that B is a product. But since B is also a reactant, it may be present in the rate equation and affect the reaction rate. As the reaction proceeds, the concentration of B increases and can accelerate the reaction as a catalyst. In effect, the reaction accelerates itself or is autocatalyzed. An example is the hydrolysis of an ester such as aspirin to a carboxylic acid and an alcohol. In the absence of added acid catalysts, the carboxylic acid product catalyzes the hydrolysis. A true catalyst can work in tandem with a sacrificial catalyst. The true catalyst is consumed in the elementary reaction and turned into a deactivated form. The sacrificial catalyst regenerates the true catalyst for another cycle. The sacrificial catalyst is consumed in the reaction, and as such, it is not really a catalyst, but a reagent. For example, osmium tetroxide (OsO4) is a good reagent for dihydroxylation, but it is highly toxic and expensive. In Upjohn dihydroxylation, the sacrificial catalyst N-methylmorpholine N-oxide (NMMO) regenerates OsO4, and only catalytic quantities of OsO4 are needed. Classification Catalysis may be classified as either homogeneous or heterogeneous. A homogeneous catalysis is one whose components are dispersed in the same phase (usually gaseous or liquid) as the reactant's molecules. A heterogeneous catalysis is one where the reaction components are not in the same phase. Enzymes and other biocatalysts are often considered as a third category. Similar mechanistic principles apply to heterogeneous, homogeneous, and biocatalysis. Heterogeneous catalysis Heterogeneous catalysts act in a different phase than the reactants. Most heterogeneous catalysts are solids that act on substrates in a liquid or gaseous reaction mixture. Important heterogeneous catalysts include zeolites, alumina, higher-order oxides, graphitic carbon, transition metal oxides, metals such as Raney nickel for hydrogenation, and vanadium(V) oxide for oxidation of sulfur dioxide into sulfur trioxide by the contact process. Diverse mechanisms for reactions on surfaces are known, depending on how the adsorption takes place (Langmuir-Hinshelwood, Eley-Rideal, and Mars-van Krevelen). The total surface area of a solid has an important effect on the reaction rate. The smaller the catalyst particle size, the larger the surface area for a given mass of particles. A heterogeneous catalyst has active sites, which are the atoms or crystal faces where the substrate actually binds. Active sites are atoms but are often described as a facet (edge, surface, step, etc) of a solid. Most of the volume but also most of the surface of a heterogeneous catalyst may be catalytically inactive. Finding out the nature of the active site is technically challenging. For example, the catalyst for the Haber process for the synthesis of ammonia from nitrogen and hydrogen is often described as iron. But detailed studies and many optimizations have led to catalysts that are mixtures of iron-potassium-calcium-aluminum-oxide. The reacting gases adsorb onto active sites on the iron particles. Once physically adsorbed, the reagents partially or wholly dissociate and form new bonds. 
In this way the particularly strong triple bond in nitrogen is broken, which would be extremely uncommon in the gas phase due to its high activation energy. Thus, the activation energy of the overall reaction is lowered, and the rate of reaction increases. Another place where a heterogeneous catalyst is applied is in the oxidation of sulfur dioxide on vanadium(V) oxide for the production of sulfuric acid. Many heterogeneous catalysts are in fact nanomaterials. Heterogeneous catalysts are typically "supported," which means that the catalyst is dispersed on a second material that enhances the effectiveness or minimizes its cost. Supports prevent or minimize agglomeration and sintering of small catalyst particles, exposing more surface area; thus, supported catalysts have a higher specific activity (per gram). Sometimes the support is merely a surface on which the catalyst is spread to increase the surface area. More often, the support and the catalyst interact, affecting the catalytic reaction. Supports can also be used in nanoparticle synthesis by providing sites for individual molecules of catalyst to chemically bind. Supports are porous materials with a high surface area, most commonly alumina, zeolites or various kinds of activated carbon. Specialized supports include silicon dioxide, titanium dioxide, calcium carbonate, and barium sulfate. Electrocatalysts In the context of electrochemistry, specifically in fuel cell engineering, various metal-containing catalysts are used to enhance the rates of the half reactions that comprise the fuel cell. One common type of fuel cell electrocatalyst is based upon nanoparticles of platinum that are supported on slightly larger carbon particles. When in contact with one of the electrodes in a fuel cell, this platinum increases the rate of oxygen reduction either to water or to hydroxide or hydrogen peroxide. Homogeneous catalysis Homogeneous catalysts function in the same phase as the reactants. Typically homogeneous catalysts are dissolved in a solvent with the substrates. One example of homogeneous catalysis involves the influence of H+ on the esterification of carboxylic acids, such as the formation of methyl acetate from acetic acid and methanol. High-volume processes requiring a homogeneous catalyst include hydroformylation, hydrosilylation, and hydrocyanation. For inorganic chemists, homogeneous catalysis is often synonymous with organometallic catalysts. Many homogeneous catalysts are, however, not organometallic, illustrated by the use of cobalt salts that catalyze the oxidation of p-xylene to terephthalic acid. Organocatalysis Whereas transition metals sometimes attract most of the attention in the study of catalysis, small organic molecules without metals can also exhibit catalytic properties, as is apparent from the fact that many enzymes lack transition metals. Typically, organic catalysts require a higher loading (amount of catalyst per unit amount of reactant, expressed in mol%) than transition metal(-ion)-based catalysts, but these catalysts are usually commercially available in bulk, helping to lower costs. In the early 2000s, these organocatalysts were considered "new generation" and are competitive with traditional metal(-ion)-containing catalysts. Organocatalysts are supposed to operate akin to metal-free enzymes utilizing, e.g., non-covalent interactions such as hydrogen bonding.
The discipline organocatalysis is divided into the application of covalent (e.g., proline, DMAP) and non-covalent (e.g., thiourea organocatalysis) organocatalysts referring to the preferred catalyst-substrate binding and interaction, respectively. The Nobel Prize in Chemistry 2021 was awarded jointly to Benjamin List and David W.C. MacMillan "for the development of asymmetric organocatalysis." Photocatalysts Photocatalysis is the phenomenon where the catalyst can receive light to generate an excited state that can effect redox reactions. Singlet oxygen is usually produced by photocatalysis. Photocatalysts are components of dye-sensitized solar cells. Enzymes and biocatalysts In biology, enzymes are protein-based catalysts in metabolism and catabolism. Most biocatalysts are enzymes, but other non-protein-based classes of biomolecules also exhibit catalytic properties, including ribozymes and synthetic deoxyribozymes. Biocatalysts can be thought of as an intermediate between homogeneous and heterogeneous catalysts, although strictly speaking soluble enzymes are homogeneous catalysts and membrane-bound enzymes are heterogeneous. Several factors affect the activity of enzymes (and other catalysts) including temperature, pH, the concentration of enzymes, substrate, and products. A particularly important reagent in enzymatic reactions is water, which is the product of many bond-forming reactions and a reactant in many bond-breaking processes. In biocatalysis, enzymes are employed to prepare many commodity chemicals including high-fructose corn syrup and acrylamide. Some monoclonal antibodies whose binding target is a stable molecule that resembles the transition state of a chemical reaction can function as weak catalysts for that chemical reaction by lowering its activation energy. Such catalytic antibodies are sometimes called "abzymes". Significance Estimates are that 90% of all commercially produced chemical products involve catalysts at some stage in the process of their manufacture. In 2005, catalytic processes generated about $900 billion in products worldwide. Catalysis is so pervasive that subareas are not readily classified. Some areas of particular concentration are surveyed below. Energy processing Petroleum refining makes intensive use of catalysis for alkylation, catalytic cracking (breaking long-chain hydrocarbons into smaller pieces), naphtha reforming and steam reforming (conversion of hydrocarbons into synthesis gas). Even the exhaust from the burning of fossil fuels is treated via catalysis: Catalytic converters, typically composed of platinum and rhodium, break down some of the more harmful byproducts of automobile exhaust. 2 CO + 2 NO → 2 CO2 + N2 With regard to synthetic fuels, an old but still important process is the Fischer-Tropsch synthesis of hydrocarbons from synthesis gas, which itself is processed via water-gas shift reactions, catalyzed by iron. The Sabatier reaction produces methane from carbon dioxide and hydrogen. Biodiesel and related biofuels require processing via both inorganic and biocatalysts. Fuel cells rely on catalysts for both the anodic and cathodic reactions. Catalytic heaters generate flameless heat from a supply of combustible fuel. Bulk chemicals Some of the largest-scale chemicals are produced via catalytic oxidation, often using oxygen.
Examples include nitric acid (from ammonia), sulfuric acid (from sulfur dioxide to sulfur trioxide by the contact process), terephthalic acid from p-xylene, acrylic acid from propylene or propane, and acrylonitrile from propane and ammonia. The production of ammonia is one of the largest-scale and most energy-intensive processes. In the Haber process, nitrogen is combined with hydrogen over an iron oxide catalyst. Methanol is prepared from carbon monoxide or carbon dioxide using copper-zinc catalysts. Bulk polymers derived from ethylene and propylene are often prepared via Ziegler-Natta catalysis. Polyesters, polyamides, and isocyanates are derived via acid-base catalysis. Most carbonylation processes require metal catalysts; examples include the Monsanto acetic acid process and hydroformylation. Fine chemicals Many fine chemicals are prepared via catalysis; methods include those of heavy industry as well as more specialized processes that would be prohibitively expensive on a large scale. Examples include the Heck reaction and Friedel–Crafts reactions. Because most bioactive compounds are chiral, many pharmaceuticals are produced by enantioselective catalysis (catalytic asymmetric synthesis). (R)-1,2-Propanediol, the precursor to the antibacterial levofloxacin, can be synthesized efficiently from hydroxyacetone by using catalysts based on BINAP-ruthenium complexes, in Noyori asymmetric hydrogenation. Food processing One of the most obvious applications of catalysis is the hydrogenation (reaction with hydrogen gas) of fats using a nickel catalyst to produce margarine. Many other foodstuffs are prepared via biocatalysis (see below). Environment Catalysis affects the environment by increasing the efficiency of industrial processes, but catalysis also plays a direct role in the environment. A notable example is the catalytic role of chlorine free radicals in the breakdown of ozone. These radicals are formed by the action of ultraviolet radiation on chlorofluorocarbons (CFCs). Cl + O3 → ClO + O2 ClO + O → Cl + O2 History The term "catalyst", broadly defined as anything that increases the rate of a process, is derived from Greek καταλύειν, meaning "to annul," or "to untie," or "to pick up". The concept of catalysis was invented by chemist Elizabeth Fulhame and described in a 1794 book, based on her novel work in oxidation–reduction reactions. The first chemical reaction in organic chemistry that knowingly used a catalyst was studied in 1811 by Gottlieb Kirchhoff, who discovered the acid-catalyzed conversion of starch to glucose. The term catalysis was later used by Jöns Jakob Berzelius in 1835 to describe reactions that are accelerated by substances that remain unchanged after the reaction. Fulhame, who predated Berzelius, did work with water as opposed to metals in her reduction experiments. Other early 19th-century chemists who worked in catalysis were Eilhard Mitscherlich, who referred to it as contact processes, and Johann Wolfgang Döbereiner, who spoke of contact action. He developed Döbereiner's lamp, a lighter based on hydrogen and a platinum sponge, which became a commercial success in the 1820s that lives on today. Humphry Davy discovered the use of platinum in catalysis. In the 1880s, Wilhelm Ostwald at Leipzig University started a systematic investigation into reactions that were catalyzed by the presence of acids and bases, and found that chemical reactions occur at finite rates and that these rates can be used to determine the strengths of acids and bases.
For this work, Ostwald was awarded the 1909 Nobel Prize in Chemistry. Vladimir Ipatieff performed some of the earliest industrial scale reactions, including the discovery and commercialization of oligomerization and the development of catalysts for hydrogenation. Inhibitors, poisons, and promoters An added substance that lowers the rate is called a reaction inhibitor if reversible and a catalyst poison if irreversible. Promoters are substances that increase the catalytic activity, even though they are not catalysts by themselves. Inhibitors are sometimes referred to as "negative catalysts" since they decrease the reaction rate. However, the term inhibitor is preferred since they do not work by introducing a reaction path with higher activation energy; this would not lower the rate since the reaction would continue to occur by the non-catalyzed path. Instead, they act either by deactivating catalysts or by removing reaction intermediates such as free radicals. In heterogeneous catalysis, coking inhibits the catalyst, which becomes covered by polymeric side products. The inhibitor may modify selectivity in addition to rate. For instance, in the hydrogenation of alkynes to alkenes, a palladium (Pd) catalyst partly "poisoned" with lead(II) acetate (Pb(CH3COO)2) can be used. Without the deactivation of the catalyst, the alkene produced would be further hydrogenated to the alkane. The inhibitor can produce this effect by, e.g., selectively poisoning only certain types of active sites. Another mechanism is the modification of surface geometry. For instance, in hydrogenation operations, large planes of metal surface function as sites of hydrogenolysis catalysis while sites catalyzing hydrogenation of unsaturates are smaller. Thus, a poison that covers the surface randomly will tend to lower the number of uncontaminated large planes but leave proportionally smaller sites free, thus changing the hydrogenation vs. hydrogenolysis selectivity. Many other mechanisms are also possible. Promoters can cover up the surface to prevent the production of a mat of coke, or even actively remove such material (e.g., rhenium on platinum in platforming). They can aid the dispersion of the catalytic material or bind to reagents. See also References External links Science Aid: Catalysts Page for high school level science W.A. Herrmann Technische Universität presentation Alumite Catalyst, Kameyama-Sakurai Laboratory, Japan Inorganic Chemistry and Catalysis Group, Utrecht University, The Netherlands Centre for Surface Chemistry and Catalysis Carbons & Catalysts Group, University of Concepcion, Chile Center for Enabling New Technologies Through Catalysis, An NSF Center for Chemical Innovation, USA "Bubbles turn on chemical catalysts", Science News magazine online, April 6, 2009. Chemical kinetics Articles containing video clips
https://en.wikipedia.org/wiki/Circumference
In geometry, the circumference (from Latin circumferens, meaning "carrying around") is the perimeter of a circle or ellipse. The circumference is the arc length of the circle, as if it were opened up and straightened out to a line segment. More generally, the perimeter is the curve length around any closed figure. Circumference may also refer to the circle itself, that is, the locus corresponding to the edge of a disk. The circumference of a sphere is the circumference, or length, of any one of its great circles. Circle The circumference of a circle is the distance around it, but if, as in many elementary treatments, distance is defined in terms of straight lines, this cannot be used as a definition. Under these circumstances, the circumference of a circle may be defined as the limit of the perimeters of inscribed regular polygons as the number of sides increases without bound. The term circumference is used when measuring physical objects, as well as when considering abstract geometric forms. Relationship with π The circumference of a circle is related to one of the most important mathematical constants. This constant, pi, is represented by the Greek letter π. The first few decimal digits of the numerical value of π are 3.141592653589793 ... Pi is defined as the ratio of a circle's circumference C to its diameter d: π = C/d. Or, equivalently, π is the ratio of the circumference to twice the radius. The above formula can be rearranged to solve for the circumference: C = πd = 2πr. The ratio of the circle's circumference to its radius, equal to 2π, is called the circle constant; this value is also the number of radians in one turn. The use of the mathematical constant π is ubiquitous in mathematics, engineering, and science. In Measurement of a Circle, written circa 250 BCE, Archimedes showed that this ratio (written here as C/d, since he did not use the name π) was greater than 3 + 10/71 but less than 3 + 1/7 by calculating the perimeters of an inscribed and a circumscribed regular polygon of 96 sides. This method for approximating π was used for centuries, obtaining more accuracy by using polygons with larger and larger numbers of sides. The last such calculation was performed in 1630 by Christoph Grienberger, who used polygons with 10^40 sides. Ellipse Circumference is used by some authors to denote the perimeter of an ellipse. There is no general formula for the circumference of an ellipse in terms of the semi-major and semi-minor axes of the ellipse that uses only elementary functions. However, there are approximate formulas in terms of these parameters. One such approximation, due to Euler (1773), for the canonical ellipse x²/a² + y²/b² = 1, is C ≈ π√(2(a² + b²)). Some lower and upper bounds on the circumference of the canonical ellipse with a ≥ b are 4√(a² + b²) ≤ C ≤ 2πa. Here the upper bound 2πa is the circumference of a circumscribed concentric circle passing through the endpoints of the ellipse's major axis, and the lower bound 4√(a² + b²) is the perimeter of an inscribed rhombus with vertices at the endpoints of the major and minor axes. The circumference of an ellipse can be expressed exactly in terms of the complete elliptic integral of the second kind. More precisely, C = 4a E(e), where a is the length of the semi-major axis, e is the eccentricity √(1 − b²/a²), and E is the complete elliptic integral of the second kind. See also References External links Numericana - Circumference of an ellipse Geometric measurement Circles
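The two classical computations described above, Archimedes' polygon bounds for π and approximations to an ellipse's circumference, can be checked numerically. Below is a minimal sketch; the sample values of a and b are arbitrary, and the polygon perimeters are stated in modern trigonometric form rather than derived geometrically as Archimedes did:

```python
# Sketch: polygon bounds on pi, plus Euler's 1773 ellipse approximation
# checked against a brute-force numerical arc length. Standard library only.

import math

def polygon_bounds(n):
    """Perimeters of inscribed/circumscribed regular n-gons for a unit-diameter circle."""
    return n * math.sin(math.pi / n), n * math.tan(math.pi / n)

print(polygon_bounds(96))   # about (3.1410, 3.1427); compare 3 + 10/71 and 3 + 1/7

def ellipse_circumference(a, b, steps=100_000):
    """Arc length of x = a*cos(t), y = b*sin(t) by simple numerical integration."""
    dt = 2 * math.pi / steps
    return sum(math.hypot(a * math.sin(i * dt), b * math.cos(i * dt)) * dt
               for i in range(steps))

a, b = 5.0, 4.0                                  # arbitrary sample semi-axes
euler = math.pi * math.sqrt(2 * (a**2 + b**2))   # Euler's approximation
print(euler, ellipse_circumference(a, b))        # approximation vs numerical value
```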
https://en.wikipedia.org/wiki/Color
Color (American English) or colour (Commonwealth English) is the visual perception based on the electromagnetic spectrum. Though color is not an inherent property of matter, color perception is related to an object's light absorption, reflection, emission spectra and interference. For most humans, colors are perceived in the visible light spectrum with three types of cone cells (trichromacy). Other animals may have a different number of cone cell types or have eyes sensitive to different wavelengths, such as bees that can distinguish ultraviolet, and thus have a different color sensitivity range. Animal perception of color originates from the differing spectral sensitivities of their cone cell types, whose signals are then processed by the brain. Colors have perceived properties such as hue, colorfulness (saturation) and luminance. Colors can also be additively mixed (commonly used for actual light) or subtractively mixed (commonly used for materials). If the colors are mixed in the right proportions, because of metamerism, they may look the same as a single-wavelength light. For convenience, colors can be organized in a color space, which, when abstracted as a mathematical color model, can assign each region of color a corresponding set of numbers. As such, color spaces are an essential tool for color reproduction in print, photography, computer monitors and television. The most well-known color models are RGB, CMYK, YUV, HSL and HSV. Because the perception of color is an important aspect of human life, different colors have been associated with emotions, activity, and nationality. Names of color regions in different cultures can have different, sometimes overlapping areas. In visual arts, color theory is used to govern the use of colors in an aesthetically pleasing and harmonious way. The theory of color includes the color complements; color balance; and classification of primary colors (traditionally red, yellow, blue), secondary colors (traditionally orange, green, purple) and tertiary colors. The study of colors in general is called color science. Physical properties Electromagnetic radiation is characterized by its wavelength (or frequency) and its intensity. When the wavelength is within the visible spectrum (the range of wavelengths humans can perceive, approximately from 390 nm to 700 nm), it is known as "visible light". Most light sources emit light at many different wavelengths; a source's spectrum is a distribution giving its intensity at each wavelength. Although the spectrum of light arriving at the eye from a given direction determines the color sensation in that direction, there are many more possible spectral combinations than color sensations. In fact, one may formally define a color as a class of spectra that give rise to the same color sensation, although such classes would vary widely among different species, and to a lesser extent among individuals within the same species. In each such class, the members are called metamers of the color in question. This effect can be visualized by comparing the light sources' spectral power distributions and the resulting colors. Spectral colors The familiar colors of the rainbow in the spectrum—named using the Latin word for appearance or apparition by Isaac Newton in 1671—include all those colors that can be produced by visible light of a single wavelength only, the pure spectral or monochromatic colors.
Tables of spectral colors give approximate frequencies (in terahertz) and wavelengths (in nanometers) for the spectral colors in the visible range. Spectral colors have 100% purity, and are fully saturated. A complex mixture of spectral colors can be used to describe any color, which is the definition of a light power spectrum. Such a color table should not be interpreted as a definitive list; the spectral colors form a continuous spectrum, and how it is divided into distinct colors linguistically is a matter of culture and historical contingency. Despite the ubiquitous ROYGBIV mnemonic used to remember the spectral colors in English, the inclusion or exclusion of colors in such a table is contentious, with disagreement often focused on indigo and cyan. Even if the subset of color terms is agreed, their wavelength ranges and borders between them may not be. The intensity of a spectral color, relative to the context in which it is viewed, may alter its perception considerably according to the Bezold–Brücke shift; for example, a low-intensity orange-yellow is brown, and a low-intensity yellow-green is olive green. Color of objects The physical color of an object depends on how it absorbs and scatters light. Most objects scatter light to some degree and do not reflect or transmit light specularly like glasses or mirrors. A transparent object allows almost all light to transmit or pass through, thus transparent objects are perceived as colorless. Conversely, an opaque object does not allow light to transmit through and instead absorbs or reflects the light it receives. Like transparent objects, translucent objects allow light to transmit through, but translucent objects are seen colored because they scatter or absorb certain wavelengths of light via internal scattering. The absorbed light is often dissipated as heat. Color vision Development of theories of color vision Although Aristotle and other ancient scientists had already written on the nature of light and color vision, it was not until Newton that light was identified as the source of the color sensation. In 1810, Goethe published his comprehensive Theory of Colors in which he provided a rational description of color experience, which 'tells us how it originates, not what it is'. (Schopenhauer) In 1801 Thomas Young proposed his trichromatic theory, based on the observation that any color could be matched with a combination of three lights. This theory was later refined by James Clerk Maxwell and Hermann von Helmholtz. As Helmholtz puts it, "the principles of Newton's law of mixture were experimentally confirmed by Maxwell in 1856. Young's theory of color sensations, like so much else that this marvelous investigator achieved in advance of his time, remained unnoticed until Maxwell directed attention to it." At the same time as Helmholtz, Ewald Hering developed the opponent process theory of color, noting that color blindness and afterimages typically come in opponent pairs (red-green, blue-orange, yellow-violet, and black-white). Ultimately these two theories were synthesized in 1957 by Hurvich and Jameson, who showed that retinal processing corresponds to the trichromatic theory, while processing at the level of the lateral geniculate nucleus corresponds to the opponent theory. In 1931, an international group of experts known as the Commission internationale de l'éclairage (CIE) developed a mathematical color model, which mapped out the space of observable colors and assigned a set of three numbers to each.
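Color models used in everyday software work in the same spirit, assigning each color a small tuple of numbers. As a minimal sketch of the RGB, HSV and HLS (HSL) models mentioned earlier, the following uses Python's standard-library colorsys module; the module and the sample color are illustration choices, not something discussed in the text:

```python
# The same color expressed in three color models. All component values are
# fractions in [0, 1]; the orange-ish red below is chosen arbitrarily.

import colorsys

r, g, b = 1.0, 0.4, 0.0
print("RGB:", (r, g, b))
print("HSV:", colorsys.rgb_to_hsv(r, g, b))  # hue, saturation, value
print("HLS:", colorsys.rgb_to_hls(r, g, b))  # hue, lightness, saturation
```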
Color in the eye The ability of the human eye to distinguish colors is based upon the varying sensitivity of different cells in the retina to light of different wavelengths. Humans are trichromatic—the retina contains three types of color receptor cells, or cones. One type, relatively distinct from the other two, is most responsive to light that is perceived as blue or blue-violet, with wavelengths around 450 nm; cones of this type are sometimes called short-wavelength cones or S cones (or misleadingly, blue cones). The other two types are closely related genetically and chemically: middle-wavelength cones, M cones, or green cones are most sensitive to light perceived as green, with wavelengths around 540 nm, while the long-wavelength cones, L cones, or red cones, are most sensitive to light that is perceived as greenish yellow, with wavelengths around 570 nm. Light, no matter how complex its composition of wavelengths, is reduced to three color components by the eye. Each cone type adheres to the principle of univariance, which is that each cone's output is determined by the amount of light that falls on it over all wavelengths. For each location in the visual field, the three types of cones yield three signals based on the extent to which each is stimulated. These amounts of stimulation are sometimes called tristimulus values. The response curve as a function of wavelength varies for each type of cone. Because the curves overlap, some tristimulus values do not occur for any incoming light combination. For example, it is not possible to stimulate only the mid-wavelength (so-called "green") cones; the other cones will inevitably be stimulated to some degree at the same time. The set of all possible tristimulus values determines the human color space. It has been estimated that humans can distinguish roughly 10 million different colors. The other type of light-sensitive cell in the eye, the rod, has a different response curve. In normal situations, when light is bright enough to strongly stimulate the cones, rods play virtually no role in vision at all. On the other hand, in dim light, the cones are understimulated leaving only the signal from the rods, resulting in a colorless response. (Furthermore, the rods are barely sensitive to light in the "red" range.) In certain conditions of intermediate illumination, the rod response and a weak cone response can together result in color discriminations not accounted for by cone responses alone. These effects, combined, are summarized also in the Kruithof curve, which describes the change of color perception and pleasingness of light as a function of temperature and intensity. Color in the brain While the mechanisms of color vision at the level of the retina are well-described in terms of tristimulus values, color processing after that point is organized differently. A dominant theory of color vision proposes that color information is transmitted out of the eye by three opponent processes, or opponent channels, each constructed from the raw output of the cones: a red–green channel, a blue–yellow channel, and a black–white "luminance" channel. This theory has been supported by neurobiology, and accounts for the structure of our subjective color experience. Specifically, it explains why humans cannot perceive a "reddish green" or "yellowish blue", and it predicts the color wheel: it is the collection of colors for which at least one of the two color channels measures a value at one of its extremes. 
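A toy sketch of the opponent recombination just described follows; the weights are invented for illustration, and real opponent-channel weightings are more involved and model-dependent:

```python
# Recombine three cone signals (L, M, S tristimulus values) into the opponent
# channels described above: red-green, blue-yellow, and luminance.

def opponent_channels(L, M, S):
    red_green   = L - M            # positive -> reddish, negative -> greenish
    blue_yellow = S - (L + M) / 2  # positive -> bluish,  negative -> yellowish
    luminance   = L + M            # S cones contribute little to luminance
    return red_green, blue_yellow, luminance

# A stimulus weighted toward long wavelengths reads as reddish and yellowish.
print(opponent_channels(0.9, 0.7, 0.1))
```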
The exact nature of color perception beyond the processing already described, and indeed the status of color as a feature of the perceived world or rather as a feature of our perception of the world—a type of qualia—is a matter of complex and continuing philosophical dispute. Nonstandard color perception Color vision deficiency A color vision deficiency causes an individual to perceive a smaller gamut of colors than the standard observer with normal color vision. The effect can be mild, having lower "color resolution" (i.e. anomalous trichromacy), moderate, lacking an entire dimension or channel of color (e.g. dichromacy), or complete, lacking all color perception (i.e. monochromacy). Most forms of color blindness derive from one or more of the three classes of cone cells either being missing, having a shifted spectral sensitivity or having lower responsiveness to incoming light. In addition, cerebral achromatopsia is caused by neural anomalies in those parts of the brain where visual processing takes place. Some colors that appear distinct to an individual with normal color vision will appear metameric to the color blind. The most common form of color blindness is congenital red–green color blindness, affecting ~8% of males. Individuals with the strongest form of this condition (dichromacy) will experience blue and purple, green and yellow, teal and gray as colors of confusion, i.e. metamers. Tetrachromacy Outside of humans, which are mostly trichromatic (having three types of cones), most mammals are dichromatic, possessing only two types of cones. However, outside of mammals, most vertebrates are tetrachromatic, having four types of cones; these include most birds, reptiles, amphibians, and bony fish. An extra dimension of color vision means these vertebrates can see two distinct colors that a normal human would view as metamers. Some invertebrates, such as the mantis shrimp, have an even higher number of cones (12), which could lead to a richer color gamut than is even imaginable by humans. The existence of human tetrachromats is a contentious notion. As many as half of all human females have 4 distinct cone classes, which could enable tetrachromacy. However, a distinction must be made between retinal (or weak) tetrachromats, which express four cone classes in the retina, and functional (or strong) tetrachromats, which are able to make the enhanced color discriminations expected of tetrachromats. In fact, there is only one peer-reviewed report of a functional tetrachromat. It is estimated that while the average person is able to see one million colors, someone with functional tetrachromacy could see a hundred million colors. Synesthesia In certain forms of synesthesia, perceiving letters and numbers (grapheme–color synesthesia) or hearing sounds (chromesthesia) will evoke a perception of color. Behavioral and functional neuroimaging experiments have demonstrated that these color experiences lead to changes in behavioral tasks and lead to increased activation of brain regions involved in color perception, thus demonstrating their reality, and similarity to real color percepts, albeit evoked through a non-standard route. Synesthesia can occur genetically, with 4% of the population having variants associated with the condition. Synesthesia has also been known to occur with brain damage, drugs, and sensory deprivation. The philosopher Pythagoras experienced synesthesia and provided one of the first written accounts of the condition in approximately 550 BCE.
He created mathematical equations for musical notes that could form part of a scale, such as an octave. Afterimages After exposure to strong light in their sensitivity range, photoreceptors of a given type become desensitized. For a few seconds after the light ceases, they will continue to signal less strongly than they otherwise would. Colors observed during that period will appear to lack the color component detected by the desensitized photoreceptors. This effect is responsible for the phenomenon of afterimages, in which the eye may continue to see a bright figure after looking away from it, but in a complementary color. Afterimage effects have also been used by artists, including Vincent van Gogh. Color constancy When an artist uses a limited color palette, the human eye tends to compensate by seeing any gray or neutral color as the color which is missing from the color wheel. For example, in a limited palette consisting of red, yellow, black, and white, a mixture of yellow and black will appear as a variety of green, a mixture of red and black will appear as a variety of purple, and pure gray will appear bluish. The trichromatic theory is strictly true when the visual system is in a fixed state of adaptation. In reality, the visual system is constantly adapting to changes in the environment and compares the various colors in a scene to reduce the effects of the illumination. If a scene is illuminated with one light, and then with another, as long as the difference between the light sources stays within a reasonable range, the colors in the scene appear relatively constant to us. This was studied by Edwin H. Land in the 1970s and led to his retinex theory of color constancy. Both phenomena are readily explained and mathematically modeled with modern theories of chromatic adaptation and color appearance (e.g. CIECAM02, iCAM). There is no need to dismiss the trichromatic theory of vision, but rather it can be enhanced with an understanding of how the visual system adapts to changes in the viewing environment. Reproduction Color reproduction is the science of creating colors for the human eye that faithfully represent the desired color. It focuses on how to construct a spectrum of wavelengths that will best evoke a certain color in an observer. Most colors are not spectral colors, meaning they are mixtures of various wavelengths of light. However, these non-spectral colors are often described by their dominant wavelength, which identifies the single wavelength of light that produces a sensation most similar to the non-spectral color. Dominant wavelength is roughly akin to hue. There are many color perceptions that by definition cannot be pure spectral colors due to desaturation or because they are purples (mixtures of red and violet light, from opposite ends of the spectrum). Some examples of necessarily non-spectral colors are the achromatic colors (black, gray, and white) and colors such as pink, tan, and magenta. Two different light spectra that have the same effect on the three color receptors in the human eye will be perceived as the same color. They are metamers of that color. This is exemplified by the white light emitted by fluorescent lamps, which typically has a spectrum of a few narrow bands, while daylight has a continuous spectrum. The human eye cannot tell the difference between such light spectra just by looking into the light source, although the color rendering index of each light source may affect the color of objects illuminated by these metameric light sources. 
Similarly, most human color perceptions can be generated by a mixture of three colors called primaries. This is used to reproduce color scenes in photography, printing, television, and other media. There are a number of methods or color spaces for specifying a color in terms of three particular primary colors. Each method has its advantages and disadvantages depending on the particular application. No mixture of colors, however, can produce a response truly identical to that of a spectral color, although one can get close, especially for the longer wavelengths, where the CIE 1931 color space chromaticity diagram has a nearly straight edge. For example, mixing green light (530 nm) and blue light (460 nm) produces cyan light that is slightly desaturated, because response of the red color receptor would be greater to the green and blue light in the mixture than it would be to a pure cyan light at 485 nm that has the same intensity as the mixture of blue and green. Because of this, and because the primaries in color printing systems generally are not pure themselves, the colors reproduced are never perfectly saturated spectral colors, and so spectral colors cannot be matched exactly. However, natural scenes rarely contain fully saturated colors, thus such scenes can usually be approximated well by these systems. The range of colors that can be reproduced with a given color reproduction system is called the gamut. The CIE chromaticity diagram can be used to describe the gamut. Another problem with color reproduction systems is connected with the initial measurement of color, or colorimetry. The characteristics of the color sensors in measurement devices (e.g. cameras, scanners) are often very far from the characteristics of the receptors in the human eye. A color reproduction system "tuned" to a human with normal color vision may give very inaccurate results for other observers, according to color vision deviations to the standard observer. The different color response of different devices can be problematic if not properly managed. For color information stored and transferred in digital form, color management techniques, such as those based on ICC profiles, can help to avoid distortions of the reproduced colors. Color management does not circumvent the gamut limitations of particular output devices, but can assist in finding good mapping of input colors into the gamut that can be reproduced. Additive coloring Additive color is light created by mixing together light of two or more different colors. Red, green, and blue are the additive primary colors normally used in additive color systems such as projectors, televisions, and computer terminals. Subtractive coloring Subtractive coloring uses dyes, inks, pigments, or filters to absorb some wavelengths of light and not others. The color that a surface displays comes from the parts of the visible spectrum that are not absorbed and therefore remain visible. Without pigments or dye, fabric fibers, paint base and paper are usually made of particles that scatter white light (all colors) well in all directions. When a pigment or ink is added, wavelengths are absorbed or "subtracted" from white light, so light of another color reaches the eye. If the light is not a pure white source (the case of nearly all forms of artificial lighting), the resulting spectrum will appear a slightly different color. Red paint, viewed under blue light, may appear black. Red paint is red because it scatters only the red components of the spectrum. 
If red paint is illuminated by blue light, it will be absorbed by the red paint, creating the appearance of a black object. Structural color Structural colors are colors caused by interference effects rather than by pigments. Color effects are produced when a material is scored with fine parallel lines, formed of one or more parallel thin layers, or otherwise composed of microstructures on the scale of the color's wavelength. If the microstructures are spaced randomly, light of shorter wavelengths will be scattered preferentially to produce Tyndall effect colors: the blue of the sky (Rayleigh scattering, caused by structures much smaller than the wavelength of light, in this case, air molecules), the luster of opals, and the blue of human irises. If the microstructures are aligned in arrays, for example, the array of pits in a CD, they behave as a diffraction grating: the grating reflects different wavelengths in different directions due to interference phenomena, separating mixed "white" light into light of different wavelengths. If the structure is one or more thin layers then it will reflect some wavelengths and transmit others, depending on the layers' thickness. Structural color is studied in the field of thin-film optics. The most ordered or the most changeable structural colors are iridescent. Structural color is responsible for the blues and greens of the feathers of many birds (the blue jay, for example), as well as certain butterfly wings and beetle shells. Variations in the pattern's spacing often give rise to an iridescent effect, as seen in peacock feathers, soap bubbles, films of oil, and mother of pearl, because the reflected color depends upon the viewing angle. Numerous scientists have carried out research in butterfly wings and beetle shells, including Isaac Newton and Robert Hooke. Since 1942, electron micrography has been used, advancing the development of products that exploit structural color, such as "photonic" cosmetics. Cultural perspective Colors, their meanings and associations can play a major role in works of art, including literature. Associations Individual colors have a variety of cultural associations such as national colors (in general described in individual color articles and color symbolism). The field of color psychology attempts to identify the effects of color on human emotion and activity. Chromotherapy is a form of alternative medicine attributed to various Eastern traditions. Colors have different associations in different countries and cultures. Different colors have been demonstrated to have effects on cognition. For example, researchers at the University of Linz in Austria demonstrated that the color red significantly decreases cognitive functioning in men. The combination of the colors red and yellow together can induce hunger, which has been capitalized on by a number of chain restaurants. Color plays a role in memory development too. A photograph that is in black and white is slightly less memorable than one in color. Studies also show that wearing bright colors makes you more memorable to people you meet. Terminology Colors vary in several different ways, including hue (shades of red, orange, yellow, green, blue, and violet, etc), saturation, brightness. Some color words are derived from the name of an object of that color, such as "orange" or "salmon", while others are abstract, like "red". 
In the 1969 study Basic Color Terms: Their Universality and Evolution, Brent Berlin and Paul Kay describe a pattern in naming "basic" colors (like "red" but not "red-orange" or "dark red" or "blood red", which are "shades" of red). All languages that have two "basic" color names distinguish dark/cool colors from bright/warm colors. The next colors to be distinguished are usually red and then yellow or green. All languages with six "basic" colors include black, white, red, green, blue, and yellow. The pattern holds up to a set of twelve: black, gray, white, pink, red, orange, yellow, green, blue, purple, brown, and azure (distinct from blue in Russian and Italian, but not English). See also Chromophore Color analysis Color in Chinese culture Color mapping Complementary colors Impossible color International Color Consortium International Commission on Illumination Lists of colors (compact version) Neutral color Pearlescent coating including Metal effect pigments Pseudocolor Primary, secondary and tertiary colors References External links Image processing Vision
https://en.wikipedia.org/wiki/Computation
A computation is any type of arithmetic or non-arithmetic calculation that is well-defined. Common examples of computations are mathematical equations and computer algorithms. Mechanical or electronic devices (or, historically, people) that perform computations are known as computers. The study of computation is the field of computability, itself a sub-field of computer science. Introduction The notion that mathematical statements should be 'well-defined' had been argued by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive. A candidate definition was proposed independently by several mathematicians in the 1930s. The best-known variant was formalised by the mathematician Alan Turing, who defined a well-defined statement or calculation as any statement that could be expressed in terms of the initialisation parameters of a Turing Machine. Other (mathematically equivalent) definitions include Alonzo Church's lambda-definability, Herbrand-Gödel-Kleene's general recursiveness and Emil Post's 1-definability. Today, any formal statement or calculation that exhibits this quality of well-definedness is termed computable, while the statement or calculation itself is referred to as a computation. Turing's definition apportioned "well-definedness" to a very large class of mathematical statements, including all well-formed algebraic statements, and all statements written in modern computer programming languages. Despite the widespread uptake of this definition, there are some mathematical concepts that have no well-defined characterisation under this definition. This includes the halting problem and the busy beaver game. It remains an open question as to whether there exists a more powerful definition of 'well-defined' that is able to capture both computable and 'non-computable' statements. Some examples of mathematical statements that are computable include: All statements characterised in modern programming languages, including C++, Python, and Java. All calculations carried by an electronic computer, calculator or abacus. All calculations carried out on an analytical engine. All calculations carried out on a Turing Machine. The majority of mathematical statements and calculations given in maths textbooks. Some examples of mathematical statements that are not computable include: Calculations or statements which are ill-defined, such that they cannot be unambiguously encoded into a Turing machine: ("Paul loves me twice as much as Joe"). Problem statements which do appear to be well-defined, but for which it can be proved that no Turing machine exists to solve them (such as the halting problem). The Physical process of computation Computation can be seen as a purely physical process occurring inside a closed physical system called a computer. Turing's 1937 proof, On Computable Numbers, with an Application to the Entscheidungsproblem, demonstrated that there is a formal equivalence between computable statements and particular physical systems, commonly called computers. Examples of such physical systems are: Turing machines, human mathematicians following strict rules, digital computers, mechanical computers, analog computers and others. Alternative accounts of computation The mapping account An alternative account of computation is found throughout the works of Hilary Putnam and others. Peter Godfrey-Smith has dubbed this the "simple mapping account." 
Gualtiero Piccinini's summary of this account states that a physical system can be said to perform a specific computation when there is a mapping between the state of that system and the computation such that the "microphysical states [of the system] mirror the state transitions between the computational states." The semantic account Philosophers such as Jerry Fodor have suggested various accounts of computation with the restriction that semantic content be a necessary condition for computation (that is, what differentiates an arbitrary physical system from a computing system is that the operands of the computation represent something). This notion attempts to prevent the logical abstraction of the mapping account of pancomputationalism, the idea that everything can be said to be computing everything. The mechanistic account Gualtiero Piccinini proposes an account of computation based on mechanical philosophy. It states that physical computing systems are types of mechanisms that, by design, perform physical computation, or the manipulation (by a functional mechanism) of a "medium-independent" vehicle according to a rule. "Medium-independence" requires that the property can be instantiated by multiple realizers and multiple mechanisms, and that the inputs and outputs of the mechanism also be multiply realizable. In short, medium-independence allows for the use of physical variables with properties other than voltage (as in typical digital computers); this is imperative in considering other types of computation, such as that which occurs in the brain or in a quantum computer. A rule, in this sense, provides a mapping among inputs, outputs, and internal states of the physical computing system. Mathematical models In the theory of computation, a diversity of mathematical models of computation has been developed. Typical mathematical models of computers are the following: State models including Turing machine, pushdown automaton, finite state automaton, and PRAM Functional models including lambda calculus Logical models including logic programming Concurrent models including actor model and process calculi Giunti calls the models studied by computation theory computational systems, and he argues that all of them are mathematical dynamical systems with discrete time and discrete state space. He maintains that a computational system is a complex object which consists of three parts. First, a mathematical dynamical system with discrete time and discrete state space; second, a computational setup, which is made up of a theoretical part and a real part; third, an interpretation, which links the dynamical system with the setup. See also Computability Theory Hypercomputation Computational problem Limits of computation Computationalism Notes References Theoretical computer science Computability theory
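To make the state-based models listed above concrete, here is a minimal sketch, not taken from the text, of a one-tape Turing machine that adds 1 to a binary number; its states, symbols, and transition rules are chosen ad hoc for the illustration:

```python
# A tiny Turing machine: scan right to the end of a binary number, then walk
# back left adding 1 with carry. Blank cells are represented by "_".

def run_turing_machine(tape_str):
    tape = dict(enumerate(tape_str))     # sparse tape; absent cells are blank
    head, state = 0, "right"
    rules = {
        ("right", "0"): ("right", "0", +1),   # scan right to the first blank
        ("right", "1"): ("right", "1", +1),
        ("right", "_"): ("carry", "_", -1),   # then move left, carrying
        ("carry", "1"): ("carry", "0", -1),
        ("carry", "0"): ("done",  "1",  0),
        ("carry", "_"): ("done",  "1",  0),
    }
    while state != "done":
        symbol = tape.get(head, "_")
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape[i] for i in cells).strip("_")

print(run_turing_machine("1011"))  # binary 11 + 1 -> "1100"
```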
https://en.wikipedia.org/wiki/Minidish
The Minidish is the tradename used for the small-sized satellite dish used by Freesat and Sky. The term has entered the vocabulary in the UK and Ireland as a generic term for a satellite dish, particularly small ones. The Minidish is an oval, mesh satellite dish capable of reflecting signals broadcast in the upper X band and . Two sizes exist: "Zone 1" dishes are issued in southern and Northern England and parts of Scotland and were 43 cm vertically prior to 2009; newer mark 4 dishes are approximately 50 cm "Zone 2" dishes are issued in elsewhere (Wales, Northern Ireland, Republic of Ireland, Scotland and northern England), which are 57 cm vertically. The Minidish uses a non-standard connector for the LNB, consisting of a peg about in width and in height prior to the mark 4 dishes introduced in 2009, as opposed to the 40 mm collar. This enforces the use of Sky-approved equipment, but also ensures that a suitable LNB is used. Due to the shape of the dish, an LNB with an oval feedhorn is required to get full signal. References Satellite television Radio electronics Sky Group Brands that became generic
https://en.wikipedia.org/wiki/Indolamines
Indolamines are a family of neurotransmitters that share a common molecular structure (namely, indolamine). Indolamines are a classification of monoamine neurotransmitter, along with catecholamines and ethylamine derivatives. A common example of an indolamine is the tryptophan derivative serotonin, a neurotransmitter involved in mood and sleep. Another example of an indolamine is melatonin. In biochemistry, indolamines are substituted indole compounds that contain an amino group. Examples of indolamines include the lysergamides. Synthesis Indolamines are biologically synthesized from the essential amino acid tryptophan. Tryptophan is synthesized into serotonin through the addition of a hydroxyl group by the enzyme tryptophan hydroxylase and the subsequent removal of the carboxyl group by the enzyme 5-HTP decarboxylase. See also Indole Tryptamine References Neurotransmitters Indoles Amines
https://en.wikipedia.org/wiki/R4000
The R4000 is a microprocessor developed by MIPS Computer Systems that implements the MIPS III instruction set architecture (ISA). Officially announced on 1 October 1991, it was one of the first 64-bit microprocessors and the first MIPS III implementation. In the early 1990s, when RISC microprocessors were expected to replace CISC microprocessors such as the Intel i486, the R4000 was selected to be the microprocessor of the Advanced Computing Environment (ACE), an industry standard intended to define a common RISC platform. ACE ultimately failed for a number of reasons, but the R4000 found success in the workstation and server markets. Models There are three configurations of the R4000: the R4000PC, an entry-level model with no support for a secondary cache; the R4000SC, a model with secondary cache but no multiprocessor capability; and the R4000MC, a model with secondary cache and support for the cache coherency protocols required by multiprocessor systems. Description The R4000 is a scalar superpipelined microprocessor with an eight-stage integer pipeline. During the first stage (IF), a virtual address for an instruction is generated and the instruction translation lookaside buffer (TLB) begins the translation of the address to a physical address. In the second stage (IS), translation is completed and the instruction is fetched from an internal 8 KB instruction cache. The instruction cache is direct-mapped and virtually indexed, physically tagged. It has a 16- or 32-byte line size. Architecturally, it could be expanded to 32 KB. During the third stage (RF), the instruction is decoded and the register file is read. The MIPS III ISA defines two register files, one for the integer unit and the other for floating-point. Each register file is 64 bits wide and contains 32 entries. The integer register file has two read ports and one write port, while the floating-point register file has two read ports and two write ports. Execution begins at stage four (EX) for both integer and floating-point instructions, and results are written back to the register files on completion in stage eight (WB). Results may be bypassed to dependent instructions where possible. Integer execution The R4000 has an arithmetic logic unit (ALU), a shifter, a multiplier and divider, and a load aligner for executing integer instructions. The ALU consists of a 64-bit carry-select adder and a logic unit and is pipelined. The shifter is a 32-bit barrel shifter. It performs 64-bit shifts in two cycles, stalling the pipeline as a result. This design was chosen to save die area. The multiplier and divider are not pipelined and have significant latencies: multiplies have a 10- or 20-cycle latency for 32-bit or 64-bit integers, respectively; whereas divides have a 69- or 133-cycle latency for 32-bit or 64-bit integers, respectively. Most instructions have a single-cycle latency. The ALU adder is also used for calculating virtual addresses for loads, stores and branches. Load and store instructions are executed by the integer pipeline, and access the on-chip 8 KB data cache. Floating-point execution The R4000 has an on-die IEEE 754-1985-compliant floating-point unit (FPU), referred to as the R4010. The FPU is a coprocessor designated CP1 (the MIPS ISA defined four coprocessors, designated CP0 to CP3). The FPU can operate in two modes, 32- or 64-bit, selected by setting a bit (the FR bit) in the CPU status register. In 32-bit mode, the 32 floating-point registers become 32 bits wide when used to hold single-precision floating-point numbers.
When used to hold double-precision numbers, there are 16 floating-point registers (the registers are paired). The FPU can operate in parallel with the ALU unless there is a data or resource dependency, which causes it to stall. It contains three sub-units: an adder, a multiplier and a divider. The multiplier and divider can execute an instruction in parallel with the adder, but they use the adder in their final stages of execution, imposing limits on overlapping execution. Under certain conditions, the FPU can execute up to three instructions at any time, one in each unit. The FPU is capable of retiring one instruction per cycle. The adder and multiplier are pipelined. The multiplier has a four-stage pipeline. It is clocked at twice the clock frequency of the microprocessor for adequate performance and uses dynamic logic to achieve the high clock frequency. Division has a 23- or 36-cycle latency for single- or double-precision operations, and square root has a 54- or 112-cycle latency. Division and square root use the SRT algorithm. Memory management The memory management unit (MMU) uses a 48-entry translation lookaside buffer to translate virtual addresses. The R4000 uses a 64-bit virtual address, but only implements 40 of the 64 bits, allowing 1 TB of virtual memory; the remaining bits are checked to ensure that they contain zero. The R4000 uses a 36-bit physical address and is thus able to address 64 GB of physical memory. Secondary cache The R4000 (SC and MC configurations only) supports an external secondary cache with a capacity of 128 KB to 4 MB. The cache is accessed via a dedicated 128-bit data bus. The secondary cache can be configured either as a unified cache or as a split instruction and data cache. In the latter configuration, each cache can have a capacity of 128 KB to 2 MB. The secondary cache is physically indexed, physically tagged and has a programmable line size of 128, 256, 512 or 1,024 bytes. The cache controller is on-die. The cache is built from standard static random access memory (SRAM). The data and tag buses are ECC-protected. System bus The R4000 uses a 64-bit system bus called the SysAD bus. The SysAD bus is an address and data multiplexed bus; that is, it uses the same set of wires to transfer data and addresses. While this reduces bandwidth, it is also less expensive than providing a separate address bus, which requires more pins and increases the complexity of the system. The SysAD bus can be configured to operate at half, a third or a quarter of the internal clock frequency. The SysAD bus generates its clock signal by dividing the operating frequency. Transistor count, die dimensions and process details The R4000 contains 1.2 million transistors. It was designed for a 1.0 μm two-layer metal complementary metal–oxide–semiconductor (CMOS) process. As MIPS was a fabless company, the R4000 was fabricated by partners in their own processes, which had a 0.8 μm minimum feature size. Clocking The R4000 generates the various clock signals from a master clock signal generated externally. For the operating frequency, the R4000 multiplies the master clock signal by two using an on-die phase-locked loop (PLL). Packaging The R4000PC is packaged in a 179-pin ceramic pin grid array (CPGA). The R4000SC and R4000MC are packaged in a 447-pin ceramic staggered pin grid array (SPGA). The pin-out of the R4000MC differs from that of the R4000SC: some pins that are unused on the R4000SC carry the signals that implement cache coherency on the R4000MC.
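As a quick back-of-the-envelope check of the address widths quoted in the memory-management description above (an illustrative calculation, not text from the original article), the 40 implemented virtual-address bits and the 36-bit physical address correspond to the stated 1 TB and 64 GB limits as follows:

```python
# Back-of-the-envelope check of the R4000 address-space figures quoted above.

virtual_bits = 40    # implemented bits of the 64-bit virtual address
physical_bits = 36   # width of the physical address

virtual_bytes = 2 ** virtual_bits     # 1,099,511,627,776 bytes
physical_bytes = 2 ** physical_bits   # 68,719,476,736 bytes

print(virtual_bytes // 2 ** 40, "TB of virtual address space")    # -> 1
print(physical_bytes // 2 ** 30, "GB of physical address space")  # -> 64
```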
The pin-out of the R4000PC is similar to that of the PGA-packaged R4200 and R4600 microprocessors. This characteristic enables a properly designed system to use any of the three microprocessors. R4400 The R4400 is a further development of the R4000. It was announced in early November 1992. Samples of the microprocessor had been shipped to selected customers before then, with general availability in January 1993. The R4400 operates at clock frequencies of 100, 133, 150, 200, and 250 MHz. The only major improvement over the R4000 was larger primary caches, which were doubled in capacity from 8 KB to 16 KB each. It contained 2.3 million transistors. The R4400 was licensed by Integrated Device Technology (IDT), LSI Logic, NEC, Performance Semiconductor, Siemens AG and Toshiba. IDT, NEC, Siemens and Toshiba fabricated and marketed the microprocessor. LSI Logic used the R4400 in custom products. Performance Semiconductor sold its logic division to Cypress Semiconductor, where the MIPS microprocessor products were discontinued. NEC marketed their version as the VR4400. The first version, a 150 MHz part, was announced in November 1992. Early versions were fabricated in a 0.6 μm process. In mid-1995, a 250 MHz part began sampling. It was fabricated in a 0.35 μm four-layer-metal process. NEC also produced the MR4401, a ceramic multi-chip module (MCM) that contained a VR4400SC with ten 1 Mbit SRAM chips that implemented a 1 MB secondary cache. The MCM was pin-compatible with the R4x00PC. The first version, a 150 MHz part, was announced in 1994. In 1995, a 200 MHz part was announced. Toshiba marketed their version as the TC86R4400. A 200 MHz part containing 2.3 million transistors and measuring 134 mm², fabricated in a 0.3 μm process, was introduced in mid-1994. The R4400PC was priced at , the R4400SC at , and the R4400MC at in quantities of 10,000. Usage The R4400 is used by: Carrera Computers in their Windows NT personal computers and workstations Concurrent Computer Corporation in their real-time multiprocessor Maxion systems DeskStation Technology in their Windows NT personal computers and DeskStation Tyne workstation Digital Equipment Corporation in their DECstation 5000/260 workstation and server NEC Corporation in their RISCstation workstations, RISCserver servers, and Cenju-3 supercomputer NeTPower in their Windows NT workstations and servers Pyramid Technology in their Nile Series servers (which used the R4400MC) Siemens Nixdorf Informationssysteme (SNI) in their RM-series UNIX servers and SR2000 mainframe Silicon Graphics in their Onyx, Indigo, Indigo2, and Indy workstations; and in their Challenge server Tandem Computers in their NonStop Himalaya fault-tolerant servers Chipsets The R4000 and R4400 microprocessors were interfaced to the system by custom ASICs or by commercially available chipsets. System vendors such as SGI developed their own ASICs for their systems. Commercial chipsets were developed, fabricated and marketed by companies such as Toshiba with their Tiger Shark chipset, which provided an i486-compatible bus. Notes References Heinrich, Joe. MIPS R4000 Microprocessor User's Manual, Second Edition. Sunil Mirapuri, Michael Woodacre, Nader Vasseghi, "The Mips R4000 Processor," IEEE Micro, vol. 12, no. 2, pp. 10–22, March/April 1992 Advanced RISC Computing MIPS implementations MIPS microprocessors Superscalar microprocessors 64-bit computers 64-bit microprocessors
https://en.wikipedia.org/wiki/Hexachlorobenzene
Hexachlorobenzene, or perchlorobenzene, is an organochloride with the molecular formula C6Cl6. It is a fungicide formerly used as a seed treatment, especially on wheat to control the fungal disease bunt. It has been banned globally under the Stockholm Convention on Persistent Organic Pollutants. Physical and chemical properties Hexachlorobenzene is a stable, white, crystalline chlorinated hydrocarbon. It is sparingly soluble in organic solvents such as benzene, diethyl ether and alcohol, but practically insoluble in water, with which it does not react. It has a flash point of 468 °F (242 °C) and is stable under normal temperatures and pressures. It is combustible but does not ignite readily. When heated to decomposition, hexachlorobenzene emits highly toxic fumes of hydrochloric acid, other chlorinated compounds (such as phosgene), carbon monoxide, and carbon dioxide. History Hexachlorobenzene was first known as "Julin's chloride of carbon", as it was discovered as a strange and unexpected product of impurities reacting in Julin's nitric acid factory. In 1864, Hugo Müller synthesised the compound by the reaction of benzene and antimony pentachloride; he then suggested that his compound was the same as Julin's chloride of carbon. Müller had previously also believed it was the same compound as Michael Faraday's "perchloride of carbon", and obtained a small sample of Julin's chloride of carbon to send to Richard Phillips and Faraday for investigation. In 1867, Henry Bassett proved that these were the same compound and named it "hexachlorobenzene". Leopold Gmelin named it "dichloride of carbon" and claimed that the carbon was derived from cast iron and the chlorine was from crude saltpetre. Victor Regnault obtained hexachlorobenzene from the decomposition of chloroform and tetrachloroethylene vapours passed through a red-hot tube. Synthesis Hexachlorobenzene has been made on a laboratory scale since the 1890s, by the electrophilic aromatic substitution reaction of chlorine with benzene or chlorobenzenes. Large-scale manufacture for use as a fungicide was developed by using the residue remaining after purification of the mixture of isomers of hexachlorocyclohexane, from which the insecticide lindane (the γ-isomer) had been removed, leaving the unwanted α- and β-isomers. This mixture is produced when benzene is reacted with chlorine in the presence of ultraviolet light (e.g. from sunlight). Usage Hexachlorobenzene was used in agriculture to control the fungus Tilletia caries (common bunt of wheat). It is also effective against Tilletia controversa (dwarf bunt). The compound was introduced in 1947, normally formulated as a seed dressing, but is now banned in many countries. Safety Hexachlorobenzene is an animal carcinogen and is considered to be a probable human carcinogen. After its introduction as a fungicide for crop seeds in 1945, this toxic chemical was found in all food types. Hexachlorobenzene was banned from use in the United States in 1966. This material has been classified by the International Agency for Research on Cancer (IARC) as a Group 2B carcinogen (possibly carcinogenic to humans). Animal carcinogenicity data for hexachlorobenzene show increased incidences of liver, kidney (renal tubular tumours) and thyroid cancers. Chronic oral exposure in humans has been shown to give rise to a liver disease (porphyria cutanea tarda), skin lesions with discoloration, ulceration, photosensitivity, thyroid effects, bone effects and loss of hair. Neurological changes have been reported in rodents exposed to hexachlorobenzene.
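For reference, the overall stoichiometry of the exhaustive chlorination route described in the Synthesis section above can be written as the following balanced equation (a standard textbook equation, added here for illustration rather than taken from the source):

C6H6 + 6 Cl2 → C6Cl6 + 6 HCl

In practice the substitution proceeds stepwise through the intermediate chlorobenzenes rather than in a single step.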
Hexachlorobenzene may cause embryolethality and teratogenic effects. Human and animal studies have demonstrated that hexachlorobenzene crosses the placenta to accumulate in foetal tissues and is transferred in breast milk. HCB is very toxic to aquatic organisms. It may cause long-term adverse effects in the aquatic environment. Therefore, release into waterways should be avoided. It is persistent in the environment. Ecological investigations have found that biomagnification up the food chain does occur. Hexachlorobenzene has a half-life in the soil of between 3 and 6 years. The risk of bioaccumulation in aquatic species is high. Toxicology Oral LD50 (rat): 10,000 mg/kg Oral LD50 (mice): 4,000 mg/kg Inhalation LC50 (rat): 3,600 mg/m3 The material has relatively low acute toxicity but is toxic because of its persistent and cumulative nature in lipid-rich body tissues. Unique exposure incident In Anatolia, Turkey, between 1955 and 1959, during a period when bread wheat was unavailable, 500 people were fatally poisoned and more than 4,000 people fell ill after eating bread made with HCB-treated seed that was intended for agricultural use. Most of the sick were affected with a liver condition called porphyria cutanea tarda, which disturbs the metabolism of hemoglobin and results in skin lesions. Almost all breastfeeding children under the age of two, whose mothers had eaten tainted bread, died from a condition called "pembe yara" or "pink sore", most likely from high doses of HCB in the breast milk. In one mother's breast milk the HCB level was found to be 20 parts per million in lipid, approximately 2,000 times the average levels of contamination found in breast-milk samples around the world. Follow-up studies 20 to 30 years after the poisoning found average HCB levels in breast milk were still more than seven times the average for unexposed women in that part of the world (in 56 specimens of human milk obtained from mothers with porphyria, the average value was 0.51 ppm in HCB-exposed patients compared to 0.07 ppm in unexposed controls), and 150 times the level allowed in cow's milk. In the same follow-up study of 252 patients (162 males and 90 females, average age 35.7 years), 20–30 years post-exposure, many subjects had dermatologic, neurologic, and orthopedic symptoms and signs. The observed clinical findings include scarring of the face and hands (83.7%), hyperpigmentation (65%), hypertrichosis (44.8%), pinched faces (40.1%), painless arthritis (70.2%), small hands (66.6%), sensory shading (60.6%), myotonia (37.9%), cogwheeling (41.9%), enlarged thyroid (34.9%), and enlarged liver (4.8%). Urine and stool porphyrin levels were determined in all patients, and 17 had at least one porphyrin elevated. Offspring of mothers with three decades of HCB-induced porphyria appear normal. See also Chlorobenzenes—different numbers of chlorine substituents Pentachlorobenzenethiol References Cited works Additional references International Agency for Research on Cancer. In: IARC Monographs on the Evaluation of Carcinogenic Risk to Humans. World Health Organisation, Vol. 79, 2001, pp. 493–567 Registry of Toxic Effects of Chemical Substances. Ed. D. Sweet, US Dept. of Health & Human Services: Cincinnati, 2005. Environmental Health Criteria No 195; International Programme on Chemical Safety, World Health Organization, Geneva, 1997. Toxicological Profile for Hexachlorobenzene (Update), US Dept of Health & Human Services, Sept 2002.
Merck Index, 11th Edition, 4600 External links Obsolete pesticides Chloroarenes Endocrine disruptors Fungicides Hazardous air pollutants IARC Group 2B carcinogens Persistent organic pollutants under the Stockholm Convention Suspected teratogens Suspected embryotoxicants Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution Perchlorocarbons
https://en.wikipedia.org/wiki/Mirex
Mirex is an organochloride that was commercialized as an insecticide and later banned because of its impact on the environment. This white, crystalline, odorless solid is a derivative of cyclopentadiene. It was popularized to control fire ants, but by virtue of its chemical robustness and lipophilicity it was recognized as a bioaccumulative pollutant. The spread of the red imported fire ant was encouraged by the use of mirex, which also kills native ants that are highly competitive with the fire ants. The United States Environmental Protection Agency prohibited its use in 1976. It is prohibited by the Stockholm Convention on Persistent Organic Pollutants. Production and applications Mirex was first synthesized in 1946, but was not used in pesticide formulations until 1955. Mirex was produced by the dimerization of hexachlorocyclopentadiene in the presence of aluminium chloride. Mirex is a stomach insecticide, meaning that it must be ingested by the organism in order to poison it. The insecticidal use was focused on the Southeastern United States to control the imported fire ants Solenopsis saevissima richteri and Solenopsis invicta. Approximately 250,000 kg of mirex was applied to fields between 1962 and 1975 (US NRC, 1978). Most of the mirex was in the form of "4X mirex bait," which consists of 0.3% mirex in 14.7% soybean oil mixed with 85% corncob grits. Application of the 4X bait was designed to give a coverage of 4.2 g mirex/ha and was delivered by aircraft, helicopter or tractor. 1X and 2X baits were also used. Use of mirex as a pesticide was banned in 1978. The Stockholm Convention banned production and use of several persistent organic pollutants, and mirex is one of the "dirty dozen". Degradation Characteristic of chlorocarbons, mirex does not burn easily; combustion products are expected to include carbon dioxide, carbon monoxide, hydrogen chloride, chlorine, phosgene, and other organochlorine species. Slow oxidation produces chlordecone ("Kepone"), a related insecticide that is also banned in most of the western world, but is more readily degraded. Sunlight degrades mirex primarily to photomirex (8-monohydromirex) and later partly to 2,8-dihydromirex. Mirex is highly resistant to microbiological degradation. It only slowly dechlorinates to a monohydro derivative by anaerobic microbial action in sewage sludge and by enteric bacteria. Degradation by soil microorganisms has not been described. Bioaccumulation and biomagnification Mirex is highly cumulative, and the amount retained depends upon the concentration and duration of exposure. There is evidence of accumulation of mirex in aquatic and terrestrial food chains to harmful levels. After six applications of mirex bait at 1.4 kg/ha, high mirex levels were found in some species: turtle fat contained 24.8 mg mirex/kg; kingfishers, 1.9 mg/kg; coyote fat, 6 mg/kg; opossum fat, 9.5 mg/kg; and raccoon fat, 73.9 mg/kg. In a model ecosystem with a terrestrial-aquatic interface, sorghum seedlings were treated with mirex at 1.1 kg/ha. Caterpillars fed on these seedlings and their faeces contaminated the water, which contained algae, snails, Daphnia, mosquito larvae, and fish. After 33 days, the ecological magnification value was 219 for fish and 1165 for snails. Although general environmental levels are low, it is widespread in the biotic and abiotic environment. Being lipophilic, mirex is strongly adsorbed on sediments. Safety Mirex is only moderately toxic in single-dose animal studies (oral values range from 365 to 3,000 mg/kg body weight).
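As a simple consistency check on the application figures given above (arithmetic added for illustration, not taken from the source): bait applied at 1.4 kg/ha containing 0.3% mirex delivers 1,400 g/ha × 0.003 = 4.2 g of active mirex per hectare, matching the stated target coverage of the 4X bait.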
It can enter the body via inhalation, ingestion, and via the skin. The most sensitive effects of repeated exposure in animals are principally associated with the liver, and these effects have been observed with doses as low as 1.0 mg/kg diet (0.05 mg/kg body weight per day), the lowest dose tested. At higher dose levels, it is fetotoxic (25 mg/kg in diet) and teratogenic (6.0 mg/kg per day). Mirex was not generally active in short-term tests for genetic activity. There is sufficient evidence of its carcinogenicity in mice and rats. Delayed onset of toxic effects and mortality is typical of mirex poisoning. Mirex is toxic for a range of aquatic organisms, with crustacea being particularly sensitive. Mirex induces pervasive chronic physiological and biochemical disorders in various vertebrates. No acceptable daily intake (ADI) for mirex has been advised by FAO/WHO. IARC (1979) evaluated mirex's carcinogenic hazard and concluded that "there is sufficient evidence for its carcinogenicity to mice and rats. In the absence of adequate data in humans, based on above result it can be said, that it has carcinogenic risk to humans”. Data on human health effects do not exist . Health effects Per a 1995 ATSDR report mirex caused fatty changes in the livers, hyperexcitability and convulsion, and inhibition of reproduction in animals. It is a potent endocrine disruptor, interfering with estrogen-mediated functions such as ovulation, pregnancy, and endometrial growth. It also induced liver cancer by interaction with estrogen in female rodents. References See also International Organization for the Management of Chemicals (IOMC), 1995, POPs Assessment Report, December.1995. Lambrych KL, and JP Hassett. Wavelength-Dependent Photoreactivity of Mirex in Lake Ontario. Environ. Sci. Technol. 2006, 40, 858-863 Mirex Health and Safety Guide. IPCS International Program on Chemical Safety. Health and Safety Guide No.39. 1990 Toxicological Review of Mirex. In support of summary information on the Integrated Risk Information System (IRIS) 2003. U.S. Environmental Protection Agency, Washington DC. Obsolete pesticides Organochloride insecticides IARC Group 2B carcinogens Endocrine disruptors Persistent organic pollutants under the Stockholm Convention Fetotoxicants Teratogens Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution Cyclobutanes Perchlorocarbons
https://en.wikipedia.org/wiki/Clorgiline
Clorgiline (INN), or clorgyline (BAN), is a monoamine oxidase inhibitor (MAOI) structurally related to pargyline; it is described as an antidepressant. Specifically, it is an irreversible and selective inhibitor of monoamine oxidase A (MAO-A). Clorgiline was never marketed, but it has found use in scientific research. It has been found to bind with high affinity to the σ1 receptor (Ki = 3.2 nM) and with very high affinity to the I2 imidazoline receptor (Ki = 40 pM). Clorgiline is also a multidrug efflux pump inhibitor. Holmes et al. (2012) reversed azole fungicide resistance using clorgiline, showing promise for its use against multiple fungicide resistance. References Abandoned drugs Propargyl compounds Amines Chloroarenes Monoamine oxidase inhibitors Phenol ethers Sigma agonists
https://en.wikipedia.org/wiki/ARCAspace
The Romanian Cosmonautics and Aeronautics Association, also known as ARCAspace, is an aerospace company based in Râmnicu Vâlcea, Romania. It builds rockets, high-altitude balloons, and unmanned aerial vehicles. It was founded in 1999 as a non-governmental organization in Romania by the Romanian engineer and entrepreneur Dumitru Popescu and other rocket and aeronautics enthusiasts. Since then, ARCA has launched two stratospheric rockets and four large-scale stratospheric balloons, including a cluster balloon. It was awarded two contracts with the Romanian government and one contract with the European Space Agency. ARCAspace is currently developing a three-stage, semi-reusable, steam-powered rocket called EcoRocket and in 2022 shifted its business model to asteroid mining. History 1999–2004: Demonstrator rocket family ARCA was established as the Romanian Cosmonautics and Aeronautics Association, a non-governmental organization, in 1999 by a group of rocket and aeronautics enthusiasts. Their goal was to construct and launch space rockets. After experimenting with designs for different fuels and rocket engine types, including solid fuel rockets, they decided to use fiberglass for engine and tank construction and hydrogen peroxide as fuel. Their first vehicle was named Demonstrator and was a long, unguided, self-stabilized rocket. It never flew; instead, it was used in various public exhibitions to attract funds and sponsorships. Their second rocket, Demonstrator 2, was constructed in 2003. For this, ARCA created their first rocket engine testing installation, where they tested their hydrogen peroxide engine. After the tests were successful, they constructed Demonstrator 2B, an improved version of their previous rocket. It had a length and diameter and used a high launch pad. In 2003 ARCA also signed up for the Ansari X Prize international competition and started design of the Orizont suborbital vehicle, capable of carrying a crew of two up to an altitude of . Orizont was to be ARCA's competing vehicle for the Ansari X Prize. It was designed to use a disposable jet engine up to an altitude of and then ignite its main hydrogen peroxide rocket engine in order to propel it to the target altitude. On September 9, 2004, ARCA successfully launched the Demonstrator 2B rocket from Cape Midia Air Force Base. Because of powerful wind gusts of up to , they were forced to use only 20 percent of the intended fuel quantity in order to keep within the safety zone allocated by the Air Force. The altitude reached was . 90 journalists from Romania, Germany, and Austria were present at the launch. After the launch, ARCA started construction of the Orizont spaceplane and completed the aircraft structure by 2005. 2005–2010: Stabilo and Helen rockets ARCA organized a public presentation of their Orizont spaceplane in front of the Palace of the Parliament in Bucharest. Because of financial problems encountered with the construction of Orizont, ARCA decided to suspend its development and instead design a new, much smaller rocket called Stabilo. It was designed to be launched from a stratospheric solar balloon and carry one person into space. Design and construction of large-scale polyethylene balloons started, and on December 2, 2006, at Onesti, Bacau, the crew capsule of the Stabilo rocket was lifted to an altitude of 14,700 m. The capsule was safely recovered that evening. The event was transmitted live on several Romanian TV stations.
On 27 September 2007, the entire Stabilo rocket (crew capsule + rocket booster) was lifted to an altitude of 12,000 m using the largest solar balloon constructed until that date. The mission was launched from Cape Midia Air Force Base, and the rocket was recovered from the Black Sea surface by Romanian Navy divers. At this moment ARCA proved its ability to conduct large-scale operations and to coordinate military institutions like the Romanian Navy and the Romanian Air Force. In 2007 ARCA won two governmental contracts with the Research Ministry for a suborbital rocket and a solar balloon. The Romanian Space Agency, the University of Bucharest and other Romanian institutions were subcontractors to ARCA for these projects. In early 2008 ARCA joined the Google Lunar X Prize competition and designed the Haas orbital launcher. Their lunar rover was named European Lunar Lander and used a monopropellant rocket engine for landing and hovering. Haas was a three-stage orbital rocket powered by hybrid engines using a bitumen-based fuel and hydrogen peroxide as oxidizer. It was supposed to be launched from 18,000 m carried by the largest solar balloon ever constructed, having a volume of 2 million cubic meters. For the Haas rocket, they created a three-stage much smaller demonstrator called Helen that was intended to test technologies and operation. The Helen rocket was intentionally not aerodynamically stabilized, being intended to use a technique based on the pendulum rocket fallacy. The Romanian bank BRD – Groupe Société Générale awarded ARCA a 300,000 euro sponsorship for their activities. Romanian cosmonaut Dumitru Prunariu highly praised ARCA's achievements and noted their ability to efficiently utilize private funds. In 2009 ARCA performed a series of engine tests using the Stabilo rocket engine in order to validate the design for the Helen rocket. The first attempt to launch the Helen rocket took place on November 14, 2009. Romanian Naval Forces participated with the NSSL 281 Constanta ship, the Venus divers ship, the Fulgerul fast boat and two other fast craft boats. For this mission, ARCA constructed a massive 150,000 cubic meter solar balloon, approximately five times as large as their previous balloon. After the balloon began inflating, the mission crew discovered that the balloon inflation arms were wrapped around the lower part of the balloon. Inflation was halted and the crew attempted to unwrap the arms. Three hours later the arms were repositioned and inflation was ready to resume but the sun was already nearing the horizon, and heating the solar balloon was no longer possible. The decision was made to cancel the mission. ARCA decided to redesign the Helen rocket to use two stages and a helium balloon instead. They named the rocket Helen 2. On April 27, 2010, they performed an avionics test for the European Lunar Lander payload to be lifted by the Helen 2 rocket, using a hot air balloon that lifted three ARCA members to 5,200 m altitude. On August 4, 2010, a new attempt to launch the rocket was made, but a construction error in the helium balloon caused it to rupture and the mission was aborted. A new helium balloon was manufactured designed to carry only the second stage of Helen 2 rocket. On October 1, 2010, the rocket performed a successful flight to an altitude of 38,700 m reaching a maximum velocity of 2320 km/h. 
Upon atmospheric reentry the rocket capsule parachute failed to deploy and the capsule was lost at sea, but the data was transmitted to the mission control center on the 281 Constanta ship and to the Romanian Air Traffic Services Administration. 2011–2013: IAR-111 aircraft, Executor engine and Haas rocket family After the difficulties encountered with the stratospheric balloons, ARCA decided to change their approach to orbital launch for the Google Lunar X Prize. They designed a supersonic rocket plane powered by a liquid-fueled rocket engine using kerosene as fuel and liquid oxygen as oxidizer. The aircraft, initially named E-111, was renamed IAR-111 after ARCA received permission from IAR S.A. Brasov to use the traditional IAR designation for military and civilian aircraft constructed since 1925. The aircraft was intended to fly to an altitude of 17,000 m and launch a heavily modified version of the Haas rocket, named Haas 2. Haas 2 was an air-launched three-stage orbital rocket intended to place a 200 kg payload into orbit. Work on the plane structure began in late 2010. By 2011 all the fiberglass molds for the aircraft were finished and one-third of the aircraft structure was completed. The crew capsule escape system was tested on September 26, 2011, when a Mil Mi-17 helicopter belonging to the Special Aviation Unit dropped the capsule from an altitude of 700 m over the Black Sea. The emergency parachute deployed successfully and the capsule was recovered from the sea surface by the Romanian Coast Guard. In 2012 ARCA decided to focus on the construction of the rocket engine of the IAR-111 aircraft. The engine, named Executor, is made of composite materials, has a thrust of 24 tons force (52,000 lbf) and is turbopump-fed. It uses ablative cooling for the main chamber and nozzle, where the outer layers of the composite material vaporize in contact with the high-temperature exhaust mixture and prevent overheating. ARCA also presented a long-term space program, running to 2025, that besides IAR-111 envisioned a small-scale orbital rocket (Haas 2C), a suborbital crewed rocket (Haas 2B) and a medium-scale crewed orbital rocket (Super Haas). In March 2012, ARCA tested an extremely lightweight composite-material kerosene tank intended to be used for the Haas 2C rocket. After criticism from the Romanian Space Agency (ROSA) intensified in printed media and on television, ARCA decided to send a public letter asking the Romanian Prime Minister to intervene in the matter. ARCA argued that the Romanian Space Agency was in no position to criticize after the failure of its cubesat Goliat, recently launched on a Vega rocket. Furthermore, ARCA was privately funded, whereas ROSA uses public funding. In June 2012 ARCA presented their Haas 2C rocket in Victoria Square in Bucharest, in front of the Romanian Government palace. The same year ARCA won a $1,200,000 contract with the European Space Agency to participate in the ExoMars program. Named the High Altitude Drop Test, the contract consisted of a series of stratospheric balloon drop tests to verify the structural integrity of the EDM parachutes used in Martian atmospheric deceleration. On September 16, 2013, ARCA performed the first successful flight in the ExoMars program, lifting three pressurised avionics containers over the Black Sea to an altitude of 24,400 m. In November, the concrete test stand for the Executor engine was completed.
2014–2019: AirStrato to Launch Assist System On February 10, 2014, ARCA presented a high-altitude uncrewed aerial vehicle, named AirStrato, that was meant to replace stratospheric balloons for equipment testing and other near space missions. It was intended to be solar-powered for extended endurance; it was 7 m in length and had a 16 m wingspan, with a takeoff weight of 230 kg. The aircraft first flew on February 28. ARCA announced that if development was successful they would consider developing a commercial version available for sale to customers. On October 17, 2014, ARCA announced that it had transferred its headquarters to Las Cruces, New Mexico, in the United States. In a press release they announced that activities related to software and rocket engine development would continue in Romania. They also announced that the AirStrato UAV would be available for purchase and that Las Cruces would also serve as a production center for the aircraft. On November 25 they released a website for the UAV revealing two models available for purchase: the AirStrato Explorer, which could reach altitudes up to 18,000 m with 20 hours' endurance, and the AirStrato Pioneer, which would be limited to 8,000 m and 12 hours' endurance. On July 13, 2015, ARCA announced the beginning of activities in New Mexico, including production and flight tests of the AirStrato UAS and Haas rockets, investing . In November 2017, CEO Dumitru Popescu was arrested and charged with 12 counts of fraud. As a result, he left the country and re-established operations in Romania. The charges were later dropped. In early 2019, ARCA announced the development of the steam-powered Launch Assist System and began testing the aerospike engine. 2020–Present: EcoRocket, AMi, and Pivot to Asteroid Mining In 2020, tests of the steam-powered aerospike continued and ARCA announced a new launch vehicle, the EcoRocket, derived from the LAS technology. In 2021, the EcoRocket design was altered slightly to a three-stage vehicle as tests of the steam-powered aerospike continued. In 2022, ARCA announced the AMi Exploration Initiative, effectively pivoting its business model away from the commercial launch sector and towards cryptocurrency and asteroid mining. The AMi program will utilize the AMi Cargo vehicle and EcoRocket Heavy to mine valuable materials from asteroids. Beginning in the late 2020s, the company plans to start a series of asteroid mining missions to return valuable metals (mostly platinum) to Earth for sale. It intends to fund this venture primarily through the sales of the AMi token, an upcoming cryptocurrency on the Ethereum blockchain. Vehicles Haas rocket family The Haas rocket family was to be a series of rockets of various sizes and configurations intended to replace the initial Haas balloon-launched rocket. After the difficulties encountered with balloon operation in Mission 3 and Mission 4, ARCA decided to redesign the rocket to be ground-launched. Although heavier and more expensive, ground-launched rockets are more reliable, easier to operate and can carry heavier payloads into orbit. Haas 2B Haas 2B was to be a single-stage suborbital rocket intended for space tourism. It was designed to transport a crew capsule and service module into a suborbital trajectory. The crew capsule and service module would have been the same as the ones used for the larger multi-stage Super Haas orbital rocket.
At the NASA DC-X conference in Alamogordo, New Mexico in August 2013 ARCA presented an updated version of the Haas 2B rocket with a capsule capable of carrying a crew of five into space. There were discussions with Spaceport America representatives to operate the Haas 2B rocket from New Mexico. Haas 2C Haas 2C was to be an orbital rocket intended for commercial payload launches. There were two planned variants of the rocket, a single stage to orbit variant capable of placing a payload into orbit and a two-stage variant capable of lifting a payload into orbit. After testing the extremely lightweight composite tank, ARCA designed a single stage long rocket with a total weight of , having a thrust-to-weight ratio of 26:1 and a payload. The company displayed the rocket in Victoria Square in Bucharest, in front of the Romanian Government building. The second stage version was to be powered by the Executor engine for the lower stage, and the upper stage use a smaller engine adapted for vacuum, named Venator. Haas 2CA Haas 2CA was to be a rocket designed to be able to launch 100 kg into a low-Earth orbit, at a price of US$1 million per launch. The first flight was intended to launch from Wallops Flight Facility in 2018. The rocket was designed as a Single-stage-to-orbit (SSTO) and featured an Aerospike engine, producing of thrust at sea level and of thrust in vacuum. IAR-111 rocket plane Romanian Aeronautical Industry Brașov (), also known as IAR-111, was a sea-launched suborbital rocket plane. It used the same Executor engine as Haas 2B and 2C rockets. It was to have a length of , a wingspan of and a take-off mass of . It can carry a crew of two, a pilot and a passenger. The flight sequence consists of take-off from sea surface, horizontal flight at subsonic speed, followed by a rapid climb to an altitude of in approximately two minutes. As a space tourism development platform, it could reach at . After fuel depletion, IAR-111 was to descend in gliding flight and land on the sea surface. In case of emergency, the crew capsule was to be detachable and equipped with two rocket-propelled parachutes. The IAR-111 capsule was flight tested during Mission 6. The mission took place in cooperation with the Special Aviation Unit and the Coast Guard, belonging to the Ministry of Internal Affairs and Administration. AirStrato unmanned aerial vehicle AirStrato was an electric powered medium-sized unmanned aerial vehicle that was being developed by ARCA. There were two variants planned, the AirStrato Explorer with a target flight ceiling of 18,000 m and AirStrato Pioneer with a target flight ceiling of 8000 m. It was supposed to carry a 45 kg payload consisting of surveillance equipment, scientific instruments, or additional battery pods for extended autonomy. The first prototype's maiden flight took place on February 28, 2014. It was equipped with fixed landing gear. Two more prototypes were constructed that lacked landing gear. Instead, ARCA opted for a pneumatic catapult as a launcher and landing skids and a recovery parachute for landing. Both prototypes only performed take-off and landing testing and short low-altitude flights. ESA Drop Test Vehicle ARCA has constructed a drop test vehicle for the European Space Agency intended to test the atmospheric deceleration parachutes for the ExoMars EDM lander module. It has the same weight and parachute deployment systems present on the ESA module. The DTV is intended to be lifted to an altitude of 24 km by a stratospheric helium balloon. 
From that height, it will fall freely reaching a dynamic pressure similar to that encountered by the ExoMars EDM at entry into the Mars atmosphere. At that dynamic pressure the parachute will deploy and the module will land on the Black Sea surface and will be recovered by the Romanian Naval Forces. EcoRocket Demonstrator The EcoRocket Demonstrator (formerly just EcoRocket) is a partially-reusable three-stage orbital launch vehicle currently under development. The EcoRocket Demonstrator is slated to launch in 2022. The vehicle's reusable first stage will use a battery-powered steam rocket to propel a small second stage to an altitude of 7 kilometers. The second stage will then proceed to a higher altitude to deploy a tiny third stage, carrying the payload. The third stage utilizes RP-1 and high test peroxide to propel a payload of up to 10 kilograms into orbit. The rocket takes its name from the supposed ecological benefits of not burning as much kerosene (despite using kerosene to achieve most of orbital velocity). The EcoRocket will launch partially submerged in the Black Sea, in a similar manner to the Sea Dragon. Both the first and second stages are intended to be reusable, parachuting back into the ocean for recovery. The vehicle is intended to demonstrate technologies for the upcoming EcoRocket Heavy. EcoRocket Heavy The EcoRocket Heavy is a planned variant of EcoRocket, designed to support ARCA's AMi asteroid mining initiative. The EcoRocket heavy will be a three-stage launch vehicle derived from EcoRocket's technology. The stages will be arranged concentrically around the payload in the center (in a layout occasionally called "onion staging"), with the outermost stage firing, then detaching and allowing the next outermost stage to ignite, and so on. The EcoRocket heavy, like the EcoRocket, will use a three-stage design, with the first two stages using steam power and the final stage using a kerosene/liquid oxygen mixture to propel itself to orbit. Each stage will consist of multiple "propulsion modules" attached together, in a manner many commentators have compared to the now-defunct German launch company OTRAG. The vehicle will be thirty meters in diameter, and, like the EcoRocket Demonstrator, will launch from the ocean, and be partially reusable, recovering the first two stages. The EcoRocket Heavy largely abandons aerospike engines, using only traditional rocket nozzles. AMi Cargo The AMi Cargo vehicle is the vehicle designed to support ARCA's asteroid mining operations, and as the primary payload for the EcoRocket Heavy. The AMi Cargo vehicle will approach an asteroid, and then release the battery-powered Recovery Capsule (which appears to be derived from the earlier suborbital capsule for the Haas 2B), which will use the engine on its service module to approach the target asteroid. The spacecraft will then harpoon the asteroid, then reel itself in to begin mining operations. Upon completion of mining, it will return to the AMi Cargo vehicle, which will propel it back to Earth. Upon reaching Earth, the capsule will detach and jettison the service module prior to reentry. The capsule will then splash down under parachute for recovery of the material inside. ARCA intends to eventually upgrade the spacecraft for uncrewed missions to other planets. To support deep space operations, ARCA intends to construct their own Deep Space Network, akin to NASA's system. 
Rocket engines Executor The Executor was a liquid-fueled rocket engine intended to power the IAR-111 Excelsior supersonic plane and the Haas 2B and 2C rockets. Executor was an open-cycle gas-generator rocket engine that used liquid oxygen and kerosene and had a maximum thrust of 24 tons force. ARCA decided to use composite materials and aluminum alloys on a large scale. The composite materials offer low construction costs and reduced weight of the components. They were used in the construction of the combustion chamber and the nozzle, and also the gas generator and some elements in the turbopumps. The combustion chamber and the nozzle are built from two layers. The internal layer is made of silica fiber and phenolic resin, and the external one is made of carbon fiber and epoxy resin. The phenolic resin reinforced with silica fiber pyrolyzes endothermally in the combustion chamber walls, releasing gases such as oxygen and hydrogen and leaving a local carbon matrix. The gases spread through the carbon matrix and reach the internal surface of the wall, where they meet the hot combustion gases and act as a cooling agent. Furthermore, the engine is equipped with a cooling system that injects 10 percent of the total kerosene mass onto the internal walls. The pump volutes were made of 6062-type aluminum alloy. The pump rotors were made by lathe turning and milling from 304-type steel. The supersonic turbine, both the core and the blades, was made of refractory steel. The turbine rotation speed was 20,000 rpm, and it delivered 1.5 MW of power. The intake gas temperature was 620 °C. The main engine valves were made of 6060-type aluminum and were pneumatically actuated, without adjustment. The engine injector and the liquid oxygen intake pipes were made of 304L-type steel, and the kerosene intake pipe was made of composite materials. The engine could gimbal its thrust by 5 degrees on two axes. The articulated system was made of composite materials and high-grade steel alloy. The engine was rotated using two hydraulic pistons that used kerosene from the pump exhaust system. ARCA announced that the Executor engine had a thrust/mass ratio of 110. Venator Venator was a liquid-fueled, pressure-fed rocket engine that was to power the second stage of the Haas 2C rocket. It burned liquid oxygen and kerosene and had a maximum thrust of . The engine had no valves on the main pipes; instead, it used burst disks on the main pipes, between the tanks and the engine. The second stage was pressurized at at lift-off, and after first-stage burn-out the second stage would be pressurized at 16 atm. At that pressure the disks would burst and the fuel would flow through the engine. LAS The Launch Assist System was an aerospike engine that was to use electrically heated water to produce steam, which would then generate thrust. The LAS was to reduce the cost of rockets by reducing the associated complexity, since steam-powered rockets are far less complex than even the simplest liquid-fueled engines. It was to be a self-contained unit including both the engine and the propellant tank. It could theoretically achieve a specific impulse of 67 seconds. The LAS was proposed as a first stage for the Haas 2CA rocket, or as a strap-on booster for existing vehicles, including the Atlas V, Falcon 9, Delta IV, and Ariane 6. The EcoRocket Demonstrator and Heavy will use a reworked version of this system with two nozzles (one for launch and one for landing), called the LAS 25D.
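To put the quoted specific impulse in perspective (a standard conversion added for illustration, not a figure from ARCA), an Isp of 67 seconds corresponds to an effective exhaust velocity of about

ve = Isp × g0 = 67 s × 9.81 m/s² ≈ 657 m/s,

a small fraction (roughly a fifth) of what kerosene/liquid-oxygen engines such as the Executor typically achieve, which is consistent with the steam system being proposed only as a low-altitude launch-assist stage rather than as orbital propulsion.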
AMi Propulsion System The AMi Cargo vehicle will use a new propulsion system, described by ARCA as "electric-arc propulsion." The reaction mass will be water, and the impulse will be provided electrically using electricity from large solar arrays. Beyond this, not much is known about the nature of this system, however, ARCA intends it to be capable of running for days on end. Missions Mission 1 Mission 1 took place on December 2, 2006, when a solar balloon carried the STABILO system capsule to an altitude of . The altitude was slightly lower than intended because of extreme turbulence encountered during the last stage of the flight. In light of this, it was decided not to risk damaging the system. The flight had been planned since August 2006, when another large solar balloon was launched at low altitude in controlled flight. During this time a specially designed parachute was tested. It was the first stratospheric flight performed by ARCA, and the event was transmitted live; over 20 journalists were present. Mission 2 Mission 2 of STABILO 1B was launched on 27 September 2007 from Cape Midia Air Force Base. The Romanian Air Force participated with two radar stations. Civil Aviation and the Romanian Navy also participated, the latter with one naval diver's ship. The first and second vehicle stages reached an altitude of . After one hour and 30 minutes and having traveled from the launch location, STABILO landed on the sea surface and was intercepted by a Navy Saturn ship and recovered by divers. The recovery ship was guided by the satellite transmission system and by Air Force radar. The vehicle was transported to the Navy shipyard. The electronic equipment continued to transmit to the command center even 8 hours after the flight had ended. Mission 3, 4 and 4B Helen was a demonstrator rocket for the Haas balloon-launched orbital rocket. It was intended to test in flight the avionics and gravitational stabilization method proposed for the much larger Haas rocket. Helen was intended to reach an altitude of . Two versions were created, a three-stage rocket that had cylindrical tanks and used hydrogen peroxide as monopropellant fuel, and a two-stage spherical tank rocket that used the same propulsion type. The rocket used a physically flawed stabilization technique based on the pendulum rocket fallacy. Mission 3 took place on November 14, 2009, on the Black Sea. Romanian Naval Forces participated in the mission with one logistical ship, one diver's ship and another fast craft. For this mission, ARCA constructed the largest stratospheric helium balloon to date. An error in construction caused the balloon's inflation arms to wrap around the base of the balloon when it was inflated. The team managed to unwrap the arms and resume inflation but sunset was approaching and the solar balloon could no longer be used. The mission was cancelled. For Mission 4 ARCAspace decided to use a helium balloon instead and to redesign the Helen rocket. The new version, named Helen 2, was prepared for flight on August 4, 2010. When balloon inflation was initiated, the balloon ruptured because of a construction error and the mission was cancelled. A new attempt was made on October 1, 2010, by using only the final stage of the Helen 2 rocket and a smaller helium balloon. The flight, named Mission 4B, was successful, Helen 2 launching at an altitude of and the rocket reaching an altitude of . 
After the difficulties encountered with stratospheric balloons, ARCA decided to stop work on the Haas rocket and design a new family of ground-launched orbital and suborbital rockets. Mission 5 Mission 5 was carried out in partnership with the Romanian Air Club and the Romanian Aeronautic Federation. It took place before the Helen 2 rocket launch. The flight took place on April 27, 2010, between 07:45 and 08:45 AM, taking off from Hogiz, Brasov. A manned hot air balloon lifted the Helen 2 rocket pressurised capsule to an altitude of . The maximum distance between the carrier balloon and the command center at Sanpetru airfield was , which corresponded with the Helen 2 rocket simulated safety zone. The balloon crew was composed of Mihai Ilie – pilot, Mugurel Ionescu – copilot, and Dumitru Popescu – ELL equipment operator. The objective of the flight was to test telemetry, command and live TV transmission for the Helen 2 rocket. Mission 6 Mission 6 tested the recovery system for the IAR-111 supersonic plane crew capsule. On September 26, 2011, a Mi-17 helicopter from Special Aviation Unit lifted the capsule to above mean sea level. At that altitude, the helicopter released the capsule. The parachute deployed, and the capsule landed on the sea surface. It was recovered by the same helicopter with the help of the Romanian Coast Guard. WP3 WP3 was a validation test flight for the ExoMars Program High Altitude Drop Test (HADT), carried out in cooperation with the European Space Agency. The launch took place from the Black Sea coast on September 16, 2013, and the hardware comprised three pressurized containers containing the avionics equipment that will be necessary to test the ExoMars spacecraft parachute during future incoming flights. The pressurized containers, carried by a cluster balloon, were launched at 7:15 AM and the ascension took 90 minutes. When the containers reached an altitude of , they were released under a dedicated recovery parachute and landed on the sea twenty minutes later. The containers and the recovery parachute were recovered by the Navy from the launch point. The objectives were flight testing the avionics and communication systems, demonstrating the container sealing after sea landing and the capability to identify and recover the equipment from the sea surface. Mission 9 Mission 9 was to be a short vertical hop of the EcoRocket's first stage, testing the booster landing system in much the same manner as SpaceX's Starhopper. This mission has apparently been scrapped, however, ARCA completed a short, low-altitude flight of the EcoRocket Demonstrator's second stage in the fall of 2021 with no landing attempt to test the RCS systems aboard the rocket. The stage was attached to an umbilical during the flight. Mission 10 Mission 10 will be the first orbital flight of the EcoRocket. See also ArcaBoard Romanian Space Agency Rockoon References External links Ansari X Prize official site Latest ARCA Space, Space Fellowship news Google Lunar X Prize official site National Plan for Research Development and Innovation Space advocacy Science and technology in Romania Private spaceflight companies Google Lunar X Prize
https://en.wikipedia.org/wiki/PODSnet
Pagan Occult Distribution System Network (PODSnet) was a neopagan/occult computer network of Pagan Sysops and Sysops carrying Pagan/Magickal/Occult-oriented echoes, operating on an international basis, with FidoNet nodes in Australia, Canada, Germany, the U.K., and across the USA. PODSnet grew rapidly, and at its height was the largest privately distributed network of Pagans, Occultists, and other people of an esoteric bent on this planet. Origins PODSnet grew out of an Echomail area/public forum (Echo) named MAGICK on FidoNet, which was created by J. Brad Hicks, the Sysop of the Weirdbase BBS, back in 1985. MAGICK was the 8th Echo conference created on FidoNet. It quickly grew to 12 systems, and then went international when the first Canadian Pagan BBS, Solsbury Hill (Farrell McGovern, Sysop), joined. This was just a hint of its growth to come. Another early expansion was the addition of two more echoes, MUNDANE and METAPHYSICAL. MUNDANE was created to move all "chat" (that is, personal discussions and other conversations of a non-Pagan or non-magickal nature) out of MAGICK. Simultaneously, METAPHYSICAL was created for long, "article-style" posts of information on full rituals, papers and essays of a Pagan, Occult or Magickal nature. These three were bundled as the "Magicknet Trio": if a BBS carried one, it had to carry all three. At its height, there were over 50 "official" echoes that were considered part of the PODSNet backbone, with several others available. Structure Similarly to FidoNet, PODSnet was organized into Zones, Regions, Networks, Nodes and Points; however, unlike FidoNet, these were not geographically determined, as the individual SysOp would determine from where to receive the network feed. Additionally, Points were more common within PODSnet due to the specialized nature of the network. Like many open source and standards-based technology projects, FidoNet grew rapidly, and then forked. The addition of Zones to the FidoNet technology allowed for easier routing of email internationally, and the creation of networks outside the control of the International FidoNet Association (IFNA). As a number of associated Echoes were added to the Magicknet Trio, the Sysops who carried them collectively decided to form their own network, the Pagan Occult Distribution System, or PODSnet. It asked for the zone number of 93, as the other popular occult-oriented zone numbers, 5 and 23 (see Discordianism), were already reserved. PODSNet Book of Shadows One of the most enduring contributions to the online world was a collection of rituals, articles, poetry and discussion compiled by Paul Seymour of the Riders of the Crystal Wind, often referred to as either the Internet Book of Shadows or the PODSNet Book of Shadows. These volumes (there are seven in all) are, in fact, a collection of rituals, spells, recipes, messages, and essays from and among members of PODSNet. PODSNet users came from various religious paths, from Asatru to Zen Buddhism, and their contributions, as well as topical messages, were compiled two to three times a year during the life of PODSNet. Since the end of the BBS era, these files have circulated online on a number of services, often with introductory material stripped, and offered for sale on sites such as eBay.com.
Charging money for the collection is in direct violation of the copyright notice within the volumes that the material is offered free of charge; additionally, portions of the content are under individual copyright by a variety of publishers, including Weiser, Llewellyn Publishing and others, as some texts were extracted in their entirety from published books. Other pieces have subsequently been formally published by their authors, including Dorothy Morrison, Mike Nichols and Isaac Bonewits, among others. References External links Internet Archive of The PODSnet Internet home (official site temporary offline) J. Brad Hick's Homepage Jay Loveless's PODSnet page. Jay was one of PODSnet's administrators. (Page retrieved from the Internet Archive Wayback Machine as IO.COM is no longer on-line.) PODSNet Alumni Group on Facebook PODSnet "General Chat" Echo on Yahoo! Groups Vice's article on PODSnet by Tamlin Magee PODSNet modern forum Bulletin board systems Modern paganism and technology Wide area networks Modern pagan websites 1980s in modern paganism
https://en.wikipedia.org/wiki/Kalkar
Kalkar is a municipality in the district of Kleve, in North Rhine-Westphalia, Germany. It is located near the Rhine, approx. 10 km south-east of Cleves. The Catholic church St. Nicolai has preserved one of the most significant sacral inventories from the late Middle Ages in Germany. History Kalkar was founded by Dirk VI of Cleves in 1230 and received city rights in 1242. It was one of the seven "capitals" of Cleves (called Kleve), until the line of the Duchy of Cleves died out in 1609, whereupon the city went over to the Margraviate of Brandenburg. Marie of Burgundy, Duchess of Cleves, retired to Monreberg castle in Kalkar, where she founded a Dominican convent in 1455. Under her influence the city bloomed and artists were attracted to the favorable climate for cultural investment. She died at Monreberg castle in 1463. Air base The USAF 470th Air Base Squadron supports the NATO Joint Air Power Competence Center (JAPCC) in Kalkar and the NATO CAOC in Uedem. The 470th itself, however, is not located in Kalkar. Nuclear reactor Between 1957 and 1991, West Germany, Belgium and the Netherlands pursued an ambitious plan for a fast breeder nuclear reactor, the prototype SNR-300, near Kalkar. Construction of the SNR-300 began in April 1973. In the wake of large anti-nuclear protests at Wyhl and Brokdorf, demonstrations against the SNR-300 reactor escalated in the mid-1970s. A large demonstration in September 1977 involved a "massive police operation that included the complete closure of autobahns in northern Germany and identity checks of almost 150,000 people". Construction of the Kalkar reactor was completed in the middle of 1985, but a new state government was clearly against the project, and opposition mounted following the Chernobyl disaster in April 1986. In March 1991, the German federal government said that the SNR-300 would not be put into operation; the project costs, originally estimated at $150 to $200 million, escalated to a final cost of about $4 billion. The nuclear reactor plant has since been turned into Kern-Wasser Wunderland, an amusement park with a rollercoaster and several other rides and restaurants. Novel In the science fiction novel "The Moon Maid", Edgar Rice Burroughs used "Kalkars" as the name for a malevolent fictional race living on the Moon and later invading Earth. Gallery References Populated places on the Rhine Anti–nuclear power movement Anti-nuclear movement in Germany Kleve (district)
https://en.wikipedia.org/wiki/Pi-system
In mathematics, a π-system (or pi-system) on a set Ω is a collection P of certain subsets of Ω such that P is non-empty. If A, B ∈ P then A ∩ B ∈ P. That is, P is a non-empty family of subsets of Ω that is closed under non-empty finite intersections. The importance of π-systems arises from the fact that if two probability measures agree on a π-system, then they agree on the σ-algebra generated by that π-system. Moreover, if other properties, such as equality of integrals, hold for the π-system, then they hold for the generated σ-algebra as well. This is the case whenever the collection of subsets for which the property holds is a λ-system. π-systems are also useful for checking independence of random variables. This is desirable because in practice, π-systems are often simpler to work with than σ-algebras. For example, it may be awkward to work with σ-algebras generated by infinitely many sets E1, E2, …. So instead we may examine the union of all σ-algebras generated by finitely many sets, σ(E1, …, En). This forms a π-system that generates the desired σ-algebra. Another example is the collection of all intervals of the real line, along with the empty set, which is a π-system that generates the very important Borel σ-algebra of subsets of the real line. Definitions A π-system is a non-empty collection P of sets that is closed under non-empty finite intersections, which is equivalent to P containing the intersection of any two of its elements. If every set in this π-system is a subset of Ω then it is called a π-system on Ω. For any non-empty family Σ of subsets of Ω there exists a π-system I_Σ, called the π-system generated by Σ, that is the unique smallest π-system of Ω containing every element of Σ. It is equal to the intersection of all π-systems containing Σ, and can be explicitly described as the set of all possible non-empty finite intersections of elements of Σ: {E1 ∩ … ∩ En : n ≥ 1 and E1, …, En ∈ Σ}. A non-empty family of sets has the finite intersection property if and only if the π-system it generates does not contain the empty set as an element. Examples For any real numbers a and b, the intervals (−∞, a] form a π-system, and the intervals (a, b] form a π-system if the empty set is also included. The topology (collection of open subsets) of any topological space is a π-system. Every filter is a π-system. Every π-system that doesn't contain the empty set is a prefilter (also known as a filter base). For any measurable function f : Ω → ℝ, the set I_f = {f⁻¹((−∞, x]) : x ∈ ℝ} defines a π-system, and is called the π-system generated by f. (Alternatively, {f⁻¹((a, b]) : a, b ∈ ℝ, a < b} ∪ {∅} defines a π-system generated by f.) If P1 and P2 are π-systems for Ω1 and Ω2, respectively, then {A1 × A2 : A1 ∈ P1, A2 ∈ P2} is a π-system for the Cartesian product Ω1 × Ω2. Every σ-algebra is a π-system. Relationship to λ-systems A λ-system on Ω is a set D of subsets of Ω satisfying: Ω ∈ D; if A ∈ D then Ω ∖ A ∈ D; if A1, A2, A3, … is a sequence of (pairwise) disjoint subsets in D then the union of the An is in D. Whilst it is true that any σ-algebra satisfies the properties of being both a π-system and a λ-system, it is not true that any π-system is a λ-system, and moreover it is not true that any π-system is a σ-algebra. However, a useful classification is that any set system which is both a λ-system and a π-system is a σ-algebra. This is used as a step in proving the π-λ theorem. The π-λ theorem Let D be a λ-system, and let P ⊆ D be a π-system contained in D. The π-λ theorem states that the σ-algebra σ(P) generated by P is contained in D: σ(P) ⊆ D. The π-λ theorem can be used to prove many elementary measure theoretic results. For instance, it is used in proving the uniqueness claim of the Carathéodory extension theorem for σ-finite measures. The π-λ theorem is closely related to the monotone class theorem, which provides a similar relationship between monotone classes and algebras, and can be used to derive many of the same results.
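The definitions and the theorem above can be restated compactly. The display below is only a restatement of that prose, with P, D and Ω as the symbols chosen here for the π-system, the λ-system and the underlying set; it adds no material beyond the article.

```latex
% Compact restatement of the definitions and the pi-lambda theorem above.
% P, D and Omega are symbols chosen for this display, not fixed by the article.
\[
P \text{ is a } \pi\text{-system on } \Omega
  \;\Longleftrightarrow\;
  P \neq \varnothing
  \ \text{ and }\
  A, B \in P \implies A \cap B \in P .
\]
\[
D \text{ is a } \lambda\text{-system on } \Omega
  \;\Longleftrightarrow\;
  \Omega \in D, \quad
  A \in D \implies \Omega \setminus A \in D, \quad
  A_1, A_2, \ldots \in D \text{ pairwise disjoint} \implies \bigcup_{n \ge 1} A_n \in D .
\]
\[
\pi\text{--}\lambda \text{ theorem:} \qquad
P \subseteq D \;\implies\; \sigma(P) \subseteq D .
\]
```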
Since π-systems are simpler classes than algebras, it can be easier to identify the sets that are in them while, on the other hand, checking whether the property under consideration determines a λ-system is often relatively easy. Despite the difference between the two theorems, the π-λ theorem is sometimes referred to as the monotone class theorem. Example Let μ1 and μ2 be two measures on the σ-algebra Σ, and suppose that Σ = σ(P) is generated by a π-system P. If μ1(A) = μ2(A) for all A ∈ P, and μ1(Ω) = μ2(Ω) < ∞, then μ1 = μ2. This is the uniqueness statement of the Carathéodory extension theorem for finite measures. If this result does not seem very remarkable, consider the fact that it usually is very difficult or even impossible to fully describe every set in the σ-algebra, and so the problem of equating measures would be completely hopeless without such a tool. Idea of the proof Define the collection of sets D = {A ∈ σ(P) : μ1(A) = μ2(A)}. By the first assumption, μ1 and μ2 agree on P and thus P ⊆ D. By the second assumption, Ω ∈ D, and it can further be shown that D is a λ-system. It follows from the π-λ theorem that σ(P) ⊆ D ⊆ σ(P), and so D = σ(P). That is to say, the measures agree on σ(P). π-Systems in probability π-systems are more commonly used in the study of probability theory than in the general field of measure theory. This is primarily due to probabilistic notions such as independence, though it may also be a consequence of the fact that the π-λ theorem was proven by the probabilist Eugene Dynkin. Standard measure theory texts typically prove the same results via monotone classes, rather than π-systems. Equality in distribution The π-λ theorem motivates the common definition of the probability distribution of a random variable X : (Ω, F, P) → ℝ in terms of its cumulative distribution function. Recall that the cumulative distribution of a random variable is defined as F_X(a) = P[X ≤ a], for a ∈ ℝ, whereas the seemingly more general law of the variable is the probability measure L_X(B) = P[X⁻¹(B)] for all B in the Borel σ-algebra B(ℝ). The random variables X and Y (on two possibly different probability spaces) are equal in distribution (or law), denoted by X ≐ Y, if they have the same cumulative distribution functions; that is, if F_X = F_Y. The motivation for the definition stems from the observation that if F_X = F_Y, then that is exactly to say that L_X and L_Y agree on the π-system {(−∞, a] : a ∈ ℝ}, which generates B(ℝ), and so by the example above: L_X = L_Y. A similar result holds for the joint distribution of a random vector. For example, suppose X and Y are two random variables defined on the same probability space (Ω, F, P), with respectively generated π-systems I_X and I_Y. The joint cumulative distribution function of (X, Y) is F_{X,Y}(a, b) = P[X ≤ a, Y ≤ b] = P[X⁻¹((−∞, a]) ∩ Y⁻¹((−∞, b])]. However, A = X⁻¹((−∞, a]) ∈ I_X and B = Y⁻¹((−∞, b]) ∈ I_Y. Because I_{X,Y} = {A ∩ B : A ∈ I_X, B ∈ I_Y} is a π-system generated by the random pair (X, Y), the π-λ theorem is used to show that the joint cumulative distribution function suffices to determine the joint law of (X, Y). In other words, (X, Y) and (W, Z) have the same distribution if and only if they have the same joint cumulative distribution function. In the theory of stochastic processes, two processes are known to be equal in distribution if and only if they agree on all finite-dimensional distributions; that is, for all times t1, …, tn, the vectors (X_t1, …, X_tn) and (Y_t1, …, Y_tn) are equal in distribution. The proof of this is another application of the π-λ theorem. Independent random variables The theory of π-systems plays an important role in the probabilistic notion of independence. If X and Y are two random variables defined on the same probability space (Ω, F, P), then the random variables are independent if and only if their π-systems I_X and I_Y satisfy P[A ∩ B] = P[A] P[B] for all A ∈ I_X and B ∈ I_Y, which is to say that σ(X) and σ(Y) are independent. This actually is a special case of the use of π-systems for determining the distribution of (X, Y). Example Let Z = (Z1, Z2), where Z1 and Z2 are iid standard normal random variables. Define the radius and argument (arctan) variables R = √(Z1² + Z2²) and Θ = arctan(Z2/Z1). Then R and Θ are independent random variables.
To prove this, it is sufficient to show that the π-systems I_R and I_Θ are independent: that is, P[R ≤ ρ, Θ ≤ θ] = P[R ≤ ρ] P[Θ ≤ θ] for all ρ ∈ [0, ∞) and θ ∈ [0, 2π]. Confirming that this is the case is an exercise in changing variables. Fix ρ ∈ [0, ∞) and θ ∈ [0, 2π]; then the probability P[R ≤ ρ, Θ ≤ θ] can be expressed as an integral of the probability density function of (Z1, Z2). See also Notes Citations References Measure theory Families of sets
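For readers who want the change of variables spelled out, the computation below is one standard way to do it. It treats Θ as the polar angle on [0, 2π) and uses the joint density of two independent standard normals; it is a sketch added here, not text from the article.

```latex
% Change of variables for the (R, Theta) example: integrate the joint
% standard-normal density over the polar "rectangle" {r <= rho, angle <= theta}.
\[
P[R \le \rho,\ \Theta \le \theta]
  = \int_{0}^{\theta} \int_{0}^{\rho}
      \frac{1}{2\pi}\, e^{-r^{2}/2}\; r \, dr \, d\varphi
  = \left(1 - e^{-\rho^{2}/2}\right) \cdot \frac{\theta}{2\pi}
  = P[R \le \rho]\; P[\Theta \le \theta] .
\]
```

The factorization holds for every ρ ≥ 0 and θ in [0, 2π), which is exactly the independence of the generating π-systems required above.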
https://en.wikipedia.org/wiki/Seqlock
A seqlock (short for sequence lock) is a special locking mechanism used in Linux for supporting fast writes of shared variables between two parallel operating system routines. The semantics stabilized as of version 2.5.59, and they are present in the 2.6.x stable kernel series. The seqlocks were developed by Stephen Hemminger and originally called frlocks, based on earlier work by Andrea Arcangeli. The first implementation was in the x86-64 time code where it was needed to synchronize with user space where it was not possible to use a real lock. It is a reader–writer consistent mechanism which avoids the problem of writer starvation. A seqlock consists of storage for saving a sequence number in addition to a lock. The lock is to support synchronization between two writers and the counter is for indicating consistency in readers. In addition to updating the shared data, the writer increments the sequence number, both after acquiring the lock and before releasing the lock. Readers read the sequence number before and after reading the shared data. If the sequence number is odd on either occasion, a writer had taken the lock while the data was being read and it may have changed. If the sequence numbers are different, a writer has changed the data while it was being read. In either case readers simply retry (using a loop) until they read the same even sequence number before and after. The reader never blocks, but it may have to retry if a write is in progress; this speeds up the readers in the case where the data was not modified, since they do not have to acquire the lock as they would with a traditional read–write lock. Also, writers do not wait for readers, whereas with traditional read–write locks they do, leading to potential resource starvation in a situation where there are a number of readers (because the writer must wait for there to be no readers). Because of these two factors, seqlocks are more efficient than traditional read–write locks for the situation where there are many readers and few writers. The drawback is that if there is too much write activity or the reader is too slow, they might livelock (and the readers may starve). The technique will not work for data that contains pointers, because any writer could invalidate a pointer that a reader has already followed. Updating the memory block being pointed-to is fine using seqlocks, but updating the pointer itself is not allowed. In a case where the pointers themselves must be updated or changed, using read-copy-update synchronization is preferred. This was first applied to system time counter updating. Each time interrupt updates the time of the day; there may be many readers of the time for operating system internal use and applications, but writes are relatively infrequent and only occur one at a time. The BSD timecounter code for instance appears to use a similar technique. One subtle issue of using seqlocks for a time counter is that it is impossible to step through it with a debugger. The retry logic will trigger all the time because the debugger is slow enough to make the read race occur always. See also Synchronization Spinlock References fast reader/writer lock for gettimeofday 2.5.30 Effective synchronisation on Linux systems Driver porting: mutual exclusion with seqlocks Simple seqlock implementation Improved seqlock algorithm with lock-free readers Seqlocks and Memory Models(slides) Concurrency control Linux kernel
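As a concrete illustration of the reader and writer protocol described above, here is a minimal user-space sketch in C11. It is not the Linux kernel's implementation: it assumes a single writer (so the writer-side lock mentioned above is omitted), the structure and field names are invented for the example, and the memory-ordering choices are a simplification of what a production seqlock needs.

```c
#include <stdatomic.h>

/* Minimal single-writer seqlock sketch (illustrative only).
 * The payload fields are atomics accessed with relaxed ordering so the
 * example stays within defined C11 behaviour; a real kernel seqlock
 * protects ordinary fields and relies on barriers instead. */
struct seq_time {
    atomic_uint seq;        /* odd while a write is in progress */
    atomic_long seconds;    /* example payload: a "time of day" value */
    atomic_long nanosecs;
};

void seq_write(struct seq_time *t, long sec, long nsec)
{
    unsigned s = atomic_load_explicit(&t->seq, memory_order_relaxed);
    atomic_store_explicit(&t->seq, s + 1, memory_order_relaxed);   /* now odd */
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&t->seconds, sec, memory_order_relaxed);
    atomic_store_explicit(&t->nanosecs, nsec, memory_order_relaxed);
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&t->seq, s + 2, memory_order_relaxed);   /* even again */
}

void seq_read(struct seq_time *t, long *sec, long *nsec)
{
    unsigned start;
    do {
        start = atomic_load_explicit(&t->seq, memory_order_acquire);
        if (start & 1u)
            continue;                    /* writer active: retry immediately */
        *sec  = atomic_load_explicit(&t->seconds,  memory_order_relaxed);
        *nsec = atomic_load_explicit(&t->nanosecs, memory_order_relaxed);
        atomic_thread_fence(memory_order_acquire);
        /* retry if the sequence number changed while we were reading */
    } while (atomic_load_explicit(&t->seq, memory_order_relaxed) != start);
}
```

The point the article makes is visible directly in the code: the reader takes no lock and never blocks the writer; it simply copies the payload and retries whenever it sees an odd or changed sequence number.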
https://en.wikipedia.org/wiki/Plutonium-238
Plutonium-238 (238Pu or Pu-238) is a radioactive isotope of plutonium that has a half-life of 87.7 years. Plutonium-238 is a very powerful alpha emitter; as alpha particles are easily blocked, this makes the plutonium-238 isotope suitable for use in radioisotope thermoelectric generators (RTGs) and radioisotope heater units. The density of plutonium-238 at room temperature is about 19.8 g/cc. The material will generate about 0.57 watts per gram of 238Pu. The bare sphere critical mass of metallic plutonium-238 is not precisely known, but its calculated range is between 9.04 and 10.07 kilograms. History Initial production Plutonium-238 was the first isotope of plutonium to be discovered. It was synthesized by Glenn Seaborg and associates in December 1940 by bombarding uranium-238 with deuterons, creating neptunium-238: 238U + 2H → 238Np + 2 n. The neptunium isotope then undergoes β− decay to plutonium-238, with a half-life of 2.12 days: 238Np → 238Pu + β− + ν̄e. Plutonium-238 naturally decays to uranium-234 and then further along the radium series to lead-206. Historically, most plutonium-238 has been produced at the Savannah River Site in its weapons reactor, by irradiating neptunium-237 (half-life about 2.14 million years) with neutrons: 237Np + n → 238Np, which then β− decays to 238Pu as above. Neptunium-237 is a by-product of the production of plutonium-239 weapons-grade material, and when the site was shut down in 1988, 238Pu was mixed with about 16% 239Pu. Manhattan Project Plutonium was first synthesized in 1940 and isolated in 1941 by chemists at the University of California, Berkeley. The Manhattan Project began shortly after the discovery, with most early research (pre-1944) carried out using small samples manufactured using the large cyclotrons at the Berkeley Rad Lab and Washington University in St. Louis. Much of the difficulty encountered during the Manhattan Project regarded the production and testing of nuclear fuel. Both uranium and plutonium were eventually determined to be fissile, but in each case they had to be purified to select for the isotopes suitable for an atomic bomb. With World War II underway, the research teams were pressed for time. Micrograms of plutonium were made by cyclotrons in 1942 and 1943. In the fall of 1943, Robert Oppenheimer is quoted as saying "there's only a twentieth of a milligram in existence." By his request, the Rad Lab at Berkeley made available 1.2 mg of plutonium by the end of October 1943, most of which was taken to Los Alamos for theoretical work there. The world's second reactor, the X-10 Graphite Reactor built at a secret site at Oak Ridge, would be fully operational in 1944. In November 1943, shortly after its initial start-up, it was able to produce a minuscule 500 mg. However, this plutonium was mixed with large amounts of uranium fuel and destined for the nearby chemical processing pilot plant for isotopic separation (enrichment). Gram amounts of plutonium would not be available until the spring of 1944. Industrial-scale production of plutonium only began in March 1945 when the B Reactor at the Hanford Site began operation. Plutonium-238 and human experimentation While samples of plutonium were available in small quantities and being handled by researchers, no one knew what health effects this might have. Plutonium handling mishaps occurred in 1944, causing alarm in the Manhattan Project leadership as contamination inside and outside the laboratories was becoming an issue. In August 1944, a chemist named Donald Mastick was sprayed in the face with liquid plutonium chloride, causing him to accidentally swallow some.
Nose swipes taken of plutonium researchers indicated that plutonium was being breathed in. Lead Manhattan Project chemist Glenn Seaborg, discoverer of many transuranium elements including plutonium, urged that a safety program be developed for plutonium research. In a memo to Robert Stone at the Chicago Met Lab, Seaborg wrote "that a program to trace the course of plutonium in the body be initiated as soon as possible ... [with] the very highest priority." This memo was dated January 5, 1944, prior to many of the contamination events of 1944 in Building D where Mastick worked. Seaborg later claimed that he did not at all intend to imply human experimentation in this memo, nor did he learn of its use in humans until far later due to the compartmentalization of classified information. With bomb-grade enriched plutonium-239 destined for critical research and for atomic weapon production, plutonium-238 was used in early medical experiments as it is unusable as atomic weapon fuel. However, 238Pu is far more dangerous than 239Pu due to its short half-life and being a strong alpha-emitter. It was soon found that plutonium was being excreted at a very slow rate, accumulating in test subjects involved in early human experimentation. This led to severe health consequences for the patients involved. From April 10, 1945, to July 18, 1947, eighteen people were injected with plutonium as part of the Manhattan Project. Doses administered ranged from 0.095 to 5.9 microcuries (μCi). Albert Stevens, after a (mistaken) terminal cancer diagnosis which seemed to include many organs, was injected in 1945 with plutonium without his informed consent. He was referred to as patient CAL-1 and the plutonium consisted of 3.5 μCi 238Pu, and 0.046 μCi 239Pu, giving him an initial body burden of 3.546 μCi (131 kBq) total activity. The fact that he had the highly radioactive plutonium-238 (produced in the 60-inch cyclotron at the Crocker Laboratory by deuteron bombardment of natural uranium) contributed heavily to his long-term dose. Had all of the plutonium given to Stevens been the long-lived 239Pu as used in similar experiments of the time, Stevens's lifetime dose would have been significantly smaller. The short half-life of 87.7 years of 238Pu means that a large amount of it decayed during its time inside his body, especially when compared to the 24,100 year half-life of 239Pu. After his initial "cancer" surgery removed many non-cancerous "tumors", Stevens survived for about 20 years after his experimental dose of plutonium before succumbing to heart disease; he had received the highest known accumulated radiation dose of any human patient. Modern calculations of his lifetime absorbed dose give a significant 64 Sv (6400 rem) total. Weapons The first application of 238Pu was its use in nuclear weapon components made at Mound Laboratories for Lawrence Radiation Laboratory (now Lawrence Livermore National Laboratory). Mound was chosen for this work because of its experience in producing the polonium-210-fueled Urchin initiator and its work with several heavy elements in a Reactor Fuels program. Two Mound scientists spent 1959 at Lawrence in joint development while the Special Metallurgical Building was constructed at Mound to house the project. Meanwhile, the first sample of 238Pu came to Mound in 1959. The weapons project called for the production of about 1 kg/year of 238Pu over a 3-year period. 
However, the 238Pu component could not be produced to the specifications despite a 2-year effort beginning at Mound in mid-1961. A maximum effort was undertaken with 3 shifts a day, 6 days a week, and ramp-up of Savannah River's 238Pu production over the next three years to about 20 kg/year. A loosening of the specifications resulted in productivity of about 3%, and production finally began in 1964. Use in radioisotope thermoelectric generators Beginning on January 1, 1957, Mound Laboratories RTG inventors Jordan & Birden were working on an Army Signal Corps contract (R-65-8- 998 11-SC-03-91) to conduct research on radioactive materials and thermocouples suitable for the direct conversion of heat to electrical energy using polonium-210 as the heat source. In 1961, Capt. R. T. Carpenter had chosen 238Pu as the fuel for the first RTG (radioisotope thermoelectric generator) to be launched into space as auxiliary power for the Transit IV Navy navigational satellite. By January 21, 1963, the decision had yet to be made as to what isotope would be used to fuel the large RTGs for NASA programs. Early in 1964, Mound Laboratories scientists developed a different method of fabricating the weapon component that resulted in a production efficiency of around 98%. This made available the excess Savannah River 238Pu production for Space Electric Power use just in time to meet the needs of the SNAP-27 RTG on the Moon, the Pioneer spacecraft, the Viking Mars landers, more Transit Navy navigation satellites (precursor to today's GPS) and two Voyager spacecraft, for which all of the 238Pu heat sources were fabricated at Mound Laboratories. The radioisotope heater units were used in space exploration beginning with the Apollo Radioisotope Heaters (ALRH) warming the Seismic Experiment placed on the Moon by the Apollo 11 mission and on several Moon and Mars rovers, to the 129 LWRHUs warming the experiments on the Galileo spacecraft. An addition to the Special Metallurgical building weapon component production facility was completed at the end of 1964 for 238Pu heat source fuel fabrication. A temporary fuel production facility was also installed in the Research Building in 1969 for Transit fuel fabrication. With completion of the weapons component project, the Special Metallurgical Building, nicknamed "Snake Mountain" because of the difficulties encountered in handling large quantities of 238Pu, ceased operations on June 30, 1968, with 238Pu operations taken over by the new Plutonium Processing Building, especially designed and constructed for handling large quantities of 238Pu. Plutonium-238 is given the highest relative hazard number (152) of all 256 radionuclides evaluated by Karl Z. Morgan et al. in 1963. Nuclear powered pacemakers In the United States, when plutonium-238 became available for non-military uses, numerous applications were proposed and tested, including the cardiac pacemaker program that began on June 1, 1966, in conjunction with NUMEC. The last of these units was implanted in 1988, as lithium-powered pacemakers, which had an expected lifespan of 10 or more years without the disadvantages of radiation concerns and regulatory hurdles, made these units obsolete. , there were nine living people with nuclear-powered pacemakers in the United States, out of an original 139 recipients. When these individuals die, the pacemaker is supposed to be removed and shipped to Los Alamos where the plutonium will be recovered. 
In a letter to the New England Journal of Medicine discussing a woman who received a Numec NU-5 decades ago that is continuously operating, despite an original $5,000 price tag equivalent to $23,000 in 2007 dollars, the follow-up costs have been about $19,000 compared with $55,000 for a battery-powered pacemaker. Another nuclear powered pacemaker was the Medtronics “Laurens-Alcatel Model 9000”. Approximately 1600 nuclear-powered cardiac pacemakers and/or battery assemblies have been located across the United States, and are eligible for recovery by the Off-Site Source Recovery Project (OSRP) Team at Los Alamos National Laboratory (LANL). Production Reactor-grade plutonium from spent nuclear fuel contains various isotopes of plutonium. 238Pu makes up only one or two percent, but it may be responsible for much of the short-term decay heat because of its short half-life relative to other plutonium isotopes. Reactor-grade plutonium is not useful for producing 238Pu for RTGs because difficult isotopic separation would be needed. Pure plutonium-238 is prepared by neutron irradiation of neptunium-237, one of the minor actinides that can be recovered from spent nuclear fuel during reprocessing, or by the neutron irradiation of americium in a reactor. The targets are purified chemically, including dissolution in nitric acid to extract the plutonium-238. A 100 kg sample of light water reactor fuel that has been irradiated for three years contains only about 700 grams (0.7% by weight) of neptunium-237, which must be extracted and purified. Significant amounts of pure 238Pu could also be produced in a thorium fuel cycle. In the US, the Department of Energy's Space and Defense Power Systems Initiative of the Office of Nuclear Energy processes 238Pu, maintains its storage, and develops, produces, transports and manages safety of radioisotope power and heating units for both space exploration and national security spacecraft. As of March 2015, a total of of 238Pu was available for civil space uses. Out of the inventory, remained in a condition meeting NASA specifications for power delivery. Some of this pool of 238Pu was used in a multi-mission radioisotope thermoelectric generator (MMRTG) for the 2020 Mars Rover mission and two additional MMRTGs for a notional 2024 NASA mission. would remain after that, including approximately just barely meeting the NASA specification. Since isotope content in the material is lost over time to radioactive decay while in storage, this stock could be brought up to NASA specifications by blending it with a smaller amount of freshly produced 238Pu with a higher content of the isotope, and therefore energy density. U.S. production ceases and resumes The United States stopped producing bulk 238Pu with the closure of the Savannah River Site reactors in 1988. Since 1993, all of the 238Pu used in American spacecraft has been purchased from Russia. In total, have been purchased, but Russia is no longer producing 238Pu, and their own supply is reportedly running low. In February 2013, a small amount of 238Pu was successfully produced by Oak Ridge's High Flux Isotope Reactor, and on December 22, 2015, they reported the production of of 238Pu. In March 2017, Ontario Power Generation (OPG) and its venture arm, Canadian Nuclear Partners, announced plans to produce 238Pu as a second source for NASA. 
Rods containing neptunium-237 will be fabricated by Pacific Northwest National Laboratory (PNNL) in Washington State and shipped to OPG's Darlington Nuclear Generating Station in Clarington, Ontario, Canada where they will be irradiated with neutrons inside the reactor's core to produce 238Pu. In January 2019, it was reported that some automated aspects of its production were implemented at Oak Ridge National Laboratory in Tennessee, that are expected to triple the number of plutonium pellets produced each week. The production rate is now expected to increase from 80 pellets per week to about 275 pellets per week, for a total production of about 400 grams per year. The goal now is to optimize and scale-up the processes in order to produce an average of per year by 2025. Applications The main application of 238Pu is as the heat source in radioisotope thermoelectric generators (RTGs). The RTG was invented in 1954 by Mound scientists Ken Jordan and John Birden, who were inducted into the National Inventors Hall of Fame in 2013. They immediately produced a working prototype using a 210Po heat source, and on January 1, 1957, entered into an Army Signal Corps contract (R-65-8- 998 11-SC-03-91) to conduct research on radioactive materials and thermocouples suitable for the direct conversion of heat to electrical energy using polonium-210 as the heat source. In 1966, a study reported by SAE International described the potential for the use of plutonium-238 in radioisotope power subsystems for applications in space. This study focused on employing power conversions through the Rankine cycle, Brayton cycle, thermoelectric conversion and thermionic conversion with plutonium-238 as the primary heating element. The heat supplied by the plutonium-238 heating element was consistent between the 400 °C and 1000 °C regime but future technology could reach an upper limit of 2000 °C, further increasing the efficiency of the power systems. The Rankine cycle study reported an efficiency between 15 and 19% with inlet turbine temperatures of 1800 R, whereas the Brayton cycle offered efficiency greater than 20% with an inlet temperature of 2000 R. Thermoelectric converters offered low efficiency (3-5%) but high reliability. Thermionic conversion could provide similar efficiencies to the Brayton cycle if proper conditions reached. RTG technology was first developed by Los Alamos National Laboratory during the 1960s and 1970s to provide radioisotope thermoelectric generator power for cardiac pacemakers. Of the 250 plutonium-powered pacemakers Medtronic manufactured, twenty-two were still in service more than twenty-five years later, a feat that no battery-powered pacemaker could achieve. This same RTG power technology has been used in spacecraft such as Pioneer 10 and 11, Voyager 1 and 2, Cassini–Huygens and New Horizons, and in other devices, such as the Mars Science Laboratory and Mars 2020 Perseverance Rover, for long-term nuclear power generation. See also Atomic battery Plutonium-239 Polonium-210 References External links Story of Seaborg's discovery of Pu-238, especially pages 34–35. NLM Hazardous Substances Databank – Plutonium, Radioactive Fertile materials Isotopes of plutonium Radioisotope fuels
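The two figures quoted at the start of the article, a specific power of roughly 0.57 watts per gram and a half-life of 87.7 years, are enough for a back-of-the-envelope estimate of how the thermal output of a 238Pu heat source falls off over a mission. The short C program below does that arithmetic; the one-kilogram source mass is an arbitrary illustrative value, not a figure for any particular RTG, and daughter-product heating and fuel form are ignored.

```c
#include <stdio.h>
#include <math.h>

/* Rough thermal-output estimate for a Pu-238 heat source, using only the
 * specific power (0.57 W/g) and half-life (87.7 years) cited above.
 * The 1 kg mass is a hypothetical value chosen for illustration. */
int main(void)
{
    const double specific_power = 0.57;   /* watts per gram of Pu-238 */
    const double half_life = 87.7;        /* years */
    const double mass_grams = 1000.0;     /* hypothetical 1 kg of pure Pu-238 */
    const double p0 = specific_power * mass_grams;

    for (int t = 0; t <= 60; t += 10) {
        double p = p0 * pow(2.0, -t / half_life);
        printf("after %2d years: about %.0f W thermal\n", t, p);
    }
    return 0;
}
```

Run as written, it shows such a source dropping from about 570 W to roughly 350 W of thermal output over 60 years, one reason stockpiled material can fall below the NASA specification mentioned above even though its mass is essentially unchanged.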
https://en.wikipedia.org/wiki/Neuroimmunology
Neuroimmunology is a field combining neuroscience, the study of the nervous system, and immunology, the study of the immune system. Neuroimmunologists seek to better understand the interactions of these two complex systems during development, homeostasis, and response to injuries. A long-term goal of this rapidly developing research area is to further develop our understanding of the pathology of certain neurological diseases, some of which have no clear etiology. In doing so, neuroimmunology contributes to development of new pharmacological treatments for several neurological conditions. Many types of interactions involve both the nervous and immune systems including the physiological functioning of the two systems in health and disease, malfunction of either and or both systems that leads to disorders, and the physical, chemical, and environmental stressors that affect the two systems on a daily basis. Background Neural targets that control thermogenesis, behavior, sleep, and mood can be affected by pro-inflammatory cytokines which are released by activated macrophages and monocytes during infection. Within the central nervous system production of cytokines has been detected as a result of brain injury, during viral and bacterial infections, and in neurodegenerative processes. From the US National Institute of Health: "Despite the brain's status as an immune privileged site, an extensive bi-directional communication takes place between the nervous and the immune system in both health and disease. Immune cells and neuroimmune molecules such as cytokines, chemokines, and growth factors modulate brain function through multiple signaling pathways throughout the lifespan. Immunological, physiological and psychological stressors engage cytokines and other immune molecules as mediators of interactions with neuroendocrine, neuropeptide, and neurotransmitter systems. For example, brain cytokine levels increase following stress exposure, while treatments designed to alleviate stress reverse this effect. "Neuroinflammation and neuroimmune activation have been shown to play a role in the etiology of a variety of neurological disorders such as stroke, Parkinson's and Alzheimer's disease, multiple sclerosis, pain, and AIDS-associated dementia. However, cytokines and chemokines also modulate CNS function in the absence of overt immunological, physiological, or psychological challenges. For example, cytokines and cytokine receptor inhibitors affect cognitive and emotional processes. Recent evidence suggests that immune molecules modulate brain systems differently across the lifespan. Cytokines and chemokines regulate neurotrophins and other molecules critical to neurodevelopmental processes, and exposure to certain neuroimmune challenges early in life affects brain development. In adults, cytokines and chemokines affect synaptic plasticity and other ongoing neural processes, which may change in aging brains. Finally, interactions of immune molecules with the hypothalamic-pituitary-gonadal system indicate that sex differences are a significant factor determining the impact of neuroimmune influences on brain function and behavior." Recent research demonstrates that reduction of lymphocyte populations can impair cognition in mice, and that restoration of lymphocytes restores cognitive abilities. 
Epigenetics Overview Epigenetic medicine encompasses a new branch of neuroimmunology that studies the brain and behavior, and has provided insights into the mechanisms underlying brain development, evolution, neuronal and network plasticity and homeostasis, senescence, the etiology of diverse neurological diseases and neural regenerative processes. It is leading to the discovery of environmental stressors that dictate initiation of specific neurological disorders and specific disease biomarkers. The goal is to "promote accelerated recovery of impaired and seemingly irrevocably lost cognitive, behavioral, sensorimotor functions through epigenetic reprogramming of endogenous regional neural stem cells". Neural stem cell fate Several studies have shown that regulation of stem cell maintenance and the subsequent fate determinations are quite complex. The complexity of determining the fate of a stem cell can be best understood by knowing the "circuitry employed to orchestrate stem cell maintenance and progressive neural fate decisions". Neural fate decisions include the utilization of multiple neurotransmitter signal pathways along with the use of epigenetic regulators. The advancement of neuronal stem cell differentiation and glial fate decisions must be orchestrated timely to determine subtype specification and subsequent maturation processes including myelination. Neurodevelopmental disorders Neurodevelopmental disorders result from impairments of growth and development of the brain and nervous system and lead to many disorders. Examples of these disorders include Asperger syndrome, traumatic brain injury, communication, speech and language disorders, genetic disorders such as fragile-X syndrome, Down syndrome, epilepsy, and fetal alcohol syndrome. Studies have shown that autism spectrum disorders (ASDs) may present due to basic disorders of epigenetic regulation. Other neuroimmunological research has shown that deregulation of correlated epigenetic processes in ASDs can alter gene expression and brain function without causing classical genetic lesions which are more easily attributable to a cause and effect relationship. These findings are some of the numerous recent discoveries in previously unknown areas of gene misexpression. Neurodegenerative disorders Increasing evidence suggests that neurodegenerative diseases are mediated by erroneous epigenetic mechanisms. Neurodegenerative diseases include Huntington's disease and Alzheimer's disease. Neuroimmunological research into these diseases has yielded evidence including the absence of simple Mendelian inheritance patterns, global transcriptional dysregulation, multiple types of pathogenic RNA alterations, and many more. In one of the experiments, a treatment of Huntington’s disease with histone deacetylases (HDAC), an enzyme that removes acetyl groups from lysine, and DNA/RNA binding anthracylines that affect nucleosome positioning, showed positive effects on behavioral measures, neuroprotection, nucleosome remodeling, and associated chromatin dynamics. Another new finding on neurodegenerative diseases involves the overexpression of HDAC6 suppresses the neurodegenerative phenotype associated with Alzheimer’s disease pathology in associated animal models. Other findings show that additional mechanisms are responsible for the "underlying transcriptional and post-transcriptional dysregulation and complex chromatin abnormalities in Huntington's disease". 
Neuroimmunological disorders The nervous and immune systems have many interactions that dictate overall body health. The nervous system is under constant monitoring from both the adaptive and innate immune system. Throughout development and adult life, the immune system detects and responds to changes in cell identity and neural connectivity. Deregulation of both adaptive and acquired immune responses, impairment of crosstalk between these two systems, as well as alterations in the deployment of innate immune mechanisms can predispose the central nervous system (CNS) to autoimmunity and neurodegeneration. Other evidence has shown that development and deployment of the innate and acquired immune systems in response to stressors on functional integrity of cellular and systemic level and the evolution of autoimmunity are mediated by epigenetic mechanisms. Autoimmunity has been increasingly linked to targeted deregulation of epigenetic mechanisms, and therefore, use of epigenetic therapeutic agents may help reverse complex pathogenic processes. Multiple sclerosis (MS) is one type of neuroimmunological disorder that affects many people. MS features CNS inflammation, immune-mediated demyelination and neurodegeneration. Myalgic Encephalomyelitis (also known as Chronic fatigue syndrome), is a multi-system disease that causes dysfunction of neurological, immune, endocrine and energy-metabolism systems. Though many patients show neuroimmunological degeneration, the correct roots of ME/CFS are unknown. Symptoms of ME/CFS include significantly lowered ability to participate in regular activities, stand or sit straight, inability to talk, sleep problems, excessive sensitivity to light, sound or touch and/or thinking and memory problems (defective cognitive functioning). Other common symptoms are muscle or joint pain, sore throat or night sweats. There is no treatment but symptoms may be treated. Patients that are sensitive to mold may show improvement in symptoms having moved to drier areas. Some patients in general have less severe ME, whereas others may be bedridden for life. Major themes of research The interaction of the CNS and immune system are fairly well known. Burn-induced organ dysfunction using vagus nerve stimulation has been found to attenuate organ and serum cytokine levels. Burns generally induce abacterial cytokine generation and perhaps parasympathetic stimulation after burns would decrease cardiodepressive mediator generation. Multiple groups have produced experimental evidence that support proinflammatory cytokine production being the central element of the burn-induced stress response. Still other groups have shown that vagus nerve signaling has a prominent impact on various inflammatory pathologies. These studies have laid the groundwork for inquiries that vagus nerve stimulation may influence postburn immunological responses and thus can ultimately be used to limit organ damage and failure from burn induced stress. Basic understanding of neuroimmunological diseases has changed significantly during the last ten years. New data broadening the understanding of new treatment concepts has been obtained for a large number of neuroimmunological diseases, none more so than multiple sclerosis, since many efforts have been undertaken recently to clarify the complexity of pathomechanisms of this disease. Accumulating evidence from animal studies suggests that some aspects of depression and fatigue in MS may be linked to inflammatory markers. 
Studies have demonstrated that Toll like-receptor (TLR4) is critically involved in neuroinflammation and T cell recruitment in the brain, contributing to exacerbation of brain injury. Research into the link between smell, depressive behavior, and autoimmunity has turned up interesting findings including the facts that inflammation is common in all of the diseases analyzed, depressive symptoms appear early in the course of most diseases, smell impairment is also apparent early in the development of neurological conditions, and all of the diseases involved the amygdale and hippocampus. Better understanding of how the immune system functions and what factors contribute to responses are being heavily investigated along with the aforementioned coincidences. Neuroimmunology is also an important topic to consider during the design of neural implants. Neural implants are being used to treat many diseases, and it is key that their design and surface chemistry do not elicit an immune response. Future directions The nervous system and immune system require the appropriate degrees of cellular differentiation, organizational integrity, and neural network connectivity. These operational features of the brain and nervous system may make signaling difficult to duplicate in severely diseased scenarios. There are currently three classes of therapies that have been utilized in both animal models of disease and in human clinical trials. These three classes include DNA methylation inhibitors, HDAC inhibitors, and RNA-based approaches. DNA methylation inhibitors are used to activate previously silenced genes. HDACs are a class of enzymes that have a broad set of biochemical modifications and can affect DNA demethylation and synergy with other therapeutic agents. The final therapy includes using RNA-based approaches to enhance stability, specificity, and efficacy, especially in diseases that are caused by RNA alterations. Emerging concepts concerning the complexity and versatility of the epigenome may suggest ways to target genomewide cellular processes. Other studies suggest that eventual seminal regulator targets may be identified allowing with alterations to the massive epigenetic reprogramming during gametogenesis. Many future treatments may extend beyond being purely therapeutic and may be preventable perhaps in the form of a vaccine. Newer high throughput technologies when combined with advances in imaging modalities such as in vivo optical nanotechnologies may give rise to even greater knowledge of genomic architecture, nuclear organization, and the interplay between the immune and nervous systems. See also Immune system Immunology Gut–brain axis Neural top down control of physiology Neuroimmune system Neurology Psychosomatic illness References Further reading (Written for the highly technical reader) Mind-Body Medicine: An Overview, US National Institutes of Health, Center for Complementary and Integrative Health technical. (Written for the general public) External links Online Resources Psychoneuroimmunology, Neuroimmunomodulation (6 chapters from this Cambridge UP book are freely available) More than 100, freely available, published research articles on neuroimmunology and related topics by Professor Michael P. Pender, Neuroimmunology Research Unit, The University of Queensland Branches of immunology Clinical neuroscience Neurology
https://en.wikipedia.org/wiki/Phosphatidylserine
Phosphatidylserine (abbreviated Ptd-L-Ser or PS) is a phospholipid and is a component of the cell membrane. It plays a key role in cell cycle signaling, specifically in relation to apoptosis. It is a key pathway for viruses to enter cells via apoptotic mimicry. Its exposure on the outer surface of a membrane marks the cell for destruction via apoptosis. Structure Phosphatidylserine is a phospholipid—more specifically a glycerophospholipid—which consists of two fatty acids attached in ester linkage to the first and second carbon of glycerol and serine attached through a phosphodiester linkage to the third carbon of the glycerol. Phosphatidylserine sourced from plants differs in fatty acid composition from that sourced from animals. It is commonly found in the inner (cytoplasmic) leaflet of biological membranes. It is almost entirely found in the inner monolayer of the membrane with only less than 10% of it in the outer monolayer. Introduction Phosphatidylserine (PS) is the major acidic phospholipid class that accounts for 13–15% of the phospholipids in the human cerebral cortex. In the plasma membrane, PS is localized exclusively in the cytoplasmic leaflet where it forms part of protein docking sites necessary for the activation of several key signaling pathways. These include the Akt, protein kinase C (PKC) and Raf-1 signaling that is known to stimulate neuronal survival, neurite growth, and synaptogenesis. Modulation of the PS level in the plasma membrane of neurons has a significant impact on these signaling processes. Biosynthesis Phosphatidylserine is formed in bacteria (such as E. coli) through a displacement of cytidine monophosphate (CMP) through a nucleophilic attack by the hydroxyl functional group of serine. CMP is formed from CDP-diacylglycerol by PS synthase. Phosphatidylserine can eventually become phosphatidylethanolamine by the enzyme PS decarboxylase (forming carbon dioxide as a byproduct). Similar to bacteria, yeast can form phosphatidylserine in an identical pathway. In mammals, phosphatidylserine is instead derived from phosphatidylethanolamine or phosphatidylcholine through one of two Ca2+-dependent head-group exchange reactions in the endoplasmic reticulum. Both reactions require a serine but product an ethanolamine or choline, respectively. These are promoted by phosphatidylserine synthase 1 (PSS1) or 2 (PSS2). Conversely, phosphatidylserine can also give rise to phosphatidylethanolamine and phosphatidylcholine, although in animals the pathway to generate phosphatidylcholine from phosphatidylserine only operates in the liver. Dietary sources The average daily phosphatidylserine intake in a Western diet is estimated to be 130mg. Phosphatidylserine may be found in meat and fish. Only small amounts are found in dairy products and vegetables, with the exception of white beans and soy lecithin. Phosphatidylserine is found in soy lecithin at about 3% of total phospholipids. Table 1. Phosphatidylserine content in different foods. Supplementation Health claims A panel of the European Food Safety Authority concluded that a cause and effect relationship cannot be established between the consumption of phosphatidylserine and "memory and cognitive functioning in the elderly", "mental health/cognitive function" and "stress reduction and enhanced memory function". This conclusion follows because bovine brain cortex- and soy-based phosphatidylserine are different substances and might, therefore, have different biological activities. 
Therefore, the results of studies using phosphatidylserine from different sources cannot be generalized. Cognition In May, 2003 the Food and Drug Administration gave "qualified health claim" status to phosphatidylserine thus allowing labels to state "consumption of phosphatidylserine may reduce the risk of dementia and cognitive dysfunction in the elderly" along with the disclaimer "very limited and preliminary scientific research suggests that phosphatidylserine may reduce the risk of cognitive dysfunction in the elderly." According to the FDA, there is a lack of scientific agreement amongst qualified experts that a relationship exists between phosphatidylserine and cognitive function. More recent reviews have suggested that the relationship may be more robust, though the mechanism remains unclear. A 2020 review of three clinical trials found that phosphatidylserine is likely effective for enhancing cognitive function in older people with mild cognitive impairment. Some studies have suggested that whether the phosphatidylserine is plant- or animal-derived may have significance, with the FDA's statement applying specifically to soy-derived products. Safety Initially, phosphatidylserine supplements were derived from bovine cortex. However, due to the risk of potential transfer of infectious diseases such as bovine spongiform encephalopathy (or "mad cow disease"), soy-derived supplements became an alternative. A 2002 safety report determined supplementation in elderly people at a dosage of 200mg three times daily to be safe. Concerns about the safety of soy products persist, and some manufacturers of phosphatidylserine use sunflower lecithin instead of soy lecithin as a source of raw material production. References External links DrugBank info page Phospholipids Membrane biology
https://en.wikipedia.org/wiki/Picocell
A picocell is a small cellular base station typically covering a small area, such as in-building (offices, shopping malls, train stations, stock exchanges, etc.), or more recently in-aircraft. In cellular networks, picocells are typically used to extend coverage to indoor areas where outdoor signals do not reach well, or to add network capacity in areas with very dense phone usage, such as train stations or stadiums. Picocells provide coverage and capacity in areas difficult or expensive to reach using the more traditional macrocell approach. Overview In cellular wireless networks, such as GSM, the picocell base station is typically a low-cost, small (typically the size of a ream of A4 paper), reasonably simple unit that connects to a base station controller (BSC). Multiple picocell 'heads' connect to each BSC: the BSC performs radio resource management and hand-over functions, and aggregates data to be passed to the mobile switching centre (MSC) or the gateway GPRS support node (GGSN). Connectivity between the picocell heads and the BSC typically consists of in-building wiring. Although originally deployed systems (1990s) used plesiochronous digital hierarchy (PDH) links such as E1/T1 links, more recent systems use Ethernet cabling. Aircraft use satellite links. More recent work has developed the concept towards a head unit containing not only a picocell, but also many of the functions of the BSC and some of the MSC. This form of picocell is sometimes called an access point base station or 'enterprise femtocell'. In this case, the unit contains all the capability required to connect directly to the Internet, without the need for the BSC/MSC infrastructure. This is a potentially more cost-effective approach. Picocells offer many of the benefits of "small cells" (similar to femtocells) in that they improve data throughput for mobile users and increase capacity in the mobile network. In particular, the integration of picocells with macrocells through a heterogeneous network can be useful in seamless hand-offs and increased mobile data capacity. Picocells are available for most cellular technologies including GSM, CDMA, UMTS and LTE from manufacturers including ip.access, ZTE, Huawei and Airwalk. Range Typically the range of a microcell is less than two kilometers wide, a picocell is 200 meters or less, and a femtocell is on the order of 10 meters, although AT&T calls its product, with a range of , a "microcell". AT&T uses "AT&T 3G MicroCell" as a trademark and not necessarily the "microcell" technology, however. See also Femtocell Macrocell Microcell Small cell References Mobile telecommunications 9. http://defenseelectronicsmag.com/site-files/defenseelectronicsmag.com/files/archive/rfdesign.com/mag/407rfdf1.pdf
https://en.wikipedia.org/wiki/Siblicide
Siblicide (attributed by behavioural ecologist Doug Mock to Barbara M. Braun) is the killing of an infant individual by its close relatives (full or half siblings). It may occur directly between siblings or be mediated by the parents, and is driven by the direct fitness benefits to the perpetrator and sometimes its parents. Siblicide has mainly, but not only, been observed in birds. (The word is also used as a unifying term for fratricide and sororicide in the human species; unlike these more specific terms, it leaves the sex of the victim unspecified.) Siblicidal behavior can be either obligate or facultative. Obligate siblicide is when a sibling almost always ends up being killed. Facultative siblicide means that siblicide may or may not occur, based on environmental conditions. In birds, obligate siblicidal behavior results in the older chick killing the other chick(s). In facultative siblicidal animals, fighting is frequent, but does not always lead to death of a sibling; this type of behavior often exists in patterns for different species. For instance, in the blue-footed booby, a sibling may be hit by a nest mate only once a day for a couple of weeks and then attacked at random, leading to its death. More birds are facultatively siblicidal than obligatory siblicidal. This is perhaps because siblicide takes a great amount of energy and is not always advantageous. Siblicide generally only occurs when resources, specifically food sources, are scarce. Siblicide is advantageous for the surviving offspring because they have now eliminated most or all of their competition. It is also somewhat advantageous for the parents because the surviving offspring most likely have the strongest genes, and therefore likely have the highest fitness. Some parents encourage siblicide, while others prevent it. If resources are scarce, the parents may encourage siblicide because only some offspring will survive anyway, so they want the strongest offspring to survive. By letting the offspring kill each other, it saves the parents time and energy that would be wasted on feeding offspring that most likely would not survive anyway. Models Originally proposed by Dorward (1962), the insurance egg hypothesis (IEH) has quickly become the most widely supported explanation for avian siblicide as well as the overproduction of eggs in siblicidal birds. The IEH states that the extra egg(s) produced by the parent serves as an "insurance policy" in the case of the failure of the first egg (either it did not hatch or the chick died soon after hatching). When both eggs hatch successfully, the second chick, or B-chick, is known as the marginal offspring; otherwise stated, it is marginal in the sense that it can add to or subtract from the evolutionary success of its family members. It can increase reproductive and evolutionary success in two primary ways. Firstly, it represents an extra unit of parental success if it survives along with its siblings. In the context of Hamilton's inclusive fitness theory, the marginal chick increases the total number of offspring successfully produced by the parent and therefore adds to the gene pool that the parent bird passes to the next generation. Secondly, it can serve as a replacement for any of its siblings that do not hatch or die prematurely. Inclusive fitness is defined as an animal's individual reproductive success, plus the positive and/or negative effects that animal has on its sibling's reproductive success, multiplied by the animal's degree of kinship. 
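The verbal definition of inclusive fitness in the last sentence above corresponds to the standard Hamilton's-rule bookkeeping. The display below is that textbook formulation, given only as context; the symbols b, c and r (benefit to the perpetrator, cost to the victim sibling, and their relatedness) are the conventional ones and are not taken from this article.

```latex
% Textbook inclusive-fitness bookkeeping (Hamilton's rule), stated as context
% for the verbal definition above; the symbols are conventional, not the
% article's own notation.
\[
W_{\text{inclusive}} \;=\; W_{\text{direct}} \;+\; \sum_{i} r_i \,\Delta W_i ,
\qquad
\text{and a harmful act toward a sibling is favoured when}\quad
b \;>\; r\,c ,
\]
where $\Delta W_i$ is the change the actor causes in relative $i$'s
reproductive success, $b$ is the perpetrator's fitness gain from eliminating
the competitor, $c$ is the victim's fitness loss, and $r$ is their coefficient
of relatedness ($1/2$ for full siblings, $1/4$ for half siblings).
```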
In instances of siblicide, the victim is usually the youngest sibling. This sibling's reproductive value can be measured by how much it enhances or detracts from the success of other siblings; therefore this individual is considered to be marginal. The marginal sibling can act as an additional element of parental success if it, as well as its siblings, survives. If an older sibling happens to die unexpectedly, the marginal sibling is there to take its place; this acts as insurance against the death of another sibling, and its value depends on the likelihood of the older sibling dying. Parent–offspring conflict is a theory which states that offspring can take actions to advance their own fitness while decreasing the fitness of their parents, and that parents can increase their own fitness while simultaneously decreasing the fitness of their offspring. This is one of the driving forces of siblicide because it increases the fitness of the offspring by decreasing the amount of competition they have. Parents may either discourage or accept siblicide depending on whether it increases the probability of their offspring surviving to reproduce. Mathematical representation The cost and effect of siblicide on a brood's reproductive success can be expressed algebraically. M is the level of parental investment in the entire brood, with an absolute maximum value M_H (0 ≤ M ≤ M_H). A parent investing M units of parental investment (PI) in its current brood can expect a future reproductive success given by f(M) = f_H if M ≤ 0, f(M) = f_H[1 − (M/M_H)^θ] if 0 < M < M_H, and f(M) = 0 if M_H ≤ M, where f_H is the parent's future reproductive success if it makes no reproductive attempt. The parameter θ determines the relationship between parental investment and the cost of reproduction. The equation indicates that as M increases, the future reproductive success of the parent decreases. The probability p(m) that a chick joins the breeding population after receiving m units of PI is likewise defined piecewise: it is zero whenever m ≤ m_v, the minimum level of investment a chick needs in order to be viable. Examples In birds Cattle egrets, Bubulcus ibis, exhibit asynchronous hatching and androgen loading in the first two eggs of their normal three-egg clutch. This results in older chicks being more aggressive and having a developmental head start. If food is scarce the third chick often dies or is killed by the larger siblings, and parental effort is then distributed between the remaining chicks, which are hence more likely to survive to reproduce. The extra "excess" egg is possibly laid either to exploit the possibility of elevated food abundance (as seen in the blue-footed booby, Sula nebouxii) or because of the chance of sterility in one egg. This is suggested by studies into the common grackle, Quiscalus quiscula and the masked booby, Sula dactylatra. The theory of kin selection may be seen as a genetically mediated altruistic response within closely related individuals whereby the fitness conferred by the altruist to the recipient outweighs the cost to itself or the sibling/parent group. The fact that such a sacrifice occurs indicates an evolutionary tendency in some taxa toward improved vertical gene transmission in families or a higher percentage of the unit reaching a reproductive age in a resource-limited environment. The closely related masked and Nazca boobies are both obligately siblicidal species, while the blue-footed booby is a facultatively siblicidal species.
In a facultatively siblicidal species, aggression occurs between siblings but is not always lethal, whereas in an obligately siblicidal species, aggression between siblings always leads to the death of one of the offspring. All three species have an average brood size of two eggs, which are laid within approximately four days of each other. In the few days before the second egg hatches, the first-born chick, known as the senior chick or A-chick, enjoys a period of growth and development during which it has full access to resources provided by the parent bird. Therefore, when the junior chick (B-chick) hatches, there is a significant disparity in size and strength between it and its older sibling. In these three booby species, hatching order indicates chick hierarchy in the nest. The A-chick is dominant to the B-chick, which in turn is dominant to the C chick, etc. (when there are more than two chicks per brood). Masked booby and Nazca booby dominant A-chicks always begin pecking their younger sibling(s) as soon as they hatch; moreover, assuming it is healthy, the A-chick usually pecks its younger sibling to death or pushes it out of the nest scrape within the first two days that the junior chick is alive. Blue-footed booby A-chicks also express their dominance by pecking their younger sibling. However, unlike the obligately siblicidal masked and Nazca booby chicks, their behavior is not always lethal. A study by Lougheed and Anderson (1999) reveals that blue-footed booby senior chicks only kill their siblings in times of food shortage. Furthermore, even when junior chicks are killed, it does not happen immediately. According to Anderson, the average age of death of the junior chick in a masked booby brood is 1.8 days, while the average age of death of the junior chick in a blue-footed booby brood may be as high as 18 days. The difference in age of death in the junior chick in each booby species is indicative of the type of siblicide that the species practices. Facultatively siblicidal blue-footed booby A-chicks only kill their nest mate(s) when necessary. Obligately siblicidal masked and Nazca booby A-chicks kill their sibling no matter if resources are plentiful or not; in other words, siblicidal behavior occurs independently of environmental factors. Blue-footed boobies are less likely to commit siblicide and if they do, they commit it later after hatching than masked boobies. In a study, the chicks of blue-footed and masked boobies were switched to see if the rates of siblicide would be affected by the foster parents. It turns out that the masked boobies that were placed under the care of blue-footed booby parents committed siblicide less often than they would normally. Similarly, the blue-footed booby chicks placed with the masked booby parents committed siblicide more often than they normally did, indicating that parental intervention also affects the offspring's behavior. In another experiment which tested the effect of a synchronous brood on siblicide, three groups were created: one in which all the eggs were synchronous, one in which the eggs hatched asynchronously, and one in which asynchronous hatching was exaggerated. It was found that the synchronous brood fought more, was less likely to survive than the control group, and resulted in lower parental efficiency. The exaggerated asynchronous brood also had a lower survivorship rate than the control brood and forced parents to bring more food to the nest each day, even though not as many offspring survived. 
In other animals Siblicide (brood reduction) in spotted hyenas (Crocuta crocuta'') resulted in the champions achieving a long-term growth rate similar to that of singletons and thus significantly increased their expected survival. The incidence of siblicide increased as the average cohort growth rate declined. When both cubs were alive, total maternal input in siblicidal litters was significantly lower than in non-siblicidal litters. Once siblicide has occurred, the growth rates of siblicide survivors substantially increased, indicating that mothers don't reduce their maternal input after siblicide has occurred. Also, facultative siblicide can evolve when the fitness benefits gained after the removal of a sibling by the dominant offspring, exceeds the costs acquired in terms of decreasing that sibling's inclusive fitness from the death of its sibling. Some mammals sometimes commit siblicide for the purpose of gaining a larger portion of the parent's care. In spotted hyenas, pups of the same sex exhibit siblicide more often than male-female twins. Sex ratios may be manipulated in this way and the dominant status of a female and transmission of genes may be ensured through a son or daughter which inherits this solely, receiving much more parental nursing and decreased sexual competition. Siblicidal "survival of the fittest" is also exhibited in parasitic wasps, which lay multiple eggs in a host, after which the strongest larva kills its rival sibling. Another example is when mourning cloak larvae will eat non-hatched eggs. In sand tiger sharks, the first embryo to hatch from its egg capsule kills and consumes its younger siblings while still in the womb. In humans Siblicide can also be seen in humans in the form of twins in the mother's womb. One twin may grow to be an average weight, while the other is underweight. This is a result of one twin taking more nutrients from the mother than the other twin. In cases of identical twins, they may even have twin-to-twin transfusion syndrome (TTTS). This means that the twins share the same placenta and blood and nutrients can then move between twins. The twins may also be suffering from intrauterine growth restriction (IUGR), meaning that there is not enough room for both of the twins to grow. All of these factors can limit the growth of one of the twins while promoting the growth of the other. While one of the twins may not die because of these factors, it is entirely possible that their health will be compromised and lead to complications after their birth. Siblicide in humans can also manifest itself in the form of murder. This type of killing (siblicide) is rarer than other types of killings. Genetic relatedness may be an important moderator of conflict and homicide among family members, including siblings. Siblings may be less likely to kill a full sibling because that would be a decrease in their own fitness. The cost of killing a sibling is much higher than the fitness costs associated with the death of a sibling-in-law because the killer wouldn't be losing 50% of their genes. Siblicide was found to be more common in early to middle adulthood as opposed to adolescence. However, there is still a tendency for the killer to be the younger party when the victim and killer were of the same sex. The older individual was most likely to be the killer if the incident were to occur at a younger age. 
See also Fratricide, the killing of a brother Infanticide (zoology), a related behaviour Intrauterine cannibalism Nazca booby (displays obligate siblicide) Parent–offspring conflict Sibling abuse Sibling rivalry Sororicide, the killing of a sister References Further reading Killings by type Fratricides Homicide Selection Sibling Sibling rivalry Sociobiology Sororicides
https://en.wikipedia.org/wiki/Brivudine
Brivudine (trade names Zostex, Mevir, Brivir, among others) is an antiviral drug used in the treatment of herpes zoster ("shingles"). Like other antivirals, it acts by inhibiting replication of the target virus. Medical uses Brivudine is used for the treatment of herpes zoster in adult patients. It is taken orally once daily, in contrast to aciclovir, valaciclovir and other antivirals. A study has found that it is more effective than aciclovir, but this has been disputed because of a possible conflict of interest on part of the study authors. Contraindications The drug is contraindicated in patients undergoing immunosuppression (for example because of an organ transplant) or cancer therapy, especially with fluorouracil (5-FU) and chemically related (pro)drugs such as capecitabine and tegafur, as well as the antimycotic drug flucytosine, which is also related to 5-FU. It has not been proven to be safe in children and pregnant or breastfeeding women. Adverse effects The drug is generally well tolerated. The only common side effect is nausea (in 2% of patients). Less common side effects (<1%) include headache, increased or lowered blood cell counts (granulocytopenia, anaemia, lymphocytosis, monocytosis), increased liver enzymes, and allergic reactions. Interactions Brivudine interacts strongly and in rare cases lethally with the anticancer drug fluorouracil (5-FU), its prodrugs and related substances. Even topically applied 5-FU can be dangerous in combination with brivudine. This is caused by the main metabolite, bromovinyluracil (BVU), irreversibly inhibiting the enzyme dihydropyrimidine dehydrogenase (DPD) which is necessary for inactivating 5-FU. After a standard brivudine therapy, DPD function can be compromised for up to 18 days. This interaction is shared with the closely related drug sorivudine which also has BVU as its main metabolite. There are no other relevant interactions. Brivudine does not significantly influence the cytochrome P450 enzymes in the liver. Pharmacology Spectrum of activity The drug inhibits replication of varicella zoster virus (VZV) – which causes herpes zoster – and herpes simplex virus type 1 (HSV-1), but not HSV-2 which typically causes genital herpes. In vitro, inhibitory concentrations against VZV are 200- to 1000-fold lower than those of aciclovir and penciclovir, theoretically indicating a much higher potency of brivudine. Clinically relevant VZV strains are particularly sensitive. Mechanism of action Brivudine is an analogue of the nucleoside thymidine. The active compound is brivudine 5'-triphosphate, which is formed in subsequent phosphorylations by viral (but not human) thymidine kinase and presumably by nucleoside-diphosphate kinase. Brivudine 5'-triphosphate works because it is incorporated into the viral DNA, but then blocks the action of DNA polymerases, thus inhibiting viral replication. Pharmacokinetics Brivudine is well and rapidly absorbed from the gut and undergoes first-pass metabolism in the liver, where the enzyme thymidine phosphorylase quickly splits off the sugar component, leading to a bioavailability of 30%. The resulting metabolite is bromovinyluracil (BVU), which does not have antiviral activity. BVU is also the only metabolite that can be detected in the blood plasma. Highest blood plasma concentrations are reached after one hour. Brivudine is almost completely (>95%) bound to plasma proteins. 
Terminal half-life is 16 hours; 65% of the substance are found in the urine and 20% in the faeces, mainly in form of an acetic acid derivative (which is not detectable in the plasma), but also other water-soluble metabolites, which are urea derivatives. Less than 1% is excreted in form of the original compound. Chemistry The molecule has three chiral carbon atoms in the deoxyribose (sugar) part all of which have defined orientation; i.e. the drug is stereochemically pure. The substance is a white powder. Manufacturing Main supplier is Berlin Chemie, now part of Italy's Menarini Group. In Central America is provided by Menarini Centro America and Wyeth. History The substance was first synthesized by scientists at the University of Birmingham in the UK in 1976. It was shown to be a potent inhibitor of HSV-1 and VZV by Erik De Clercq at the Rega Institute for Medical Research in Belgium in 1979. In the 1980s the drug became commercially available in East Germany, where it was marketed as Helpin by a pharmaceutical company called Berlin-Chemie. Only after the indication was changed to the treatment of herpes zoster in 2001 did it become more widely available in Europe. Brivudine is approved for use in a number of European countries including Austria, Belgium, Germany, Greece, Italy, Portugal, Spain and Switzerland. Etymology The name brivudine derives from the chemical nomenclature bromo-vinyl-deoxyuridine or BVDU for short. It is sold under trade names such as Bridic, Brival, Brivex, Brivir, Brivirac, Brivox, Brivuzost, Zerpex, Zonavir, Zostex, and Zovudex. Research A Cochrane Systematic Review examined the effectiveness of multiple antiviral drugs in the treatment of herpes simplex virus epithelial keratitis. Brivudine was found to be significantly more effective than idoxuridine in increasing the number of successfully healed eyes of participants. See also Related antiviral drugs Aciclovir Valacyclovir, a prodrug form of aciclovir Famciclovir, an analogue of Penciclovir with greater oral availability Foscarnet, an intravenous antiviral for aciclovir-resistant VZV Penciclovir, a topical preparation Vaccines and other treatments Zostavax, a live virus Herpes zoster (shingles) vaccine Varivax, a live virus Varicella Zoster (chickenpox) vaccine Shingrix, a recombinant subunit vaccine for shingles VZV immune globulin, an antibody-based treatment for immune-suppressed patients with zoster References Nucleosides Pyrimidinediones Organobromides Anti-herpes virus drugs Hydroxymethyl compounds
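Returning to the pharmacokinetics above: with a 16-hour terminal half-life, the fraction of a dose remaining in plasma over time follows simple first-order decay. The short sketch below only illustrates that arithmetic; the time points are arbitrary and it is not a dosing recommendation:

```python
# Fraction of brivudine remaining after t hours, assuming first-order
# elimination with the 16 h terminal half-life stated above.
HALF_LIFE_H = 16.0

def fraction_remaining(t_hours: float) -> float:
    return 0.5 ** (t_hours / HALF_LIFE_H)

# After one once-daily dosing interval (24 h) roughly 35% of the previous
# dose is still present; after 48 h, about 12%.
for t in (8, 16, 24, 48):
    print(f"{t:>3} h: {fraction_remaining(t):.0%} remaining")
```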
https://en.wikipedia.org/wiki/Intracrine
Intracrine refers to a hormone that acts inside a cell, regulating intracellular events. In simple terms it means that the cell stimulates itself by cellular production of a factor that acts within the cell. Steroid hormones act through intracellular (mostly nuclear) receptors and, thus, may be considered to be intracrines. In contrast, peptide or protein hormones, in general, act as endocrines, autocrines, or paracrines by binding to their receptors present on the cell surface. Several peptide/protein hormones or their isoforms also act inside the cell through different mechanisms. These peptide/protein hormones, which have intracellular functions, are also called intracrines. The term 'intracrine' is thought to have been coined to represent peptide/protein hormones that also have intracellular actions. To better understand intracrine, we can compare it to paracrine, autocrine and endocrine. The autocrine system deals with the autocrine receptors of a cell allowing for the hormones to bind, which have been secreted from that same cell. The paracrine system is one where nearby cells get hormones from a cell, and change the functioning of those nearby cells. The endocrine system refers to when the hormones from a cell affect another cell that is very distant from the one that released the hormone. Paracrine physiology has been understood for decades now and the effects of paracrine hormones have been observed when for example, an obesity associate tumor will face the effects of local adipocytes, even if it is not in direct contact with the fat pads in concern. Endocrine physiology on the other hand is a growing field and has had a new area explored, called intracrinology. In intracrinology, the sex steroids produced locally, exert their action in the same cell where they are produced. The biological effects produced by intracellular actions are referred as intracrine effects, whereas those produced by binding to cell surface receptors are called endocrine, autocrine, or paracrine effects, depending on the origin of the hormone. The intracrine effect of some of the peptide/protein hormones are similar to their endocrine, autocrine, or paracrine effects; however, these effects are different for some other hormones. Intracrine can also refer to a hormone acting within the cell that synthesizes it. Examples of intracrine peptide hormones: There are several protein/peptide hormones that are also intracrines. Notable examples that have been described in the references include: Peptides of the renin–angiotensin system: angiotensin II and angiotensin (1-7) Fibroblast growth factor 2 Parathyroid hormone-related protein See also Local hormone Autocrine signalling References Park, Jiyoung; Euhus, David M.; Scherer, Philipp E. (August 2011). "Paracrine and Endocrine Effects of Adipose Tissue on Cancer Development and Progression". Endocrine Reviews. 32 (4): 550–570. . Labrie, Fernand; Luu-The, Van; Labrie, Claude; Bélanger, Alain; Simard, Jacques; Lin, Sheng-Xiang; Pelletier, Georges (April 2003). "Endocrine and Intracrine Sources of Androgens in Women: Inhibition of Breast Cancer and Other Roles of Androgens and Their Precursor Dehydroepiandrosterone". Endocrine Reviews. 24 (2): 152–182. . Specific Cell biology
https://en.wikipedia.org/wiki/BORGChat
BORGChat is a LAN messaging software program. It achieved a degree of popularity and is considered a complete LAN chat program. It has been superseded by commercial products which allow voice chat, video conferencing, central monitoring and administration. An extension called "BORGVoice" adds voice chat capabilities to BORGChat; the extension remains in alpha stage. History BORGChat was first published by Ionut Cioflan (nickname "IOn") in 2002. The name comes from the Borg race in Star Trek: the Borg are a massive society of cybernetic automatons abducted and assimilated from thousands of species. The Borg collective improves itself by consuming technologies, and BORGChat aspires to "assimilate" in a similar way. Features The software supports the following features: Public and private chat rooms (channels), support for user-created chat rooms Avatars with user information and online alerts Sending private messages Sending files and pictures, with pause and bandwidth management Animated smileys (emoticons) and sound effects (beep) View computers and network shares Discussion logs in the LAN Message filter, ignore messages from other users Message board with Bulletin Board Code (bold, italic, underline) Multiple chat status modes: Available/Busy/Away with customizable messages Multi-language support (with the possibility of adding more languages): English, Romanian, Swedish, Spanish, Polish, Slovak, Italian, Bulgarian, German, Russian, Turkish, Ukrainian, Slovenian, Czech, Danish, French, Latvian, Portuguese, Urdu, Dutch, Hungarian, Serbian, Macedonian. See also Synchronous conferencing Comparison of LAN messengers References External links Official BORGChat website 10 Best Free Chat Rooms LAN messengers Online chat
https://en.wikipedia.org/wiki/Phenomics
Phenomics is the systematic study of traits that make up a phenotype. The term was coined by UC Berkeley and LBNL scientist Steven A. Garan. As such, it is a transdisciplinary area of research that involves biology, data sciences, engineering and other fields. Phenomics is concerned with the measurement of the phenotype, where a phenome is the set of traits (physical and biochemical) that can be produced by a given organism over the course of development and in response to genetic mutation and environmental influences. An organism's phenotype also changes with time. The relationship between phenotype and genotype enables researchers to understand and study pleiotropy. Phenomics concepts are used in functional genomics, pharmaceutical research, metabolic engineering, agricultural research, and increasingly in phylogenetics. Technical challenges involve improving, both qualitatively and quantitatively, the capacity to measure phenomes. Applications Plant sciences In plant sciences, phenomics research occurs in both field and controlled environments. Field phenomics encompasses the measurement of phenotypes that occur in both cultivated and natural conditions, whereas controlled environment phenomics research involves the use of glass houses, growth chambers, and other systems where growth conditions can be manipulated. The University of Arizona's Field Scanner in Maricopa, Arizona is a platform developed to measure field phenotypes. Controlled environment systems include the Enviratron at Iowa State University, the Plant Cultivation Hall under construction at IPK, and platforms at the Donald Danforth Plant Science Center, the University of Nebraska-Lincoln, and elsewhere. Standards, methods, tools, and instrumentation A Minimal Information About a Plant Phenotyping Experiment (MIAPPE) standard is available and in use among many researchers collecting and organizing plant phenomics data. A diverse set of computer vision methods exists to analyze 2D and 3D imaging data of plants. These methods are available to the community in various implementations, ranging from end-user-ready cyber-platforms in the cloud such as DIRT and PlantIt to programming frameworks for software developers such as PlantCV. Many research groups are focused on developing systems using the Breeding API, a standardized RESTful web service API specification for communicating plant breeding data. The Australian Plant Phenomics Facility (APPF), an initiative of the Australian government, has developed a number of new instruments for comprehensive and fast measurements of phenotypes in both the lab and the field. Research coordination and communities The International Plant Phenotyping Network (IPPN) is an organization that seeks to enable exchange of knowledge, information, and expertise across many disciplines involved in plant phenomics by providing a network linking members, platform operators, users, research groups, developers, and policy makers. Regional partners include the European Plant Phenotyping Network (EPPN), the North American Plant Phenotyping Network (NAPPN), and others. The European research infrastructure for plant phenotyping, EMPHASIS, enables researchers to use facilities, services and resources for multi-scale plant phenotyping across Europe. EMPHASIS aims to promote future food security and agricultural business in a changing climate by enabling scientists to better understand plant performance and translate this knowledge into application.
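Much of field and controlled-environment phenomics reduces to extracting quantitative traits from sensor data, most commonly images. The sketch below is a generic illustration of that kind of measurement and does not correspond to the API of PlantCV, the Field Scanner, or any other platform named above; it estimates projected shoot area from a segmented top-down image:

```python
import numpy as np

def projected_area_cm2(mask: np.ndarray, cm_per_pixel: float) -> float:
    """Estimate projected shoot area from a boolean plant/background mask.

    mask         -- 2-D boolean array, True where a pixel belongs to the plant
    cm_per_pixel -- side length of one pixel in centimetres (from calibration)
    """
    plant_pixels = int(np.count_nonzero(mask))
    return plant_pixels * (cm_per_pixel ** 2)

# Toy example: a 4x4 image in which 6 pixels are classified as plant,
# imaged at a resolution of 0.05 cm per pixel -> 6 * 0.0025 = 0.015 cm^2.
toy_mask = np.array([[0, 1, 1, 0],
                     [0, 1, 1, 0],
                     [0, 1, 1, 0],
                     [0, 0, 0, 0]], dtype=bool)
print(projected_area_cm2(toy_mask, cm_per_pixel=0.05))
```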
See also PhenomicDB, a database combining phenotypic and genetic data from several species Phenotype microarray Human Phenotype Ontology, a formal ontology of human phenotypes References Further reading Branches of biology Omics
https://en.wikipedia.org/wiki/CountrySTAT
CountrySTAT is a Web-based information technology system for food and agriculture statistics at the national and subnational levels. It provides decision-makers access to statistics across thematic areas such as production, prices, trade and consumption. This supports analysis, informed policy-making and monitoring with the goal of eradicating extreme poverty and hunger. Since 2005, the Statistics Division of the United Nations Food and Agriculture Organization (FAO) has introduced CountrySTAT in over 20 countries in Latin America, sub-Saharan Africa and Asia. Overview The CountrySTAT web system is a browser-oriented statistical framework for organising, harmonising and synchronising data collections. CountrySTAT aims to facilitate data use by policy makers and researchers. It provides statistical standards, data exchange tools and related methods without relying on external data sources such as databases. The data source is a text file in a specific format, called a px-file. The application supports many languages. The layout can be easily changed to match the needs of users. Features The CountrySTAT web system is easy to install and operate on a standard Windows XP Professional machine. It is programmed in ASP with Visual Basic, using Internet Information Services and suitable Windows software for graphical and statistical output in intranet and internet environments. Criticisms The use of VB scripts, customised DLLs and additional Windows software (the PC-Axis family) makes CountrySTAT a platform-dependent system that runs only under Internet Information Services on a Windows server. Deploying it on the internet therefore requires a dedicated Windows server. See also FAO CountrySTAT technical documentation External links FAO Programme Committee (87th Session): Modernization of FAOSTAT – An update. Rome, 6-10 May 2002. Website of FAO CountrySTAT Web site FAOSTAT Web site FAO Statistics Division Web site National CountrySTAT Web sites CountrySTAT Philippines CountrySTAT Bhutan CountrySTAT Mali CountrySTAT Niger CountrySTAT Togo RegionSTAT UEMOA CountrySTAT Angola CountrySTAT Benin CountrySTAT Burkina Faso CountrySTAT Ivory Coast CountrySTAT Cameroon CountrySTAT Ghana CountrySTAT Kenya CountrySTAT Senegal CountrySTAT Uganda CountrySTAT United Republic of Tanzania Agricultural databases Organizations established in 1945 Food and Agriculture Organization Statistical data sets
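The px-file mentioned above is a plain-text keyword/value format from the PC-Axis family. The reader below is a heavily simplified sketch: the sample content and the assumption that each metadata entry fits a single KEY="value"; pattern are illustrative only, since real PX files use a much richer set of keywords and multi-line values:

```python
def parse_simple_px(text: str) -> dict:
    """Toy illustration of reading a PX-style 'KEY="value";' file.

    Everything after the DATA= keyword is treated as a flat list of numbers.
    Real PC-Axis files are considerably richer than this sketch assumes.
    """
    meta = {}
    header, _, body = text.partition("DATA=")
    for entry in header.split(";"):
        if "=" in entry:
            key, _, value = entry.partition("=")
            meta[key.strip()] = value.strip().strip('"')
    data = [float(tok) for tok in body.replace(";", " ").split()]
    return {"meta": meta, "data": data}

sample = 'TITLE="Cereal production";\nUNITS="tonnes";\nDATA=\n120 340 560;\n'
print(parse_simple_px(sample))
```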
https://en.wikipedia.org/wiki/Microecosystem
Microecosystems can exist in locations which are precisely defined by critical environmental factors within small or tiny spaces. Such factors may include temperature, pH, chemical milieu, nutrient supply, presence of symbionts or solid substrates, gaseous atmosphere (aerobic or anaerobic) etc. Some examples Pond microecosystems These microecosystems with limited water volume are often only of temporary duration and hence colonized by organisms which possess a drought-resistant spore stage in the lifecycle, or by organisms which do not need to live in water continuously. The ecosystem conditions applying at a typical pond edge can be quite different from those further from shore. Extremely space-limited water ecosystems can be found in, for example, the water collected in bromeliad leaf bases and the "pitchers" of Nepenthes. Animal gut microecosystems These include the buccal region (especially cavities in the gingiva), rumen, caecum etc. of mammalian herbivores or even invertebrate digestive tracts. In the case of mammalian gastrointestinal microecology, microorganisms such as protozoa, bacteria, as well as curious incompletely defined organisms (such as certain large structurally complex Selenomonads, Quinella ovalis "Quin's Oval", Magnoovum eadii "Eadie's Oval", Oscillospira etc.) can exist in the rumen as incredibly complex, highly enriched mixed populations, (see Moir and Masson images ). This type of microecosystem can adjust rapidly to changes in the nutrition or health of the host animal (usually a ruminant such as cow, sheep, goat etc.); see Hungate's "The Rumen and its microbes 1966). Even within a small closed system such as the rumen there may exist a range of ecological conditions: Many organisms live freely in the rumen fluid whereas others require the substrate and metabolic products supplied by the stomach wall tissue with its folds and interstices. Interesting questions are also posed concerning the transfer of the strict anaerobe organisms in the gut microflora/microfauna to the next host generation. Here, mutual licking and coprophagia certainly play important roles. Soil microecosystems A typical soil microecosystem may be restricted to less than a millimeter in its total depth range owing to steep variation in humidity and/or atmospheric gas composition. The soil grain size and physical and chemical properties of the substrate may also play important roles. Because of the predominant solid phase in these systems they are notoriously difficult to study microscopically without simultaneously disrupting the fine spatial distribution of their components. Terrestrial hot-spring microecosystems These are defined by gradients of water temperature, nutrients, dissolved gases, salt concentrations etc. Along the path of terrestrial water flow the resulting temperature gradient continuum alone may provide many different minute microecosystems, starting with thermophilic bacteria such as Archaea "Archaebacteria" ( or more), followed by conventional thermophiles (), cyanobacteria (blue-green algae) such as the motile filaments of Oscillatoria (), protozoa such as Amoeba, rotifers, then green algae () etc. Of course other factors than temperature also play important roles. Hot springs can provide classic and straightforward ecosystems for microecology studies as well as providing a haven for hitherto undescribed organisms. 
Deep-sea microecosystems The best known contain rare specialized organisms, found only in the immediate vicinity (sometimes within centimeters) of underwater volcanic vents (or "smokers"). These ecosystems require extremely advanced diving and collection techniques for their scientific exploration. Closed microecosystem One that is sealed and completely independent of outside factors, except for temperature and light. A good example would be a plant contained in a sealed jar and submerged under water. No new factors would be able to enter this ecosystem. References Ecosystems Environmental science Ecology
https://en.wikipedia.org/wiki/Abox
In computer science, the terms TBox and ABox are used to describe two different types of statements in knowledge bases. TBox statements are the "terminology component", and describe a domain of interest by defining classes and properties as a domain vocabulary. ABox statements are the "assertion component" — facts associated with the TBox's conceptual model or ontologies. Together ABox and TBox statements make up a knowledge base or a knowledge graph. ABox statements must be TBox-compliant: they are assertions that use the vocabulary defined by the TBox. TBox statements are sometimes associated with object-oriented classes and ABox statements associated with instances of those classes. Examples of ABox and TBox statements ABox statements typically deal with concrete entities. They specify what category an entity belongs to, or what relation one entity has to another entity. Item A is-an-instance-of Category C Item A has-this-relation-to Item B Examples: Niger is-a country. Chad is-a country. Niger is-next-to Chad. Agadez is-a city. Agadez is-located-in Niger. TBox statements typically express definitions of domain categories and implied relations, such as: An entity X can be a country or a city. So "Dagamanet is-a neighbourhood" is not a fact that can be asserted in this vocabulary, even though it is a fact in real life. A is-next-to B if B is-next-to A. So Niger is-next-to Chad implies Chad is-next-to Niger. X is a place if X is-a city or X is-a country. So Niger is-a country implies Niger is-a place. place A contains place B if place B is-located-in A. So Agadez is-located-in Niger implies Niger contains Agadez. TBox statements tend to be more permanent within a knowledge base and are used and stored as a schema or a data model. In contrast, ABox statements are much more dynamic in nature and tend to be stored as instance data within transactional systems within databases. With the newer NoSQL databases, and especially with RDF databases (see Triplestore), the storage distinction may no longer apply. Data and models can be stored using the same approach. However, models continue to be more permanent, have a different lifecycle and are typically stored as separate graphs within such databases. See also Description logic#Modeling Metadata Web Ontology Language References Ontology (information science)
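The split between terminological and assertional statements can be made concrete with a small sketch. The vocabulary below mirrors the Niger/Chad example; the data structures and the tiny inference step are illustrative only and do not correspond to any particular reasoner or RDF library:

```python
# TBox: vocabulary-level knowledge (class hierarchy and properties of relations).
tbox = {
    "classes": {"country", "city", "place"},
    "subclass_of": {"country": "place", "city": "place"},
    "symmetric_relations": {"is-next-to"},
}

# ABox: concrete facts stated in terms of the TBox vocabulary.
abox = {
    "instance_of": {"Niger": "country", "Chad": "country", "Agadez": "city"},
    "relations": {("Niger", "is-next-to", "Chad"),
                  ("Agadez", "is-located-in", "Niger")},
}

def infer(tbox, abox):
    """Derive implied ABox facts from the TBox: symmetry and subclass membership."""
    derived = set(abox["relations"])
    for s, p, o in abox["relations"]:
        if p in tbox["symmetric_relations"]:
            derived.add((o, p, s))                        # Chad is-next-to Niger
    memberships = set(abox["instance_of"].items())
    for x, c in abox["instance_of"].items():
        if c in tbox["subclass_of"]:
            memberships.add((x, tbox["subclass_of"][c]))  # Niger is-a place
    return derived, memberships

print(infer(tbox, abox))
```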
https://en.wikipedia.org/wiki/Confluency
In cell culture biology, confluence refers to the percentage of the surface of a culture dish that is covered by adherent cells. For example, 50 percent confluence means roughly half of the surface is covered, while 100 percent confluence means the surface is completely covered by the cells, and no more room is left for the cells to grow as a monolayer. The cell number is simply the number of cells in a given region. Impact on research Many cell lines exhibit differences in growth rate or gene expression depending on the degree of confluence. Cells are typically passaged before becoming fully confluent in order to maintain their proliferation phenotype. Some cell types are not limited by contact inhibition, such as immortalized cells, and may continue to divide and form layers on top of the parent cells. To achieve optimal and consistent results, experiments are usually performed using cells at a particular confluence, depending on the cell type. Extracellular export of cell-free material is also dependent on the cell confluence. Estimation Rule of thumb Comparing the area covered by cells with the unoccupied area by eye can provide a rough estimate of confluency. Hemocytometer A hemocytometer can be used to count cells, giving the cell number. References Cell culture
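Image analysis gives a more reproducible estimate than eyeballing the dish: segment an image of the culture surface into cell-covered versus empty pixels and report the covered fraction. A minimal sketch follows; the thresholding step is deliberately naive, and real pipelines use proper segmentation:

```python
import numpy as np

def estimate_confluence(image: np.ndarray, threshold: float) -> float:
    """Return percent confluence from a 2-D grayscale image.

    Pixels above `threshold` are counted as cell-covered (whether cells appear
    bright or dark depends on the imaging modality and staining).
    """
    covered = np.count_nonzero(image > threshold)
    return 100.0 * covered / image.size

# Toy example: 3 of 9 pixels exceed the threshold -> ~33% confluent.
toy = np.array([[0.9, 0.1, 0.2],
                [0.8, 0.1, 0.1],
                [0.7, 0.2, 0.1]])
print(round(estimate_confluence(toy, threshold=0.5), 1))  # 33.3
```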
https://en.wikipedia.org/wiki/UMLet
UMLet is an open-source Java-based UML tool designed for teaching the Unified Modeling Language and for quickly creating UML diagrams. It is a drawing tool rather than a modelling tool as there is no underlying dictionary or directory of reusable design objects. UMLet is distributed under the GNU General Public License. UMLet has a simple user interface that uses text-formatting codes to modify the basic shapes with decorations and annotations, so there is no forest of icons or parameter list dialogs in the user's way. This does require the user to learn yet another text markup language, but the effort is small and the markup obvious to the experienced UML designer. UMLet can export diagrams to pictures (eps, jpg), drawing formats (SVG), document formats (PDF). The clipboard can be used to copy-paste diagrams as pictures into other applications. It is possible to create custom UML elements. The basic drawing objects can be modified and used as templates which allows users to customize the app to their needs. This requires programming of the elements in Java. The most important UML diagram types are supported: class, use case, sequence, state, deployment, activity. Support for UML 2.0 features is not yet available, though the customization feature could be used to do this. It supports concepts like Martin Fowler's UmlAsSketch. Its design goals are described in the paper "Flyweight UML Modelling Tool for Software Development". Another paper compares UMLet to Rational Rose. The app's native file format is UXF, an extension of XML intended for exchanging UML models. UMLet runs stand-alone or as Eclipse plug-in on Windows, OS X and Linux. Releases version 15.0: Web: zoom, lasso, export, dark mode; hi-res export; startup version 14.3: Improved OS integration, improved Eclipse integration, XML security fix, many additional fixes version 14.1.1: New custom elements, new sequence all-in-one, bug fixes version 13.3: opaque elements, bug fixes version 13.2: improved relations version 13.1: bug fixes version 13.0: internal refactoring, context-sensitive-help version 11.3: modified security manager behaviour, new options, batch mode improved, new relation types version 11.2: word wrap for custom elements, improved anti-aliasing, better Eclipse support version 11.1: stability fixes version 11.0: list of recently opened files, drag and drop of uxf-files, updated file format version 10.4: palette drag and drop, enhanced clipboard and improved keyboard support version 10.3: updates to the user interface Limitations No direct support for templates (parameterised classes) nor design patterns, though both can be shown with workarounds No code generation - this is a design choice to keep the drawing tool fast and light. See also List of UML tools UXF UML eXchange Format for exchanging UML designs as files. References External links UMLet website UMLet on Eclipse Marketplace Free UML tools
https://en.wikipedia.org/wiki/Thioacetamide
Thioacetamide is an organosulfur compound with the formula C2H5NS. This white crystalline solid is soluble in water and serves as a source of sulfide ions in the synthesis of organic and inorganic compounds. It is a prototypical thioamide. Research Thioacetamide is known to induce acute or chronic liver disease (fibrosis and cirrhosis) in the experimental animal model. Its administration in rat induces hepatic encephalopathy, metabolic acidosis, increased levels of transaminases, abnormal coagulopathy, and centrilobular necrosis, which are the main features of the clinical chronic liver disease so thioacetamide can precisely replicate the initiation and progression of human liver disease in an experimental animal model. Coordination chemistry Thioacetamide is widely used in classical qualitative inorganic analysis as an in situ source for sulfide ions. Thus, treatment of aqueous solutions of many metal cations to a solution of thioacetamide affords the corresponding metal sulfide: M2+ + CH3C(S)NH2 + H2O → MS + CH3C(O)NH2 + 2 H+ (M = Ni, Pb, Cd, Hg) Related precipitations occur for sources of soft trivalent cations (As3+, Sb3+, Bi3+) and monovalent cations (Ag+, Cu+). Preparation Thioacetamide is prepared by treating acetamide with phosphorus pentasulfide as shown in the following idealized reaction: CH3C(O)NH2 + 1/4 P4S10 → CH3C(S)NH2 + 1/4 P4S6O4 Structure The C2NH2S portion of the molecule is planar; the C-S, C-N, and C-C distances are 1.68, 1.31, and 1.50 Å, respectively. The short C-S and C-N distances indicate multiple bonding. Safety Thioacetamide is carcinogen class 2B. It is known to produce marked hepatotoxicity in exposed animals. Toxicity values are 301 mg/kg in rats (LD50, oral administration), 300 mg/kg in mice (LD50, intraperitoneal administration). This is evidenced by enzymatic changes, which include elevation in the levels of serum alanine transaminase, aspartate transaminase and aspartic acid. References IARC Group 2B carcinogens Thioamides Hepatotoxins
https://en.wikipedia.org/wiki/Ukash
Ukash was a UK-based electronic money system that allowed users to exchange their cash for a secure code to make payments online. It was acquired by Skrill Group in April 2014 and merged into Austrian competitor paysafecard, acquired by Skrill a year earlier. All existing vouchers expired after 31 October 2015. Remaining ones could be exchanged into paysafecard PINs, in May 2016 paysafecard announced completion of the process. The system allowed users to exchange their cash for a secure code. The code was then used to make payments online, to load cards or e-wallets or for money transfer. Codes were distributed around the world by participating retail locations, kiosks and ATMs. History The service was founded in 2005. In 2013, the company supported the launch of AvoidOnlineScams.net, which offers information about how to avoid online scams and ransomware. In June 2014 Ukash launched the Ukash Travel Money Prepaid MasterCard, a reloadable prepaid MasterCard for euros and U.S. dollars that could be used anywhere that accepted MasterCard. In April 2015 Ukash became part of Skrill Group. As a result, the Ukash online cash voucher scheme was replaced with Skrill Group's paysafecard scheme on 31 October 2015. Ukash distribution stopped on 31 August 2015 and any existing vouchers could be spent until 31 October 2015. Process Ukash users were given a unique 19-digit code representing their prepaid money; this was entered when making a transfer, payment or purchase online. If the purchase was less than the value of the code a new 19-digit code could be provided by merchants able to issue ukash, just like change in an offline cash transaction. Online scams The "bearer" of Ukash could spend it online anywhere it was accepted. Some scammers were reported to have been exploiting the Ukash system for black market use by extorting codes from victims. Fraudsters promised cheap loans or other services in exchange for a fee. Some offered items for sale on sites like Gumtree but these items did not exist. Others would infect a computer with Ransomware and demand the payment using methods including Ukash. In 2012, the company issued advice to consumers on staying safe with Ukash. It said "The best way for consumers to avoid becoming victims of fraud is to guard Ukash codes like cash. Each Ukash code is unique and like cash, must be kept safe and therefore never emailed or given to anyone else over the telephone." Ukash was designed solely for making payments online and at participating merchants. Most online scams reported obtained Ukash by asking the victim to email the code or give it out over the telephone. See also E-commerce Online banking Prepayment for service Vouchers References Electronic funds transfer
https://en.wikipedia.org/wiki/BCDMH
1-Bromo-3-chloro-5,5-dimethylhydantoin (BCDMH or bromochlorodimethylhydantoin) is a chemical structurally related to hydantoin. It is a white crystalline compound with a slight bromine and acetone odor; it is insoluble in water, but soluble in acetone. BCDMH is an effective source of both chlorine and bromine, as it reacts slowly with water, releasing hypochlorous acid and hypobromous acid. It is used as a chemical disinfectant for recreational water sanitation and drinking water purification. BCDMH works in the following manner: The initial BCDMH reacts with water (R = dimethylhydantoin): BrClR + 2 H2O → HOBr + HOCl + RH2 Hypobromous acid partially dissociates in water: HOBr → H+ + OBr− Hypobromous acid oxidizes the substrate, itself being reduced to bromide: HOBr + Live pathogens → Br− + Dead pathogens The bromide ions are oxidized by the hypochlorous acid that was formed from the initial BCDMH: Br− + HOCl → HOBr + Cl− This produces more hypobromous acid; the hypochlorous acid itself acts directly as a disinfectant in the process. Preparation This compound is prepared by first brominating, then chlorinating, 5,5-dimethylhydantoin. References External links PubChem Public Chemical Database (nih.gov) External MSDS Disinfectants Organobromides Organochlorides Hydantoins
https://en.wikipedia.org/wiki/Collineation
In projective geometry, a collineation is a one-to-one and onto map (a bijection) from one projective space to another, or from a projective space to itself, such that the images of collinear points are themselves collinear. A collineation is thus an isomorphism between projective spaces, or an automorphism from a projective space to itself. Some authors restrict the definition of collineation to the case where it is an automorphism. The set of all collineations of a space to itself forms a group, called the collineation group. Definition Simply, a collineation is a one-to-one map from one projective space to another, or from a projective space to itself, such that the images of collinear points are themselves collinear. One may formalize this using various ways of presenting a projective space. Also, the case of the projective line is special, and hence generally treated differently. Linear algebra For a projective space defined in terms of linear algebra (as the projectivization of a vector space), a collineation is a map between the projective spaces that is order-preserving with respect to inclusion of subspaces. Formally, let V be a vector space over a field K and W a vector space over a field L. Consider the projective spaces PG(V) and PG(W), consisting of the vector lines of V and W. Call D(V) and D(W) the set of subspaces of V and W respectively. A collineation from PG(V) to PG(W) is a map α : D(V) → D(W), such that: α is a bijection. A ⊆ B ⇔ α(A) ⊆ α(B) for all A, B in D(V). Axiomatically Given a projective space defined axiomatically in terms of an incidence structure (a set of points P, lines L, and an incidence relation I specifying which points lie on which lines, satisfying certain axioms), a collineation between projective spaces thus defined is a pair consisting of a bijective function f between the sets of points and a bijective function g between the sets of lines, preserving the incidence relation. Every projective space of dimension greater than or equal to three is isomorphic to the projectivization of a linear space over a division ring, so in these dimensions this definition is no more general than the linear-algebraic one above, but in dimension two there are other projective planes, namely the non-Desarguesian planes, and this definition allows one to define collineations in such projective planes. For dimension one, the set of points lying on a single projective line defines a projective space, and the resulting notion of collineation is just any bijection of the set. Collineations of the projective line For a projective space of dimension one (a projective line; the projectivization of a vector space of dimension two), all points are collinear, so the collineation group is exactly the symmetric group of the points of the projective line. This is different from the behavior in higher dimensions, and thus one gives a more restrictive definition, specified so that the fundamental theorem of projective geometry holds. In this definition, when V has dimension two, a collineation from PG(V) to PG(W) is a map α : D(V) → D(W), such that: The zero subspace of V is mapped to the zero subspace of W. V is mapped to W. There is a nonsingular semilinear map β from V to W such that, for all v in V, α(⟨v⟩) = ⟨β(v)⟩, where ⟨v⟩ denotes the one-dimensional subspace spanned by v. This last requirement ensures that collineations are all semilinear maps. Types The main examples of collineations are projective linear transformations (also known as homographies) and automorphic collineations.
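Before describing these, the basic fact that projective linear transformations preserve collinearity is easy to check numerically: in homogeneous coordinates, three points of the real projective plane are collinear exactly when the 3×3 matrix they form is singular, and this property is unchanged by multiplication with any invertible matrix. The points and matrix below are arbitrary illustrative choices:

```python
import numpy as np

def collinear(p, q, r, tol=1e-9):
    """Points of the projective plane (homogeneous coordinates) are collinear
    iff the determinant of the matrix with rows p, q, r is zero."""
    return abs(np.linalg.det(np.stack([p, q, r]))) < tol

# Three collinear points on the line y = x (affine chart z = 1).
p = np.array([0.0, 0.0, 1.0])
q = np.array([1.0, 1.0, 1.0])
r = np.array([2.0, 2.0, 1.0])

# An arbitrary invertible matrix, inducing a homography of PG(2, R).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
assert abs(np.linalg.det(A)) > 1e-9

print(collinear(p, q, r))              # True
print(collinear(A @ p, A @ q, A @ r))  # still True: the images remain collinear
```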
For projective spaces coming from a linear space, the fundamental theorem of projective geometry states that all collineations are a combination of these, as described below. Projective linear transformations Projective linear transformations (homographies) are collineations (planes in a vector space correspond to lines in the associated projective space, and linear transformations map planes to planes, so projective linear transformations map lines to lines), but in general not all collineations are projective linear transformations. The group of projective linear transformations (PGL) is in general a proper subgroup of the collineation group. Automorphic collineations An automorphic collineation is a map that, in coordinates, is a field automorphism applied to the coordinates. Fundamental theorem of projective geometry If the geometric dimension of a pappian projective space is at least 2, then every collineation is the product of a homography (a projective linear transformation) and an automorphic collineation. More precisely, the collineation group is the projective semilinear group, which is the semidirect product of homographies by automorphic collineations. In particular, the collineations of the real projective plane PG(2, R) are exactly the homographies, as R has no non-trivial automorphisms (that is, Gal(R/Q) is trivial). Suppose φ is a nonsingular semilinear map from V to W, with the dimension of V at least three. Define α : D(V) → D(W) by setting α(Z) = φ(Z), the image of Z under φ, for all Z in D(V). As φ is semilinear, one easily checks that this map is properly defined, and furthermore, as φ is not singular, it is bijective. It is obvious now that α is a collineation. We say that α is induced by φ. The fundamental theorem of projective geometry states the converse: Suppose V is a vector space over a field K with dimension at least three, W is a vector space over a field L, and α is a collineation from PG(V) to PG(W). This implies K and L are isomorphic fields, V and W have the same dimension, and there is a semilinear map φ such that φ induces α. For PG(n, K) with n ≥ 2, the collineation group is the projective semilinear group, PΓL – this is PGL, twisted by field automorphisms; formally, the semidirect product PΓL = PGL ⋊ Gal(K/k), where k is the prime field for K. Linear structure Thus for K a prime field (F_p or Q), we have PΓL = PGL, but for K not a prime field (such as C, or F_(p^n) for n ≥ 2), the projective linear group is in general a proper subgroup of the collineation group, which can be thought of as "transformations preserving a projective semi-linear structure". Correspondingly, the quotient group PΓL/PGL ≅ Gal(K/k) corresponds to "choices of linear structure", with the identity (base point) being the existing linear structure. Given a projective space without an identification as the projectivization of a linear space, there is no natural isomorphism between the collineation group and PΓL, and the choice of a linear structure (realization as projectivization of a linear space) corresponds to a choice of subgroup PGL ≤ PΓL, these choices forming a torsor over Gal(K/k).
In our particular case, linear equations between homogeneous point coordinates, Möbius called a permutation [Verwandtschaft] of both point spaces in particular a collineation. This signification would be changed later by Chasles to homography. Möbius' expression is immediately comprehended when we follow Möbius in calling points collinear when they lie on the same line. Möbius' designation can be expressed by saying, collinear points are mapped by a permutation to collinear points, or in plain speech, straight lines stay straight. Contemporary mathematicians view geometry as an incidence structure with an automorphism group consisting of mappings of the underlying space that preserve incidence. Such a mapping permutes the lines of the incidence structure, and the notion of collineation persists. As mentioned by Blaschke and Klein, Michel Chasles preferred the term homography to collineation. A distinction between the terms arose when the distinction was clarified between the real projective plane and the complex projective line. Since there are no non-trivial field automorphisms of the real number field, all the collineations are homographies in the real projective plane; however, due to the field automorphism of complex conjugation, not all collineations of the complex projective line are homographies. In applications such as computer vision, where the underlying field is the real number field, homography and collineation can be used interchangeably. Anti-homography The operation of taking the complex conjugate in the complex plane amounts to a reflection in the real line. With the notation z∗ for the conjugate of z, an anti-homography is given by f(z) = (az∗ + b) / (cz∗ + d), with ad − bc ≠ 0. Thus an anti-homography is the composition of conjugation with a homography, and so is an example of a collineation which is not a homography. For example, geometrically, the mapping z ↦ 1/z∗ amounts to inversion in the unit circle. The transformations of inversive geometry of the plane are frequently described as the collection of all homographies and anti-homographies of the complex plane. Notes References External links Projective geometry
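The conjugation-based maps above are easy to experiment with numerically. A small sketch (the sample points and coefficients are arbitrary) checks that z ↦ 1/z∗ inverts the modulus while preserving the argument, exactly the behaviour of inversion in the unit circle, and evaluates a general composition of conjugation with a homography:

```python
import cmath

def inversion(z: complex) -> complex:
    """The anti-homography z -> 1 / conj(z), i.e. inversion in the unit circle."""
    return 1 / z.conjugate()

for z in (2 + 0j, 1 + 1j, 0.5j, -3 + 4j):
    w = inversion(z)
    # Inversion in the unit circle: |w| = 1/|z| while the argument is unchanged.
    print(f"z={z}  |z|={abs(z):.3f}  |w|={abs(w):.3f}  "
          f"arg z={cmath.phase(z):+.3f}  arg w={cmath.phase(w):+.3f}")

def anti_homography(z: complex, a=1 + 2j, b=0j, c=1j, d=3 + 0j) -> complex:
    """General anti-homography: a homography applied to the conjugate of z."""
    zc = z.conjugate()
    return (a * zc + b) / (c * zc + d)

print(anti_homography(1 + 1j))
```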
https://en.wikipedia.org/wiki/L-reduction
In computer science, particularly the study of approximation algorithms, an L-reduction ("linear reduction") is a transformation of optimization problems which linearly preserves approximability features; it is one type of approximation-preserving reduction. L-reductions in studies of approximability of optimization problems play a similar role to that of polynomial reductions in the studies of computational complexity of decision problems. The term L reduction is sometimes used to refer to log-space reductions, by analogy with the complexity class L, but this is a different concept. Definition Let A and B be optimization problems and cA and cB their respective cost functions. A pair of functions f and g is an L-reduction if all of the following conditions are met: functions f and g are computable in polynomial time, if x is an instance of problem A, then f(x) is an instance of problem B, if y' is a solution to f(x), then g(y') is a solution to x, there exists a positive constant α such that OPT_B(f(x)) ≤ α · OPT_A(x), there exists a positive constant β such that for every solution y' to f(x), |OPT_A(x) − cA(g(y'))| ≤ β · |OPT_B(f(x)) − cB(y')|. Properties Implication of PTAS reduction An L-reduction from problem A to problem B implies an AP-reduction when A and B are minimization problems and a PTAS reduction when A and B are maximization problems. In both cases, when B has a PTAS and there is an L-reduction from A to B, then A also has a PTAS. This enables the use of L-reduction as a replacement for showing the existence of a PTAS-reduction; Crescenzi has suggested that the more natural formulation of L-reduction is actually more useful in many cases due to ease of usage. Proof (minimization case) Let the approximation ratio of B be 1 + δ, so that any solution y' returned for f(x) satisfies cB(y') ≤ (1 + δ) OPT_B(f(x)). Since A and B are minimization problems, the absolute values in the β condition can be dropped, giving cA(g(y')) − OPT_A(x) ≤ β (cB(y') − OPT_B(f(x))). Substituting the approximation guarantee for B gives cA(g(y')) − OPT_A(x) ≤ βδ OPT_B(f(x)), and applying the α condition yields cA(g(y')) − OPT_A(x) ≤ αβδ OPT_A(x). Thus the approximation ratio of A is at most 1 + αβδ. This meets the conditions for AP-reduction. Proof (maximization case) Let the approximation ratio of B be 1 + δ, so that cB(y') ≥ OPT_B(f(x)) / (1 + δ) and hence OPT_B(f(x)) − cB(y') ≤ δ OPT_B(f(x)). Since A and B are maximization problems, the absolute values in the β condition can be dropped, giving OPT_A(x) − cA(g(y')) ≤ β (OPT_B(f(x)) − cB(y')) ≤ βδ OPT_B(f(x)) ≤ αβδ OPT_A(x). Thus cA(g(y')) ≥ (1 − αβδ) OPT_A(x), so the approximation ratio of A is at most 1/(1 − αβδ). If αβδ < 1, this meets the requirements for PTAS reduction but not AP-reduction. Other properties L-reductions also imply P-reduction. One may deduce that L-reductions imply PTAS reductions from this fact and the fact that P-reductions imply PTAS reductions. L-reductions preserve membership in APX for the minimizing case only, as a result of implying AP-reductions. Examples Dominating set: an example with α = β = 1 Token reconfiguration: an example with α = 1/5, β = 2 See also MAXSNP Approximation-preserving reduction PTAS reduction References G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A. Marchetti-Spaccamela, M. Protasi. Complexity and Approximation: Combinatorial Optimization Problems and Their Approximability Properties. Springer, 1999. Reduction (complexity) Approximation algorithms
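To see how the two constants compose, take a minimization problem A that L-reduces to B with the token-reconfiguration parameters listed above (α = 1/5, β = 2) and suppose B admits a (1 + δ)-approximation. Chaining the two defining inequalities, under the assumption that both problems are minimizations, gives:

```latex
c_B(y') - \mathrm{OPT}_B(f(x)) \le \delta \,\mathrm{OPT}_B(f(x))
                                \le \delta\,\alpha\,\mathrm{OPT}_A(x)
\;\Longrightarrow\;
c_A\bigl(g(y')\bigr) - \mathrm{OPT}_A(x)
    \le \beta\bigl(c_B(y') - \mathrm{OPT}_B(f(x))\bigr)
    \le \alpha\beta\,\delta\,\mathrm{OPT}_A(x).
```

With α = 1/5 and β = 2 this yields a (1 + 2δ/5)-approximation for A; for example, a 1.10-approximation for B translates into a 1.04-approximation for A.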
https://en.wikipedia.org/wiki/Lamellipodium
The lamellipodium (: lamellipodia) (from Latin lamella, related to , "thin sheet", and the Greek radical pod-, "foot") is a cytoskeletal protein actin projection on the leading edge of the cell. It contains a quasi-two-dimensional actin mesh; the whole structure propels the cell across a substrate. Within the lamellipodia are ribs of actin called microspikes, which, when they spread beyond the lamellipodium frontier, are called filopodia. The lamellipodium is born of actin nucleation in the plasma membrane of the cell and is the primary area of actin incorporation or microfilament formation of the cell. Description Lamellipodia are found primarily in all mobile cells, such as the keratinocytes of fish and frogs, which are involved in the quick repair of wounds. The lamellipodia of these keratinocytes allow them to move at speeds of 10–20 μm / min over epithelial surfaces. When separated from the main part of a cell, a lamellipodium can still crawl about freely on its own. Lamellipodia are a characteristic feature at the front, leading edge, of motile cells. They are believed to be the actual motor which pulls the cell forward during the process of cell migration. The tip of the lamellipodium is the site where exocytosis occurs in migrating mammalian cells as part of their clathrin-mediated endocytic cycle. This, together with actin-polymerisation there, helps extend the lamella forward and thus advance the cell's front. It thus acts as a steering device for cells in the process of chemotaxis. It is also the site from which particles or aggregates attached to the cell surface migrate in a process known as cap formation. Structure Structurally, the barbed ends of the microfilaments (localized actin monomers in an ATP-bound form) face the "seeking" edge of the cell, while the pointed ends (localized actin monomers in an ADP-bound form) face the lamella behind. This creates treadmilling throughout the lamellipodium, which aids in the retrograde flow of particles throughout. Arp2/3 complexes are present at microfilament-microfilament junctions in lamellipodia, and help create the actin meshwork. Arp 2/3 can only join onto previously existing microfilaments, but once bound it creates a site for the extension of new microfilaments, which creates branching. Another molecule that is often found in polymerizing actin with Arp2/3 is cortactin, which appears to link tyrosine kinase signalling to cytoskeletal reorganization in the lamellipodium and its associated structures. Rac and Cdc42 are two Rho-family GTPases which are normally cytosolic but can also be found in the cell membrane under certain conditions. When Cdc42 is activated, it can interact with Wiskott–Aldrich syndrome protein (WASp) family receptors, in particular N-WASp, which then activates Arp2/3. This stimulates actin branching and increases cell motility. Rac1 induces cortactin to localize to the cell membrane, where it simultaneously binds F-actin and Arp2/3. The result is a structural reorganization of the lamellipodium and ensuing cell motility. Rac promotes lamellipodia while cdc42 promotes filopodia. Ena/VASP proteins are found at the leading edge of lamellipodia, where they promote actin polymerization necessary for lamellipodial protrusion and chemotaxis. Further, Ena/VASP prevents the action of capping protein, which halts actin polymerization. References External links MBInfo - Lamellipodia MBInfo - Lamellipodia Assembly Video tour of cell motility Cell movement Cytoskeleton Actin-based structures de:Lamellipodium
https://en.wikipedia.org/wiki/Glycyrrhizol
Glycyrrhizol A is a prenylated pterocarpan and an isoflavonoid derivative. It is a compound isolated from the root of the Chinese licorice plant (Glycyrrhiza uralensis). It has shown in vitro antibacterial properties. In one study, the strongest antibacterial activity was observed against Streptococcus mutans, an organism known to cause tooth decay in humans. References Pterocarpans Antibiotics Phenols Methoxy compounds
https://en.wikipedia.org/wiki/GpsOne
gpsOne is the brand name for a cellphone chipset manufactured by Qualcomm for mobile phone tracking. It uses A-GPS or Assisted-GPS to locate the phone more quickly, accurately and reliably than by GPS alone, especially in places with poor GPS reception. Current uses gpsOne is primarily used today for Enhanced-911 E911 service, allowing a cell phone to relay its location to emergency dispatchers, thus overcoming one of the traditional shortcomings of cellular phone technology. Using a combination of GPS satellite signals and the cell sites themselves, gpsOne plots the location with greater accuracy than traditional GPS systems in areas where satellite reception is problematic due to buildings or terrain. Geotagging - addition of location information to the pictures taken with a camera phone. Location-based information delivery, (i.e. local weather and traffic alerts). Verizon Wireless uses gpsOne to support its VZ Navigator automotive navigation system. Verizon disables gpsOne in some phones for other applications as compared to AT&T and T-Mobile. gpsOne in other systems besides Verizon can be used with any third-party applications. Future uses Some vendors are also looking at GPS phone technology as a method of implementing location-based solutions, such as: Employers can track vehicles or employees, allowing quick response from the nearest representative. Restaurants, clubs, theatres and other venues could relay SMS special offers to patrons within a certain range. When using a phone as a 'wallet' and making e-payments, the user's location can be verified as an additional layer of security against cloning. For example, John Doe in AverageTown USA is most likely not purchasing a candy bar from a machine at LAX if he was logged paying for the subway token in NYC, and calling his wife from the Empire State Building. Location-based games. Functions gpsOne can operate in four modes: Standalone - The handset has no connection to the network, and uses only the GPS satellite signals it can currently receive to try to establish a location. Mobile Station Based (MSB) - The handset is connected to the network, and uses the GPS signals and a location signal from the network. Mobile Station Assisted (MSA) - The handset is connected to the network, uses GPS signals and a location signal, then relays its 'fix' to the server. Which then uses the signal strength from the phone to the network towers to further plot the user's position. Users can still maintain voice communication in this scenario, but not 'Internet/Network service', (i.e. Web browser, IM, streaming TV etc.) Mobile Station Hybrid - Same as above, but network functionality remains. Normally only in areas with exceptional coverage. Adoption Since introduction in 2000, the gpsOne chipset has been adopted by 40+ vendors, and is used in more than 250 cellphone models worldwide. More than 300 million gpsOne enabled handsets are currently on the market, making it one of the most widely deployed solutions. External links Product website The gpsOne XTRA MSB assistance data format: Vinnikov & Pshehotskaya (2020): Deciphering of the gpsOne File Format for Assisted GPS Service, Advances in Intelligent Systems and Computing 1184:377-386 Vinnikov, Pshehotskaya and Gritsevich (2021): Partial Decoding of the GPS Extended Prediction Orbit File, 2021 29th Conference of Open Innovations Association Mobile telecommunications Global Positioning System Qualcomm
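The four operating modes listed above differ mainly in where the position fix is computed and whether normal data service stays available during the fix. The sketch below only illustrates that trade-off; the names, fields, and selection logic are hypothetical and are not part of any Qualcomm API.

```python
from dataclasses import dataclass
from enum import Enum


class GpsOneMode(Enum):
    STANDALONE = "standalone"      # GPS satellites only, no network connection
    MS_BASED = "ms_based"          # fix computed on the handset, with network assistance data
    MS_ASSISTED = "ms_assisted"    # fix computed on the network server; data service unavailable
    MS_HYBRID = "ms_hybrid"        # server-assisted fix, but data service is retained


@dataclass
class ModeProfile:
    needs_network: bool
    fix_on_handset: bool
    keeps_data_service: bool


# Rough summary of the behaviour described in the article (illustrative only).
PROFILES = {
    GpsOneMode.STANDALONE: ModeProfile(needs_network=False, fix_on_handset=True, keeps_data_service=False),
    GpsOneMode.MS_BASED: ModeProfile(needs_network=True, fix_on_handset=True, keeps_data_service=True),
    GpsOneMode.MS_ASSISTED: ModeProfile(needs_network=True, fix_on_handset=False, keeps_data_service=False),
    GpsOneMode.MS_HYBRID: ModeProfile(needs_network=True, fix_on_handset=False, keeps_data_service=True),
}


def pick_mode(network_available: bool, need_data_service: bool) -> GpsOneMode:
    """Hypothetical mode selection mirroring the descriptions above."""
    if not network_available:
        return GpsOneMode.STANDALONE
    return GpsOneMode.MS_HYBRID if need_data_service else GpsOneMode.MS_ASSISTED


if __name__ == "__main__":
    print(pick_mode(network_available=True, need_data_service=True))    # GpsOneMode.MS_HYBRID
    print(pick_mode(network_available=False, need_data_service=False))  # GpsOneMode.STANDALONE
```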
https://en.wikipedia.org/wiki/Indiscernibles
In mathematical logic, indiscernibles are objects that cannot be distinguished by any property or relation defined by a formula. Usually only first-order formulas are considered. Examples If a, b, and c are distinct and {a, b, c} is a set of indiscernibles, then, for example, for each binary formula , we must have Historically, the identity of indiscernibles was one of the laws of thought of Gottfried Leibniz. Generalizations In some contexts one considers the more general notion of order-indiscernibles, and the term sequence of indiscernibles often refers implicitly to this weaker notion. In our example of binary formulas, to say that the triple (a, b, c) of distinct elements is a sequence of indiscernibles implies Applications Order-indiscernibles feature prominently in the theory of Ramsey cardinals, Erdős cardinals, and zero sharp. See also Identity of indiscernibles Rough set References Model theory
https://en.wikipedia.org/wiki/Directivity
In electromagnetics, directivity is a parameter of an antenna or optical system which measures the degree to which the radiation emitted is concentrated in a single direction. It is the ratio of the radiation intensity in a given direction from the antenna to the radiation intensity averaged over all directions. Therefore, the directivity of a hypothetical isotropic radiator is 1, or 0 dBi. An antenna's directivity is greater than its gain by an efficiency factor, radiation efficiency. Directivity is an important measure because many antennas and optical systems are designed to radiate electromagnetic waves in a single direction or over a narrow-angle. By the principle of reciprocity, the directivity of an antenna when receiving is equal to its directivity when transmitting. The directivity of an actual antenna can vary from 1.76 dBi for a short dipole to as much as 50 dBi for a large dish antenna. Definition The directivity, , of an antenna is defined for all incident angles of an antenna. The term "directive gain" is deprecated by IEEE. If an angle relative to the antenna is not specified, then directivity is presumed to refer to the axis of maximum radiation intensity. Here and are the zenith angle and azimuth angle respectively in the standard spherical coordinate angles; is the radiation intensity, which is the power per unit solid angle; and is the total radiated power. The quantities and satisfy the relation that is, the total radiated power is the power per unit solid angle integrated over a spherical surface. Since there are 4π steradians on the surface of a sphere, the quantity represents the average power per unit solid angle. In other words, directivity is the radiation intensity of an antenna at a particular coordinate combination divided by what the radiation intensity would have been had the antenna been an isotropic antenna radiating the same amount of total power into space. Directivity, if a direction is not specified, is the maximal directive gain value found among all possible solid angles: In antenna arrays In an antenna array the directivity is a complicated calculation in the general case. For a linear array the directivity will always be less than or equal to the number of elements. For a standard linear array (SLA), where the element spacing is , the directivity is equal to the inverse of the square of the 2-norm of the array weight vector, under the assumption that the weight vector is normalized such that its sum is unity. In the case of a uniformly weighted (un-tapered) SLA, this reduces to simply N, the number of array elements. For a planar array, the computation of directivity is more complicated and requires consideration of the positions of each array element with respect to all the others and with respect to wavelength. For a planar rectangular or hexagonally spaced array with non-isotropic elements, the maximum directivity can be estimated using the universal ratio of effective aperture to directivity, , where dx and dy are the element spacings in the x and y dimensions and is the "illumination efficiency" of the array that accounts for tapering and spacing of the elements in the array. For an un-tapered array with elements at less than spacing, . Note that for an un-tapered standard rectangular array (SRA), where , this reduces to . For an un-tapered standard rectangular array (SRA), where , this reduces to a maximum value of . 
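For reference, the quantities discussed above are conventionally written as follows. The aperture relation at the end is the standard estimate behind the spacing-dependent values mentioned for the un-tapered standard rectangular array just above, and it also reproduces the 29.05 dBi and 35.07 dBi figures quoted in the 16×16 example below.

```latex
% Definition of directivity (U = radiation intensity, P_rad = total radiated power):
\[
D(\theta,\varphi) = \frac{U(\theta,\varphi)}{P_{\mathrm{rad}}/4\pi},
\qquad
P_{\mathrm{rad}} = \oint U(\theta,\varphi)\,d\Omega,
\qquad
D = \max_{\theta,\varphi} D(\theta,\varphi).
\]
% Aperture estimate for a planar array of N elements with spacings d_x, d_y and
% illumination efficiency eps_ill (eps_ill = 1 for an un-tapered array at <= lambda/2 spacing):
\[
D_{\max} \approx \frac{4\pi}{\lambda^2}\,N\,d_x d_y\,\varepsilon_{\mathrm{ill}},
\qquad
d_x = d_y = \tfrac{\lambda}{2} \Rightarrow D_{\max} \approx \pi N,
\qquad
d_x = d_y = \lambda \Rightarrow D_{\max} \approx 4\pi N.
\]
% Worked check for the 16x16 example below (N = 256):
\[
\pi N \approx 804 \;(29.05\ \mathrm{dBi}),
\qquad
4\pi N \approx 3217 \;(35.07\ \mathrm{dBi}).
\]
```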
The directivity of a planar array is the product of the array gain, and the directivity of an element (assuming all of the elements are identical) only in the limit as element spacing becomes much larger than lambda. In the case of a sparse array, where element spacing , is reduced because the array is not uniformly illuminated. There is a physically intuitive reason for this relationship; essentially there are a limited number of photons per unit area to be captured by the individual antennas. Placing two high gain antennas very close to each other (less than a wavelength) does not buy twice the gain, for example. Conversely, if the antenna are more than a wavelength apart, there are photons that fall between the elements and are not collected at all. This is why the physical aperture size must be taken into account. Let's assume a 16×16 un-tapered standard rectangular array (which means that elements are spaced at .) The array gain is dB. If the array were tapered, this value would go down. The directivity, assuming isotropic elements, is 25.9dBi. Now assume elements with 9.0dBi directivity. The directivity is not 33.1dBi, but rather is only 29.2dBi. The reason for this is that the effective aperture of the individual elements limits their directivity. So, . Note, in this case because the array is un-tapered. Why the slight difference from 29.05 dBi? The elements around the edge of the array aren't as limited in their effective aperture as are the majority of elements. Now let's move the array elements to spacing. From the above formula, we expect the directivity to peak at . The actual result is 34.6380 dBi, just shy of the ideal 35.0745 dBi we expected. Why the difference from the ideal? If the spacing in the x and y dimensions is , then the spacing along the diagonals is , thus creating tiny regions in the overall array where photons are missed, leading to . Now go to spacing. The result now should converge to N times the element gain, or + 9 dBi = 33.1 dBi. The actual result is in fact, 33.1 dBi. For antenna arrays, the closed form expression for Directivity for progressively phased array of isotropic sources will be given by, where, is the total number of elements on the aperture; represents the location of elements in Cartesian co-ordinate system; is the complex excitation coefficient of the -element; is the phase component (progressive phasing); is the wavenumber; is the angular location of the far-field target; is the Euclidean distance between the and element on the aperture, and Further studies on directivity expressions for various cases, like if the sources are omnidirectional (even in the array environment) like if the prototype element-pattern takes the form , and not restricting to progressive phasing can be done from. Relation to beam width The beam solid angle, represented as , is defined as the solid angle which all power would flow through if the antenna radiation intensity were constant at its maximal value. If the beam solid angle is known, then maximum directivity can be calculated as which simply calculates the ratio of the beam solid angle to the solid angle of a sphere. The beam solid angle can be approximated for antennas with one narrow major lobe and very negligible minor lobes by simply multiplying the half-power beamwidths (in radians) in two perpendicular planes. The half-power beamwidth is simply the angle in which the radiation intensity is at least half of the peak radiation intensity. 
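In symbols, the beam-solid-angle relation just described takes the usual narrow-beam form, with the half-power beamwidths expressed in radians:

```latex
% Beam solid angle and the narrow-beam approximation
% (Theta_1, Theta_2 = half-power beamwidths in radians, in two perpendicular planes):
\[
D_{\max} = \frac{4\pi}{\Omega_A},
\qquad
\Omega_A \approx \Theta_1 \Theta_2
\;\Rightarrow\;
D_{\max} \approx \frac{4\pi}{\Theta_1 \Theta_2}.
\]
```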
The same calculations can be performed in degrees rather than in radians: where is the half-power beamwidth in one plane (in degrees) and is the half-power beamwidth in a plane at a right angle to the other (in degrees). In planar arrays, a better approximation is For an antenna with a conical (or approximately conical) beam with a half-power beamwidth of degrees, then elementary integral calculus yields an expression for the directivity as . Expression in decibels The directivity is rarely expressed as the unitless number but rather as a decibel comparison to a reference antenna: The reference antenna is usually the theoretical perfect isotropic radiator, which radiates uniformly in all directions and hence has a directivity of 1. The calculation is therefore simplified to Another common reference antenna is the theoretical perfect half-wave dipole, which radiates perpendicular to itself with a directivity of 1.64: Accounting for polarization When polarization is taken under consideration, three additional measures can be calculated: Partial directive gain Partial directive gain is the power density in a particular direction and for a particular component of the polarization, divided by the average power density for all directions and all polarizations. For any pair of orthogonal polarizations (such as left-hand-circular and right-hand-circular), the individual power densities simply add to give the total power density. Thus, if expressed as dimensionless ratios rather than in dB, the total directive gain is equal to the sum of the two partial directive gains. Partial directivity Partial directivity is calculated in the same manner as the partial directive gain, but without consideration of antenna efficiency (i.e. assuming a lossless antenna). It is similarly additive for orthogonal polarizations. Partial gain Partial gain is calculated in the same manner as gain, but considering only a certain polarization. It is similarly additive for orthogonal polarizations. In other areas The term directivity is also used with other systems. With directional couplers, directivity is a measure of the difference in dB of the power output at a coupled port, when power is transmitted in the desired direction, to the power output at the same coupled port when the same amount of power is transmitted in the opposite direction. In acoustics, it is used as a measure of the radiation pattern from a source indicating how much of the total energy from the source is radiating in a particular direction. In electro-acoustics, these patterns commonly include omnidirectional, cardioid and hyper-cardioid microphone polar patterns. A loudspeaker with a high degree of directivity (narrow dispersion pattern) can be said to have a high Q. References Further reading Antennas (radio) Radio electronics
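The degree-based approximations and the decibel conversions referred to in the sections above use the following standard constants (41,253 is 4π steradians expressed in square degrees; 1.64 is the directivity of a half-wave dipole).

```latex
% Narrow-beam approximations with half-power beamwidths theta_1, theta_2 in degrees:
\[
D_{\max} \approx \frac{41\,253}{\theta_1^{\circ}\,\theta_2^{\circ}},
\qquad
D_{\max} \approx \frac{32\,400}{\theta_1^{\circ}\,\theta_2^{\circ}}
\quad \text{(planar arrays)}.
\]
% Decibel forms relative to an isotropic radiator (dBi) and a half-wave dipole (dBd):
\[
D_{\mathrm{dBi}} = 10\log_{10} D,
\qquad
D_{\mathrm{dBd}} = 10\log_{10}\frac{D}{1.64} = D_{\mathrm{dBi}} - 2.15\ \mathrm{dB}.
\]
```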
https://en.wikipedia.org/wiki/Mowlem
Mowlem was one of the largest construction and civil engineering companies in the United Kingdom. Carillion bought the firm in 2006. History The firm was founded by John Mowlem in 1822, and was continued as a partnership by successive generations of the Mowlem and Burt families, including George Burt, and Sir John Mowlem Burt. The company was awarded a Royal Warrant in 1902 and went public on the London Stock Exchange in 1924. During the Second World War the company was one of the contractors engaged in building the Mulberry harbour units. A long-standing national contractor, Mowlem developed a network of regional contracting businesses including Rattee and Kett of Cambridge (bought in 1926); E. Thomas of the west country (bought in 1965) and the formation of a northern region based in Leeds in 1970. The network was further augmented by the acquisition of Ernest Ireland of Bath (bought in 1977), and the acquisition of McTay Engineering of Bromborough, together with its shipbuilding subsidiary McTay Marine (also bought in the late 1970s). In 1971 the company expanded overseas purchasing a 40% shareholding in an Australian contractor, Barclay Brothers, and later taking 100% ownership. The Australian business, re-branded Barclay Mowlem, expanded into all other Australian mainland states, except South Australia, and into Asia. Mowlem acquired SGB Group, a supplier of scaffolding, in 1986. Mowlem also bought Unit Construction in 1986, giving the firm a substantial presence in private housebuilding - within two years, sales were up to an annual rate of 1,200. The ensuing recession led to losses of over £180m between 1991 and 1993 and banking covenants came under pressure. The housing division was sold to Beazer in 1994. Mowlem was bought by Carillion in February 2006. Major projects Major projects undertaken by or involving Mowlem included: Billingsgate Fish Market completed in 1874 Clerkenwell Road completed in 1878 Smithfield Fruit Market completed in 1882 Imperial Institute completed in 1887 Woolwich Ferry terminals opened in 1889 Liverpool Street station and the Great Eastern Hotel completed in 1891 Institution of Civil Engineers building completed in 1911 Admiralty Arch completed in 1912 Port of London Authority Building completed in 1919 Bush House completed in 1923 London Post Office Railway completed in 1927 Piccadilly Circus tube station completed in 1928 Battersea Power Station completed in 1933 Mulberry harbour units completed in 1943 Reconstruction works at Buckingham Palace in 1943 following bomb damage Reconstruction of the House of Commons in 1947 also following bomb damage William Girling Reservoir completed in 1951 Hunterston A nuclear power station completed in 1957 Strand underpass completed in 1962 Millbank Tower completed in 1963 Reconstruction of 10 Downing Street in 1963 Marine terminal for a joint venture of Esso and Pappas Petroleum in Thessaloniki completed in 1965 New altar for Westminster Abbey in 1966 London Bridge completed in 1972 Natwest Tower completed in 1979 Mount Pleasant Airfield completed in 1986 Docklands Light Railway completed in 1987 Manchester Metrolink completed in 1991 Refurbishment of Thames House completed in 1994 Refurbishment of the Albert Memorial completed in 1998 Expansion of James Cook University Hospital completed in 2003 Spinnaker Tower completed in 2005 Twickenham Stadium South Stand completed in 2006 Dublin Port Tunnel completed in 2006 Mowlem was also the owner and developer of London City Airport completed in 1986. 
See also John Mowlem - Biography of the founder of the company George Burt - Biography of his successor as manager of the company Edgar Beck - Biography of the company's chairman and then president between 1961 and 2000 Frank Baines History of John Mowlem - unpublished typescript history held at London Metropolitan Archives References Sources Mowlem 1822–1972 – Mowlem Public Relations brochure, 1972 British companies established in 1822 Construction and civil engineering companies of the United Kingdom Companies formerly listed on the London Stock Exchange 1822 establishments in England Defunct construction and civil engineering companies British companies disestablished in 2006 Construction and civil engineering companies established in 1822 2006 disestablishments in England Construction and civil engineering companies disestablished in 2006
https://en.wikipedia.org/wiki/Real3D
Real3D, Inc. was a maker of arcade graphics boards, a spin-off from Lockheed Martin. The company made several 3D hardware designs that were used by Sega, the most widely used being the graphics hardware in the Sega Model 2 and Model 3 arcade systems. A partnership with Intel and SGI led to the Intel740 graphics card, which was not successful in the market. Rapid changes in the marketplace led to the company being sold to Intel in 1999. History The majority of Real3D was formed by research and engineering divisions originally part of GE Aerospace. Their experience traces back to the Project Apollo Visual Docking Simulator, the first full-color 3D computer-generated image system. GE sold similar systems of increasing complexity through the 1970s, but was never as large as other companies in the simulator space, like Singer Corporation or CAE. When "neutron" Jack Welch took over General Electric in 1981, he demanded that every division in the company be 1st or 2nd in its industry, or face being sold off. GE Aerospace lasted longer than many other divisions, but was eventually sold off to Martin Marietta in 1992. In 1995, Martin Marietta and Lockheed merged to form Lockheed Martin Corporation, the world’s largest defense contractor. Following the merger, Lockheed Martin decided to market their graphics technology for civilian use. In January 1995 they set up Real3D and formed a relationship with Sega. This led to the company's most successful product run, designing the 3D hardware used in over 200,000 Sega Model 2 and Model 3 arcade systems, two of the most popular systems in history. The company also formed a partnership with Intel and Chips and Technologies to introduce similar technology as an add-in card for PCs, a project known as "Auburn". This project became a showcase for the Accelerated Graphics Port system being introduced by Intel, which led to several design decisions that hampered the resulting products. Released in 1998 as the Intel740, the system lasted less than a year in the market before being sold off under the StarFighter and Lightspeed brand names. By 1999 both relationships were ending, and Lockheed Martin was focusing on its military assets. On 1 October 1999 the company closed, and its assets were sold to Intel on the 14th. ATI hired many of the remaining employees for a new Orlando office. 3dfx Interactive had sued Real3D over patents, and Intel's purchase moved the lawsuits to the new owner. Intel settled the issue by selling all of the intellectual property back to 3dfx. By this point, nVidia had acquired all of SGI's graphics development resources, which included a 10% share in Real3D. This led to a series of lawsuits, joined by ATI. The two companies were involved in lawsuits over Real3D's patents until a 2001 cross-licensing settlement. References External links Book: Funding a Revolution Wave-Report.com GameAI American companies established in 1995 American companies disestablished in 1999 Computer companies established in 1995 Computer companies disestablished in 1999 Defunct computer companies of the United States Defunct computer hardware companies Former Lockheed Martin companies Graphics hardware companies Intel acquisitions Intel graphics
https://en.wikipedia.org/wiki/Sassolite
Sassolite is a borate mineral, specifically the mineral form of boric acid. It is usually white to gray, and colourless in transmitted light. It can also take on a yellow colour from sulfur impurities, or brown from iron oxides. History and occurrence Its mineral form was first described in 1800, and was named after Sasso Pisano, Castelnuovo Val di Cecina, Pisa Province, Tuscany, Italy where it was found. The mineral may be found in lagoons throughout Tuscany and Sasso. It is also found in the Lipari Islands and the US state of Nevada. It occurs in volcanic fumaroles and hot springs, deposited from steam, as well as in bedded sedimentary evaporite deposits. See also List of minerals Borax References External links Borate minerals Triclinic minerals Luminescent minerals Minerals in space group 2
https://en.wikipedia.org/wiki/Trypsinization
Trypsinization is the process of cell dissociation using trypsin, a proteolytic enzyme which breaks down proteins, to dissociate adherent cells from the vessel in which they are being cultured. When added to cell culture, trypsin breaks down the proteins that enable the cells to adhere to the vessel. Trypsinization is often used to pass cells to a new vessel. When the trypsinization process is complete the cells will be in suspension and appear rounded. For experimental purposes, cells are often cultivated in containers that take the form of plastic flasks or plates. In such flasks, cells are provided with a growth medium comprising the essential nutrients required for proliferation, and the cells adhere to the container and each other as they grow. This process of cell culture or tissue culture requires a method to dissociate the cells from the container and each other. Trypsin, an enzyme commonly found in the digestive tract, can be used to "digest" the proteins that facilitate adhesion to the container and between cells. Once cells have detached from their container it is necessary to deactivate the trypsin, unless the trypsin is synthetic, as cell surface proteins will also be cleaved over time and this will affect cell functioning. Serum can be used to inactivate trypsin, as it contains protease inhibitors. Because of the presence of these inhibitors, the serum must be removed before treatment of a growth vessel with trypsin and must not be added again to the growth vessel until cells have detached from their growth surface - this detachment can be confirmed by visual observation using a microscope. Trypsinization is often used to permit passage of adherent cells to a new container, observation for experimentation, or reduction of the degree of confluency in a culture flask through the removal of a percentage of the cells. References Cell culture
https://en.wikipedia.org/wiki/C3H6
{{DISPLAYTITLE:C3H6}} The molecular formula C3H6 (molar mass: 42.08 g/mol, exact mass: 42.0470 u) may refer to: Cyclopropane Propylene, also known as propene
https://en.wikipedia.org/wiki/D3O
D3O is an ingredient brand specialising in advanced rate-sensitive impact protection technologies, materials and products. It comprises a portfolio of more than 30 technologies and materials including set foams, formable foams, set elastomers and formable elastomers. D3O is an engineering, design and technology-focused company based in London, UK, with offices in China and the US. D3O is sold in more than 50 countries. It is used in sports and motorcycle gear; protective cases for consumer electronics including phones; industrial workwear; and military protection including helmet pads and limb protectors. History In 1999, the materials scientists Richard Palmer and Philip Green experimented with a dilatant liquid with non-Newtonian properties. Unlike water, it was free flowing when stationary but became instantly rigid upon impact. As keen snowboarders, Palmer and Green drew inspiration from snow and decided to replicate its matrix-like quality to develop a flexible material that incorporated the dilatant fluid. After experimenting with numerous materials and formulas, they invented a flexible, pliable material that locked together and solidified in the event of a collision. When incorporated into clothing, the material moved with the wearer while providing comprehensive protection. Palmer and Green successfully filed a patent application, which they used as the foundation for commercialising their invention and setting up a business in 1999. D3O® was used commercially for the first time by the United States Ski Team and the Canada ski team at the 2006 Olympic Winter Games. D3O® first entered the motorcycle market in 2009 when the ingredient was incorporated into CE-certified armour for the apparel brand Firstgear. Philip Green left D3O in 2006, and in 2009 founder Richard Palmer brought in Stuart Sawyer as interim CEO. Palmer took a sabbatical in 2010 and left the business in 2011, at which point executive leadership was officially handed over to Sawyer, who has remained in the position since. In 2014, D3O received one of the Queen’s Awards for Enterprise and was awarded £237,000 by the Technology Strategy Board – now known as Innovate UK – to develop a shock absorption helmet system prototype for the defence market to reduce the risk of traumatic brain injury. The following year, Sawyer secured £13 million in private equity funding from venture capital investor Beringea, allowing D3O to place more emphasis on product development and international marketing. D3O opened headquarters in London which include full-scale innovation and test laboratories and house its global business functions. With exports to North America making up an increasing part of its business, the company set up a new operating base located within the Virginia Tech Corporate Research Center (VTCRC), a research park for high-technology companies located in Blacksburg, Virginia. The same year, D3O consumer electronics brand partner Gear4 became the UK’s number 1 phone case brand in volume and value. Gear 4 has since become present in consumer electronics retail stores worldwide including Verizon, AT&T and T-Mobile. In 2017, D3O became part of the American National Standards Institute (ANSI)/International Safety Equipment Association (ISEA) committee which developed the first standard in North America to address the risk to hands from impact injuries: ANSI/ISEA 138-2019, American National Standard for Performance and Classification for Impact Resistant Hand Protection. 
D3O was acquired in September 2021 by independent private-equity fund Elysian Capital III LP. The acquisition saw previous owners Beringea US & UK and Entrepreneurs Fund exit the business after six years of year-on-year growth. D3O applications D3O has various applications such as in electronics (low-profile impact protection for phones, laptops and other electronic devices), sports (protective equipment), motorcycle riding gear, defence (helmet liners and body protection; footwear) and industrial workwear (personal protective equipment such as gloves, knee pads and metatarsal guards for boots), In 2020, D3O became the specified helmet suspension pad supplier for the US Armed Forces' Integrated Helmet Protection System (IHPS) Suspension System. Product development D3O uses patented and proprietary technologies to create both standard and custom products. In-house rapid prototyping and testing laboratories ensure each D3O development is tested to CE standards for sports and motorcycle applications, ISEA 138 for industrial applications and criteria set by government agencies for defence applications. Sponsorship D3O sponsors athletes including: Downhill mountain bike rider Tahnée Seagrave Seth Jones, ice hockey defenseman and alternate captain for the Columbus Blue Jackets in the NHL Motorcycle racer Michael Dunlop, 19-times winner of the Isle of Man TT The Troy Lee Designs team of athletes including three-times Red Bull Rampage winner Brandon Semenuk Enduro rider Rémy Absalon, 12-times Megavalanche winner. Awards and recognition D3O has received the following awards and recognition: 2014: Queen’s Award for Enterprise 2016: Inclusion in the Sunday Times Tech Track 100 ‘Ones to Watch’ list 2017: T3 Awards together with Three: Best Mobile Accessory 2018: British Yachting Awards – clothing innovation 2019: ISPO Award – LP2 Pro 2020: Red Dot - Snickers Ergo Craftsmen Kneepads 2022/2023: ISPO Textrends Award - Accessories & Trim 2023: IF Design Award - D3O Ghost Reactiv Body Protection 2023: ISPO Award – D3O® Ghost™ back protector References Materials Non-Newtonian fluids Motorcycle apparel
https://en.wikipedia.org/wiki/12AT7
12AT7 (also known in Europe by the Mullard–Philips tube designation of ECC81) is a miniature 9-pin medium-gain (60) dual-triode vacuum tube popular in guitar amplifiers. It belongs to a large family of dual triode vacuum tubes which share the same pinout (EIA 9A), including in particular the very commonly used low-mu 12AU7 and high-mu 12AX7. The 12AT7 has somewhat lower voltage gain than the 12AX7, but higher transconductance and plate current, which makes it suitable for high frequency applications. Originally the tube was intended for operation in VHF circuits, such as TV sets and FM tuners, as an oscillator/frequency converter, but it also found wide use in audio as a driver and phase-inverter in vacuum tube push–pull amplifier circuits. This tube is essentially two 6AB4/EC92s in a single envelope. Unlike the situation with the 6C4 and 12AU7, both the 6AB4 and the 12AT7 are described by manufacturer's data sheets as R.F. devices operating up to VHF frequencies. The tube has a center-tapped filament so it can be used in either 6.3V 300mA or 12.6V 150mA heater circuits. The 12AT7 has been manufactured in Russia (Electro-Harmonix brand), Slovakia (JJ Electronic), and China. See also 12AU7 12AX7 - includes a comparison of similar twin-triode designs List of vacuum tubes References External links 12AT7 twin triode data sheet from General Electric Reviews of 12AT7 tubes. Vacuum tubes Guitar amplification tubes
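The center-tapped heater mentioned above draws the same power in either wiring arrangement, which is why the two supply options are interchangeable; a quick check:

```latex
% Heater power is identical with the two filament halves in parallel (6.3 V) or in series (12.6 V):
\[
P = V I:\qquad
6.3\ \mathrm{V} \times 0.300\ \mathrm{A} \approx 1.9\ \mathrm{W}
\;=\;
12.6\ \mathrm{V} \times 0.150\ \mathrm{A} \approx 1.9\ \mathrm{W}.
\]
```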
https://en.wikipedia.org/wiki/Electrofusion
Electrofusion is a method of joining MDPE, HDPE and other plastic pipes using special fittings that have built-in electric heating elements which are used to weld the joint together. The pipes to be joined are cleaned, inserted into the electrofusion fitting (with a temporary clamp if required) and a voltage (typically 40V) is applied for a fixed time depending on the fitting in use. The built-in heating coils then melt the inside of the fitting and the outside of the pipe wall, which weld together producing a very strong homogeneous joint. The assembly is then left to cool for a specified time. Electrofusion welding is beneficial because it does not require the operator to use dangerous or sophisticated equipment. After some preparation, the electrofusion welder will guide the operator through the steps to take. Welding heat and time are dependent on the type and size of the fitting. Not all electrofusion fittings are created equal – precise positioning of the energising coils of wire in each fitting ensures uniform melting for a strong joint and the minimisation of welding and cooling time. The operator must be qualified according to the local and national laws. In Australia, an electrofusion course can be completed within 8 hours. Electrofusion welding training focuses on the importance of accurately fusing EF fittings. Learning both manual and automatic methods of calculating electrofusion time gives operators the skills they need in the field. There is much to learn about the importance of preparation, timing, pressure, temperature, cool-down time and handling, etc. Training and certification are very important in this field of welding, as the product can become dangerous under certain circumstances. There have been cases of major harm and death, including when molten polyethylene spurts out of the edge of a misaligned weld, causing skin burns. Another case was due to a tapping saddle being incorrectly installed on a gas line, causing the deaths of two welders in the trench due to gas inhalation. There are many critical parts to electrofusion welding that can cause weld failures, most of which can be greatly reduced by using welding clamps and correct scraping equipment. To keep their qualification current, a trained operator can get their fitting tested, which involves cutting open the fitting and examining the integrity of the weld. References Piping Plumbing
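The sequence described above (clean and scrape, clamp, energise for a fitting-specific time, then leave to cool) can be summarised as a simple checklist. The sketch below is illustrative only: real fusion voltages and times come from the fitting's barcode or data plate, and every name and value here is hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Fitting:
    """Parameters normally read from the fitting's barcode or label (placeholder values)."""
    name: str
    fusion_voltage_v: float   # typically around 40 V
    fusion_time_s: float      # fitting-specific
    cooling_time_s: float     # fitting-specific


def electrofusion_weld(fitting: Fitting, pipe_scraped: bool, clamped: bool) -> list[str]:
    """Return the checklist of steps, refusing to proceed if preparation is incomplete."""
    if not pipe_scraped:
        raise ValueError("Pipe surface must be scraped and cleaned before welding")
    if not clamped:
        raise ValueError("Fit alignment clamps to avoid a misaligned weld")
    return [
        f"Insert pipe ends fully into {fitting.name}",
        f"Apply {fitting.fusion_voltage_v:.0f} V for {fitting.fusion_time_s:.0f} s",
        f"Leave undisturbed to cool for {fitting.cooling_time_s:.0f} s before handling",
    ]


# Example with made-up numbers for a hypothetical 110 mm coupler:
for step in electrofusion_weld(Fitting("110 mm coupler", 40.0, 60.0, 600.0),
                               pipe_scraped=True, clamped=True):
    print(step)
```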
https://en.wikipedia.org/wiki/Fieldnotes
Fieldnotes refer to qualitative notes recorded by scientists or researchers in the course of field research, during or after their observation of a specific organism or phenomenon they are studying. The notes are intended to be read as evidence that gives meaning and aids in the understanding of the phenomenon. Fieldnotes allow researchers to access the subject and record what they observe in an unobtrusive manner. One major disadvantage of taking fieldnotes is that they are recorded by an observer and are thus subject to (a) memory and (b) possibly the conscious or unconscious bias of the observer. It is best to record fieldnotes while making observations in the field or immediately after leaving the site to avoid forgetting important details. Some suggest immediately transcribing one's notes from a smaller pocket-sized notebook to something more legible in the evening or as soon as possible. Errors that occur from transcription often outweigh the errors which stem from illegible writing in the actual "field" notebook. Fieldnotes are particularly valued in descriptive sciences such as ethnography, biology, ecology, geology, and archaeology, each of which has long traditions in this area. Structure The structure of fieldnotes can vary depending on the field. Generally, there are two components of fieldnotes: descriptive information and reflective information. Descriptive information is factual data that is being recorded. Factual data includes time and date, the state of the physical setting, social environment, descriptions of the subjects being studied and their roles in the setting, and the impact that the observer may have had on the environment. Reflective information is the observer's reflections about the observation being conducted. These reflections are ideas, questions, concerns, and other related thoughts. Fieldnotes can also include sketches, diagrams, and other drawings. Visually capturing a phenomenon requires the observer to pay more attention to every detail so as not to overlook anything. An author does not necessarily need to possess great artistic abilities to craft an exceptional note. In many cases, a rudimentary drawing or sketch can greatly assist in later data collection and synthesis. Increasingly, photographs may be included as part of a fieldnote when collected in a digital format. Others may further subdivide the structure of fieldnotes. Nigel Rapport said that fieldnotes in anthropology transition rapidly among three types. Inscription – where the writer records notes, impressions, and potentially important keywords. Transcription – where the author writes down dictated local text. Description – a reflective type of writing that synthesizes previous observations and analysis for a later situation in which a more coherent conclusion can be made of the notes. Value Fieldnotes are extremely valuable for scientists at each step of their training. In an article on fieldnotes, James Van Remsen Jr. discussed the tragic loss of information from birdwatchers in his study area who could have been taking detailed fieldnotes but neglected to do so. This comment points to a larger issue regarding how often one should be taking fieldnotes. In this case, Remsen was upset because of the multitudes of "eyes and ears" that could have supplied potentially important information for his bird surveys but instead remained with the observers. Scientists like Remsen believe such information can easily be wasted if notes are not taken.
Currently, nature phone apps and digital citizen science databases (like eBird) are changing the form and frequency of field data collection and may contribute to de-emphasizing the importance of hand-written notes. Apps may open up new possibilities for citizen science, but taking time to handwrite fieldnotes can help with the synthesis of details that one may not remember as well from data entry in an app. Writing in such a detailed manner may contribute to the personal growth of a scientist. Nigel Rapport, an anthropological field writer, said that fieldnotes are filled with the conventional realities of "two forms of life": local and academic. The lives are different and often contradictory but are often brought together through the efforts of a "field writer". The academic side refers to one's professional involvements, and fieldnotes take a certain official tone. The local side reflects more of the personal aspects of a writer and so the fieldnotes may also relate more to personal entries. In biology and ecology Taking fieldnotes in biology and other natural sciences will differ slightly from those taken in social sciences, as they may be limited to interactions regarding a focal species and/or subject. An example of an ornithological fieldnote was reported by Remsen (1977) regarding a sighting of a Cassin's sparrow, a relatively rare bird for the region where it was found. Grinnell method of note-taking An important teacher of efficient and accurate note-taking is Joseph Grinnell. The Grinnell technique has been regarded by many ornithologists as one of the best standardized methods for taking accurate fieldnotes. The technique has four main parts: A field-worthy notebook where one records direct observations as they are being observed. A larger more substantial journal containing written entries on observations and information, transcribed from the smaller field notebook as soon as possible. Species accounts of the notes taken on specific species. A catalog to record the location and date of collected specimens. In social sciences Grounded theory Methods for analyzing and integrating fieldnotes into qualitative or quantitative research are continuing to develop. Grounded theory is a method for integrating data in qualitative research done primarily by social scientists. This may have implications for fieldnotes in the natural sciences as well. Considerations when recording fieldnotes Decisions about what is recorded and how can have a significant impact on the ultimate findings derived from the research. As such, creating and adhering to a systematic method for recording fieldnotes is an important consideration for a qualitative researcher. American social scientist Robert K. Yin recommended the following considerations as best practices when recording qualitative field notes. Create vivid images: Focus on recording vivid descriptions of actions that take place in the field, instead of recording an interpretation of them. This is particularly important early in the research process. Immediately trying to interpret events can lead to premature conclusions that can prevent later insight when more observation has occurred. Focusing on the actions taking place in the field, instead of trying to describe people or scenes, can be a useful tool to minimize personal stereotyping of the situation. 
The verbatim principle: Similar to the vivid images, the goal is to accurately record what is happening in the field, not a personal paraphrasing (and possible unconscious stereotyping) of those events. Additionally, in social science research that involves studying culture, it is important to faithfully capture language and habits as a first step toward full understanding. Include drawings and sketches: These can quickly and accurately capture important aspects of field activity that are difficult to record in words and can be very helpful for recall when reviewing fieldnotes. Develop one's own transcribing language: While no one technique of transcribing (or "jotting") is perfect, most qualitative researchers develop a systematic approach to their own note-taking. Considering the multiple competing demands on attention (the simultaneous observation, processing, and recording of rich qualitative data in an unfamiliar environment), perfecting a system that can be automatically used and that will be interpretable later allows one to allocate one's full attention to observation. The ability to distinguish notes about events themselves from other notes to oneself is a key feature. Prior to engaging in qualitative research for the first time, practicing a transcribing format beforehand can improve the likelihood of successful observation. Convert fieldnotes to full notes daily: Prior to discussing one's observations with anyone else, one should set aside time each day to convert fieldnotes. At the very least, any unclear abbreviations, illegible words, or unfinished thoughts should be completed that would be uninterpretable later. In addition, the opportunity to collect one's thoughts and reflect on that day's events can lead to recalling additional details, uncovering emerging themes, leading to new understanding, and helping plan for future observations. This is also a good time to add the day's notes to one's total collection in an organized manner. Verify notes during collection: Converting fieldnotes as described above will likely lead the researcher to discover key points and themes that can then be checked while still present in the field. If conflicting themes are emerging, further data collection can be directed in a manner to help resolve the discrepancy. Obtain permission to record: While electronic devices and audiovisual recording can be useful tools in performing field research, there are some common pitfalls to avoid. Ensure that permission is obtained for the use of these devices beforehand and ensure that the devices to be used for recording have been previously tested and can be used inconspicuously. Keep a personal journal in addition to fieldnotes: As the researcher is the main instrument, insight into one's own reactions to and initial interpretations of events can help the researcher identify any undesired personal biases that might have influenced the research. This is useful for reflexivity. See also Geological survey Lab notebook Land patent Public Land Survey System Surveying References Further reading External links An online database of Charles Darwin's field notes Field research Documents
https://en.wikipedia.org/wiki/Avenanthramide
Avenanthramides (anthranilic acid amides, formerly called "avenalumins") are a group of phenolic alkaloids found mainly in oats (Avena sativa), but also present in white cabbage butterfly eggs (Pieris brassicae and P. rapae), and in fungus-infected carnation (Dianthus caryophyllus). A number of studies demonstrate that these natural products have anti-inflammatory, antioxidant, anti-itch, anti-irritant, and antiatherogenic activities. Oat kernel extracts with standardized levels of avenanthramides are used for skin, hair, baby, and sun care products. The name avenanthramides was coined by Collins when he reported the presence of these compounds in oat kernels. It was later found that three avenanthramides were the open-ring amides of avenalumins I, II, and III, which were previously reported as oat phytoalexins by Mayama and co-workers. History Oat has been used for personal care purposes since antiquity. Indeed, wild oats (Avena sativa) was used in skin care in Egypt and the Arabian peninsula 2000 BC. Oat baths were a common treatment of insomnia, anxiety, and skin diseases such as eczema and burns. In Roman times, its use as a medication for dermatological issues was reported by Pliny, Columella, and Theophrastus. In the 19th century, oatmeal baths were often used to treat many cutaneous conditions, especially pruritic inflammatory eruptions. In the 1930s, the literature provided further evidence about the cleansing action of oat along with its ability to relieve itching and protect skin. Colloidal oatmeal In 2003, colloidal oatmeal was officially approved as a skin protectant by the FDA. However, little thought had been given to the active ingredient in oats responsible for the anti-inflammatory effect until more attention was paid to avenanthramides, which were first isolated and characterized in the 1980s by Collins. Since then, many congeners have been characterized and purified, and it is known that avenanthramides have antioxidant, anti-inflammatory, and anti-atherosclerotic properties, and may be used as a treatment for people with inflammatory, allergy, or cardiovascular diseases. In 1999 studies made by Tufts University showed that avenanthramides are bioavailable and remain bioactive in humans after consumption. More recent studies made by the University of Minnesota showed that the antioxidant and anti-inflammatory activities can be increased through the consumption of 0.4 to 9.2 mg/day of avenanthramides over eight weeks. The International Nomenclature of Cosmetic Ingredients (INCI) originally referred to an oat extract with a standardized level of avenanthramides as "Avena sativa kernel extract," but recently they have also accepted the INCI name "avenanthramides" to describe an extract containing 80% of these oat phenolic alkaloids. Function in Avena sativa A. sativa produces avenanthramides as defensive phytoalexins against infiltration by fungal plant pathogens. They were discovered as defensive chemicals especially concentrated in lesions of Puccinia coronata var. avenae f. sp. avenae (and at that time named "avenalumins"). Medical and personal care uses Anti-inflammatory and anti-itch activity Studies made by Sur (2008) provide evidence that avenanthramides significantly reduce the inflammatory response. Inflammation is a complex and self-protection reaction that occurs in the body against foreign substance, cell damage, infections, and pathogens. The inflammatory responses are controlled through a group called cytokines that is produced by the inflammatory cells. 
Furthermore, the expression of cytokines are regulated through inhibition of nuclear transcription factor kappa B (NF-κB). Many studies have demonstrated that avenanthramides can reduce the production of pro-inflammatory cytokines such as IL-6, IL-8, and MCP-1 by inhibiting NF-κB activation that is responsible for activating the genes of inflammatory response. Thus, these oat polyphenols mediate the decrease of inflammation by inhibiting the cytokine release. In addition, it was found that avenanthramides inhibit neurogenic inflammation, which is defined as an inflammation triggered by the nervous system that causes vasodilation, edema, warmth, and hypersensitivity. Also, avenanthramides significantly reduce the itching response, and its efficiency is comparable to the anti-itch effect produced by hydrocortisone. Redness reduction Avenanthramides have effective antihistaminic activity; they significantly reduce itch and redness compared with untreated areas. Suggested mechanism of action According to Sur (2008), the anti-inflammatory effect of the avenanthramides is due to the inhibition of the NF-κB activation in NF-κB dependent cytokine. Nuclear factor-kappa β (NF-κB) is responsible for regulating the transcription of DNA and participates in the activation of genes related to inflammatory and immune responses. Consequently, suppressing the NF-κB limits the proliferation of cancer cells and reduces the level of inflammation. Avenanthramides are able to inhibit the release of inflammatory cytokines that are present in pruritic skin diseases that cause itchiness. In addition, its anti-inflammatory activity may prevent the vicious itch-scratch cycle and reduce the scratching-induced secondary inflammation that often occur in atopic dermatitis and eczema, preventing the skin from disrupting its barrier. Avenanthramides also have a chemical structure similar to the drug tranilast, which has anti-histaminic action. The anti-itch activity of avenanthramides may be associated with the inhibition of histamine response. Taken together, these results show the effect of avenanthramides as powerful anti-inflammatory agents and their importance in dermatologic applications. Antioxidant activity Avenanthramides are known to have potent antioxidant activity, acting primarily by donating a hydrogen atom to a radical. An antioxidant is “any substance that, when present at low concentrations compared to those of an oxidisable substrate, significantly delays or prevents oxidation of that substrate” ( Halliwell, 1990). These phytochemicals are able to combat the oxidative stress present in the body that is responsible for causing cancer and cardiovascular disease. Among the avenanthramides, there are different antioxidant capacities, where C has the highest capacity, followed by B and A. Dietary supplement Avenanthramides extracted from oats show potent antioxidant properties in vitro and in vivo, and according to studies made by Dimberg (1992), its antioxidant activity is many times greater than other antioxidants such as caffeic acid and vanillin. Aven-C is one of the most significant avenanthramides present in the oats, and it is responsible for oats' antioxidant activity. The effects of the avenanthramide-enriched extract of oats has been investigated in animals, and a diet of 20 mg avenanthramide per kilogram body weight in rats has been shown to increase the superoxide dismutase (SOD) activity in skeletal muscle, liver, and kidneys. 
Also, a diet based on avenanthramides enhances glutathione peroxidase activity in heart and skeletal muscles, protecting the organism from oxidative damages. Nomenclature Avenanthramides consist of conjugates of one of three phenylpropanoids (p-coumaric, ferulic, or caffeic acid) and anthranilic acid (or a hydroxylated and/or methoxylated derivative of anthranilic acid) Collins and Dimberg have used different systems of nomenclature to describe the Avenanthramides in their publications. Collins assigned a system that classifies avenanthramides using alphabetic descriptors, while Dimberg assigned upper case letters to the anthranilate derivate and lower case to the accompanying phenylpropanoid, such as “c” for caffeic acid, “f” for ferulic acid, or “p” for anthranilic acid p-coumaric acid. Later, Dimberg's system was modified to use a numeric descriptor for the anthranilic acid. The following avenanthramides are most abundant in oats: avenanthramide A (also called 2p, AF-1 or Bp), avenanthramide B (also called 2f, AF-2 or Bf), avenanthramide C (also called 2c, AF-6 or Bc), avenanthramide O (also called 2pd), avenanthramide P (also called 2fd), and avenanthramide Q (also called 2 cd). Biosynthesis There is evidence that the amount of avenanthramides found in the grains is related to genotype, environment, crop year and location, and tissue (Matsukawa et al., 2000). The environmental factors are not clearly known, but it is believed that lower levels of avenanthramides are produced in oats when they are grown in a dry environment, which disfavors crown rust, a kind of fungus that has been shown to stimulate avenanthramides production in oats grains. Chemical stability pH Avenanthramides are not all sensitive to pH and temperature. This was well illustrated in a study conducted on avenanthramides A, B and C. In this study it was found that avenanthramide A (2p) concentration was essentially unchanged in sodium phosphate buffer after three hours at either room temperature or at 95 °C. Avenanthramides B (2f) appeared to be more sensitive to the higher temperature at pH 7 and 12. Avenanthramides C (2c) underwent chemical reorganization at pH 12 at both temperatures and diminished by more than 85% at 95 °C, even at pH 7 (Dimberg et al., 2001). UV Avenanthramides are also affected by ultra-violet (UV) light. Dimberg found that the three avenanthramides tested (A, B, and C) remained in the trans conformation after 18 hours of exposure to UV light at 254 nm. On the other hand, Collins reported that the avenanthramides isomerize upon exposure to daylight or UV light. Synthetic avenanthramides Avenanthramides can be artificially synthesized. Avenanthramides A, B, D, and E were synthesized by Collins (1989), using chromatography methods, and adapting Bain and Smalley's procedure (1968). All four synthetic substances were identical to the ones extracted from oats. References Antibiotics Antipruritics Phytoalexins Oats
https://en.wikipedia.org/wiki/Meiocyte
A meiocyte is a type of cell that differentiates into a gamete through the process of meiosis. Through meiosis, the diploid meiocyte divides into four genetically different haploid gametes. The control of the meiocyte through the meiotic cell cycle varies between different groups of organisms. Yeast The process of meiosis has been extensively studied in model organisms such as yeast. Because of this, the way in which the meiocyte is controlled through the meiotic cell cycle is best understood in this group of organisms. A yeast meiocyte undergoing meiosis must pass through a number of checkpoints in order to complete the cell cycle. If a meiocyte divides and the division results in a mutant cell, the mutant cell will undergo apoptosis and therefore will not complete the cycle. In natural populations of the yeast Saccharomyces cerevisiae, diploid meiocytes produce haploid cells that then mainly undergo either clonal reproduction or selfing (intratetrad mating) to form progeny diploid meiocytes. When the ancestry of natural S. cerevisiae strains was analyzed, it was determined that formation of diploid meiocytes by outcrossing (as opposed to inbreeding or selfing) occurs only about once every 50,000 cell divisions. These findings suggest that the principal adaptive function of meiocytes may not be the production of genetic diversity, which arises only infrequently through outcrossing, but rather the recombinational repair of DNA damage, which can occur in meiocytes at each mating cycle. Animal The animal meiotic cell cycle is very much like that of yeast. Checkpoints within the animal meiotic cell cycle stop mutant meiocytes from progressing further within the cycle. As with yeast meiocytes, if an animal meiocyte differentiates into a mutant cell, the cell will undergo apoptosis. Plant The meiotic cell cycle in plants is very different from that of yeast and animal cells. In plant studies, mutations have been identified that affect meiocyte formation or the process of meiosis. Most meiotic mutant plant cells complete the meiotic cell cycle and produce abnormal microspores. It appears that plant meiocytes do not pass through any checkpoints within the meiotic cell cycle and can thus proceed through the cycle regardless of any defect. By studying the abnormal microspores, the progression of the plant meiocyte through the meiotic cell cycle can be investigated further. Mammalian infertility Research on meiosis in mammals plays a crucial role in understanding human infertility, but it has been held back by the difficulty of observing the process directly. In order to study mammalian meiosis, a culture technique that allows the process to be observed live under a microscope would need to be developed. By viewing live mammalian meiosis, one could observe the behavior of mutant meiocytes that may compromise fertility in the organism. However, because of the small size and limited number of meiocytes, collecting samples of these cells has been difficult and remains an area of active research. References Cell cycle
https://en.wikipedia.org/wiki/Krypton-85
Krypton-85 (85Kr) is a radioisotope of krypton. Krypton-85 has a half-life of 10.756 years and a maximum decay energy of 687 keV. It decays into stable rubidium-85. Its most common decay (99.57%) is by beta particle emission with maximum energy of 687 keV and an average energy of 251 keV. The second most common decay (0.43%) is by beta particle emission (maximum energy of 173 keV) followed by gamma ray emission (energy of 514 keV). Other decay modes have very small probabilities and emit less energetic gamma rays. Krypton-85 is mostly synthetic, though it is produced naturally in trace quantities by cosmic ray spallation. In terms of radiotoxicity, 440 Bq of 85Kr is equivalent to 1 Bq of radon-222, without considering the rest of the radon decay chain. Presence in Earth atmosphere Natural production Krypton-85 is produced in small quantities by the interaction of cosmic rays with stable krypton-84 in the atmosphere. Natural sources maintain an equilibrium inventory of about 0.09 PBq in the atmosphere. Anthropogenic production As of 2009 the total amount in the atmosphere is estimated at 5500 PBq due to anthropogenic sources. At the end of the year 2000, it was estimated to be 4800 PBq, and in 1973, an estimated 1961 PBq (53 megacuries). The most important of these human sources is nuclear fuel reprocessing, as krypton-85 is one of the seven common medium-lived fission products. Nuclear fission produces about three atoms of krypton-85 for every 1000 fissions (i.e., it has a fission yield of 0.3%). Most or all of this krypton-85 is retained in the spent nuclear fuel rods; spent fuel on discharge from a reactor contains between 0.13–1.8 PBq/Mg of krypton-85. Some of this spent fuel is reprocessed. Current nuclear reprocessing releases the gaseous 85Kr into the atmosphere when the spent fuel is dissolved. It would be possible in principle to capture and store this krypton gas as nuclear waste or for use. The cumulative global amount of krypton-85 released from reprocessing activity has been estimated as 10,600 PBq as of 2000. The global inventory noted above is smaller than this amount due to radioactive decay; a smaller fraction is dissolved into the deep oceans. Other man-made sources are small contributors to the total. Atmospheric nuclear weapons tests released an estimated 111–185 PBq. The 1979 accident at the Three Mile Island nuclear power plant released about . The Chernobyl accident released about 35 PBq, and the Fukushima Daiichi accident released an estimated 44–84 PBq. The average atmospheric concentration of krypton-85 was approximately 0.6 Bq/m3 in 1976, and has increased to approximately 1.3 Bq/m3 as of 2005. These are approximate global average values; concentrations are higher locally around nuclear reprocessing facilities, and are generally higher in the northern hemisphere than in the southern hemisphere. For wide-area atmospheric monitoring, krypton-85 is the best indicator for clandestine plutonium separations. Krypton-85 releases increase the electrical conductivity of atmospheric air. Meteorological effects are expected to be stronger closer to the source of the emissions. Uses in industry Krypton-85 is used in arc discharge lamps commonly used in the entertainment industry for large HMI film lights as well as high-intensity discharge lamps. The presence of krypton-85 in discharge tube of the lamps can make the lamps easy to ignite. 
Early experimental krypton-85 lighting developments included a railroad signal light designed in 1957 and an illuminated highway sign erected in Arizona in 1969. A 60 μCi (2.22 MBq) capsule of krypton-85 was used by the random number server HotBits (an allusion to the radioactive element being a quantum mechanical source of entropy), but was replaced with a 5 μCi (185 kBq) Cs-137 source in 1998. Krypton-85 is also used to inspect aircraft components for small defects. Krypton-85 is allowed to penetrate small cracks, and then its presence is detected by autoradiography. The method is called "krypton gas penetrant imaging". The gas penetrates smaller openings than the liquids used in dye penetrant inspection and fluorescent penetrant inspection. Krypton-85 was used in cold-cathode voltage regulator electron tubes, such as the type 5651. Krypton-85 is also used for Industrial Process Control mainly for thickness and density measurements as an alternative to Sr-90 or Cs-137. Krypton-85 is also used as a charge neutralizer in aerosol sampling systems. References Fission products Krypton-085
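As a rough, illustrative check on the half-life and inventory figures quoted in this article (a sketch only; the single input is the 10.756-year half-life, and the example year span is arbitrary), the decay arithmetic can be written out explicitly:

```latex
% Exponential decay with half-life T_{1/2} = 10.756 years:
\[
  N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/T_{1/2}}, \qquad T_{1/2} = 10.756\ \text{y}
\]
% Illustrative example: of krypton-85 released 50 years ago, the fraction
% still present today is (1/2)^{50/10.756} \approx 0.04, i.e. about 4%.
% Decay of this kind (together with some dissolution into the deep oceans)
% is why the atmospheric inventory (~5,500 PBq in 2009) is well below the
% cumulative historical release (~10,600 PBq by 2000).
```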
https://en.wikipedia.org/wiki/Autostereoscopy
Autostereoscopy is any method of displaying stereoscopic images (adding binocular perception of 3D depth) without the use of special headgear, glasses, something that affects vision, or anything for eyes on the part of the viewer. Because headgear is not required, it is also called "glasses-free 3D" or "glassesless 3D". There are two broad approaches currently used to accommodate motion parallax and wider viewing angles: eye-tracking, and multiple views so that the display does not need to sense where the viewer's eyes are located. Examples of autostereoscopic displays technology include lenticular lens, parallax barrier, and may include Integral imaging, but notably do not include volumetric display or holographic displays. Technology Many organizations have developed autostereoscopic 3D displays, ranging from experimental displays in university departments to commercial products, and using a range of different technologies. The method of creating autostereoscopic flat panel video displays using lenses was mainly developed in 1985 by Reinhard Boerner at the Heinrich Hertz Institute (HHI) in Berlin. Prototypes of single-viewer displays were already being presented in the 1990s, by Sega AM3 (Floating Image System) and the HHI. Nowadays, this technology has been developed further mainly by European and Japanese companies. One of the best-known 3D displays developed by HHI was the Free2C, a display with very high resolution and very good comfort achieved by an eye tracking system and a seamless mechanical adjustment of the lenses. Eye tracking has been used in a variety of systems in order to limit the number of displayed views to just two, or to enlarge the stereoscopic sweet spot. However, as this limits the display to a single viewer, it is not favored for consumer products. Currently, most flat-panel displays employ lenticular lenses or parallax barriers that redirect imagery to several viewing regions; however, this manipulation requires reduced image resolutions. When the viewer's head is in a certain position, a different image is seen with each eye, giving a convincing illusion of 3D. Such displays can have multiple viewing zones, thereby allowing multiple users to view the image at the same time, though they may also exhibit dead zones where only a non-stereoscopic or pseudoscopic image can be seen, if at all. Parallax barrier A parallax barrier is a device placed in front of an image source, such as a liquid crystal display, to allow it to show a stereoscopic image or multiscopic image without the need for the viewer to wear 3D glasses. The principle of the parallax barrier was independently invented by Auguste Berthier, who published first but produced no practical results, and by Frederic E. Ives, who made and exhibited the first known functional autostereoscopic image in 1901. About two years later, Ives began selling specimen images as novelties, the first known commercial use. In the early 2000s, Sharp developed the electronic flat-panel application of this old technology to commercialization, briefly selling two laptops with the world's only 3D LCD screens. These displays are no longer available from Sharp but are still being manufactured and further developed from other companies. Similarly, Hitachi has released the first 3D mobile phone for the Japanese market under distribution by KDDI. In 2009, Fujifilm released the FinePix Real 3D W1 digital camera, which features a built-in autostereoscopic LCD measuring diagonal. 
The Nintendo 3DS video game console family uses a parallax barrier for 3D imagery; on a newer revision, the New Nintendo 3DS, this is combined with an eye tracking system. Integral photography and lenticular arrays The principle of integral photography, which uses a two-dimensional (X–Y) array of many small lenses to capture a 3-D scene, was introduced by Gabriel Lippmann in 1908. Integral photography is capable of creating window-like autostereoscopic displays that reproduce objects and scenes life-size, with full parallax and perspective shift and even the depth cue of accommodation, but the full realization of this potential requires a very large number of very small high-quality optical systems and very high bandwidth. Only relatively crude photographic and video implementations have yet been produced. One-dimensional arrays of cylindrical lenses were patented by Walter Hess in 1912. By replacing the line and space pairs in a simple parallax barrier with tiny cylindrical lenses, Hess avoided the light loss that dimmed images viewed by transmitted light and that made prints on paper unacceptably dark. An additional benefit is that the position of the observer is less restricted, as the substitution of lenses is geometrically equivalent to narrowing the spaces in a line-and-space barrier. Philips solved a significant problem with electronic displays in the mid-1990s by slanting the cylindrical lenses with respect to the underlying pixel grid. Based on this idea, Philips produced its WOWvx line until 2009, running up to 2160p (a resolution of 3840×2160 pixels) with 46 viewing angles. Lenny Lipton's company, StereoGraphics, produced displays based on the same idea, citing a much earlier patent for the slanted lenticulars. Magnetic3d and Zero Creative have also been involved. Compressive light field displays With rapid advances in optical fabrication, digital processing power, and computational models for human perception, a new generation of display technology is emerging: compressive light field displays. These architectures explore the co-design of optical elements and compressive computation while taking particular characteristics of the human visual system into account. Compressive display designs include dual and multilayer devices that are driven by algorithms such as computed tomography and Non-negative matrix factorization and non-negative tensor factorization. Autostereoscopic content creation and conversion Tools for the instant conversion of existing 3D movies to autostereoscopic were demonstrated by Dolby, Stereolabs and Viva3D. Other Dimension Technologies released a range of commercially available 2D/3D switchable LCDs in 2002 using a combination of parallax barriers and lenticular lenses. SeeReal Technologies has developed a holographic display based on eye tracking. CubicVue exhibited a color filter pattern autostereoscopic display at the Consumer Electronics Association's i-Stage competition in 2009. There are a variety of other autostereo systems as well, such as volumetric display, in which the reconstructed light field occupies a true volume of space, and integral imaging, which uses a fly's-eye lens array. The term automultiscopic display has recently been introduced as a shorter synonym for the lengthy "multi-view autostereoscopic 3D display", as well as for the earlier, more specific "parallax panoramagram". 
The latter term originally indicated a continuous sampling along a horizontal line of viewpoints, e.g., image capture using a very large lens or a moving camera and a shifting barrier screen, but it later came to include synthesis from a relatively large number of discrete views. Sunny Ocean Studios, located in Singapore, has been credited with developing an automultiscopic screen that can display autostereo 3D images from 64 different reference points. A fundamentally new approach to autostereoscopy called HR3D has been developed by researchers from MIT's Media Lab. It would consume half as much power, doubling the battery life if used with devices like the Nintendo 3DS, without compromising screen brightness or resolution; other advantages include a larger viewing angle and maintaining the 3D effect when the screen is rotated. Movement parallax: single view vs. multi-view systems Movement parallax refers to the fact that the view of a scene changes with movement of the head. Thus, different images of the scene are seen as the head is moved from left to right, and from up to down. Many autostereoscopic displays are single-view displays and are thus not capable of reproducing the sense of movement parallax, except for a single viewer in systems capable of eye tracking. Some autostereoscopic displays, however, are multi-view displays, and are thus capable of providing the perception of left–right movement parallax. Eight and sixteen views are typical for such displays. While it is theoretically possible to simulate the perception of up–down movement parallax, no current display systems are known to do so, and the up–down effect is widely seen as less important than left–right movement parallax. One consequence of not including parallax about both axes becomes more evident as objects increasingly distant from the plane of the display are presented: as the viewer moves closer to or farther away from the display, such objects will more obviously exhibit the effects of perspective shift about one axis but not the other, appearing variously stretched or squashed to a viewer not positioned at the optimal distance from the display. Vergence-accommodation conflict Autostereoscopic displays display stereoscopic content without matching focal depth, thereby exhibiting vergence-accommodation conflict. References External links Tridelity Viva3D VisuMotion Explanation of 3D Autostereoscopic Monitors Overview of different Autostereoscopic LCD displays Rendering for an Interactive 360° Light Field Display, a demonstration of Autostereoscopy using a spinning mirror, a holographic diffuser, and a high speed video projector demonstrated at SIGGRAPH 2007 Behind-the-scenes video about production for autostereoscopic displays 3D Without Glasses - The Future of 3D Technology? Diffraction Influence on the Field of View and Resolution of Three-Dimensional Integral Imaging Stereoscopy 3D imaging Display technology Photographic techniques
https://en.wikipedia.org/wiki/Refractometer
A refractometer is a laboratory or field device for the measurement of an index of refraction (refractometry). The index of refraction is calculated from the observed refraction angle using Snell's law. For mixtures, the index of refraction then allows to determine the concentration using mixing rules such as the Gladstone–Dale relation and Lorentz–Lorenz equation. Refractometry Standard refractometers measure the extent of light refraction (as part of a refractive index) of transparent substances in either a liquid or solid-state; this is then used in order to identify a liquid sample, analyze the sample's purity, and determine the amount or concentration of dissolved substances within the sample. As light passes through the liquid from the air it will slow down and create a ‘bending’ illusion, the severity of the ‘bend’ will depend on the amount of substance dissolved in the liquid. For example, the amount of sugar in a glass of water. Types There are four main types of refractometers: traditional handheld refractometers, digital handheld refractometers, laboratory or Abbe refractometers (named for the instrument's inventor and based on Ernst Abbe's original design of the 'critical angle') and inline process refractometers. There is also the Rayleigh Refractometer used (typically) for measuring the refractive indices of gases. In laboratory medicine, a refractometer is used to measure the total plasma protein in a blood sample and urine specific gravity in a urine sample. In drug diagnostics, a refractometer is used to measure the specific gravity of human urine. In gemology, the gemstone refractometer is one of the fundamental pieces of equipment used in a gemological laboratory. Gemstones are transparent minerals and can therefore be examined using optical methods. Refractive index is a material constant, dependent on the chemical composition of a substance. The refractometer is used to help identify gem materials by measuring their refractive index, one of the principal properties used in determining the type of a gemstone. Due to the dependence of the refractive index on the wavelength of the light used (i.e. dispersion), the measurement is normally taken at the wavelength of the sodium line D-line (NaD) of ~589 nm. This is either filtered out from daylight or generated with a monochromatic light-emitting diode (LED). Certain stones such as rubies, sapphires, tourmalines and topaz are optically anisotropic. They demonstrate birefringence based on the polarisation plane of the light. The two different refractive indexes are classified using a polarisation filter. Gemstone refractometers are available both as classic optical instruments and as electronic measurement devices with a digital display. In marine aquarium keeping, a refractometer is used to measure the salinity and specific gravity of the water. In the automobile industry, a refractometer is used to measure the coolant concentration. In the machine industry, a refractometer is used to measure the amount of coolant concentrate that has been added to the water-based coolant for the machining process. In homebrewing, a brewing refractometer is used to measure the specific gravity before fermentation to determine the amount of fermentable sugars which will potentially be converted to alcohol. Brix refractometers are often used by hobbyists for making preserves including jams, marmalades and honey. In beekeeping, a brix refractometer is used to measure the amount of water in honey. 
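As a short illustration of the principle stated at the top of this article (a sketch with illustrative numbers, not values for any particular instrument), Snell's law and the critical-angle condition exploited by Abbe-type and automatic refractometers can be written as:

```latex
% Snell's law at the prism-sample interface:
\[
  n_{\text{sample}} \sin\theta_{\text{sample}} = n_{\text{prism}} \sin\theta_{\text{prism}}
\]
% Total internal reflection begins at the critical angle \theta_c, so
% measuring \theta_c gives the sample's refractive index directly:
\[
  \sin\theta_c = \frac{n_{\text{sample}}}{n_{\text{prism}}}
  \quad\Longrightarrow\quad
  n_{\text{sample}} = n_{\text{prism}} \sin\theta_c
\]
% Illustrative numbers: with a measuring prism of n_prism = 1.72 and an
% observed critical angle of 50.6 degrees,
% n_sample = 1.72 * sin(50.6 deg) ~ 1.33, close to the refractive index of
% water at the sodium D-line.
```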
Automatic Automatic refractometers automatically measure the refractive index of a sample. The automatic measurement of the refractive index of the sample is based on the determination of the critical angle of total reflection. A light source, usually a long-life LED, is focused onto a prism surface via a lens system. An interference filter guarantees the specified wavelength. Due to focusing light to a spot at the prism surface, a wide range of different angles is covered. As shown in the figure "Schematic setup of an automatic refractometer" the measured sample is in direct contact with the measuring prism. Depending on its refractive index, the incoming light below the critical angle of total reflection is partly transmitted into the sample, whereas for higher angles of incidence the light is totally reflected. This dependence of the reflected light intensity from the incident angle is measured with a high-resolution sensor array. From the video signal taken with the CCD sensor the refractive index of the sample can be calculated. This method of detecting the angle of total reflection is independent on the sample properties. It is even possible to measure the refractive index of optically dense strongly absorbing samples or samples containing air bubbles or solid particles . Furthermore, only a few microliters are required and the sample can be recovered. This determination of the refraction angle is independent of vibrations and other environmental disturbances. Influence of wavelength The refractive index of a given sample varies with wavelength for all materials. This dispersion relation is nonlinear and is characteristic for every material. In the visible range, a decrease of the refractive index comes with increasing wavelength. In glass prisms very little absorption is observable. In the infrared wavelength range several absorption maxima and fluctuations in the refractive index appear. To guarantee a high quality measurement with an accuracy of up to 0.00002 in the refractive index the wavelength has to be determined correctly. Therefore, in modern refractometers the wavelength is tuned to a bandwidth of +/-0.2 nm to ensure correct results for samples with different dispersions. Influence of temperature Temperature has a very important influence on the refractive index measurement. Therefore, the temperature of the prism and the temperature of the sample have to be controlled with high precision. There are several subtly-different designs for controlling the temperature; but there are some key factors common to all, such as high-precision temperature sensors and Peltier devices to control the temperature of the sample and the prism. The temperature control of these devices should be designed so that the variation in sample temperature is small enough that it will not cause a detectable refractive-index change. External water baths were used in the past but are no longer needed. Extended possibilities of automatic refractometers Automatic refractometers are microprocessor-controlled electronic devices. This means they can have a high degree of automation and also be combined with other measuring devices Flow cells There are different types of sample cells available, ranging from a flow cell for a few microliters to sample cells with a filling funnel for fast sample exchange without cleaning the measuring prism in between. The sample cells can also be used for the measurement of poisonous and toxic samples with minimum exposure to the sample. 
Micro cells require only a few microliters volume, assure good recovery of expensive samples and prevent evaporation of volatile samples or solvents. They can also be used in automated systems for automatic filling of the sample onto the refractometer prism. For convenient filling of the sample through a funnel, flow cells with a filling funnel are available. These are used for fast sample exchange in quality control applications. Automatic sample feeding Once an automatic refractometer is equipped with a flow cell, the sample can either be filled by means of a syringe or by using a peristaltic pump. Modern refractometers have the option of a built-in peristaltic pump. This is controlled via the instrument's software menu. A peristaltic pump opens the way to monitor batch processes in the laboratory or perform multiple measurements on one sample without any user interaction. This eliminates human error and assures a high sample throughput. If an automated measurement of a large number of samples is required, modern automatic refractometers can be combined with an automatic sample changer. The sample changer is controlled by the refractometer and assures fully automated measurements of the samples placed in the vials of the sample changer for measurements. Multiparameter measurements Today's laboratories do not only want to measure the refractive index of samples, but several additional parameters like density or viscosity to perform efficient quality control. Due to the microprocessor control and a number of interfaces, automatic refractometers are able to communicate with computers or other measuring devices, e.g. density meters, pH meters or viscosity meters, to store refractive index data and density data (and other parameters) into one database. Software features Automatic refractometers do not only measure the refractive index, but offer a lot of additional software features, like Instrument settings and configuration via software menu Automatic data recording into a database User-configurable data output Export of measuring data into Microsoft Excel data sheets Statistical functions Predefined methods for different kinds of applications Automatic checks and adjustments Check if sufficient amount of sample is on the prism Data recording only if the results are plausible Pharma documentation and validation Refractometers are often used in pharmaceutical applications for quality control of raw intermediate and final products. The manufacturers of pharmaceuticals have to follow several international regulations like FDA 21 CFR Part 11, GMP, Gamp 5, USP<1058>, which require a lot of documentation work. The manufacturers of automatic refractometers support these users providing instrument software fulfills the requirements of 21 CFR Part 11, with user levels, electronic signature and audit trail. Furthermore, Pharma Validation and Qualification Packages are available containing Qualification Plan (QP) Design Qualification (DQ) Risk Analysis Installation Qualification (IQ) Operational Qualification (OQ) Check List 21 CFR Part 11 / SOP Performance Qualification (PQ) Scales typically used Brix Oechsle scale Plato scale Baumé scale See also Ernst Abbe Refractive index Gemology Must weight Winemaking Harvest (wine) Gravity (beer) High-fructose corn syrup Cutting fluid German inventors and discoverers High refractive index polymers References Further reading External links Refractometer – Gemstone Buzz uses, procedure & limitations. 
Rayleigh Refractometer: Operational Principles Refractometers and refractometry explains how refractometers work. Measuring instruments Scales Beekeeping tools Food analysis
https://en.wikipedia.org/wiki/Deceleron
The deceleron, or split aileron, was developed in the late 1940s by Northrop, originally for use on the F-89 Scorpion fighter. It is a two-part aileron that can be deflected as a unit to provide roll control, or split open to act as an air brake. Decelerons are used on the Fairchild Republic A-10 Thunderbolt II and the Northrop Grumman B-2 Spirit flying wing. When deployed differentially they impart a yaw moment, potentially obviating the rudder and vertical stabilizer control surfaces, although this requires active flight control. See also Spoileron References XF-89 Research Report External links Aircraft controls
https://en.wikipedia.org/wiki/Dipsogen
A dipsogen is an agent that causes thirst. (From Greek: δίψα (dipsa), "thirst" and the suffix -gen, "to create".) Physiology Angiotensin II is thought to be a powerful dipsogen, and is one of the products of the renin–angiotensin pathway, a biological homeostatic mechanism for the regulation of electrolytes and water. External links 'Fluid Physiology' by Kerry Brandis (from http://www.anaesthesiamcq.com) Physiology
https://en.wikipedia.org/wiki/ObjectARX
ObjectARX (AutoCAD Runtime eXtension) is an API for customizing and extending AutoCAD. The ObjectARX SDK is published by Autodesk and freely available under license from Autodesk. The ObjectARX SDK consists primarily of C++ headers and libraries that can be used to build Windows DLLs that can be loaded into the AutoCAD process and interact directly with the AutoCAD application. ObjectARX modules use the file extensions .arx and .dbx instead of the more common .dll. ObjectARX is the most powerful of the various AutoCAD APIs, and the most difficult to master. The typical audience for the ObjectARX SDK includes professional programmers working either as commercial application developers or as in-house developers at companies using AutoCAD. New versions of the ObjectARX SDK are released with each new AutoCAD release, and ObjectARX modules built with a specific SDK version are typically limited to running inside the corresponding version of AutoCAD. Recent versions of the ObjectARX SDK include support for the .NET platform by providing managed wrapper classes for native objects and functions. The native classes and libraries that are made available via the ObjectARX API are also used internally by the AutoCAD code. As a result of this tight linkage with AutoCAD itself, the libraries are very compiler specific, and work only with the same compiler that Autodesk uses to build AutoCAD. Historically, this has required ObjectARX developers to use various versions of Microsoft Visual Studio, with different versions of the SDK requiring different versions of Visual Studio. Although ObjectARX is specific to AutoCAD, Open Design Alliance announced in 2008 a new API called DRX (included in their DWGdirect library) that attempts to emulate the ObjectARX API in products like IntelliCAD that use the DWGdirect libraries. References See also Autodesk Developer Network Autodesk AutoCAD Application programming interfaces
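To make the entry-point model described above concrete, the following is a minimal sketch of what an ObjectARX module can look like in C++. It assumes the standard acrxEntryPoint/command-registration pattern documented by Autodesk; the command and group names are purely illustrative, and header names and string macros differ between SDK releases, so treat this as an outline rather than a drop-in source file.

```cpp
// Minimal ObjectARX module sketch (illustrative names; SDK headers vary by release).
#include "aced.h"       // acedRegCmds and command registration
#include "rxregsvc.h"   // acrxDynamicLinker
#include "acutads.h"    // acutPrintf
#include <tchar.h>      // _T() macro

// Handler for the hypothetical HELLOARX command typed at the AutoCAD prompt.
static void helloArxCommand()
{
    acutPrintf(_T("\nHello from an ObjectARX module."));
}

// Every .arx module exports this entry point; AutoCAD calls it with
// load/unload notification messages.
extern "C" AcRx::AppRetCode acrxEntryPoint(AcRx::AppMsgCode msg, void* appId)
{
    switch (msg) {
    case AcRx::kInitAppMsg:
        // Let AutoCAD unload the module on demand and mark it MDI-aware.
        acrxDynamicLinker->unlockApplication(appId);
        acrxDynamicLinker->registerAppMDIAware(appId);
        // Register the example command in an illustrative command group.
        acedRegCmds->addCommand(_T("HELLO_GROUP"),
                                _T("HELLOARX"), _T("HELLOARX"),
                                ACRX_CMD_MODAL, helloArxCommand);
        break;
    case AcRx::kUnloadAppMsg:
        // Clean up: remove the command group when the module is unloaded.
        acedRegCmds->removeGroup(_T("HELLO_GROUP"));
        break;
    default:
        break;
    }
    return AcRx::kRetOK;
}
```

A module of this kind is built as a Windows DLL, renamed with the .arx extension, and loaded into a matching AutoCAD release (for example with the APPLOAD command).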
https://en.wikipedia.org/wiki/Furoxan
Furoxan or 1,2,5-oxadiazole 2-oxide is a heterocycle of the isoxazole family and an amine oxide derivative of furazan. It is a nitric oxide donor. As such, furoxan and its derivatives are actively researched as potential new drugs (such as ipramidil) and insensitive high-density explosives (such as 4,4’-dinitro-3,3’-diazenofuroxan). Furoxans can be formed by dimerization of nitrile oxides. References Amine oxides Oxadiazoles
https://en.wikipedia.org/wiki/Sister
A sister is a woman or a girl who shares parents or a parent with another individual; a female sibling. The male counterpart is a brother. Although the term typically refers to a familial relationship, it is sometimes used endearingly to refer to non-familial relationships. A full sister is a first-degree relative. Overview The English word sister comes from Old Norse which itself derives from Proto-Germanic *swestēr, both of which have the same meaning, i.e. sister. Some studies have found that sisters display more traits indicating jealousy around their siblings than their male counterparts, brothers. In some cultures, sisters are afforded a role of being under the protection by male siblings, especially older brothers, from issues ranging from bullies or sexual advances by womanizers. In some quarters, the term sister has gradually broadened its colloquial meaning to include individuals stipulating kinship. In response, in order to avoid equivocation, some publishers prefer the usage of female sibling over sister. Males with a twin sister sometimes view her as their female alter ego, or what they would have been like if they had two X chromosomes. A study in Perth, Australia found that girls having only youngers brothers resulted in a chastity effect: losing their virginity on average more than a year later than average. This has been hypothesized as being attributed to the pheromones in their brothers' sweat and household-related errands. Sororal relationships Various studies have shown that older sisters are likely to give a varied gender role to their younger siblings, as well as being more likely to develop a close bond with their younger siblings. Older sisters are more likely to play with their younger siblings. Younger siblings display a more needy behavior when in close proximity to their older sister and are more likely to be tolerant of an older sister's bad behavior. Boys with only an older sister are more likely to display stereotypically male behavior, and such masculine boys increased their masculine behavior with the more sisters they have. The reverse is true for young boys with several sisters, as they tend to be feminine, however, they outgrow this by the time they approach pubescence. Boys with older sisters were less likely to be delinquent or have emotional and behavioral disorders. A younger sister is less likely to be scolded by older siblings than a younger brother. The most common recreational activity between older brother/younger sister pairs is art drawing. Some studies also found a correlation between having an older sister and constructive discussions about safe sexual practices. Some studies have shown that men without sisters are more likely to be ineffectual at courtship and romantic relationships. Fictional works about sisters Films What Ever Happened to Baby Jane? (1962) Hannah and Her Sisters (1986) Hanging Up (2000) Frozen (2013) Little Women (2019) Literature Little Women by Louisa May Alcott Laura Lee Hope's Bobbsey Twins novels, which included two sets of fraternal twins: 12-year-old Nan and Bert, and six-year-old Flossie and Freddie In Her Shoes (2002), by Jennifer Weiner Television Hope & Faith, American sitcom Sisters What I Like About You Sister, Sister Games Jessica & Zofia Blazkowicz, Wolfenstein: Youngblood Mileena & Kitana, Mortal Kombat Kat and Ana, WarioWare See also Brother Sisterhood (disambiguation) Religious sister References External links Kinship and descent Terms for women
https://en.wikipedia.org/wiki/Brother
A brother (: brothers or brethren) is a man or boy who shares one or more parents with another; a male sibling. The female counterpart is a sister. Although the term typically refers to a familial relationship, it is sometimes used endearingly to refer to non-familial relationships. A full brother is a first degree relative. Overview The term brother comes from the Proto-Indo-European *bʰréh₂tēr, which becomes Latin frater, of the same meaning. Sibling warmth or affection between male siblings has been correlated to some more negative effects. In pairs of brothers, higher sibling warmth is related to more risk taking behaviour, although risk taking behaviour is not related to sibling warmth in any other type of sibling pair. The cause of this phenomenon in which sibling warmth is only correlated with risk taking behaviours in brother pairs still is unclear. This finding does, however, suggest that although sibling conflict is a risk factor for risk taking behaviour, sibling warmth does not serve as a protective factor. Some studies suggest that girls having an older brother delays the onset of menarche by roughly one year. Research also suggests that the likelihood of being gay increases with the more older brothers a man has. Some analyzers have suggested that a man's attractiveness to a heterosexual woman may increase with the more he resembles her brother, while his unattractiveness may increase the more his likeness diverges from her brother. Females with a twin or very close-in-age brother, sometimes view him as their male alter ego, or what they would have been like, if they had a Y chromosomes. Fraternal relationship The book Nicomachean Ethics, Book VIII written by Aristotle in 350 B.C.E., offers a way in which people should view the relationships between biological brothers. The relationship of brothers is laid out with the following quote: "The friendship of brothers has the characteristics found in that of comrades and in general between people who are like each other, is as much as they belong more to each other and start with a love for each other from their very birth, and in as much as those born to the same parents and brought up together and similarly educated are more akin in character; and the test of time has been applied most fully and convincingly in their case". For these reasons, it is the job of the older brother to influence the ethics of the younger brother by being a person of good action. Aristotle says "by imitating and reenacting the acts of good people, a child becomes habituated to good action". Over time the younger brother will develop the good actions of the older brother as well and be like him. Aristotle also adds this on the matter of retaining the action of doing good once imitated: "Once the habits of ethics or immorality become entrenched, they are difficult to break." The good habits that are created by the influence of the older brother become habit in the life of the younger brother and turn out to be seemingly permanent. It is the role of the older brother to be a positive influence on the development of the younger brother's upbringing when it comes to the education of ethics and good actions. When positive characteristics are properly displayed to the younger brother by the older brother, these habits and characteristics are imitated and foster an influential understanding of good ethics and positive actions. 
Famous brothers Gracchi, Ancient Roman reformers George Washington Adams, John Adams II, and Charles Francis Adams Sr., politicians Ben Affleck and Casey Affleck, actors The Alexander Brothers; musicians Alec Baldwin, William Baldwin, Stephen Baldwin, Daniel Baldwin, also known as the Baldwin brothers; actors John and Lionel Barrymore, actors Chang and Eng Bunker, the original Siamese twins George W. Bush, Jeb Bush, Neil Bush and Marvin Bush, sons of George H. W. Bush David Carradine, Keith Carradine, and Robert Carradine, American actors Bill Clinton, 42nd President of the United States, and Roger Clinton Jr., his younger half-brother Joel and Ethan Coen; filmmakers Stephen Curry and Seth Curry; current NBA point guards in the Western Conference Dizzy and Daffy Dean, Major League Baseball pitchers Mark DeBarge, Randy DeBarge, El DeBarge, James DeBarge, and Bobby DeBarge, the male members of the singing group DeBarge Emilio Estevez and Charlie Sheen, actors Isaac Everly and Phil Everly, The Everly Brothers, singers Liam Gallagher and Noel Gallagher, members of Oasis (band) Barry Gibb, Robin Gibb, and Maurice Gibb, members of the Brothers Gibb or "Bee Gees" singing group John Gotti, Eugene "Gene" Gotti, Peter Gotti and Richard V. Gotti, New York "made men" with the Gambino crime family Frederick Dent Grant, Ulysses S. Grant Jr., and Jesse Root Grant Jacob Grimm and Wilhelm Grimm, known as the Brothers Grimm, German academics and folk tale collectors Matt Hardy and Jeff Hardy, professional wrestlers Herbert Hoover Jr. and Allan Hoover Pau and Marc Gasol, professional basketball players O'Kelly Isley Jr., Rudolph Isley, and Ronald Isley, Ernie Isley, Marvin Isley, and Vernon Isley, members of The Isley Brothers singer-songwriting group and band, which also included their brother-in-law, Chris Jasper Jackie Jackson, Tito Jackson, Jermaine Jackson, Marlon Jackson, Michael Jackson and Randy Jackson, members of The Jackson 5 and later The Jacksons Jesse and Frank James, Old West outlaws John, Robert and Ted Kennedy, politicians Edward M. Kennedy Jr. and Patrick J. Kennedy, politicians Terry Labonte and Bobby Labonte, race car drivers Robert Todd Lincoln, Edward Baker Lincoln, William Wallace Lincoln and Tad Lincoln, sons of Abraham Lincoln Loud Brothers, piano designers and manufacturers Eli and Peyton Manning, National Football League quarterbacks Mario and Luigi, video game characters John McCain, U.S. Senator and two-time presidential candidate, and Joe McCain, American stage actor, newspaper reporter Justin, Travis, and Griffin McElroy, podcasters Billy Leon McCrary and Benny Loyd McCrary, wrestlers known as The McGuire Twins Harold Nixon, Richard Nixon, Donald Nixon, Arthur Nixon, and Edward Nixon Alan Osmond, Wayne Osmond, Merrill Osmond, Jay Osmond and Donny Osmond, members of The Osmonds Logan Paul and Jake Paul, YouTubers, internet personalities, and actors Neil and Ronald Reagan Ringling brothers, circus performers, owners, and show runners John D. Rockefeller and William Rockefeller, co-founders of Standard Oil and members of the Rockefeller family Cornelius Roosevelt and James I. 
Roosevelt Theodore Roosevelt Jr., Kermit Roosevelt, Archibald Bulloch Roosevelt, and Quentin Roosevelt James Roosevelt, Elliot Roosevelt, Franklin Delano Roosevelt Jr., and John Aspinwall Roosevelt Russo brothers, filmmakers, producers, and directors Daniel Sedin and Henrik Sedin, professional hockey players Wallace Shawn and Allen Shawn, writer and composer of The Fever Bobby Shriver, Timothy Shriver, Mark Shriver, and Anthony Shriver Thomas "Tommy" Smothers and Richard "Dick" Smothers, performing artists known as the Smothers Brothers Prabowo Subianto and Hashim Djojohadikusumo, politicians Fred Trump Jr., Donald Trump, and Robert Trump Vincent van Gogh, painter, and Theo van Gogh, art dealer J. J. Watt, T. J. Watt, Derek Watt, National Football League Players Damon Wayans, Dwayne Wayans, Keenan Ivory Wayans, Marlon Wayans, Shawn Wayans, performing artists, directors and producers Bob Weinstein and Harvey Weinstein, film producers Brian Wilson, Dennis Wilson, and Carl Wilson, members of The Beach Boys Marvin Winans, Carvin Winans, Michael Winans, and Ronald Winans, members of The Winans, singers and musicians Orville Wright and Wilbur Wright, known as the Wright brothers, pioneer aviators Agus Harimurti Yudhoyono and Edhie Baskoro Yudhoyono, politicians Other works about brothers In the Bible: Cain and Abel, the sons of Adam and Eve Jacob and Esau, the sons of Isaac and Rebecca Moses and Aaron, prophets Sts. Peter and Andrew, apostles Sts. James and John, apostles Sts. Thomas and his unnamed twin brother My Brother, My Brother, and Me, podcast Saving Private Ryan (1998), film Simon & Simon, television series Supernatural, American television series The Brothers Karamazov, novel The Wayans Bros., television series Bonanza (1959–1973), television series In the Ramayana: Rama, Lakshmana, Bharata, and Shatrughna In the Mahabharata: The Pandavas – Yudhishthira, Arjuna, Bhima, Sahadeva and Nakula The Kauravas – One hundred brothers including Duryodhana, Dushasana and Vikarna, among others See also Brotherhood (disambiguation) Sister Stepsibling References External links Terms for men Kinship and descent Sibling
https://en.wikipedia.org/wiki/Diradical
In chemistry, a diradical is a molecular species with two electrons occupying degenerate molecular orbitals (MOs). The term "diradical" is mainly used to describe organic compounds, where most diradicals are extremely reactive and in fact rarely isolated. Diradicals are even-electron molecules but have one fewer bond than the number permitted by the octet rule. Examples of diradical species can also be found in coordination chemistry, for example among bis(1,2-dithiolene) metal complexes. Spin states Diradicals are usually triplets. The terms singlet and triplet derive from the spin multiplicity of the states observed in electron spin resonance (EPR): a singlet diradical has a single state (S = 0, multiplicity 2S + 1 = 1, mS = 0) and exhibits no EPR signal, while a triplet diradical has three states (S = 1, multiplicity 2S + 1 = 3, mS = −1, 0, +1) and, in the absence of hyperfine splitting, shows two EPR peaks. The triplet state has total spin quantum number S = 1 and is paramagnetic. Diradical species therefore display a triplet state when the two electrons are unpaired and have parallel spins. When the two unpaired electrons have opposite spins and are antiferromagnetically coupled, the diradical displays a singlet state (S = 0) and is diamagnetic. Examples Stable, isolable diradicals include singlet oxygen and triplet oxygen. Other important diradicals are certain carbenes, nitrenes, and their main-group elemental analogues. Lesser-known diradicals are nitrenium ions, carbon chains and organic so-called non-Kekulé molecules in which the electrons reside on different carbon atoms. Main-group cyclic structures can also exhibit diradicals, such as disulfur dinitride, or diradical character, such as diphosphadiboretanes. In inorganic chemistry, both homoleptic and heteroleptic 1,2-dithiolene complexes of d8 transition metal ions show a large degree of diradical character in the ground state. References Further reading Organic chemistry Inorganic chemistry Magnetism
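The spin bookkeeping behind the singlet/triplet labels above can be summarized in two lines (standard quantum-mechanical relations, not specific to any particular diradical):

```latex
% Spin multiplicity from the total spin quantum number S:
\[ M = 2S + 1 \]
% Singlet: S = 0  =>  M = 1 state  (m_S = 0);         diamagnetic, EPR-silent.
% Triplet: S = 1  =>  M = 3 states (m_S = -1, 0, +1); paramagnetic.
```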
https://en.wikipedia.org/wiki/Transgene
A transgene is a gene that has been transferred naturally, or by any of a number of genetic engineering techniques, from one organism to another. The introduction of a transgene, in a process known as transgenesis, has the potential to change the phenotype of an organism. Transgene describes a segment of DNA containing a gene sequence that has been isolated from one organism and is introduced into a different organism. This non-native segment of DNA may either retain the ability to produce RNA or protein in the transgenic organism or alter the normal function of the transgenic organism's genetic code. In general, the DNA is incorporated into the organism's germ line. For example, in higher vertebrates this can be accomplished by injecting the foreign DNA into the nucleus of a fertilized ovum. This technique is routinely used to introduce human disease genes or other genes of interest into strains of laboratory mice to study the function or pathology involved with that particular gene. The construction of a transgene requires the assembly of a few main parts. The transgene must contain a promoter, which is a regulatory sequence that will determine where and when the transgene is active, an exon, a protein coding sequence (usually derived from the cDNA for the protein of interest), and a stop sequence. These are typically combined in a bacterial plasmid and the coding sequences are typically chosen from transgenes with previously known functions. Transgenic or genetically modified organisms, be they bacteria, viruses or fungi, serve many research purposes. Transgenic plants, insects, fish and mammals (including humans) have been bred. Transgenic plants such as corn and soybean have replaced wild strains in agriculture in some countries (e.g. the United States). Transgene escape has been documented for GMO crops since 2001 with persistence and invasiveness. Transgenetic organisms pose ethical questions and may cause biosafety problems. History The idea of shaping an organism to fit a specific need is not a new science. However, until the late 1900s farmers and scientists could breed new strains of a plant or organism only from closely related species because the DNA had to be compatible for offspring to be able to reproduce. In the 1970 and 1980s, scientists passed this hurdle by inventing procedures for combining the DNA of two vastly different species with genetic engineering. The organisms produced by these procedures were termed transgenic. Transgenesis is the same as gene therapy in the sense that they both transform cells for a specific purpose. However, they are completely different in their purposes, as gene therapy aims to cure a defect in cells, and transgenesis seeks to produce a genetically modified organism by incorporating the specific transgene into every cell and changing the genome. Transgenesis will therefore change the germ cells, not only the somatic cells, in order to ensure that the transgenes are passed down to the offspring when the organisms reproduce. Transgenes alter the genome by blocking the function of a host gene; they can either replace the host gene with one that codes for a different protein, or introduce an additional gene. The first transgenic organism was created in 1974 when Annie Chang and Stanley Cohen expressed Staphylococcus aureus genes in Escherichia coli. In 1978, yeast cells were the first eukaryotic organisms to undergo gene transfer. Mouse cells were first transformed in 1979, followed by mouse embryos in 1980. 
Most of the very first transmutations were performed by microinjection of DNA directly into cells. Scientists were able to develop other methods to perform the transformations, such as incorporating transgenes into retroviruses and then infecting cells; using electroinfusion, which takes advantage of an electric current to pass foreign DNA through the cell wall; biolistics, which is the procedure of shooting DNA bullets into cells; and also delivering DNA into the newly fertilized egg. The first transgenic animals were only intended for genetic research to study the specific function of a gene, and by 2003, thousands of genes had been studied. Use in plants A variety of transgenic plants have been designed for agriculture to produce genetically modified crops, such as corn, soybean, rapeseed oil, cotton, rice and more. , these GMO crops were planted on 170 million hectares globally. Golden rice One example of a transgenic plant species is golden rice. In 1997, five million children developed xerophthalmia, a medical condition caused by vitamin A deficiency, in Southeast Asia alone. Of those children, a quarter million went blind. To combat this, scientists used biolistics to insert the daffodil phytoene synthase gene into Asia indigenous rice cultivars. The daffodil insertion increased the production of β-carotene. The product was a transgenic rice species rich in vitamin A, called golden rice. Little is known about the impact of golden rice on xerophthalmia because anti-GMO campaigns have prevented the full commercial release of golden rice into agricultural systems in need. Transgene escape The escape of genetically-engineered plant genes via hybridization with wild relatives was first discussed and examined in Mexico and Europe in the mid-1990s. There is agreement that escape of transgenes is inevitable, even "some proof that it is happening". Up until 2008 there were few documented cases. Corn Corn sampled in 2000 from the Sierra Juarez, Oaxaca, Mexico contained a transgenic 35S promoter, while a large sample taken by a different method from the same region in 2003 and 2004 did not. A sample from another region from 2002 also did not, but directed samples taken in 2004 did, suggesting transgene persistence or re-introduction. A 2009 study found recombinant proteins in 3.1% and 1.8% of samples, most commonly in southeast Mexico. Seed and grain import from the United States could explain the frequency and distribution of transgenes in west-central Mexico, but not in the southeast. Also, 5.0% of corn seed lots in Mexican corn stocks expressed recombinant proteins despite the moratorium on GM crops. Cotton In 2011, transgenic cotton was found in Mexico among wild cotton, after 15 years of GMO cotton cultivation. Rapeseed (canola) Transgenic rapeseed Brassicus napus – hybridized with a native Japanese species, Brassica rapa – was found in Japan in 2011 after having been identified in 2006 in Québec, Canada. They were persistent over a six-year study period, without herbicide selection pressure and despite hybridization with the wild form. This was the first report of the introgression—the stable incorporation of genes from one gene pool into another—of an herbicide-resistance transgene from Brassica napus into the wild form gene pool. 
Creeping bentgrass Transgenic creeping bentgrass, engineered to be glyphosate-tolerant as "one of the first wind-pollinated, perennial, and highly outcrossing transgenic crops", was planted in 2003 as part of a large (about 160 ha) field trial in central Oregon near Madras, Oregon. In 2004, its pollen was found to have reached wild growing bentgrass populations up to 14 kilometres away. Cross-pollinating Agrostis gigantea was even found at a distance of 21 kilometres. The grower, Scotts Company could not remove all genetically engineered plants, and in 2007, the U.S. Department of Agriculture fined Scotts $500,000 for noncompliance with regulations. Risk assessment The long-term monitoring and controlling of a particular transgene has been shown not to be feasible. The European Food Safety Authority published a guidance for risk assessment in 2010. Use in mice Genetically modified mice are the most common animal model for transgenic research. Transgenic mice are currently being used to study a variety of diseases including cancer, obesity, heart disease, arthritis, anxiety, and Parkinson's disease. The two most common types of genetically modified mice are knockout mice and oncomice. Knockout mice are a type of mouse model that uses transgenic insertion to disrupt an existing gene's expression. In order to create knockout mice, a transgene with the desired sequence is inserted into an isolated mouse blastocyst using electroporation. Then, homologous recombination occurs naturally within some cells, replacing the gene of interest with the designed transgene. Through this process, researchers were able to demonstrate that a transgene can be integrated into the genome of an animal, serve a specific function within the cell, and be passed down to future generations. Oncomice are another genetically modified mouse species created by inserting transgenes that increase the animal's vulnerability to cancer. Cancer researchers utilize oncomice to study the profiles of different cancers in order to apply this knowledge to human studies. Use in Drosophila Multiple studies have been conducted concerning transgenesis in Drosophila melanogaster, the fruit fly. This organism has been a helpful genetic model for over 100 years, due to its well-understood developmental pattern. The transfer of transgenes into the Drosophila genome has been performed using various techniques, including P element, Cre-loxP, and ΦC31 insertion. The most practiced method used thus far to insert transgenes into the Drosophila genome utilizes P elements. The transposable P elements, also known as transposons, are segments of bacterial DNA that are translocated into the genome, without the presence of a complementary sequence in the host's genome. P elements are administered in pairs of two, which flank the DNA insertion region of interest. Additionally, P elements often consist of two plasmid components, one known as the P element transposase and the other, the P transposon backbone. The transposase plasmid portion drives the transposition of the P transposon backbone, containing the transgene of interest and often a marker, between the two terminal sites of the transposon. Success of this insertion results in the nonreversible addition of the transgene of interest into the genome. While this method has been proven effective, the insertion sites of the P elements are often uncontrollable, resulting in an unfavorable, random insertion of the transgene into the Drosophila genome. 
To improve the location and precision of the transgenic process, an enzyme known as Cre has been introduced. Cre has proven to be a key element in a process known as recombinase-mediated cassette exchange (RMCE). While it has shown to have a lower efficiency of transgenic transformation than the P element transposases, Cre greatly lessens the labor-intensive abundance of balancing random P insertions. Cre aids in the targeted transgenesis of the DNA gene segment of interest, as it supports the mapping of the transgene insertion sites, known as loxP sites. These sites, unlike P elements, can be specifically inserted to flank a chromosomal segment of interest, aiding in targeted transgenesis. The Cre transposase is important in the catalytic cleavage of the base pairs present at the carefully positioned loxP sites, permitting more specific insertions of the transgenic donor plasmid of interest. To overcome the limitations and low yields that transposon-mediated and Cre-loxP transformation methods produce, the bacteriophage ΦC31 has recently been utilized. Recent breakthrough studies involve the microinjection of the bacteriophage ΦC31 integrase, which shows improved transgene insertion of large DNA fragments that are unable to be transposed by P elements alone. This method involves the recombination between an attachment (attP) site in the phage and an attachment site in the bacterial host genome (attB). Compared to usual P element transgene insertion methods, ΦC31 integrates the entire transgene vector, including bacterial sequences and antibiotic resistance genes. Unfortunately, the presence of these additional insertions has been found to affect the level and reproducibility of transgene expression. Use in livestock and aquaculture One agricultural application is to selectively breed animals for particular traits: Transgenic cattle with an increased muscle phenotype has been produced by overexpressing a short hairpin RNA with homology to the myostatin mRNA using RNA interference. Transgenes are being used to produce milk with high levels of proteins or silk from the milk of goats. Another agricultural application is to selectively breed animals, which are resistant to diseases or animals for biopharmaceutical production. Future potential The application of transgenes is a rapidly growing area of molecular biology. As of 2005 it was predicted that in the next two decades, 300,000 lines of transgenic mice will be generated. Researchers have identified many applications for transgenes, particularly in the medical field. Scientists are focusing on the use of transgenes to study the function of the human genome in order to better understand disease, adapting animal organs for transplantation into humans, and the production of pharmaceutical products such as insulin, growth hormone, and blood anti-clotting factors from the milk of transgenic cows. As of 2004 there were five thousand known genetic diseases, and the potential to treat these diseases using transgenic animals is, perhaps, one of the most promising applications of transgenes. There is a potential to use human gene therapy to replace a mutated gene with an unmutated copy of a transgene in order to treat the genetic disorder. This can be done through the use of Cre-Lox or knockout. Moreover, genetic disorders are being studied through the use of transgenic mice, pigs, rabbits, and rats. 
Transgenic rabbits have been created to study inherited cardiac arrhythmias, as the rabbit heart resembles the human heart much more closely than the mouse heart does. More recently, scientists have also begun using transgenic goats to study genetic disorders related to fertility. Transgenes may be used for xenotransplantation of pig organs. Through the study of xeno-organ rejection, it was found that an acute rejection of the transplanted organ occurs upon the organ's contact with blood from the recipient, due to the recognition of foreign antigens on endothelial cells of the transplanted organ. Scientists have identified the antigen in pigs that causes this reaction, and are therefore able to transplant the organ without immediate rejection by removing the antigen. However, the antigen begins to be expressed later on, and rejection occurs. Therefore, further research is being conducted. Transgenic microorganisms capable of producing catalytic proteins or enzymes are also used to increase the rate of industrial reactions. Ethical controversy Transgene use in humans is currently fraught with issues. Transformation of genes into human cells has not been perfected yet. The most famous example of this involved certain patients developing T-cell leukemia after being treated for X-linked severe combined immunodeficiency (X-SCID). This was attributed to the close proximity of the inserted gene to the LMO2 promoter, which controls the transcription of the LMO2 proto-oncogene. See also Hybrid Fusion protein Gene pool Gene flow Introgression Nucleic acid hybridization Mouse models of breast cancer metastasis References Further reading Genetic engineering Gene delivery
https://en.wikipedia.org/wiki/QuickWin
QuickWin was a library from Microsoft that made it possible to compile command-line MS-DOS programs as Windows 3.1 applications, displaying their output in a window. Since the release of Windows NT, Microsoft has included support for console applications in the Windows operating system itself via the Windows Console, eliminating the need for QuickWin; however, Intel Visual Fortran still uses the library. Borland's equivalent in Borland C++ 5 was called EasyWin. There is also a program called QuickWin on CodeProject, which performs a similar function. See also Command-line interface References Computer libraries
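As a rough illustration (a minimal sketch of the idea, not taken from Microsoft's documentation), QuickWin required no special API for simple cases: an ordinary standard C console program such as the following could be rebuilt with the compiler's QuickWin option (the exact build settings varied by compiler and are not shown here), so that its standard input and output appeared in a window rather than a DOS console.

/* A plain command-line program; nothing in it is QuickWin-specific.
   Built normally it runs in a console; built as a QuickWin
   application, the same stdin/stdout I/O is presented in a window. */
#include <stdio.h>

int main(void)
{
    char name[64];

    printf("Enter your name: ");
    if (scanf("%63s", name) == 1)
        printf("Hello, %s!\n", name);
    return 0;
}

The point of the library was precisely that such unmodified console code could be given a Windows 3.1 user interface without rewriting it against the Windows API.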
https://en.wikipedia.org/wiki/KZAK-LD
KZAK-LD, virtual channel 49 (UHF digital channel 35), is a low-power Nuestra Visión-affiliated television station licensed to Boise, Idaho, United States. The station is owned by Cocola Broadcasting and was formerly leased by NBC affiliate KTVB (channel 7) for analog retransmission of its then news-focused subchannel. Digital channels The station's digital signal is multiplexed. See also 24/7 (news channel) References Television channels and stations established in 1993 Low-power television stations in Idaho ATSC 3.0 television stations
https://en.wikipedia.org/wiki/Filopodia
Filopodia (singular: filopodium) are slender cytoplasmic projections that extend beyond the leading edge of lamellipodia in migrating cells. Within the lamellipodium, actin ribs are known as microspikes, and when they extend beyond the lamellipodia, they are known as filopodia. They contain microfilaments (also called actin filaments) cross-linked into bundles by actin-bundling proteins, such as fascin and fimbrin. Filopodia form focal adhesions with the substratum, linking the cell surface to the substratum. Many types of migrating cells display filopodia, which are thought to be involved both in the sensing of chemotropic cues and in the resulting changes in directed locomotion. Activation of the Rho family of GTPases, particularly cdc42, and of their downstream intermediates results in the polymerization of actin fibers by Ena/Vasp homology proteins. Growth factors bind to receptor tyrosine kinases, resulting in the polymerization of actin filaments, which, when cross-linked, make up the supporting cytoskeletal elements of filopodia. Rho activity also results in activation, by phosphorylation, of ezrin-moesin-radixin family proteins that link actin filaments to the filopodia membrane. Filopodia have roles in sensing, migration, neurite outgrowth, and cell-cell interaction. To close a wound in vertebrates, growth factors stimulate the formation of filopodia in fibroblasts to direct fibroblast migration and wound closure. In macrophages, filopodia act as phagocytic tentacles, pulling bound objects towards the cell for phagocytosis. In infections Filopodia are also used for movement of bacteria between cells, so as to evade the host immune system. The intracellular bacteria Ehrlichia are transported between cells through the host cell filopodia induced by the pathogen during initial stages of infection. Filopodia are the initial contact that human retinal pigment epithelial (RPE) cells make with elementary bodies of Chlamydia trachomatis, the bacterium that causes chlamydia. Viruses have been shown to be transported along filopodia toward the cell body, leading to cell infection. Directed transport of receptor-bound epidermal growth factor (EGF) along filopodia has also been described, supporting the proposed sensing function of filopodia. SARS-CoV-2, the strain of coronavirus responsible for COVID-19, produces filopodia in infected cells. In brain cells In developing neurons, filopodia extend from the growth cone at the leading edge. In neurons deprived of filopodia by partial inhibition of actin filament polymerization, growth cone extension continues as normal, but the direction of growth is disrupted and highly irregular. Filopodia-like projections have also been linked to dendrite creation when new synapses are formed in the brain. A study deploying protein imaging of adult mice showed that filopodia in the explored regions were an order of magnitude more abundant than previously believed, comprising about 30% of all dendritic protrusions. At their tips, they contain "silent synapses" that are inactive until recruited as part of neural plasticity and flexible learning or memories; such synapses were previously thought to be present mainly in the developing pre-adult brain and to die off with time. References External links MBInfo - Filopodia MBInfo - Filopodia Assembly New Form of Cinema: Cellular Film, proposal for documentaries with cellular imaging Cell movement Cytoskeleton Cell biology Neurons Actin-based structures
https://en.wikipedia.org/wiki/CollabNet
CollabNet VersionOne is a software firm headquartered in Alpharetta, Georgia, United States. It was founded by Tim O’Reilly, Brian Behlendorf, and Bill Portelli. CollabNet VersionOne products and services belong to the industry categories of value stream management, DevOps, agile management, application lifecycle management (ALM), and enterprise version control. These products are used by companies and government organizations to reduce the time it takes to create and release software. About The company was founded to improve the methods of software creation and delivery. Today its DevOps work is extending to the application of value stream management practices. This is a business-to-business software company. The company's customers are global enterprises and government organizations that use the products to apply a cohesive approach to software development and management throughout application development life-cycles. The company's customers are in diverse industries such as finance, healthcare, government, and high-tech, in 100 countries. CollabNet VersionOne partners are composed of other technology providers that enable certain product capabilities and extend the coverage of products, as well as sales and delivery partners. The company also teams with #YesWeCode, a Dream Corps initiative designed to bring free technology training and industry connections to 100,000 young people in communities of color and increase local talent for the technology industry. The company also offers training and education in its categories, from Scrum certifications and Agile training to value stream management. Many training courses and certifications are open to the public, requiring no experience with the company's products. It is widely understood in the software industry that Scrum and Agile are foundational for modern software development teams. History The company was originally founded as CollabNet in 1999 by Tim O’Reilly, Brian Behlendorf, and Bill Portelli, who also served as the company's chief executive officer. The founding mission was to create software that helps organizations manage and improve software development processes and make them more efficient while producing higher-quality software. Vector Capital became a major investor in the company in 2014. In May 2015, Flint Brenton became president and chief executive officer, with Portelli remaining on the board of directors. The company remains privately owned. CollabNet merged with VersionOne in 2017, becoming CollabNet VersionOne, and began expanding its enterprise value stream management endeavors. TPG Capital acquired CollabNet VersionOne from Vector Capital, announcing investments in the company of up to $500 million over the following years. Previous additions include the 2010 acquisition of Danube Technologies, a company specializing in Agile/Scrum management software tools (including ScrumWorks Pro) and consulting and training services for organizations implementing Agile. CollabNet also acquired Codesion in 2010. Codesion specialized in cloud development. The company has historically focused on innovating on its own and through partnerships, from early ALM, to solutions for government use, to the cloud, to DevOps and value stream management. In January 2020, CollabNet VersionOne (CollabNet) and XebiaLabs announced that the two companies had merged. In April of that year, Arxan joined them, the merger of the three companies being known by the name Digital.ai. 
Products The company offers several products for agile management, DevOps, value stream management, application lifecycle management (ALM), and enterprise version control. The company's major products include VersionOne, Continuum, TeamForge, TeamForge SCM, and VS. See also Agile software development Continuous Integration Continuous delivery DevOps Toolchain Scrum (software development) Value Stream Mapping References External links Value Stream Management Tools Forrester Collaborative software Software companies established in 1999 Free software companies Software companies based in Georgia (U.S. state) Companies based in Fulton County, Georgia Software companies of the United States 1999 establishments in Georgia (U.S. state)
https://en.wikipedia.org/wiki/Pattress
A pattress or pattress box or fitting box (in the United States and Canada, electrical wall switch box, electrical wall outlet box, electrical ceiling box, switch box, outlet box, electrical box, etc.) is the container for the space behind electrical fittings such as power outlet sockets, light switches, or fixed light fixtures. Pattresses may be designed either for surface mounting (with cabling running along the wall surface) or for embedding in the wall or skirting board. Some electricians use the term "pattress box" to describe a surface-mounted box, although simply the term "pattress" suffices. The term "flush box" is used for a mounting box that goes inside the wall, although some use the term "wall box". Boxes for installation within timber/plasterboard walls are usually called "cavity boxes" or "plasterboard boxes". A ceiling-mounted pattress (most often used for light fixtures) is referred to as a "ceiling pattress" or "ceiling box". British English speakers also tend to say "pattress box" instead of just "pattress". Pattress is alternatively spelt "patress", and Wiktionary lists both spellings. The word "pattress", despite being attested from the late 19th century, is still rarely found in dictionaries. It is etymologically derived from pateras (Latin for bowls, saucers). The term is not used by electricians in the United States. Pattresses Pattresses contain devices for input (switches) and output (sockets and fixtures), with transfer managed by junction boxes. A pattress may be made of metal or plastic. In the United Kingdom, surface-mounted boxes in particular are often made from urea-formaldehyde resin or, alternatively, PVC, and are usually white. Wall boxes are commonly made of thin galvanised metal. A pattress box is made to standard dimensions and may contain embedded bushings (in standard positions) for the attachment of wiring devices (switches and sockets). Internal pattress boxes themselves do not include the corresponding faceplates, since the devices to be contained in the box specify the required faceplate. External pattress boxes may include corresponding faceplates, limiting the devices to be contained in the box. Although cables may be joined inside pattress boxes, due simply to their presence at convenient points in the wiring, their main purpose is to accommodate switches and sockets. They allow switches and sockets to be recessed into the wall for a better appearance. Enclosures primarily for joining wires are called junction boxes. New work boxes New work boxes are designed to be installed as part of a new installation. They are typically designed with nail or screw holes to attach directly to wall studs. Old work boxes Old work boxes are designed to attach to already-installed wall material (usually drywall). The boxes will almost always have two or more parsellas (from Latin: "small wing or part"). The parsellas flip out when the box screws are tightened, securing the box to the wall with the help of the four or more tabs on the front of the box. Alternative systems In some countries, for instance in Germany, wall boxes for electrical fittings generally are not actual rectangular boxes at all but standard-sized round recessed containers. This has the advantage that the corresponding round holes can simply be drilled out with a hole saw, rather than needing a rectangular cavity to be cut out to accommodate the wall box. Even with those round-hole systems, however, the faceplates that cover them are mostly rectangular. 
Image gallery See also Wall anchor plates are also known as pattress plates. Junction box, an enclosure housing electrical connections Electrical wiring in the United Kingdom Electrical wiring in North America References External links DIY Wiki Pattress page – more information on (British) pattresses and terminology Cables Electrical wiring
https://en.wikipedia.org/wiki/Nacrite
Nacrite Al2Si2O5(OH)4 is a clay mineral that is polymorphous (or polytypic) with kaolinite. It crystallizes in the monoclinic system. X-ray diffraction analysis is required for positive identification. Nacrite was first described in 1807 for an occurrence in Saxony, Germany. The name is from nacre, in reference to the dull luster of the surface of nacrite masses, which scatter light with a slight iridescence resembling that of the mother-of-pearl secreted by oysters. References Clay minerals group Polymorphism (materials science) Monoclinic minerals Minerals in space group 9
https://en.wikipedia.org/wiki/Telecommand
A telecommand or telecontrol is a command sent to control a remote system or systems not directly connected (e.g. via wires) to the place from which the telecommand is sent. The word is derived from tele = remote (Greek) and command = to entrust/order (Latin). Systems that need remote measurement and reporting of information of interest to the system designer or operator require the counterpart of telecommand, telemetry. Telecommand can be done in real time or not, depending on the circumstances (in space, the delay may be days), as was the case with Marsokhod. Examples Control of a TV from the sofa Remote guidance of weapons or missiles Control of a satellite from a ground station Flying a radio-controlled airplane Transmission of commands For a telecommand (TC) to be effective, it must be compiled into a pre-arranged format (which may follow a standard structure) and modulated onto a carrier wave, which is then transmitted with adequate power to the remote system. The remote system demodulates the digital signal from the carrier, decodes the TC, and executes it. Transmission of the carrier wave can be by ultrasound, infrared or other electromagnetic means. Infrared Infrared light occupies an invisible portion of the electromagnetic spectrum. This light, commonly associated with heat, carries signals between the transmitter and receiver of the remote system. Telecommand systems usually include a physical remote, which contains four key parts: buttons, an integrated circuit, button contacts, and a light-emitting diode. When a button on the remote is pressed, it closes the corresponding contact beneath it. This completes a circuit on the circuit board and changes its electrical resistance, which is detected by the integrated circuit. Based on the change in electrical resistance, the integrated circuit determines which button was pushed and sends a corresponding binary code to the light-emitting diode (LED), usually located at the front of the remote. To transfer the information from the remote to the receiver, the LED turns the electrical signal into an invisible beam of infrared light that corresponds to the binary code and sends this light to the receiver. The receiver detects the light signal with a photodiode, transforms it back into an electrical signal, and passes it to the receiver's integrated circuit/microprocessor, which processes and carries out the command. The strength of the transmitting LED can vary and determines the required positioning accuracy of the remote relative to the receiver. Infrared remotes have a maximum range of approximately 30 feet and require the remote control or transmitter and receiver to be within line of sight. Ultrasonic Ultrasound is a technology that was used more frequently in the past for telecommand. Inventor Robert Adler is known for a remote control that did not require batteries and used ultrasonic technology. Four aluminum rods inside the transmitter produce high-frequency sounds when they are struck at one end. Each rod is a different length, so each produces a different pitch, and these pitches control the receiving unit. This technology was widely used but had certain issues, such as dogs being bothered by the high-frequency sounds. 
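To make the infrared encoding described above more concrete, here is a deliberately simplified sketch in C. The button codes, the bit timing, and the helper functions are hypothetical stand-ins and do not follow any real commercial remote-control protocol: each button is simply assigned an 8-bit code, and the code is sent one bit per time slot by switching the LED on or off.

#include <stdint.h>
#include <stdio.h>

/* Hardware stubs: in a real remote these would drive the IR LED and
   wait out one bit period; here they just print so the sketch runs
   anywhere. */
static void led_set(int on) { putchar(on ? '1' : '0'); }
static void wait_one_bit(void) { /* e.g. a few hundred microseconds */ }

/* Hypothetical button codes, not taken from any actual protocol. */
enum button { BTN_POWER = 0x45, BTN_VOL_UP = 0x46, BTN_VOL_DOWN = 0x47 };

/* Transmit one 8-bit button code, most significant bit first: the LED
   is on during a 1 bit and off during a 0 bit. A receiver sampling its
   photodiode with the same timing can rebuild the code and act on it. */
static void send_button(uint8_t code)
{
    for (int i = 7; i >= 0; i--) {
        led_set((code >> i) & 1);
        wait_one_bit();
    }
    led_set(0);               /* idle (LED off) between codes */
    putchar('\n');
}

int main(void)
{
    send_button(BTN_VOL_UP);  /* prints the bit pattern 01000110, then 0 */
    return 0;
}

Real remotes additionally modulate the LED at a carrier frequency and add start patterns and error checking, but the basic idea of mapping a button press to a binary code carried by light pulses is the same.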
New applications Smaller new remote-controlled airplanes and helicopters are often incorrectly advertised as radio-controlled devices (see Radio control), but they are in fact controlled either via infrared transmission or by electromagnetic guidance. Both of these systems fall within the area of telecommand. Encryption To prevent unauthorised access to the remote system, TC encryption may be employed. Secret sharing may also be used. See also Radio control Teleoperation Telerobotics Telemetry References Remote control
https://en.wikipedia.org/wiki/Cefradine
Cefradine (INN) or cephradine (BAN) is a first generation cephalosporin antibiotic. Indications Respiratory tract infections (such as tonsillitis, pharyngitis, and lobar pneumonia) caused by group A beta-hemolytic streptococci and S. pneumoniae (formerly D. pneumonia). Otitis media caused by group A beta-hemolytic streptococci, S. pneumoniae, H. influenzae, and staphylococci. Skin and skin structure infections caused by staphylococci (penicillin-susceptible and penicillin-resistant) and beta-hemolytic streptococci. Urinary tract infections, including prostatitis, caused by E. coli, P. mirabilis and Klebsiella species. Formulations Cefradine is distributed in the form of capsules containing 250 mg or 500 mg, as a syrup containing 250 mg/5 ml, or in vials for injection containing 500 mg or 1 g. It is not approved by the FDA for use in the United States. Synthesis Birch reduction of D-α-phenylglycine led to diene (2). This was N-protected using tert-butoxycarbonylazide and activated for amide formation via the mixed anhydride method using isobutylchloroformate to give 3. Mixed anhydride 3 reacted readily with 7-aminodesacetoxycephalosporanic acid to give, after deblocking, cephradine (5). Production names The antibiotic is produced under many brand names across the world. Bangladesh: Ancef, Ancef forte, Aphrin, Avlosef, Cefadin, Cephadin, Cephran, Cephran-DS, Cusef, Cusef DS, Dicef , Dicef forte, Dolocef, Efrad, Elocef, Extracef, Extracef-DS, Intracef, Kefdrin, Lebac, Lebac Forte, Medicef, Mega-Cef, Megacin, Polycef, Procef, Procef, Procef forte, Rocef, Rocef Forte DS, Sefin, Sefin DS, Sefnin, Sefrad, Sefrad DS, Sefril, Sefril-DS, Sefro, Sefro-HS, Sephar, Sephar-DS, Septa, Sinaceph, SK-Cef, Sk-Cef DS, Supracef and Supracef-F, Torped, Ultrasef, Vecef, Vecef-DS, Velogen, Sinaceph, Velox China: Cefradine, Cephradine, Kebili, Saifuding, Shen You, Taididing, Velosef, Xianyi, and Xindadelei Colombia: Cefagram, Cefrakov, Cefranil , Cefrex, and Kliacef Egypt: Cefadrin, Cefadrine, Cephradine, Cephraforte, Farcosef, Fortecef, Mepadrin, Ultracef, and Velosef France: Dexef Hong Kong: Cefradine and ChinaQualisef-250 Indonesia: Dynacef, Velodine, and Velodrom Lebanon: Eskacef, Julphacef, and Velosef Lithuania: Tafril Myanmar: Sinaceph Oman: Ceframed, Eskasef, Omadine, and Velocef Pakistan: Abidine, Ada-Cef, Ag-cef, Aksosef, Amspor, Anasef, Antimic, Atcosef, Bactocef, Biocef, Biodine, Velora, Velosef Peru: Abiocef, Cefradinal, Cefradur, Cefrid, Terbodina II, Velocef, Velomicin Philippines: Altozef, Racep, Senadex, Solphride, Yudinef, Zefadin, Zefradil, and Zolicef Poland: Tafril Portugal: Cefalmin, Cefradur South Africa: Cefril A South Korea: Cefradine and Tricef Taiwan: Cefadin, Cefamid, Cefin, Cekodin, Cephradine, Ceponin, Lacef, Licef-A, Lisacef, Lofadine, Recef, S-60, Sefree, Sephros, Topcef, Tydine, Unifradine, and U-Save UK: Cefradune (Kent) Vietnam: Eurosefro and Incef See also Cephapirin Cephacetrile Cefamandole Ampicillin (Has the same chemical formula) Notes References Cephalosporin antibiotics Enantiopure drugs
https://en.wikipedia.org/wiki/PComb3H
pComb3H, a derivative of pComb3 optimized for the expression of human antibody fragments, is a phagemid used to display proteins such as zinc finger proteins and antibody fragments on the phage coat protein pIII for the purpose of phage display selection. For selection during phage production, it contains the bacterial ampicillin resistance gene (encoding β-lactamase), allowing only transformed bacteria to grow. References Molecular biology Plasmids
https://en.wikipedia.org/wiki/ETwinning
The eTwinning action is an initiative of the European Commission that aims to encourage European schools to collaborate using Information and Communication Technologies (ICT) by providing the necessary infrastructure (online tools, services, support). Teachers registered in the eTwinning action are enabled to form partnerships and develop collaborative, pedagogical school projects in any subject area, with the sole requirements that they employ ICT to develop their project and collaborate with teachers from other European countries. Formation The project was founded in 2005 under the European Union's e-Learning program and has been integrated into the Lifelong Learning program since 2007. eTwinning is part of Erasmus+, the EU program for education, training, and youth. History The eTwinning action was launched in January 2005. Its main objectives complied with the decision by the Barcelona European Council in March 2002 to promote school twinning as an opportunity for all students to learn and practice ICT skills and to promote awareness of the multicultural European model of society. More than 13,000 schools were involved in eTwinning within its first year. In 2008, over 50,000 teachers and 4,000 projects had been registered, and a new eTwinning platform was launched. As of January 2018, over 70,000 projects were running in classrooms across Europe. By 2021, more than 226,000 schools had taken part in this work. In early 2009, the eTwinning motto changed from "School partnerships in Europe" to "The community for schools in Europe". In 2022, eTwinning moved to a new platform. Participating countries Member states of the European Union are part of eTwinning: Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden and the Netherlands. Overseas territories and countries are also eligible. In addition, Albania, Bosnia and Herzegovina, North Macedonia, Iceland, Liechtenstein, Norway, Serbia and Turkey can also take part. Seven countries from the European neighbourhood (including Armenia, Azerbaijan, Georgia, Moldova and Ukraine) are also part of eTwinning via the eTwinning Plus scheme, as well as countries which are part of the Eastern Partnership, and Tunisia and Jordan (which are part of the Euro-Mediterranean Partnership, EUROMED). Operation The main concept behind eTwinning is that a school is paired with another school elsewhere in Europe, and the two collaboratively develop a project, also known as an eTwinning project. The two schools then communicate online (for example, by e-mail or video conferencing) to collaborate, share and learn from each other. eTwinning encourages and develops ICT skills, as the main activities inherently use information technology. Being 'twinned' with a foreign school also encourages cross-cultural exchanges of knowledge, fosters students' intercultural awareness, and improves their communication skills. eTwinning projects can last from one week to several months, and can go on to create permanent relationships between schools. Primary and secondary schools within the European Union member states can participate, in addition to schools from Turkey, Norway and Iceland. In contrast with other European programs, such as the Comenius program, all communication is via the internet; therefore there is no need for grants. 
Along the same lines, face-to-face meetings between partner schools are not required, although they are not prohibited. European Schoolnet has been granted the role of Central Support Service (CSS) at the European level. eTwinning is also supported by a network of National Support Services. References Gilleran, A. (2007) eTwinning - A New Path for European Schools, eLearning Papers European Schoolnet (2007) Learning with eTwinning: A Handbook for Teachers 2007 European Schoolnet (2006) Learning with eTwinning European Schoolnet (2008) eTwinning: Adventures in language and culture Konstantinidis, A. (2012). Implementing Learning-Oriented Assessment in an eTwinning Online Course for Greek Teachers. MERLOT Journal of Online Learning and Teaching, 8(1), 45-62 External links The official portal for eTwinning (available in 28 languages) European Schoolnet German eTwinning website British Council eTwinning Greek eTwinning website eTwinning - Italy Spanish eTwinning website French eTwinning website Press Release for 2008 etwinning prizes Video clips eTwinning YouTube channel Education in the European Union Educational organizations based in Europe Educational projects Educational technology non-profits Information technology organizations based in Europe Information technology projects
https://en.wikipedia.org/wiki/Neovascularization
Neovascularization is the natural formation of new blood vessels (neo- + vascular + -ization), usually in the form of functional microvascular networks capable of perfusion by red blood cells, that form to serve as collateral circulation in response to local poor perfusion or ischemia. Growth factors that induce neovascularization include those that affect endothelial cell division and differentiation. These growth factors often act in a paracrine or autocrine fashion; they include fibroblast growth factor, placental growth factor, insulin-like growth factor, hepatocyte growth factor, and platelet-derived endothelial growth factor. There are three different pathways that comprise neovascularization: (1) vasculogenesis, (2) angiogenesis, and (3) arteriogenesis. Three pathways of neovascularization Vasculogenesis Vasculogenesis is the de novo formation of blood vessels. This primarily occurs in the developing embryo with the development of the first primitive vascular plexus, but also occurs to a limited extent with post-natal vascularization. Embryonic vasculogenesis occurs when endothelial cell precursors (hemangioblasts) begin to proliferate and migrate into avascular areas. There, they aggregate to form the primitive network of vessels characteristic of embryos. This primitive vascular system is necessary to provide adequate blood flow to cells, supplying oxygen and nutrients, and removing metabolic wastes. Angiogenesis Angiogenesis is the most common type of neovascularization seen in development and growth, and is important to both physiological and pathological processes. Angiogenesis occurs through the formation of new vessels from pre-existing vessels. This occurs through the sprouting of new capillaries from post-capillary venules, requiring precise coordination of multiple steps and the participation and communication of multiple cell types. The complex process is initiated in response to local tissue ischemia or hypoxia, leading to the release of angiogenic factors such as VEGF and HIF-1. This causes vasodilatation and an increase in vascular permeability, leading to sprouting angiogenesis or intussusceptive angiogenesis. Arteriogenesis Arteriogenesis is the process of flow-related remodelling of existing vasculature to create collateral arteries. This can occur in response to ischemic vascular diseases or increased demand (e.g. exercise training). Arteriogenesis is triggered through nonspecific factors, such as shear stress and blood flow. Ocular pathologies Corneal neovascularization Corneal neovascularization is a condition where new blood vessels invade the cornea from the limbus. It is triggered when the balance between the angiogenic and antiangiogenic factors that otherwise maintains corneal transparency is disrupted. The immature new blood vessels can lead to persistent inflammation and scarring, lipid exudation into the corneal tissues, and a reduction in corneal transparency, which can affect visual acuity. Retinopathy of prematurity Retinopathy of prematurity is a condition that occurs in premature babies. In premature babies, the retina has not completely vascularized. Rather than continuing in the normal in utero fashion, the vascularization of the retina is disrupted, leading to an abnormal proliferation of blood vessels between the areas of vascularized and avascular retina. These blood vessels grow in abnormal ways and can invade the vitreous humor, where they can hemorrhage or cause retinal detachment in neonates. 
Diabetic retinopathy Diabetic retinopathy, which can develop into proliferative diabetic retinopathy, is a condition in which capillaries in the retina become occluded, creating areas of ischemic retina and triggering the release of angiogenic growth factors. This retinal ischemia stimulates the proliferation of new blood vessels from pre-existing retinal venules. It is the leading cause of blindness in working-age adults. Age-related macular degeneration In persons who are over 65 years old, age-related macular degeneration is the leading cause of severe vision loss. A subtype of age-related macular degeneration, wet macular degeneration, is characterized by the formation of new blood vessels that originate in the choroidal vasculature and extend into the subretinal space. Choroidal neovascularization In ophthalmology, choroidal neovascularization is the formation of a microvasculature within the innermost layer of the choroid of the eye. Neovascularization in the eye can cause a type of glaucoma (neovascular glaucoma) if the bulk of the new blood vessels blocks the constant outflow of aqueous humour from inside the eye. Neovascularization and therapy Ischemic heart disease Cardiovascular disease is the leading cause of death in the world. Ischemic heart disease develops when stenosis and occlusion of the coronary arteries develop, leading to reduced perfusion of the cardiac tissues. There is ongoing research exploring techniques that might be able to induce healthy neovascularization of ischemic cardiac tissues. See also Choroidal neovascularization Corneal neovascularization Revascularization Rubeosis iridis Inosculation References Angiogenesis Medical terminology
https://en.wikipedia.org/wiki/Noctilien
Noctilien is the night bus service in Paris and its agglomeration. It is managed by the Île-de-France Mobilités (formerly the STIF), the Île-de-France regional public transit authority, and operated by RATP (with 32 lines) and Transilien SNCF (with 20 lines). It replaced the previous Noctambus service on the night of 20/21 September 2005, providing for a larger number of lines than before and claiming to be better adapted to night-time transport needs. In place of the previous hub-and-spoke scheme where all buses terminated at and departed from the heart of Paris: Châtelet , Noctilien's new service includes buses operating between banlieues (communes surrounding Paris proper) as well as outbound lines running from Paris' four main railway stations: Gare de l'Est, Gare de Lyon, Gare Montparnasse and Gare Saint-Lazare. In addition, these four stations are also connected to each other by a regular night bus service. All in all, Noctilien operates 52 bus lines, from the end of the rail network and day bus service (around 00:30) until their resumption early in the morning (around 05:30), over the whole of Paris and the Île-de-France region. It is made up of: 2 circular lines: & running between Paris' major train stations ; 6 transversal lines: from to running between different suburbs of Paris via its center at Châtelet ; 21 radial lines (the other 2 digits lines, except N71) running between major Paris stations and more or less its near suburbs ; 2 radial long distance lines: & (subcontracted by the RATP) running between Paris and its remote suburbs ; 19 radial long distance lines (the other 3 digits lines, except N135) running between Paris and its remote suburbs (with often a partly motorway route) and managed by the Transilien SNCF ; 2 ring lines in the suburbs: by RATP & by Transilien SNCF. Like Transilien, the name "Noctilien" is formed by analogy with "Francilien" — the French demonym for residents of Île-de-France. Noctilien lines The time intervals indicated here may depend on the day of week -- service is reinforced on Friday and Saturday nights and on days that precede bank holidays. - Inner (clockwise) circle line from and to Gare de l'Est via Gare de Lyon → Gare Montparnasse → Gare Saint-Lazare . Every 14 minutes. - Outer (counterclockwise) circle line from and to Gare Montparnasse via Gare de Lyon → Gare de l'Est → Gare Saint-Lazare . Every 14 minutes. - Pont de Neuilly ↔ Château de Vincennes . Every 30 minutes. - Pont de Sèvres ↔ Romainville - Carnot. Every 45 minutes. - Mairie d'Issy ↔ Bobigny - Pablo Picasso . Every 20 minutes. - Mairie de Saint-Ouen ↔ La Croix de Berny . Every 30 minutes. - Asnières − Gennevilliers - Gabriel Péri ↔ Villejuif - Louis Aragon . Every 30 minutes. - Pont de Levallois ↔ Mairie de Montreuil . Every 22 minutes. - Châtelet ↔ Longjumeau - Hôpital. Every 30 minutes. - Châtelet ↔ Juvisy . Every 20 minutes. - Châtelet ↔ Chelles-Gournay . Every 30 minutes. - Châtelet ↔ Sartrouville . Every 30 minutes. - Gare de Lyon ↔ Paris Orly Airport (South Terminal). Every 30 minutes. - Gare de Lyon ↔ Boissy-Saint-Léger . Every 30 minutes. - Gare de Lyon ↔ Villiers-sur-Marne (via Vincennes & Nogent-sur-Marne). Every 30 minutes. - Gare de Lyon ↔ Torcy . Every 30 minutes. - Gare de Lyon ↔ Villiers-sur-Marne (via Maisons-Alfort & Saint-Maur-des-Fossés). Every 30 minutes. - Gare de l'Est ↔ Villeparisis – Mitry-le-Neuf . Every 30 minutes. - Gare de l'Est ↔ Aulnay-sous-Bois - Garonor . Every 20 minutes. - Gare de l'Est ↔ Gare de Sarcelles-Saint-Brice . Every 20 minutes. 
- Gare de l'Est ↔ Garges-Sarcelles . Every 20 minutes. - Gare de l'Est ↔ Montfermeil - Hôpital. Every 30 minutes. - Gare Saint-Lazare ↔ Gare d'Enghien . Every 30 minutes. - Gare Saint-Lazare ↔ Gare de Cormeilles-en-Parisis . Every 30 minutes. - Gare Saint-Lazare ↔ Nanterre - Anatole France. Every 30 minutes. - Gare Montparnasse ↔ Clamart - Georges Pompidou. Every 30 minutes. - Gare Montparnasse ↔ Rungis International Market. Every 35 minutes. - Gare Montparnasse ↔ École Polytechnique - Vauve. Every 30 minutes. - Gare Montparnasse ↔ Gare de Chaville-Rive-Droite . Every 30 minutes. - Rungis International Market ↔ Val de Fontenay . Every 26 minutes. - Châtelet ↔ Saint-Rémy-lès-Chevreuse . Every 60 minutes. - Gare de Lyon ↔ Marne-la-Vallée - Chessy (Disneyland Paris). Every 60-80 minutes. - Gare de Lyon ↔ Brétigny . Every 60 minutes. - Gare de Lyon ↔ Melun . Every 60 minutes. - Gare de Lyon ↔ Juvisy . Every 60 minutes. - Gare de Lyon ↔ Combs-la-Ville - Quincy . Every 60 minutes. - Boissy-Saint-Léger ↔ Corbeil-Essonnes . Every 60-65 minutes. N137 - Gare de Lyon ↔ Fontainebleau-Avon (Montereau-Fault-Yonne On week-ends) N138 - Gare de Lyon ↔ Coulommiers . Every 60 minutes. - Gare de l'Est ↔ Paris Charles de Gaulle (CDG) Airport (All Terminals). Every 60 minutes. - Gare de l'Est ↔ Gare de Meaux . Every 60 minutes. - Gare de l'Est ↔ Tournan . Every 60 minutes. - Gare de l'Est ↔ Paris Charles de Gaulle (CDG) Airport (Semi-direct link to all Terminals). Every 30 minutes. - Gare de Lyon ↔ Corbeil-Essonnes . Every 60 minutes. - Gare Montparnasse ↔ Gare de La Verrière (Gare de Rambouillet On week-end). Every 60 minutes. N146 - Gare de l'Est ↔ Gare de Survilliers-Fosses . Every 60 minutes. - Gare Saint-Lazare ↔ Cergy Le Haut . Every 30 minutes. - Gare Saint-Lazare ↔ Gare de Mantes-la-Jolie . Every 60 minutes. - Gare Saint-Lazare ↔ Cergy Le Haut . Every 60-70 minutes. - Gare Saint-Lazare ↔ Saint Germain-en-Laye . Every 60 minutes. - Gare Saint-Lazare ↔ Montigny – Beauchamp . Every 70 minutes. N155 - Gare Saint-Lazare ↔ Gare de Poissy . Every 60 minutes. Line numbering scheme Each bus line number starts with for Noctilien followed by a two or three digit number: 2 digits starting with "N0" for the two "circular" routes 2 digits starting with "N1" for the "transversal" routes 2 digits starting with "N2" for buses running from Châtelet 2 digits starting with "N3" for buses running from Gare de Lyon 2 digits starting with "N4" for buses running from Gare de l'Est 2 digits starting with "N5" for buses running from Gare Saint-Lazare 2 digits starting with "N6" for buses running from Gare Montparnasse 3 digits starting with "N1" for the long distance buses running to the outer suburbs. References External links Official website (old) Routes, schedules (official website) Transport in Île-de-France RATP Group Transport in Paris Transport in Hauts-de-Seine Night bus service Bus transport in France
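As an illustration only, the line numbering scheme described above can be expressed as a small lookup. The prefix-to-category mapping below is taken directly from the scheme as listed; the function name, the handling of unknown inputs, and the omission of the noted exceptions (such as N71 and N135) are arbitrary choices of this sketch.

#include <stdio.h>
#include <string.h>

/* Classify a Noctilien line designation ("N01", "N24", "N140", ...)
   according to the numbering scheme described above. Returns a short
   description, or "unknown" if the designation does not match. */
static const char *noctilien_category(const char *line)
{
    size_t len = strlen(line);

    if (line[0] != 'N' || len < 3 || len > 4)
        return "unknown";
    if (len == 4)                        /* three digits: N1xx */
        return line[1] == '1' ? "long-distance route to the outer suburbs"
                              : "unknown";
    switch (line[1]) {                   /* two digits: N0x to N6x */
    case '0': return "circular route between the main stations";
    case '1': return "transversal route across Paris";
    case '2': return "route from Châtelet";
    case '3': return "route from Gare de Lyon";
    case '4': return "route from Gare de l'Est";
    case '5': return "route from Gare Saint-Lazare";
    case '6': return "route from Gare Montparnasse";
    default:  return "unknown";
    }
}

int main(void)
{
    const char *samples[] = { "N01", "N15", "N22", "N43", "N140" };
    for (int i = 0; i < 5; i++)
        printf("%s: %s\n", samples[i], noctilien_category(samples[i]));
    return 0;
}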
https://en.wikipedia.org/wiki/Geosophy
Geosophy is a concept introduced to geography by J.K. Wright in 1947. The word is a compound of ‘geo’ (Greek for earth) and ‘sophia’ (Greek for wisdom). Wright defined it thus: Geosophy ... is the study of geographical knowledge from any or all points of view. It is to geography what historiography is to history; it deals with the nature and expression of geographical knowledge both past and present—with what Whittlesey has called ‘man’s sense of [terrestrial] space’. Thus it extends far beyond the core area of scientific geographical knowledge or of geographical knowledge as otherwise systematized by geographers. Taking into account the whole peripheral realm, it covers the geographical ideas, both true and false, of all manner of people—not only geographers, but farmers and fishermen, business executives and poets, novelists and painters, Bedouins and Hottentots—and for this reason it necessarily has to do in large degree with subjective conceptions. (Wright 1947) This has been summarised as: the study of the world as people conceive of and imagine it (McGreevy 1987) Belief systems as they relate to human interaction with the Earth's environments. (attributed to Professor Innes Park 1995) Superstition Geosophy is sometimes used as a synonym for the study of earth mysteries. References Keighren, Innes M. “Geosophy, imagination, and terrae incognitae: exploring the intellectual history of John Kirtland Wright.” Journal of Historical Geography 31, no. 3 (2005): 546–62. McGreevy, P. 1987 Imagining the future at Niagara Falls. Annals of the Association of American Geographers 77 (1):48–62 Wright, J.K. 1947. Terrae Incognitae: The Place of Imagination in Geography Annals of the Association of American Geographers 37: 1–15. Geography terminology
https://en.wikipedia.org/wiki/CeNTech
The Center for Nanotechnology (CeNTech) is one of the first centers dedicated to nanotechnology. It is located in Münster, North Rhine-Westphalia, Germany. It offers facilities for research, education, start-ups, and companies in nanotechnology. To this end, it works together with the University of Münster (WWU), the Max Planck Institute for Molecular Biomedicine, and a number of other research institutions. External links CeNTech Homepage Nanotechnology institutions Münster Research institutes in Germany University of Münster
https://en.wikipedia.org/wiki/Q-exponential
In combinatorial mathematics, a q-exponential is a q-analog of the exponential function, namely the eigenfunction of a q-derivative. There are many q-derivatives, for example, the classical q-derivative, the Askey-Wilson operator, etc. Therefore, unlike the classical exponentials, q-exponentials are not unique. For example, $e_q(z)$ is the q-exponential corresponding to the classical q-derivative, while the eigenfunctions of the Askey-Wilson operators are different functions. Definition The q-exponential $e_q(z)$ is defined as $e_q(z) = \sum_{n=0}^{\infty} \frac{z^n}{[n]_q!} = \sum_{n=0}^{\infty} \frac{(1-q)^n z^n}{(q;q)_n} = \frac{1}{((1-q)z;\,q)_{\infty}}$, where $[n]_q!$ is the q-factorial and $(q;q)_n$ is the q-Pochhammer symbol. That this is the q-analog of the exponential follows from the property $\left(\tfrac{d}{dz}\right)_q e_q(z) = e_q(z)$, where the derivative on the left is the q-derivative. The above is easily verified by considering the q-derivative of the monomial, $\left(\tfrac{d}{dz}\right)_q z^n = z^{n-1}\,\frac{1-q^n}{1-q} = [n]_q\, z^{n-1}$. Here, $[n]_q$ is the q-bracket. Other definitions of the q-exponential function can be found in the literature. Properties For real $q > 1$, the function $e_q(z)$ is an entire function of $z$. For $q < 1$, $e_q(z)$ is regular in the disk $|z| < 1/(1-q)$. Note the inverse relation $e_q(z)\, e_{1/q}(-z) = 1$. Addition Formula The analogue of $e^x e^y = e^{x+y}$ does not hold for real numbers $x$ and $y$. However, if these are operators satisfying the commutation relation $yx = qxy$, then $e_q(x)\, e_q(y) = e_q(x+y)$ holds true. Relations For $-1 < q < 1$, a function that is closely related is $E_q(z) = e_{1/q}(z) = \sum_{n=0}^{\infty} q^{\binom{n}{2}}\,\frac{z^n}{[n]_q!} = \left(-(1-q)z;\,q\right)_{\infty}$. It is a special case of the basic hypergeometric series, $E_q(z) = {}_{1}\phi_{1}\!\left(0;0;q,\,-(1-q)z\right)$. Clearly, the inverse relation above can be written as $e_q(z)\,E_q(-z) = 1$, and $\lim_{q\to 1} E_q(z) = e^{z} = \lim_{q\to 1} e_q(z)$. Relation with Dilogarithm $e_q(x)$ has the following infinite product representation: $e_q(x) = \left(\prod_{k=0}^{\infty}\bigl(1 - q^{k}(1-q)x\bigr)\right)^{-1}$. On the other hand, $-\log(1-x) = \sum_{n=1}^{\infty} \frac{x^n}{n}$ holds. When $|q| < 1$, $\log e_q(x) = -\sum_{k=0}^{\infty} \log\bigl(1 - q^{k}(1-q)x\bigr) = \sum_{k=0}^{\infty}\sum_{n=1}^{\infty} \frac{\bigl(q^{k}(1-q)x\bigr)^{n}}{n} = \sum_{n=1}^{\infty} \frac{\bigl((1-q)x\bigr)^{n}}{(1-q^{n})\,n}$. By taking the limit $q \to 1$, one obtains $\lim_{q\to 1}\,(1-q)\log e_q\!\left(\frac{x}{1-q}\right) = \mathrm{Li}_2(x)$, where $\mathrm{Li}_2(x)$ is the dilogarithm. In physics The q-exponential function is also known as the quantum dilogarithm. References Q-analogs Exponentials
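The defining property $\left(\tfrac{d}{dz}\right)_q e_q(z) = e_q(z)$ can be checked numerically. The following is a small self-contained sketch (illustrative only, with arbitrarily chosen sample values of $q$ and $z$ inside the region of convergence): it sums a truncated series for $e_q(z)$, applies the classical q-derivative $D_q f(z) = \frac{f(z) - f(qz)}{(1-q)z}$, and compares the result with $e_q(z)$ itself.

#include <stdio.h>

/* Truncated series for the q-exponential e_q(z) = sum z^n / [n]_q!,
   where [n]_q = (1 - q^n)/(1 - q) and [n]_q! = [1]_q [2]_q ... [n]_q. */
static double q_exp(double q, double z, int terms)
{
    double sum = 0.0, term = 1.0;      /* term = z^n / [n]_q!          */
    double qn = 1.0;                   /* running power q^n            */
    for (int n = 0; n < terms; n++) {
        sum += term;
        qn *= q;
        double bracket = (1.0 - qn) / (1.0 - q);   /* [n+1]_q          */
        term *= z / bracket;
    }
    return sum;
}

int main(void)
{
    double q = 0.5, z = 0.3;           /* |z| < 1/(1-q), so the series */
                                       /* converges                    */
    double e  = q_exp(q, z, 60);
    double eq = q_exp(q, q * z, 60);
    double dq = (e - eq) / ((1.0 - q) * z);   /* classical q-derivative */

    printf("e_q(z)     = %.12f\n", e);
    printf("D_q e_q(z) = %.12f\n", dq);      /* should agree with e_q(z) */
    return 0;
}

Up to truncation error the two printed values coincide, which is exactly the eigenfunction property stated in the definition above.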
https://en.wikipedia.org/wiki/Halazepam
Halazepam is a benzodiazepine derivative that was marketed under the brand names Paxipam in the United States, Alapryl in Spain, and Pacinone in Portugal. Medical uses Halazepam was used for the treatment of anxiety. Adverse effects Adverse effects include drowsiness, confusion, dizziness, and sedation. Gastrointestinal side effects have also been reported, including dry mouth and nausea. Pharmacokinetics and pharmacodynamics Pharmacokinetic and pharmacodynamic data for halazepam were listed in Current Psychotherapeutic Drugs, published on June 15, 1998. Regulatory Information Halazepam is classified as a Schedule IV controlled substance, with the corresponding code 2762, by the Drug Enforcement Administration (DEA). Commercial production Halazepam was invented by Schlesinger Walter in the U.S. It was marketed as an anti-anxiety agent in 1981. However, halazepam is no longer commercially available in the United States, as it was withdrawn by its manufacturer because of poor sales. See also Benzodiazepines Nordazepam Diazepam Chlordiazepoxide Quazepam, fletazepam, triflubazam (benzodiazepines with a trifluoromethyl group attached) References External links Inchem - Halazepam Withdrawn drugs Benzodiazepines Chloroarenes GABAA receptor positive allosteric modulators Lactams Trifluoromethyl compounds