content
stringlengths
27
653k
pred_label
stringclasses
2 values
pred_score_pos
float64
0.5
1
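The columns listed above (`content`, `pred_label` with 2 classes, `pred_score_pos` ranging 0.5 to 1) suggest the usual workflow of thresholding the classifier's positive-class score to keep only confidently labeled rows. A minimal sketch in plain Python, using invented example rows (the actual dataset loading is omitted, and the row values here are illustrative only):

```python
# Hypothetical sketch: filtering rows of a classifier-scored text dataset
# with the columns shown above (content, pred_label, pred_score_pos).
# The example rows below are invented for illustration.

rows = [
    {"content": "Blackberry-Blueberry Tart ...", "pred_label": "__label__1", "pred_score_pos": 0.999085},
    {"content": "Upton batted .237 ...",         "pred_label": "__label__1", "pred_score_pos": 0.995339},
    {"content": "Precious time ...",             "pred_label": "__label__1", "pred_score_pos": 0.853332},
]

def filter_confident(rows, threshold=0.95):
    """Keep only rows whose positive-class score meets the threshold."""
    return [r for r in rows if r["pred_score_pos"] >= threshold]

confident = filter_confident(rows)
print(len(confident))  # two of the three example rows score >= 0.95
```

With a real export, the same predicate would typically be applied through the loading library's own filter method rather than a list comprehension, but the thresholding logic is identical.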
Blackberry-Blueberry Tart W/Cornmeal Crust Total Time Prep 15 mins Cook 20 mins Ingredients Nutrition 1. Preheat the oven to 350 degrees Fahrenheit. 2. To make the crust: In a medium bowl, whisk together the flour, cornmeal, baking powder and salt. Set aside. In a separate medium bowl, mix the butter and 1/2 cup sugar together until smooth. Add the eggs and extracts and beat until combined. Gradually mix in the flour mixture until the dough comes together. Separate 1/3 of the dough and set aside. 3. Butter a 9-inch pan and press the remaining dough into the pan evenly. Roll and cut the reserved dough and set aside. Freeze all of the dough for approximately 30 minutes, until firm. 4. Meanwhile, to make the filling: In a medium bowl, mix together the vinegar, cornstarch, and sugar, then add the berries. Stir until the berries are fully coated and set aside until needed. 5. Assembling/making the pie: Pour the filling evenly into the chilled pie crust and top with the rolled-out dough. Sprinkle lightly with granulated sugar and turn the oven up to 375. Place the pie on the middle rack and bake at 375 for 25 minutes, until the crust is golden. Serve cold; refrigerate for up to a week.
__label__1
0.999085
The deal was announced Monday. Upton batted .237 with 18 homers, 38 doubles, 62 RBIs and 42 steals in 154 games with the AL East champions last season. He is the only player in the major leagues with at least 100 doubles and 100 steals over the past three seasons. The 27-year-old center fielder lost in arbitration last year, when he was awarded $3 million rather than his request for $3.3 million. He and Carl Crawford then became the first teammates in major league history to have 60-plus extra-base hits and 40-plus steals in the same season. Crawford left last month for a $142 million, seven-year contract with Boston.
__label__1
0.995339
Abilene City Council: Lease with Frontier, Texas! renewed, ACU land rezoned ABILENE, Texas - The Abilene City Council voted to renew its 10-year lease agreement with Frontier, Texas! This means the museum will continue to maintain the facility and the city will be responsible for the landscaping. Frontier, Texas! just celebrated its 10th anniversary. The tourist attraction has seen 370,000 visitors since it first opened its doors.  Also Thursday, the council voted to rezone land just north of Abilene Christian University, located at the northwest corner of Ambler Ave. and ACU Drive. The university wants to use this land to build overflow parking lots for its new athletic facilities. Members of a nearby neighborhood homeowners association expressed their concerns at Thursday's council meeting. They said they're worried about not having a buffer zone between their homes and the school. They said they're also concerned about potential street closures, as it would impede the way they get in and out of their neighborhood. ACU has requested to abandon ACU Drive north of Ambler Ave. and Margaret St., as well as the adjacent alleys. Mayor Norm Archibald said this proposal would allow ACU to combine areas currently separated by ACU Drive. That motion passed, but with some stipulations. The council proposed that a solid, 7-foot fence with no access be built to separate the neighborhood from the school property.
__label__1
0.976373
Precious time While we all have busy lives, we should better appreciate the moments we have with God It’s Time to ... What is the purpose of time? Does every moment count? Whether we are wasting time or using it wisely depends on our individual vocation Arriving Late To Mass An uncorrectable habit? Stickball and iceboxes Reminiscences of days gone by shows us that time passes, life goes on and winter will come
__label__1
0.853332
South Africa-Historical Development South Africa Index Before South Africa's vast mineral wealth was discovered in the late nineteenth century, there was a general belief that southern Africa was almost devoid of the riches that had drawn Europeans to the rest of the continent. South Africa had no known gold deposits such as those the Portuguese had sought in West Africa in the fifteenth century. The region did not attract many slave traders, in part because local populations were sparsely settled. Valuable crops such as palm oil, rubber, and cocoa, which were found elsewhere on the continent, were absent. Although the local economy was rich in some areas--based on mixed farming and herding--only ivory was traded to any extent. Most local products were not sought for large-scale consumption in Europe. Instead, Europeans first settled southern Africa to resupply their trading expeditions bound for other parts of the world (see Origins of Settlement, ch. 1). In 1652 the Dutch East India Company settled a few employees at a small fort at present-day Cape Town and ordered them to provide fresh food for the company's ships that rounded the Cape on their way to East Africa and Asia. This nucleus of European settlement quickly spread outward from the fort, first to trade with the local Khoikhoi hunting populations and later to seize their land for European farmers. Smallpox epidemics swept the area in the late eighteenth century, and Europeans who had come to rely on Khoikhoi labor enslaved many of the survivors of the epidemics. By the early nineteenth century, when the Cape settlement came under British rule, 26,000 Dutch farmers had settled the area from Stellenbosch to the Great Fish River (see fig. 7). In 1820 the British government sponsored 5,000 more settlers who also established large cattle ranches, relying on African labor. But the European immigrants, like earlier arrivals in the area, engaged primarily in subsistence farming and produced little for export. 
The discovery of diamonds in 1869 and of gold in 1886 revolutionized the economy. European investment flowed in; by the end of the nineteenth century, it was equivalent to all European investment in the rest of Africa. International banks and private lenders increased cash and credit available to local farmers, miners, and prospectors, and they, in turn, placed growing demands for land and labor on the local African populations. The Europeans resorted to violence to defend their economic interests, sometimes clashing with those who refused to relinquish their freedom or their land. Eventually, as the best land became scarce, groups of settlers clashed with one another, and rival Dutch and British populations fought for control over the land (see Industrialization and Imperialism, 1870-1910, ch. 1). South Africa was drawn into the international economy through its exports, primarily diamonds and gold, and through its own increasing demand for a variety of agricultural imports. The cycle of economic growth was stimulated by the continual expansion of the mining industry, and with newfound wealth, consumer demand fueled higher levels of trade. In the first half of the twentieth century, government economic policies were designed to meet local consumer demand and to reduce the nation's reliance on its mining sector by providing incentives for farming and for establishing manufacturing enterprises. But the government also saw its role as helping to defend white farmers and businessmen from African competition. In 1913 the Natives Land Act reserved most of the land for white ownership, forcing many black farmers to work as wage laborers on land they had previously owned. When the act was amended in 1936, black land ownership was restricted to 13 percent of the country, much of it heavily eroded. White farmers received other privileges, such as loans from a government Land Bank (created in 1912), labor law protection, and crop subsidies. 
Marketing boards, which were established to stabilize production of many crops, paid more for produce from white farmers than for produce from black farmers. All farm activity suffered from the cyclical droughts that swept the subcontinent, but white farmers received greater government protection against economic losses. During the 1920s, to encourage the fledgling manufacturing industries, the government established state corporations to provide inexpensive electricity and steel for industrial use, and it imposed import tariffs to protect local manufacturers. Again black entrepreneurs were discouraged, and new laws limited the rights of black workers, creating a large pool of low-cost industrial labor. By the end of the 1930s, the growing number of state-owned enterprises dominated the manufacturing sector, and black entrepreneurs continued to be pressured to remain outside the formal economy. Manufacturing experienced new growth during and after World War II. Many of the conditions necessary for economic expansion had been present before the war--cities were growing, agriculture was being consolidated into large farms with greater emphasis on commercial production, and mine owners and shareholders had begun to diversify their investments into other sectors. As the war ended, local consumer demand rose to new highs, and with strong government support--and international competitors at bay--local agriculture and manufacturing began to expand. The government increased its role in the economy, especially in manufacturing, during the 1950s and the 1960s. It also initiated large-scale programs to promote the commercial cultivation of corn and wheat. Government investments through the state-owned Industrial Development Corporation (IDC) helped to establish local textile and pulp and paper industries, as well as state corporations to produce fertilizers, chemicals, oil, and armaments. 
Both manufacturing and agricultural production expanded rapidly, and by 1970 manufacturing output exceeded that of mining. Despite the appearance of self-sustaining economic growth during the postwar period, the country's economy continued to be susceptible to its historical limitations: recurrent drought, overreliance on gold exports, and the costs and consequences of the use of disenfranchised labor. While commercial agriculture developed into an important source of export revenue, production plummeted during two major droughts, from 1960 to 1966 and from 1981 to 1985. Gold continued to be the most important export and revenue earner; yet, as the price of gold fluctuated, especially during the 1980s, South Africa's exchange rate and ability to import goods suffered. Manufacturing, in particular, was seriously affected by downswings in the price of gold, in part because it relied on imported machinery and capital. Some capital-intensive industries were able to expand, but only with massive foreign loans. As a result, many industries were insulated from the rising labor militancy, especially among black workers, which sparked disputes and slowed productivity in the late 1980s. As black labor increasingly voiced its frustrations, and foreign banks cut short their loans because of mounting instability, even capital-intensive industries felt the impact of apartheid on profits. The economy was in recession from March 1989 through most of 1993, largely in response to worldwide economic conditions and the long-term effects of apartheid. It registered only negligible, or negative, growth in most quarters. High inflation had become chronic, driving up costs in all sectors. Living standards of the majority of black citizens either fell or remained dangerously low, while those of many whites also began to decline. Economic growth continued to depend on decent world prices for gold and on the availability of foreign loans. 
Even as some sectors of the economy began to recover in late 1993, intense violence and political uncertainty in the face of reform slowed overall growth through 1994. Data as of May 1996 Copyright mongabay 2000-2013
__label__1
0.943395
Web Results Pop music Pop music is a genre of popular music that originated in its modern form in the Western world ... a matter of enterprise not art", is "designed to appeal to everyone" and "doesn'... History of pop music - Citelighter Genres Pop Music - Audials.com GENRE POP MUSIC - all you have to know about pop music. ... Pop music is an ample and imprecise category of modern music not defined by artistic .... 2000) and British artists did the same with "Feel" (Robbie Williams, 2003) or "You're ... What is Pop Music? | English Club Learn the difference between pop music and popular music. Many people think they are the same but really they're not. With vocabulary and example sentences  ... Pop Music Genres List The most comprehensive list of pop music genres available on the Internet ... Crosby sold millions of records, as did Frank Sinatra (arguably the first modern pop star, ... Several younger artists have come and gone, and new styles have briefly ... POPULAR MUSIC - Fact Monster Popular or pop music is largely vocal and appeals to a large, mainly young audience. It was originally ... WHEN DID POPULAR MUSIC BEGIN? WHAT STYLES ... Pop Music Defined from the 1950s to Today - Top 40 Pop - About.com This would include an extremely wide range of music from vaudeville and minstrel shows to heavy metal. Pop music, on the other hand, has primarily come into ... Pop music - Music Watch Here is the overview page for the pop music genre. It includes a brief history and some notable pop musicians. theory - What defines pop music as a genre? Which patterns and ... May 14, 2014 ... Which are those musical characteristics that separate Pop from other genres? For clarity .... So where did the styles of pop music originate? Ch-ch-changes: The evolving elements of pop music - Graphics May 6, 2015 ... 
Pop music is often considered a reflection of changing culture in the United States — and between 1960 and 2010, songs featured in the ... Pop music - New World Encyclopedia Dec 28, 2012 ... Pop music is generally understood to be commercially recorded .... widespread popularity in white communities to some extent did not take off ... The History of Pop Music - Shine Music School Click here to download the Entire History of Pop Music in pdf ... They were largely hacks, but did produce some beautiful material. ... Several younger artists have come and gone, and new styles have briefly emerged, but nothing appears to ... The history of pop music - SlideShare Jul 7, 2012 ... Pop music, which accounts for the majority of the music Origins Of .... of commercial music enjoyed by the people,can be seen to originate in the ...
__label__1
0.999984
The Baltimore Sun Hoag creates two more endowed chair positions The first endowed chairs in cardiovascular surgery and gastrointestinal cancer at Hoag Hospital were named earlier this month, according to a hospital news release. Cardiovascular surgeon Dr. Aidan Raney, named the James & Pamela Muzzy endowed chairman in cardiovascular surgery, and digestive disease expert Dr. John Lipham, named the James & Pamela Muzzy endowed chairman in gastrointestinal cancer, were recognized Dec. 7 at the inaugural Hoag Hospital Foundation's Endowed Chair Investiture Ceremony. The creation of the positions is possible due to a donation by PIMCO founding partner Jim Muzzy and his wife, Pam. "The funds from the endowed chairs will allow Hoag to stay on the leading edge of improving technology and programs for our patients in cardiovascular surgery and gastrointestinal cancer," Dr. Richard Afable, Hoag president and chief executive, said in a prepared statement. "The concept of endowed chairs puts Hoag on the same playing field as leading academic institutions and we look forward to developing these world-class programs in order to provide the highest quality care to the community we serve." Hoag has four other endowed chair positions revolving around cancer, cardiac care, memory loss and cognitive impairment and neurosciences. — Sarah Peters Twitter: @speters01
__label__1
0.95676
Feats‎ > ‎Metamagic Feats‎ > ‎ Dazing Spell (Metamagic) You can daze creatures with the power of your spells. Benefit: You can modify a spell to daze a creature damaged by the spell. When a creature takes damage from this spell, it becomes dazed for a number of rounds equal to the original level of the spell. If the spell allows a saving throw, a successful save negates the daze effect. If the spell does not allow a save, the target can make a Will save to negate the daze effect. If the spell effect also causes the creature to become dazed, the duration of this metamagic effect is added to the duration of the spell. Level Increase: +3 (a dazing spell uses up a spell slot three levels higher than the spell’s actual level). Spells that do not inflict damage do not benefit from this feat.
__label__1
0.565867
It seemed like such a simple idea - laser etch a cool design on creme brulee for Valentine's day dessert.  First, we needed to bake the custard at home, then off to TechShop to use the lasers. You'll need: 6 large egg yolks (I only had medium eggs and the internet told me 6 yolks is 105 grams, so I used 7 medium egg yolks) 1 quart of cream 1 cup of sugar a vanilla bean a pinch of salt a torch and a laser cutter Let's get started Step 1: Make the custard Gather the ingredients.  Preheat your oven to 325°F.  Split the vanilla bean lengthwise, put it along with the cream, half a cup of sugar, and a pinch of salt in a saucepan and bring it up to a simmer.  Let it steep for 15 minutes, then temper in the egg yolks (whisk the yolks, adding a little of the hot liquid at a time, so you slowly bring the yolks up to temperature without ending up with scrambled eggs).  Pour the custard into ramekins, then place the ramekins in a baking dish and add enough water to the baking dish until it's the same depth as your custard.  Bake for about 25 minutes, until the custard is just set.  Use a metal spatula to retrieve the ramekins and put them on a wire rack to cool.  Wrap them in plastic wrap and stick them in the fridge overnight.
__label__1
0.968096
Highest Quality Supplements Since 1980 Life Extension Magazine Healing Cream September 2010 The effects of topical vitamin K on bruising after laser treatment. BACKGROUND: Pulsed dye laser treatment and other cosmetic procedures result in significant bruising. Claims have been made regarding the efficacy of topical vitamin K in both preventing and speeding the clearing of bruising; however, well-controlled studies are lacking. OBJECTIVE: The purpose of this study is to evaluate the effects of topical vitamin K versus placebo in the prevention and clearing of laser-induced purpura. METHODS: A total of 22 patients were enrolled in this double-blind randomized placebo-controlled study. The patients were divided into pretreatment and posttreatment groups; the 11 patients in the former group applied vitamin K cream to half of their face and vehicle alone to the other half of their face twice daily for 2 weeks before laser treatment. The latter group followed the same procedure for 2 weeks after laser treatment. On day 0, all subjects underwent laser treatment for facial telangiectases using a 585-nm pulsed dye laser. Bruising was rated by both the patient and the physician by means of a visual analogue scale on days 0, 3, 7, 10, 14, and 17. RESULTS: The side of the face treated with topical vitamin K before laser therapy showed no significant difference in bruising as compared to placebo. However, the side of the face treated with vitamin K cream after laser treatment had significantly lower scores of bruising severity when compared with the side treated with placebo. CONCLUSION: Although pretreatment with vitamin K did not prevent bruising after laser treatment, use of vitamin K cream after laser treatment did reduce the severity of bruising, particularly in the initial days of application. Helenalin, an anti-inflammatory sesquiterpene lactone from Arnica, selectively inhibits transcription factor NF-kappaB. 
Alcoholic extracts prepared from Arnicae flos, the collective name for flowerheads from Arnica montana and A. chamissonis ssp. foliosa, are used therapeutically as anti-inflammatory remedies. The active ingredients mediating the pharmacological effect are mainly sesquiterpene lactones, such as helenalin, 11alpha,13-dihydrohelenalin, chamissonolid and their ester derivatives. While these compounds affect various cellular processes, current data do not fully explain how sesquiterpene lactones exert their anti-inflammatory effect. We show here that helenalin, and, to a much lesser degree, 11alpha,13-dihydrohelenalin and chamissonolid, inhibit activation of transcription factor NF-kappaB. This difference in efficacy, which correlates with the compounds’ anti-inflammatory potency in vivo, may be explained by differences in structure and conformation. NF-kappaB, which resides in an inactive, cytoplasmic complex in unstimulated cells, is activated by phosphorylation and degradation of its inhibitory subunit, IkappaB. Helenalin inhibits NF-kappaB activation in response to four different stimuli in T-cells, B-cells and epithelial cells and abrogates kappaB-driven gene expression. This inhibition is selective, as the activity of four other transcription factors, Oct-1, TBP, Sp1 and STAT 5 was not affected. We show that inhibition is not due to a direct modification of the active NF-kappaB heterodimer. Rather, helenalin modifies the NF-kappaB/IkappaB complex, preventing the release of IkappaB. These data suggest a molecular mechanism for the anti-inflammatory effect of sesquiterpene lactones, which differs from that of other nonsteroidal anti-inflammatory drugs (NSAIDs), indomethacin and acetyl salicylic acid. Biol Chem. 1997 Sep;378(9):951-61 Effect of Thymol on the spontaneous contractile activity of the smooth muscles. 
Effects of Thymol on the spontaneous contractile activity (SCA) have been found in in vitro experiments with circular smooth-muscle strips (SMAs) from guinea pig stomach and vena portae. Thymol was found to possess an agonistic effect on the alpha(1)-, alpha(2)- and beta-adrenergic receptors. Its spasmolytic effect is registered at doses higher than 10(-6)M. Thymol in a dose of 10(-4)M inhibits 100% the SCA of the SMAs and reduces the excitatory effect of 10(-5)M ACH to 35%. It is assumed that Thymol has an analgesic effect through its action on the alpha(2)-adrenergic receptors of the nerve cells. By influencing the beta-adrenergic receptors in the adipose cells, it is possible to induce increased synthesis of fatty acids and glycerol, which is a prerequisite for increased heat release. Phytomedicine. 2007 Jan;14(1):65-9 Elastase, a serine proteinase released by activated human neutrophils, can degrade a wide variety of biomacromolecules including elastin, and is considered a marker of inflammatory diseases. As the logical strategy to protect tissue is to inhibit excessive elastase activity, experimental and clinical researches have concentrated on trying to find efficient elastase inhibitors. As thymol, one of the major components of thyme oil with a phenolic structure, has been credited with a series of pharmacological properties, that include antimicrobial and antioxidant effects, the aim of this study was to explore whether it can also interfere with the release of elastase by human neutrophils stimulated with the synthetic chemotactic peptide N-formyl-methionyl-leucyl-phenylalanine (fMLP). After the neutrophils were incubated with increasing amounts of thymol (2.5, 5, 10, 20 microg/ml), elastase release was initiated by fMLP and measured using MeO-Suc-Ala-Ala-Pro-Val-MCA. The results showed that thymol inhibited fMLP-induced elastase release in a concentration-dependent manner, with the effects of 10 and 20 microg/ml being statistically significant. 
The behavior of cytosolic calcium mobilization revealed by fura-2 closely resembled that of elastase, thus suggesting that they may be related. The hydrophobic nature of thymol means that it can approach ion channel proteins through the lipid phase of the membrane, alter the local environment of calcium channels and thus inhibit capacitative calcium entry. In brief, thymol inactivates calcium channels machinery, thus triggering a corresponding reduction in elastase. The antibacterial and antimycotic activity of thymol is already well known, but our findings that it inhibits elastase extend our knowledge of the anti-inflammatory activity of this interesting molecule that is already credited with antioxidant activity. These two latter characteristics make thymol a molecule that can have helpful effects in controlling the inflammatory processes present in many infections. Pharmacology. 2006;77(3):130-6 Thymol, a constituent of thyme essential oil, is a positive allosteric modulator of human GABA(A) receptors and a homo-oligomeric GABA receptor from Drosophila melanogaster. The GABA-modulating and GABA-mimetic activities of the monoterpenoid thymol were explored on human GABAA and Drosophila melanogaster homomeric RDLac GABA receptors expressed in Xenopus laevis oocytes, voltage-clamped at -60 mV. The site of action of thymol was also investigated. Thymol, 1-100 microm, resulted in a dose-dependent potentiation of the EC20 GABA response in oocytes injected with either alpha1beta3gamma2s GABAA subunit cDNAs or the RDLac subunit RNA. At 100 microm thymol, current amplitudes in response to GABA were 416+/-72 and 715+/-85% of controls, respectively. On both receptors, thymol, 100 microm, elicited small currents in the absence of GABA. The EC50 for GABA at alpha1beta3gamma2s GABAA receptors was reduced by 50 microm thymol from 15+/-3 to 4+/-1 microm, and the Hill slope changed from 1.35+/-0.14 to 1.04+/-0.16; there was little effect on the maximum GABA response. 
Thymol (1-100 microm) potentiation of responses to EC20 GABA for alpha1beta1gamma2s, alpha6beta3gamma2s and alpha1beta3gamma2s human GABAA receptors was almost identical, arguing against actions at benzodiazepine or loreclezole sites. Neither flumazenil, 3-hydroxymethyl-beta-carboline (3-HMC), nor 5alpha-pregnane-3alpha, 20alpha-diol (5alpha-pregnanediol) affected thymol potentiation of the GABA response at alpha1beta3gamma2s receptors, providing evidence against actions at the benzodiazepine/beta-carboline or steroid sites. Thymol stimulated the agonist actions of pentobarbital and propofol on alpha1beta3gamma2s receptors, consistent with a mode of action distinct from that of either compound. These data suggest that thymol potentiates GABAA receptors through a previously unidentified binding site. Br J Pharmacol. 2003 Dec;140(8):1363-72 An essential element of the innate immune response to injury is the capacity to recognize microbial invasion and stimulate production of antimicrobial peptides. We investigated how this process is controlled in the epidermis. Keratinocytes surrounding a wound increased expression of the genes coding for the microbial pattern recognition receptors CD14 and TLR2, complementing an increase in cathelicidin antimicrobial peptide expression. These genes were induced by 1,25(OH)2 vitamin D3 (1,25D3; its active form), suggesting a role for vitamin D3 in this process. How 1,25D3 could participate in the injury response was explained by findings that the levels of CYP27B1, which converts 25OH vitamin D3 (25D3) to active 1,25D3, were increased in wounds and induced in keratinocytes in response to TGF-beta1. Blocking the vitamin D receptor, inhibiting CYP27B1, or limiting 25D3 availability prevented TGF-beta1 from inducing cathelicidin, CD14, or TLR2 in human keratinocytes, while CYP27B1-deficient mice failed to increase CD14 expression following wounding. 
The functional consequence of these observations was confirmed by demonstrating that 1,25D3 enabled keratinocytes to recognize microbial components through TLR2 and respond by cathelicidin production. Thus, we demonstrate what we believe to be a previously unexpected role for vitamin D3 in innate immunity, enabling keratinocytes to recognize and respond to microbes and to protect wounds against infection. Vitamin D and the skin. Along with other organs like prostate, bones and kidney, skin is capable of vitamin D synthesis. Primarily keratinocytes but also macrophages and fibroblasts synthesize active vitamin D from cholesterol precursors by photochemical activation. The synthesized vitamin D functions by binding to nuclear vitamin D receptors. Vitamin D deficiency usually manifests as rickets in childhood, although, thanks to today’s recommended vitamin D prophylaxis, it is now relevant mainly in diseases characterized by malabsorption. Excessive doses of vitamin D are the usual cause of increased levels. The most common therapeutic target of vitamin D is psoriasis. Here, topical preparations are usually employed; their anti-proliferative and cell differentiation-promoting action is mediated via binding to cutaneous vitamin D receptors. Vitamin D as an inducer of cathelicidin antimicrobial peptide expression: Past, present and future. Vitamin D was discovered as the preventive agent of nutritional rickets, a defect in bone development due to inadequate uptake of dietary calcium. However, a variety of studies over the last several years has revealed that vitamin D controls much more than calcium homeostasis. For example, recent research has underlined the key role of vitamin D signaling in regulation of innate immunity in humans. Vitamin D is converted to 25-hydroxyvitamin D (25D), its major circulating form, and then to hormonal 1,25-dihydroxyvitamin D (1,25D) in target cells. 
We now know that when cells of the immune system such as macrophages sense a bacterial infection, they acquire the capacity to convert circulating 25D into 1,25D. Moreover, 1,25D thus produced is a direct inducer of expression of genes encoding antimicrobial peptides, in particular cathelicidin antimicrobial peptide (CAMP). Antimicrobial peptides such as CAMP are vanguards of innate immune responses to bacterial infection and can act as signaling molecules to regulate immune system function. This review covers what we have learned in the past few years about the expression and function of CAMP under physiological and pathophysiological conditions, and addresses the potential future applications of vitamin D analogues to therapeutic regulation of CAMP expression. J Steroid Biochem Mol Biol. 2010 Mar 17
__label__1
0.720528
March 12, 2012 Beliefs About Genes, God, Can Change Health Communication Strategies Beliefs about nature and nurture can affect how patients and their families respond to news about their diagnosis, according to Penn State health communication researchers. Understanding how people might respond to a health problem, especially when the recommendations for adapting to the condition may seem contradictory to their beliefs, is crucial to planning communication strategies, said Roxanne Parrott, Distinguished Professor of Communication Arts and Sciences and Health Policy and Administration. People affected with known genetic or chromosomal disorders, such as Down syndrome, Marfan syndrome and neurofibromatosis, tend to communicate differently about their illness based on their uncertainty of genetics' role in health. "When a person is uncertain about an illness, it can also predict how they manage that uncertainty and how they desire to talk or communicate about the condition," Parrott said. When patients and family members are willing to talk about a diagnosis, they have a better chance of connecting with sources of help and support. "Emotions experienced about a condition impact patient and family members' communication," said Parrott. "How fearful, angry, or sad they feel is part of the uncertainty about the condition and affects how patients and their families seek to navigate the situation." "What we can do is design programs for genetic counselors that suggest different scripts for communicating based on understanding how people might respond to a diagnosis," Parrott said. Parrott, who worked with Kathryn F. Peters, a certified genetic counselor and instructor of biobehavioral health, and Tara Traeder, graduate student in communication arts and sciences, said people clustered into four groups based on how they understand the role genetics plays in health. 
Uncertain relativists are not sure what role personal behaviors, religious faith and social networks play in genetics and health. Personal control relativists tend to be more certain about how personal behavior affects genetics. Genetic determinists believe that only genes determine their health, while integrated relativists believe that behavior, faith and support can affect genetic expression. While integrated relativists seem to have the most balanced approach to understanding genetics and health, the researchers said that they also had the highest levels of uncertainty about living with the condition of the four groups. People who were more likely to believe genetics are the dominant predictor of their health wanted to communicate more about their condition than those who believe they have more personal control in health. The researchers, who reported their findings in the online version of the journal Health Communication, analyzed data from a survey of 541 family members or patients diagnosed with neurofibromatosis, Marfan syndrome or Down syndrome. Participants were asked questions about the status of their diagnosis, beliefs on genetics, personal behavior, religious and social life, illness uncertainty and how they manage their uncertainty about living with the diagnosis or living with a family member who has had the diagnosis. To determine how the participants understood the connection between genetics, and personal behavior in health, the researchers asked participants several questions including whether alcohol can cause changes in the genes of adults. They were also asked whether participants believed drug use could cause genetic changes. The researchers also asked participants how a higher power or attending a house of worship could affect genes to assess the religious beliefs of the participant. The survey explored participants' negative feelings and links to how they preferred to communicate about the condition. 
Parrott said that communication strategies around the four frameworks linked to beliefs about the role of genetics for health can help simplify health communication strategies and prepare counselors for patient responses. "A significant number of people are affected by these conditions and it's important to remember that communicating with patients and family is not always a simple thing," said Parrott. "There are times they need to be hopeful and times that they need to be mad."
Friday, March 21, 2014 THE SECOND COMING, The Archangel Gabriel Proclaims a New Age: Cause and Effect By Joel D. Anastasi with Robert Baker Robert Baker (1948-2013) Joel: I know this is an evolutionary process and it’s not going to happen right away, but that seems to be such a momentous shift. Our learning up to now has been through cause and effect, which, to me, essentially means that we take an action and there is a consequence. We assess it, make a judgment about it and base our future actions on the consequences. So we learn through trial and error, cause and effect. What I understand you to be saying is that we are going to be shifting into an ability to just operate out of knowing by being in consonant energy with others. Gabriel: Cause and effect, first of all, is created through unconscious choice and consequence. In other words, it is created through a survival process of unconsciousness or ignorance. You take an action. You are not aware of what the consequence will be. The consequence results. And because you were unconscious, you do not respond or take responsibility for the consequence; rather, you react to it in a survival manner. Therefore, you are not able to respond to it. You’re not able to see the value of it, take responsibility for it, and then learn from your actions so that the next time you are more conscious and you make conscious choices out of a place of knowing. Cause and effect is learning through trial and error because you’re not conscious enough to learn through conscious choice. However, through resonant causation I know this choice I am making is going to produce this result or that this is going to happen if I make this choice. It’s going to affect me this way and others that way. And I take responsibility for that. Or, if it’s not something for which I’m willing to take responsibility, I reevaluate my choice. When you operate from resonance, you operate at a certain vibration and frequency of reality. 
You are maintaining and sustaining a certain reality through the particular vibration. You resonate with the reality. Therefore, the choices you make contribute to that reality. You’re not doing it haphazardly. You’re not doing it through trial and error. Resonant causation means I resonate at a certain vibration and frequency of consciousness, and there is a certain knowing connected to that. I operate through a radiatory resonance of energy. I resonate with the reality I’m creating and with the choices I make. And I am making choices that have value, meaning, and purpose that are for the highest good for myself and all involved. I consider all parts of reality as to the effect or consequences that my choices may produce. That doesn’t mean I always know exactly what’s going to happen, but I’m in the ballpark. And I know my choices are not going to cause unconscious harm to someone. Joel: It sounds to me as though you’re talking about moving from unconsciousness to consciousness. Isn’t that what therapy work with the Robert Bakers of the world is designed to do? They don’t resonate with it.
Understanding Material Loss Across Time and Space
February 17, 2017 - February 18, 2017
School of History and Cultures, University of Birmingham, United Kingdom

Understanding Material Loss intends to examine the usefulness of ‘loss’ as an analytical framework across different disciplines and subfields, but principally within historical studies. Loss and absence are slowly being recognized as significant factors in historical processes, particularly in relation to the material world. Archaeologists, anthropologists, philosophers, literary scholars, sociologists and historians have increasingly come to understand the material world as an active and shaping force. Nevertheless, while significant, such studies have consistently privileged material presence as the basis for understanding how and why the material world has played an increasingly important role in the lives of humans. In contrast, Understanding Material Loss suggests that instances of absence, as much as presence, provide important means of understanding how and why the material world has shaped human life and historical processes. Speculative and exploratory in nature, Understanding Material Loss asserts that in a period marked by ecological destruction, but also economic austerity, large-scale migration and increasing resource scarcity, it is important that historians work to better understand the ways in which humans have responded to material loss in the past and how such responses have shaped change. Understanding Material Loss asks: how have humans historically responded to material loss and how has this shaped historical processes? The conference will bring together a range of scholars in an effort more to explore and frame a problem than to provide definitive answers.

Confirmed keynote speakers include:
·      Professor Pamela Smith, History, Columbia
·      Dr. Simon Werrett, Science and Technology Studies, UCL
·      Professor Maya Jasanoff, History, Harvard
·      Professor Jonathan Lamb, English, Vanderbilt
·      Professor Anthony Bale, English and Humanities, Birkbeck
·      Dr. Astrid Swenson, Politics and History, Brunel

Understanding Material Loss seeks to uncover the multiple practices and institutions that emerged in response to different forms of material loss in the past and asks: how has loss shaped (and been shaped by) processes of acquisition, possession, stability, abundance and permanence? By doing so it seeks to gauge the extent to which ‘loss’ can be used as an organizing framework of study across different disciplines and subfields. Understanding Material Loss seeks papers from across a variety of time periods and geographies. Although open and speculative in nature, this conference will focus on three broad topics within the wider rubric of loss, in order to facilitate meaningful conversations and exchanges.

Using Materials
·      How has the ‘loss’ of particular materials affected scientific practice, manufacturing, architectural design or development in the past?
·      How have humans responded to the partial loss or decay of materials?
·      How have ‘lost’ skills or knowledge affected the use of materials?
·      How have humans re-appropriated or recycled seemingly damaged or obsolete materials?

Possessing Objects
·      How have humans sought to maintain and mark the ownership of objects?
·      How has the loss of possessions and property affected human mobility and constructions of identity?
·      How have communities historically responded to the loss of particular objects? When and why have they sought to stave off the loss of things?
·      Where, when and how have cultures of repair flourished?
·      How has the loss of possessions and property (or the potential for loss) affected processes of production, consumption or financial stability?

Inhabiting Sites and Spaces
·      When and why have particular sites or buildings been understood as destroyed or obsolete?
·      How have past societies responded to the loss of particular sites?
·      When and how have landscapes been actively purged of symbols and sites?
·      How have past societies worked to rebuild or reclaim particular sites?
·      What strategies did past societies develop to ensure the resilience of certain structures?

Please send proposals (250 words max per paper) for papers and panels to conference organizer Kate Smith ( by Friday 14 October 2016. Papers should not exceed 20 minutes. Roundtable panels featuring 5-6 papers of 10 minutes each or other innovative formats are encouraged. More details available at:
Thanks to Past & Present and the University of Birmingham for their generous support for the conference.
Sunday, 26 Farvardin 1386 (April 15, 2007), 11:33 AM

One night a man had a dream.
He dreamed he was walking along the beach with the LORD.
Across the sky flashed scenes from his life.
For each scene he noticed two sets of footprints in the sand:
one belonging to him, and the other to the LORD.
When the last scene of his life flashed before him,
he looked back at the footprints in the sand.
He noticed that many times along the path of his life there was only one set of footprints.
He also noticed that it happened at the very lowest and saddest times in his life.
This really bothered him, and he questioned the LORD about it:
"LORD, you said that once I decided to follow you, you'd walk with me all the way.
But I have noticed that during the most troublesome times in my life, there is only one set of footprints.
I don't understand why, when I needed you most, you would leave me."
The LORD replied: "My son, my precious child, I love you and I would never leave you.
Time Series Satellite and In-Situ Monitoring Data for Climate Fluctuations and Seismic Precursors Assessment
Zoran, Maria; Savastru, Roxana; Savastru, Dan
National Institute of R&D for Optoelectronics, ROMANIA
Results of recent investigations suggest that climate change tends to exacerbate geo-disasters such as earthquakes. Earthquake science has entered a new era with the development of space-based technologies to measure surface geophysical parameters and deformation at the boundaries of tectonic plates and large faults. Different criteria can be used to select the remotely sensed earthquake pre-signals for which there is evidence of anomalies in the geophysical observables. Observations from Earth-orbiting satellites are complementary to local and regional airborne observations, and to traditional in-field measurements and ground-based sensor networks. Rock microfracturing in the Earth's crust preceding a seismic rupture may cause local surface deformation fields, rock dislocations, charged particle generation and motion, electrical conductivity changes, gas emission, fluid diffusion, electrokinetic, piezomagnetic and piezoelectric effects, as well as climate fluctuations. Space-time anomalies of Earth's emitted radiation (thermal infrared radiation linked to air and land surface temperature variations recorded from satellites months to weeks before the occurrence of earthquakes; radon in underground water, soil and near-ground air, etc.), together with ionospheric and electromagnetic anomalies, are considered earthquake precursors. At the land surface, energy fluxes interact instantaneously with each other in accordance with the prevailing meteorological conditions and the specific thermal and radiative characteristics of the soil surface.
This paper aims at investigating seismic pre-signals such as air and land surface temperature, ionospheric TEC and geomagnetic parameters for some major earthquakes recorded in the world, based on satellite data provided by NCEP/NCAR, NOAA, WDC Australia, the Space Environment Information Service of Japan, the British Geological Survey and the World Data Center for Geomagnetism, Kyoto, together with in-situ geophysical monitoring data. The March 11th, 2011 Tohoku earthquake in Japan and several earthquakes recorded in the Vrancea seismic region in Romania were analyzed as test cases. Land surface and near-surface air temperature and sensible and latent heat flux (SLHF) parameters were analyzed both on short-term and long-term intervals and within a year before and after the strong earthquakes. Taken together with the local tectonic geology, hydrology and meteorology, such findings support the lithosphere-ionosphere coupling theory.
tattoo Interview with Andrea Lanzi by Iva Kancheska 12/04/2010

Q: How long have you been tattooing?
A: I have been tattooing for about 18 years. I opened my first studio, called "ANTIKORPO", 10 years ago.

Q: Do you have an artistic background growing up?
A: I'm a self-taught artist. I grew slowly through my passion for drawing and art in general. It took many years to refine my technique in tattoo art and sculpture.

Q: How old were you when you did your first tattoo?
A: I was 17. I made my first tattoo on my forearm. It was a six-pointed star formed by two crossed triangles.

Q: Do you have any influences?
A: The artist who most influenced me was Caravaggio (Michelangelo Merisi). He led me to use the realism he used in his paintings. I was also passionate about modern and contemporary work. I experimented with various techniques and materials, trying to achieve a realistic vein as close as possible to the classical and religious tradition.

Q: What got you interested in the business?
A: Since I opened the ANTIKORPO Tattoo Studio 10 years ago, this has been my only job and my greatest passion, one that allows me to live happily with my family.

Q: How much time was necessary to develop your tattoo skills?
A: My tattooing technique is still evolving; there is always something to learn. Ten years of continuous work led me to these results.

Q: What is your favorite style?
A: Living in Italy, with its classical artistic culture, I am certainly focused on the realistic style. Nevertheless, it is very difficult to make people appreciate that a classic tattoo might be, say, Botticelli's Venus in place of a Japanese geisha.

Q: Outside tattooing, sculpting is your second passion. I was impressed by your "piggy" creation :) How much time was necessary to create this piece of art?
A: Sculpture, like tattoo art, is part of my life. Lately I've been experimenting with materials that produce results similar to human skin. The implementation process is very long. A sculpture like the pig requires a week of continuous work.

Q: What is your biggest inspiration? Do you use a sketchbook?
A: I must say that I get most of my inspiration from the Web. Sometimes I look at random pictures that potentially have nothing to do with the finished sculpture. However, for a good sculptor a sketchbook is always necessary and fundamental.

Q: What equipment do you use?
A: I use different materials, from children's modeling clay to the latest silicone rubber.

Q: Where did you learn sculpting?
A: Artistic training at the academy of fine arts taught me the basics of sculpture, which then evolved through continuous experimentation and frequent art exhibitions.

Q: Do you also work on custom orders?
A: Commissioned works are always a constraint for me, but I have produced pieces requested by customers.

Q: What sets you apart from other artists?
A: The artist is the one who materializes his thought. Everyone has their own... here is the difference!

Q: What are your future plans?
A: Only the best! Thank you for the interview, Iva,
Permaculture is a branch of ecological design, ecological engineering, and environmental design that develops sustainable architecture and self-maintained agricultural systems modeled on natural ecosystems. The term permaculture (as a systematic method) was first coined by Australians Bill Mollison and David Holmgren in 1978. The word permaculture originally referred to “permanent agriculture” but was expanded to stand also for “permanent culture,” as it was seen that social aspects were integral to a truly sustainable system, as inspired by Fukuoka's natural farming philosophy. -Bill Mollison
Design a multiplexer and anomalous signals, Electrical Engineering
1. The size of the multiplexer used to implement a truth table can be cut in half (e.g. 4 inputs instead of 8) if one of the variables is used as a data input instead of being connected to a select line. For example, a truth table with inputs A, B, C could be implemented using a 4-input multiplexer with A and B connected to the 2 select lines. A and B would then be able to select 0, 1, C or C' (assuming that an inverter is available for C). Figure out how to re-implement Question 3 this way and prove that your solution is correct with a LogicWorks simulation.
2. Another way to implement a truth table is to use a multiplexer. A 2-to-1 mux, like the one discussed in class, can select one of two inputs using a single control input. A 4-to-1 mux selects one of four inputs using two control inputs. Consider the following. By using the 2 control inputs as the truth-table input variables and appropriately hard-wiring the 4 inputs of the mux to 0 or 1, a 2-input truth table can be implemented. Using this approach, an n-input truth table can be implemented using a 2^n-to-1 mux.
a) Design a 2-to-1 multiplexer. Verify its operation using LogicWorks.
b) Now, using the 2-way routing switch as a building block (use the device editor in LogicWorks to encapsulate the 2-way switch), design a multiplexer large enough to implement the truth table described in Question 2 (Z3 only). Predict the propagation delay, Tpd, of your multiplexer (you will need this to figure out how to space the inputs to your circuit in time). Test your multiplexer with appropriate waveforms and verify that the measured Tpd is consistent with its predicted value.
c) Hard-wire the inputs to your multiplexer to implement the truth table described in Question 2 (Z3). Verify its operation using LogicWorks.
d) Explain the presence of any anomalous signals (glitches) in your output and give an example of an input transition that results in a glitch at the output. Show this example using LogicWorks.
3. At night, a security guard is supposed to walk from room to room in a building having four rooms. Create a motion detector circuit which will detect the following conditions:
1. Exactly one motion sensor equal to 1, meaning motion has been detected in one room.
2. No motion sensor equal to 1, meaning the guard is either sitting or sleeping and no intruder is present in the building.
3. Two or more sensors equal to 1, meaning there must be an intruder or intruders in the building.
The circuit to be designed has four inputs, S1, S2, S3, S4, one input per sensor, and three outputs, Z1, Z2, Z3, corresponding, respectively, to each of the three conditions. Each output is set to 1 when the corresponding condition occurs; otherwise, it is set to 0. [Block diagram of the motion detector not included.] The following block diagram represents the circuit to be designed.
a) Produce the truth table of the three output functions.
b) Determine the minimal ΣΠ and ΠΣ for Z3.
c) Implement the corresponding circuit for Z3 using NAND-NAND and NOR-NOR logic in LogicWorks. Show that your circuits implement the specified truth tables.
d) Using the LogicWorks PROM/PLA wizard, generate the look-up table corresponding to the truth table and generate a test circuit. Verify its operation using LogicWorks.
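The folding trick in part 1 can be sanity-checked in software before wiring anything up. The sketch below is an illustration, not part of the required LogicWorks work: it folds a 3-input truth table onto a behavioral 4-to-1 mux, using A and B as the select lines and classifying each data input as 0, 1, C, or ~C by comparing the two table rows that share the same (A, B). The `maj` function is an assumed stand-in for the assignment's actual Z3 table, and all names here are my own.

```python
# Fold a 3-input truth table onto a 4-to-1 mux: A, B drive the select
# lines; each data input is hard-wired to 0, 1, C, or NOT C.

def fold_truth_table(f):
    """For each (A, B), classify the residual dependence on C."""
    plan = {}
    for a in (0, 1):
        for b in (0, 1):
            f0, f1 = f(a, b, 0), f(a, b, 1)
            if (f0, f1) == (0, 0):
                plan[(a, b)] = "0"
            elif (f0, f1) == (1, 1):
                plan[(a, b)] = "1"
            elif (f0, f1) == (0, 1):
                plan[(a, b)] = "C"
            else:                       # (1, 0): output is the inverse of C
                plan[(a, b)] = "~C"
    return plan

def mux4(d, s1, s0):
    """Behavioral 4-to-1 multiplexer: d is a tuple (d0, d1, d2, d3)."""
    return d[s1 * 2 + s0]

# Example function: majority of three inputs (an assumption, not the
# exact Z3 table from the assignment).
maj = lambda a, b, c: int(a + b + c >= 2)

plan = fold_truth_table(maj)
wire = {"0": lambda c: 0, "1": lambda c: 1,
        "C": lambda c: c, "~C": lambda c: 1 - c}

# Verify the folded mux matches the original truth table on all 8 rows.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            d = tuple(wire[plan[(s1, s0)]](c)
                      for s1 in (0, 1) for s0 in (0, 1))
            assert mux4(d, a, b) == maj(a, b, c)

print(plan)
```

The same classification step is exactly what you would do by hand on paper: pair up rows of the 8-row table that agree on A and B, then read off whether the pair is constant or tracks C.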
Tree of Life
"The Tree of Life" is Victor's depiction of Adam and Eve, the eternal couple. In the center of the painting, a blossoming apple tree is shown as the Tree of Life. On the surface of a table in front of the tree are an apple blossom, an apple, and Adam and Eve, formed from the nectar of the apple. (Adam was formed first and is shown as already separated from the apple; Eve is still in the process of forming.) Thus Victor shows that the various manifestations of the Tree of Life are related to each other, continuously changing one into the other. The apple blossoms will turn into apples, and humanity likewise is interconnected with the other forms of life. As an apple tree draws its sustenance from sunlight, so all of the manifestations of the Tree of Life get their life force from the universal divine source, depicted as the light of the sun. The branches of the Tree of Life reach upward toward the divine light. Adam and Eve, male and female, are complementary opposites of each other; thus Victor uses the complementary opposite colors red and green as the basic colors of the painting. Tree of Life by Victor Bregeda VICTOR BREGEDA - Tree of Life - 19.5 x 15.75 inches Giclee on Canvas For more information about Victor Bregeda's work, please contact one of our consultants at 1-800-440-1278.
Request a Quotation for Language Translation Services
If the translation is into multiple languages, please specify them here.
What is the total word count for the document(s)?
*With the exception of Asian languages, the client will be charged per target word if an editable file format (Word, InDesign, and/or Illustrator) is not provided. If the target text is an Asian language, the client will instead be charged a fee to create an editable file from the source text.
If your document requires an NDA please contact Katharine Spehar at
Font style, size and color (We will provide our best match if no specific font is provided.)
Please upload any graphic files (Format: EPS, AI, JPEG)
Cause and Effect Essay
Natural Disaster: How to Solve the Problem?
Unfortunately, all humans are helpless before nature. In the 21st century, natural disasters are no longer rare; they happen all over the world. They are devastating forces, able to ruin everything, including human lives and the environment, and people find it impossible to fight them. It is therefore particularly important to understand the main causes of disasters in order to try to avoid their unpleasant consequences.
Flooding is a common natural disaster, which occurs when a river overflows its banks because they cannot contain the water pressure. The amount of water becomes too great to flow within a defined channel. Usually, this disaster happens after heavy rain or during other wet periods. There are many causes of flooding, both natural ones and those caused by human beings. Information about an approaching flood can help people take appropriate measures: a flood forecasting service usually provides the maximum expected water level and the approximate time of its occurrence for the most important places along the stretch of the river.
The earthquake is also one of the most widespread disasters. When it occurs, it can destroy houses, streets, and even whole cities. The movement of plates in the Earth's crust is the main cause of this disaster. Sometimes the plates move smoothly and do not cause any discomfort for people, but sometimes they can get stuck, building up enormous pressure. At the moment this pressure is released, an earthquake happens. When an earthquake occurs under the ocean or sea, it can cause one more disaster: a tsunami. In a tsunami, great waves generated by the earthquake are pushed ashore, destroying everything in their way. Tsunamis can also be caused by underwater volcanic eruptions.
Volcanic eruptions occur when magma escapes from inside the Earth. When magma bursts from the Earth, it also releases large quantities of gas and dust, which are particularly harmful to nature and human health. People should also remember such a terrible disaster as wildfire. It takes place in many countries and causes much harm to the flora and fauna of the affected territory. It usually happens in summer, when it is very hot and many factors help fire spread with incredible speed. Undoubtedly, most wildfires are man-made, but their spread is a natural process. As for the natural causes, they are lightning strikes and the sun's heat.
In conclusion, one should admit that natural disasters bring much grief and suffering to people. When they happen, people realize how weak they are compared to nature. Despite all our technological progress, when a disaster happens, people can do little to save or protect their lives. Without any doubt, one person can do nothing to prevent them. However, people should unite in reducing their influence on the environment, since this will help lower the chances of disasters happening.
Research Paper example - Different De-Icing Systems for Aircraft
De-Icing on Airplanes
The formation of ice on the body of airplanes and other aircraft has long been a problem facing aviation around the world. Aircraft icing is frozen water: it forms when supercooled water droplets freeze onto cold airframe surfaces at the low temperatures encountered at altitude or on the ground. This paper looks at the various systems used for de-icing airplanes. Protection of engines and aircraft can take two fundamental forms: removal of ice once it has formed, or measures to prevent it from forming in the first place. De-icing is the removal of ice, snow, or hoarfrost from the surfaces of an airplane. De-icing is closely related to anti-icing, which is defined as the application of chemicals to the aircraft's surfaces. These chemicals not only de-ice but also remain on a surface, preventing the buildup of ice for a period or hindering the adhesion of ice to make mechanical removal easier; in this sense anti-icing is also a form of de-icing (Skybrary, 2012). Removing ice from the surfaces of airplanes takes various forms. It can be done mechanically, by scraping and pushing, or by applying heat to the surface of the plane, or by using liquid or dry chemicals formulated to depress the freezing point of water. Such chemicals include alcohols, brines, salts, and glycols, and several of them may be combined to enhance their effectiveness. De-icing can also be done through the use of a protective layer, such as a viscous liquid known as anti-icing fluid applied to the surface of the airplane to absorb the contaminant.
Learn about this topic in these articles:

alternating current
...opposite direction, returns again to the original value, and repeats this cycle indefinitely. The interval of time between the attainment of a definite value on two successive cycles is called the period; the number of cycles or periods per second is the frequency, and the maximum value in either direction is the amplitude of the alternating current. Low frequencies, such as 50 and 60 cycles...

...for example with V0 = 170 volts and ω = 377 radians per second, so that V = 170 cos(377t). The time interval required for the pattern to be repeated is called the period T, given by T = 2π/ω. In Figure 22, the pattern is repeated every 16.7 milliseconds, which is the period. The frequency of the voltage is symbolized by f and...

measurement of pendular motion
[Figure 4: Oscillation of a simple pendulum (see text).]
...point so that it can swing back and forth under the influence of gravity. Pendulums are used to regulate the movement of clocks because the interval of time for each complete oscillation, called the period, is constant. The Italian scientist Galileo first noted (c. 1583) the constancy of a pendulum's period by comparing the movement of a swinging lamp in a Pisa cathedral with his pulse...

phase
In mechanics of vibrations, the phase is the fraction of a period (i.e., the time required to complete a full cycle) that a point completes after last passing through the reference, or zero, position. For example, the reference position for the hands of a clock is at the numeral 12, and the minute hand has a period of one hour. At a quarter past the hour the minute hand has a phase of one-quarter period,...

simple harmonic motion
...= 0. As time goes on, the mass oscillates from A to −A and back to A again in the time it takes ωt to advance by 2π. This time is called T, the period of oscillation, so that ωT = 2π, or T = 2π/ω. The reciprocal of the period, or the frequency f, in oscillations per second, is given by f = 1/T = ω/2π.
...this property is common to all harmonic oscillators, and, indeed, Galileo's discovery led directly to the invention of the first accurate mechanical clocks. Galileo was also able to show that the period of oscillation of a simple pendulum is proportional to the square root of its length and does not depend on its mass.

transverse waves
The time required for a point on the wave to make a complete oscillation through the axis is called the period of the wave motion, and the number of oscillations executed per second is called the frequency. Wavelength is considered to be the distance between corresponding points on the...

water waves
...whose amplitude is small compared to their length, the wave profile can be sinusoidal (that is, shaped like a sine wave), and there is a definite relationship between the wavelength and the wave period, which also controls the speed of wave propagation. Longer waves travel faster than shorter ones, a phenomenon known as dispersion. If the water depth is less than one-twentieth of the...

The theory of waves starts with the concept of simple waves, those forming a strictly periodic pattern with one wavelength and one wave period and propagating in one direction. Real waves, however, always have a more irregular appearance. They may be described as composite waves, in which a whole spectrum of wavelengths, or periods, is present and which have more or less diverging directions of...

A few examples are listed below for short waves, giving the period in seconds, the wavelength in metres, and wave speed in metres per second: When waves run into shallow water, their speed of propagation and wavelength decrease, but the period remains the same. Eventually, the group velocity, the velocity of energy propagation, also decreases, and this decrease causes the height to increase.
The latter effect may, however, be affected by refraction of the waves, a swerving of the wave crests toward the depth lines and a corresponding...
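The relations quoted in these extracts (T = 2π/ω, with the frequency f as the reciprocal of T) are easy to check numerically. A minimal Python sketch using the alternating-current example of ω = 377 radians per second; the function names are illustrative:

```python
import math

def period(omega):
    """Period T (seconds) of a sinusoid with angular frequency omega (rad/s): T = 2*pi/omega."""
    return 2 * math.pi / omega

def frequency(omega):
    """Frequency f in cycles per second (Hz), the reciprocal of the period."""
    return 1 / period(omega)

# The example from the text: V = 170 cos(377 t), i.e. omega = 377 rad/s.
T = period(377)      # about 0.0167 s, the 16.7 ms quoted above
f = frequency(377)   # about 60 Hz, a common mains frequency
```

Running it reproduces the roughly 16.7-millisecond period quoted above, which corresponds to a frequency of about 60 Hz.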
Abstract: Sleep is a natural part of the lifestyle of zoological species such as humans, other mammals, birds, and reptiles. Some species sleep with their eyes open, though most sleep with their eyes closed, and sleep duration differs from species to species. Human sleep timing also changes across the lifespan, from newborns to older adults. When sleep does not function properly, diseases such as high blood pressure and other sleep disorders can result. Sleep is measured using several different signals, namely the EEG, EOG, and EMG signals. The stages of sleep are characterized by alpha, theta, and delta waves, and fall into two basic parts: the non-REM and REM stages.

Keywords: Sleep, Stages of Sleep, Non-REM stage, REM stage.
HomeQuickLinks ­ FAQs ­ Stormwater Management FAQs Stormwater Management FAQs What is stormwater runoff? Stormwater runoff is water from rain or melting snow that "runs off" across the land instead of seeping into the ground. This runoff usually flows into the nearest stream, creek, river, lake or ocean. The runoff is not treated in any way. What is polluted runoff? Water from rain and melting snow either seeps into the ground or "runs off" to lower areas, making its way into streams, lakes and other water bodies. On its way, runoff water can pick up and carry many substances that pollute water. Some – like pesticides, fertilizers, oil and soap – are harmful in any quantity. Others – like sediment from construction, bare soil, or agricultural land; or pet waste, grass clippings and leaves – can harm creeks, rivers and lakes in sufficient quantities. Polluted runoff generally happens anywhere people use or alter the land. For example, in developed areas, none of the water that falls on hard surfaces like roofs, driveways, parking lots or roads can seep into the ground. These impervious surfaces create large amounts of runoff that picks up pollutants. The runoff flows from gutters and storm drains to streams. Runoff not only pollutes but erodes streambanks. The mix of pollution and eroded dirt muddies the water and causes problems downstream. What causes polluted stormwater runoff? Why do we need to manage stormwater and polluted runoff? Polluted stormwater runoff is the number one cause of water pollution in Georgia. Polluted water creates numerous costs to the public and wildlife. As the saying goes, "we all live downstream." Communities that use surface water for their drinking supply must pay much more to clean up polluted water than clean water. How are stormwater and runoff "managed"? What are the legal requirements for managing stormwater? 
The federal Clean Water Act requires large and medium-sized cities to take the following steps to reduce polluted stormwater runoff:
• Conduct outreach and education about polluted stormwater runoff.
• Detect illicit discharges (e.g. straight piping or dumping).
• Control construction site runoff.
• Control post-construction runoff.

What can I do to reduce the amount of stormwater pollution I contribute?
Never put anything in a storm drain. Don't litter.

How else can I help reduce stormwater pollution in my area?
Participate in the next stream or river cleanup in your area. Storm drain marking events – where the destination of storm water is clearly marked on the drain – are a fun way to let your neighbors know the storm drain is only for rain. Report stormwater violations when you spot them to Community Development at 678-512-3200.

Why does the City have a Stormwater Management Program (SWMP)?
The City has developed and implemented a SWMP to maintain and enhance water quality and ensure that negative stormwater impacts are minimized throughout the City. The SWMP consists of both citizen educational efforts as well as technical and regulatory programs. It is also a federal and state requirement that certain cities and counties implement a SWMP.

What federal and state laws require the City to have a SWMP?
The federal Clean Water Act requires Georgia (and most other states) to comply with the National Pollutant Discharge Elimination System (NPDES). To comply with this program, the Georgia Environmental Protection Division requires local governments to obtain NPDES permit coverage by developing and implementing a SWMP.

What is the City's SWMP?
The City's SWMP consists of 6 categories and a number of practices and/or programs to address and support each category.
These categories are: Public Education, Public Involvement, Illicit Discharge Detection & Elimination, Construction Site Runoff Control, Post-Construction Stormwater Management, and Pollution Prevention from Municipal Activities. Implementation of all 6 categories makes up a comprehensive program.

What is the City doing for Public Education & Public Involvement?
The City's public education and involvement efforts are directed to both the development/engineering community as well as the general public. The City posts articles and stormwater-related information on the website, as well as provides permit applicants with detailed checklists to ensure stormwater requirements are met. The City also hosts an annual Fall Festival and Public Works Day to distribute stormwater information and inform citizens on how stormwater is addressed within the City. Other programs the City participates in are Adopt-A-Road, Stormdrain Markers, Stream Cleanups, and Bulky Trash and Recycling Day.

What is the City doing for Illicit Discharge Detection & Elimination?
An illicit discharge is a discharge of pollutants or non-stormwater materials, such as sanitary wastes, yard debris and auto fluids, into a stormwater drainage system. The City adopted an Illicit Discharge and Illegal Connection Ordinance in 2006 that establishes fines for illegal discharges. In addition, the City inspects stormwater outfalls annually to ensure non-stormwater discharges are identified and corrected. The City also posts information on the website and has an active stormdrain marking program that educates the public about the stormwater system, pollutants, and illicit discharges.

What is the City doing for Construction Site Runoff Control?
The City has an erosion & sediment control ordinance and requires compliance with approved erosion & sediment control plans for building and land disturbance permits.
City inspectors visit these permitted sites to ensure the plan is being followed and sediment is retained onsite and prevented from reaching adjacent properties and surface waters. The City also enforces stream buffers, which prevent development activities from occurring close to streams.

What is the City doing for Post-Construction Stormwater Management?
The City has adopted a stormwater ordinance and design manual to ensure that water quality is maintained and improved and drainage systems are designed to protect downstream properties. The ordinance requires that new developments and redevelopments have a plan in place to address quality and quantity impacts. The design manual provides guidance on the proper ways to select, design and maintain stormwater controls.

What is the City doing for Pollution Prevention?
The City has programs to inspect and maintain storm structures in the rights of way. Structures such as catch basins and inlets are cleaned and the roadways are swept to keep trash and debris from entering the storm system. In addition, the City conducts annual water quality training sessions for employees and retrofits existing, City-owned detention ponds so that they are better able to reduce pollutants and enhance water quality.

How can I learn more?
To learn more about the City's stormwater management program, please visit our Stormwater Management pages.
5 Just as you don't know the path of the wind, or how bones [develop] in[a] the womb of a pregnant woman, so you don't know the work of God who makes everything.[b]

References for Ecclesiastes 11:5
• [a] 11:5 - Or know how the life-breath comes to the bones in
• [b] 11:5 - Jb 10:10-11; Ps 139:14-16; Jn 3:6-8
Copyright (c) 2002 Cameron Browne

This game is played on the following 8x8 square board:

SCHOOL - A group of connected fishes (the white stones). Adjacency is orthogonal and diagonal.

FISH - Moves to an adjacent empty cell or jumps over an adjacent fish if the cell on the far side is empty (like a hop in Chinese Checkers). A fish may jump any number of times in a given turn, possibly changing direction, as long as no fish is jumped over twice. All fish must remain in a single connected school. Any fish that stray from the school at the end of White's turn die and are removed from the board. If several groups exist, the largest group survives to become the new school. If more than one maximal group exists, Black chooses which group survives.

Spawning - At the end of White's turn, a new fish is added to any empty cell surrounded by eight occupied cells (fish or shark).

SHARK - A shark (the black stones) can move to any adjacent cell occupied by a fish, where it captures (eats) the fish. Any shark with no adjacent fish is immobilized until a fish moves next to it.

FIRST TURN - Black starts by playing the two sharks on occupied cells to capture (eat) one fish each. Capture is by replacement, and eaten fish are removed from the board.

TURNS - At each turn, each player must move one of his stones.

GOAL - The White player wins if both sharks are immobilized. The Black player wins if all fishes are captured.

An example: Black's turn. The shark at g7 is immobilized. Black moves d3 to c3 (the marked white fish). In doing so, the school splits into two groups of equal size (4 fishes each). Black may choose which group is captured. Black must choose the south group, because if not, White would then move c2-c1, thus winning the game.

The original rules of FRENZY do not work: the sharks don't have a chance to win.
One way to solve this problem is to give symmetry to both players: each one has a school of fishes and a pair of sharks (this can easily be extended to games with 3+ players).

2-FRENZY Rules (the board is split, with 32 fishes on each half of the 8x8 board; before the game starts, both players place their sharks on top of opponent fishes, capturing them). Each turn a player makes two moves, with different animals. Each move is either a fish move, as in Chinese Checkers, or a shark move, as a chess king capturing a foreign animal. If after any turn a shark is isolated from all foreign animals, it immediately dies of isolation. If both of a player's sharks are dead or his fishes are all captured, he loses; if this happens to both players simultaneously, it is a tie. If a player's school of fish splits into two or more isolated schools, all but the largest die. If two or more largest schools are equal in size, the player who made the move decides which survives.
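The recurring rule that a school must stay connected (with both orthogonal and diagonal adjacency) amounts to a flood-fill check. A minimal sketch, assuming fish positions are stored as a set of (column, row) pairs; the function name is illustrative, not from any published implementation:

```python
def is_connected(fish):
    """True if the (col, row) positions in `fish` form a single school
    under 8-directional (orthogonal + diagonal) adjacency."""
    if not fish:
        return True
    fish = set(fish)
    start = next(iter(fish))
    seen = {start}
    stack = [start]
    while stack:
        x, y = stack.pop()
        # Visit all eight neighbours of the current fish.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                neighbour = (x + dx, y + dy)
                if neighbour in fish and neighbour not in seen:
                    seen.add(neighbour)
                    stack.append(neighbour)
    return seen == fish

# Two diagonally touching fish are one school; a gap splits them.
print(is_connected({(0, 0), (1, 1)}))  # True
print(is_connected({(0, 0), (2, 2)}))  # False
```

After a move, running this check on the resulting position tells you whether any fish have strayed; if it returns False, the separate groups can then be sized to decide which school survives.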
JR Hiroshima Yamaguchi pass

This new pass from JR-West is valid in the western part of the area covered by the Sanyo-San'in Pass: the Bullet train line and local lines between Hiroshima and Hakata (Fukuoka), on Kyushu. It is valid on the Sanyo Shinkansen between Hiroshima and Hakata (incl. the NOZOMI and MIZUHO train services), for unreserved seats on Limited Express, Rapid services and local trains, and on the JR-West Miyajima Ferry. *Not valid on the local lines on Kyushu (i.e. between Shimonoseki and Hakata). It is a 5-day pass.
13 Worst States To Be A Burglar We’ve come a long way from chopped-off hands and the stockades, but the fate of burglars still varies widely depending where you are, even within the US. While some states let thieves off with a slap on the wrist, others lock them up for life. We dug into the states that are hardest on burglars, and explored what effect their hard-line tactics have on crime rates. The results may surprise you. Residential burglary has the same basic definition everywhere: illegally entering a private dwelling with the intent of committing a crime. But once a burglar has been found guilty, any number of things might happen to him. Federal, state, and local laws all frame things slightly differently—for one nation, the United States sure does have a lot of different rules—and sentences, fines, and even classifications can change entirely the minute you cross the border. To borrow a phrase, one state’s felony is another state’s misdemeanor: what gets you locked up for life in South Carolina can mean a mere decade behind bars in North Carolina. Meanwhile, burglary rates are also very different depending on where you are. Which got us to thinking — which states are the least forgiving of burglars? And when it comes to burglary rates, does harshness make a difference? To find out, we first searched through the legislative codes of all fifty states (plus DC) to find maximum and minimum burglary sentences, as well as maximum fines. We dug into hard data from actual prison populations to figure out how long the average burglar actually spends in jail. Finally, we looked into “stand-your-ground” and “castle doctrine” laws, which mandate the lengths to which homeowners can legally go to protect their property, under the logic that getting shot at is, effectively, another consequence of burglary in some places. When we put all this information together, we found wild extremes—the minimum sentences ranged from zero (in many states) to seven years (in Oklahoma). 
Maximums could be as low as three years (in New Mexico and Kansas) and as high as life imprisonment (in South Carolina and Virginia), while fines could be anywhere from five hundred dollars to a hundred thousand dollars — two hundred times as much! In order to compare states, we developed a ranking system that weighed each data point according to where it fell within its range. We added up these weights to get a total for each state, sorted the totals, and came up with this list: the 13 states where you really, really don’t want to be convicted of burglary. This information also gave us a new perspective on the FBI Uniform Crime Report, which ranks states according to burglary rate. Does being hard on burglars make them give up a life of crime—or does it just make them try harder? Check it out and tell us what you think. And if you want more information on our sources and methodology, click here.
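The scoring described above (weighting each data point according to where it falls within its range, summing the weights per state, and sorting the totals) is essentially min-max normalization. A short sketch of how that might look; the state names and figures below are made up for illustration and are not the article's actual data:

```python
def normalize(value, lo, hi):
    """Map a value to 0..1 according to where it falls in the range [lo, hi]."""
    return (value - lo) / (hi - lo) if hi != lo else 0.0

def rank_states(states):
    """states: {name: {metric: value}}; higher totals mean harsher on burglars."""
    metrics = next(iter(states.values())).keys()
    # Per-metric (min, max) across all states, so each metric is weighted 0..1.
    ranges = {m: (min(s[m] for s in states.values()),
                  max(s[m] for s in states.values()))
              for m in metrics}
    totals = {name: sum(normalize(s[m], *ranges[m]) for m in metrics)
              for name, s in states.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical inputs: maximum sentence in years and maximum fine in dollars.
data = {
    "State A": {"max_sentence": 30, "max_fine": 100_000},
    "State B": {"max_sentence": 3, "max_fine": 500},
    "State C": {"max_sentence": 15, "max_fine": 50_000},
}
ranking = rank_states(data)
print(ranking[0][0])  # State A scores as the harshest of the three
```

Because every metric is rescaled to the same 0..1 range before summing, a metric measured in dollars cannot swamp one measured in years, which is the point of weighting each value by where it falls within its range.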
In the UK you can get a smartphone for £100 and they are only going to get cheaper as more people want them. A device with a camera and video camera, access to the internet, thousands of apps and the ability to share and receive information will have a profound effect on education if we work with it. Here are some lessons I'd do if all my students had Android-powered smartphones with access to the internet and cameras:

1. Mess about with googlemaps. I'd spend ten to fifteen minutes making a virtual tour of the town where we live and then ask students to log on and follow it in teams of two on their phones, answering the questions I set them as they went. I actually did do this, but my students didn't have the technology to access it. I'd go even further than this and get students to make their own tours on googlemaps with commentary, then get other students to follow them and decide if they were good or bad.

2. Get students to take photos of things in pairs or teams. I blogged about this a long time ago here (click here to read it): I asked students to follow instructions to take snaps of various easy things like 'a chair' or 'a book' and then made it much harder by asking them to capture 'sadness' or 'happiness' to stretch their imagination. It worked really well, and if we could do this with smartphones on the internet then we could instantly share the pictures on facebook or googledocs. I'd also get students to make films – something which I'm experimenting with at the moment. This is a huge area for development.

3. I'd do loads more things with text messages – except that I wouldn't use text messages, because they cost money; I'd use the email feature of Facebook. I've already blogged about using text messages on here: I use them to give students 'thinking homework' (click here to read that one) and to send students out on a treasure hunt using messages they receive digitally to tell them what to do (click here to read that one).
With smartphones linked to the internet, this would be quick and, more importantly, free.

4. I'd get students out of the classroom with a text to read from either Amazon's Kindle store via their app, or through googledocs, or even their email. When they've read it they can come back. And… I'd get students to read comics on their phones as homework.

5. I'd have students download the 'my tracks' app, which would track their every move via satellite. I'd ask them to record their movements for one full day and then upload the 'track' to googlemaps so they could share it with the rest of the students. In class I'd get them to explain what they did and where they went using the map as a presentation tool.

6. I'd make students play games together – I mean multiplayer games where they play at the same time. Strategy games, platform games, farmville-type games – anything, as long as they play together and it makes them interact with each other in the target language, either to tell each other how to do it or how to cheat.

7. I'd ask students to interview someone in English, record it, edit it with something like audacity and then present it to the rest of the group.

8. I'd get students to use the radio and listen to the news or a radio play in English at a certain time, write down some information from it and then share what they learned with the class in the next lesson.

9. I'd get students to listen to music, any type of music as long as it's in English. They could share it with each other, talk about it, learn lyrics from songs and find out about the singers or the group. I've also blogged about using youtube in lessons and those ideas would work perfectly well on smartphones.

10. Finally… I'd call them. Sometimes during the lesson, sometimes for homework. I'd phone for a chat, or to ask them their opinion, or to tell me something they found out. I'd also ask them to phone each other – as long as they spoke in English.
My prediction is this: a decent, cheap smartphone that is compatible with the internet and has all the features needed to work with a PC will be able to dominate the educational landscape of the future. Students buy and maintain the device, but the books and the materials that might be needed, be they audio, video or text, are transmitted to this device by the educational institution. We would need paper of course, but there wouldn't be the need for nearly as much. The technology is here now. Is there anyone out there who agrees…? Is there anyone out there who wants to lend me a bunch of smartphones to teach my students with? Please get in touch if you've got a project on this!

Can't be bothered to read the blog? Download the lesson here https://chrisspeck.files.wordpress.com/2010/03/telling-stories-with-photographs.doc

This is the second time I've blogged about using digital cameras in the language classroom, so if you missed it, here's a link to my first effort. https://chrisspeck.wordpress.com/category/using-digital-cameras/

Lesson Plan Further Reading: 21st Teaching has some good ideas on using photo stories http://21stcenturyteaching.pbworks.com/Ideas-for-Photostory-3-Projects

If you haven't got time to read – here's the lesson. Students and teachers alike enjoy having their mobile phones switched on in class so they can look at text messages received, share photos of new additions to their family or tweets from people they have never met. Sadly, convention causes us to have our ringtones on silent and to gently chuckle along to ourselves as we read, look at a picture or digest the philosophical tweet. What if, as someone in teaching has probably said over and over again, we turned the tables on this and used the phones as a teaching tool? I'll be looking at this in more detail in other blog posts, but here we'll deal with photos.
Most mobile phones have decent cameras in them these days and, with that much power, students have the ability to snap away at anything they like, for free. Of course, none of us are the very talented and famous photographer David Bailey, so we don't really care what the photos actually look like; what's more important is the language that will be produced by talking about them before, during and after the process.

Before we start – some possible issues. Not everyone will have a camera phone. This might make some activities difficult, especially homework tasks. Your institution or school might have some digital cameras you could lend students, or they might want to consider investing in some.

How do you share or show the photos? The best way to share photos is if you have an Interactive Whiteboard or a projector, but even just a standard computer monitor will do. Students can gather round to look at the photos together. Annoyingly, lots of mobile phones don't have standard USB connection ports, and this might make it difficult to quickly upload photos from the phones as students will need their special connectors. One way might be to ask students to upload their photos to a repository; you might have moodle at your institution or something similar, but you could also use the fabulous www.flickr.com and ask students to store their photos there, or even facebook. If you do have an interactive whiteboard or will use a central computer, you could ask students to store their photos on a memory stick which they could plug in.

Using Photos in Classroom Time.

1. In pairs, students go away and take photos of a series of increasingly difficult objects to find. This is the only idea I've had enough time to develop into an actual lesson – I've made a worksheet for this here.

2. Signs. In pairs, students take photos of signs. They then come back and discuss these in class.

3. Where is it?
Students take photos of familiar places that are not obvious; other students have to guess where they are.

4. Recreate these photos. Students have to try to reproduce famous photos or pictures. (Not sure about this one!)

Outside class time – photo lessons that need some preparation.

1. Tell me about your favourite photo. Students bring in their own photo and talk about why it's important to them or why they like it.

2. A day in your life – in photos. Students take a sequence of photos showing a day in their life, which they can then show and explain to the rest of the class.

3. What happened at the weekend? The same as the task above, but this time about the weekend. This could also be extended to include a celebration or other party.

4. My favourite place/food/person.

Feedback – how did it go? I'll let you know. Please let us know of any other digital photography lessons you've done.
We reviewed the previous assignments, all dealing with investigative questions and the research paper outline. We turned in those assignments and we discussed reading journals and how to self-assess them. The reading journals are due Monday. (Remember, they are 10% of the final quarter grade.) Use your chapter one reading journal and review all the question marks you had placed inside prior to reading chapter one. What do you NOW know about chapter one? For example: If you placed a '?' beside 1.5 (American Indian of the Pacific NW coast), you might pencil in "I now know that these people relied on fish and other seafood as a primary food source." We devoted class time to starting our first official writing assignment. Students wrote down the assignment question. On March 21 both northern and southern hemispheres receive the same amount of sunlight. But on June 21, the northern hemisphere receives the greatest amount of sunlight, while the southern hemisphere receives the smallest amount of sunlight. In a paragraph, with clear and complete sentences, please explain how these two things are possible. (How do we do this?) In groups, they discussed what the question was asking for. We then discussed what we should do to write a first draft. We wrote down the collaborative writing process on the board and followed it.
1. Study the question. (What is asked? What do we need to explain?)
2. Brainstorm ideas. (What information do I need to cover to answer the question? Can I visualize it as a picture?)
3. Put the ideas in order. (What do I discuss first? Second?)
4. Write out the ideas into complete sentences to make a paragraph. (Due Monday)
Today is a half-day for teacher training. We are on a 'B' schedule: 5, 6, 7, 8 periods. Students worked on the research paper outline and the research question assignment. We reviewed previous concepts and students were allowed to complete missing or late work.
Issue Six

Published by: Eli Kanon
Reviewers: Amelie Benedikt, Dean Geuras, Eric Gilbertson, Vincent Luizzi, Russell G. Moses, Jonathan Surovell, Issac Wiegman

"Guilt and Consistent Moral Progress in Relation to Environmental Ethics" by Jonathan Lollar
"I think, therefore I do?" by William Alexander Hernandez
"A Foucauldian Postmodern World" by Tyrell VanWinkle

Guilt and Consistent Moral Progress in Relation to Environmental Ethics
Jonathan Lollar
Texas State University

It is often stated that guilt plays a significant role in the way one chooses to act. This paper works to outline case studies in which the role of guilt in one's decision making is called into question. Focus will be given to guilt in relation to an individual's moral progress toward environmental action. It briefly discusses a possible definition of moral progress through habitual action, as well as the important ways in which guilt relates to concepts such as judgment and shame. Different sources of guilt will be identified, and then their effectiveness in creating consistent moral progress in an individual will be assessed. Self-recognized moral failing, harm, the perception of others, and inaction are all acknowledged as sources of guilt; however, they seem to be implausible as causes of consistent moral progress in the individual. Empathy is then explored as a source of guilt that would more likely accomplish the goal of moral progress.

Guilt often plays a role when thinking about how to act ethically toward nature. This essay seeks to highlight some of the sources of guilt and their relationship to more consistent moral progress. Consistent moral progress in this context refers to the ability for one to apply moral change to a particular situation in a reliable fashion. Though consistent moral progress may not be perfect, it should still strive for more long-term results.
This paper will first highlight sources of guilt stemming from: self-recognized moral failing, harm, the perception of others, inaction, and empathy. Each of these will then be defined and evaluated with respect to consistent moral progress. Finally, a case will be made that empathy is the most plausible source of guilt with the ability to facilitate consistent moral progress toward nature. The objective of this paper is not to decide which acts toward nature are moral or immoral. The case studies used in this paper serve only to illustrate the areas in which sources of guilt can be seen.[1]

Moral Progress

Moral progress, within the scope of this paper, should be viewed on an individual level. The ways in which one interacts with nature reflect the current status of that person’s character. As characteristics “develop from corresponding activities,”[2] the ways in which one interacts will either raise or lower one’s character. Upon realizing that a certain interaction with nature is undesirable, one should strive toward making a habit out of more desirable corresponding activities. If one is able to institute such a habit, it would raise the current status of that person’s character and constitute moral progress.

Guilt can come from many sources. However, a common feature of guilt is the judgment of a prior action. All types of guilt are contingent upon judgment (both internal and external) of an action that has been committed. Guilt can be external: one may be judged by a neighbor for the act of running over the neighbor’s dog and thus feel guilty when confronted (the neighbor being the external source). In this case the judgment would come from a failure to adhere to a certain social standard, namely that we ought not to run over another person’s dog, and one would feel guilty about one’s action. Guilt can be internal as well.
One can judge oneself for the act of running over the neighbor’s dog and failing to adhere to a personal standard. Though they often coincide, guilt is meant to be distinct from shame for the purposes of this paper. Simply put, shame asks “How could I have done that?” with the emphasis on the self, whereas guilt asks “How could I have done that?” with the emphasis on the deed.[3] Shame is directed at the self, and guilt is directed toward the action committed. This essay focuses on the actions committed because of the association guilt has with “repair”[4] or making up for previous actions. The idea that guilt is conducive to a reparative process (whereas shame is associated with withdrawal) makes it more relevant to the aim of the paper, which is moral progress.

Guilt Stemming from a Recognized Moral Failing

It is not uncommon for one to feel guilty about an action that would be regarded as immoral or wrong, but what of cases where there is no wrongdoing? A simple case, for instance, would be to imagine a wealthy businessman. This businessman, Mr. Haberdasher, owns a series of parks that he has had built within a large city. Though the city pays him a large sum of money for these parks, Mr. Haberdasher is not as wealthy as he wishes to be. In fact, he is quite greedy and currently has his eye on a mansion that sits atop the highest hill in town, but as of now he cannot afford to buy it. He is one day approached by the CEO of a strip mall who wishes to purchase the land surrounding one of the parks, as well as the land the park sits on. The plan is to pave over all of the land in that area and build another mall. The money received from selling this park would cover the cost of the mansion exactly. He announces his decision to sell the park, to the dismay of many in the city. Mr. Haberdasher responds by stating, “Well, if you’ve seen one park, you’ve seen them all,” and proceeds to sell the park. After the park is sold, he begins to feel guilty. Many would agree that there was no wrongdoing here. Mr.
Haberdasher had every right to sell something that he owned and was more than welcome to benefit from it. Why, then, is guilt felt here? Rosalind Hursthouse would claim that the feelings of guilt do not come from the act itself, but rather from whether “getting into those circumstances in the first place itself manifested a flaw in character.”[5] A flaw in character for this case would be greed. Wanting to acquire a mansion and more wealth is what caused Mr. Haberdasher to make the decision. It may also stem from selfishness, or a lack of humility. By selling the park, which served many patrons in the city, Mr. Haberdasher put his concern for more wealth before the benefit the community gains from having the park. Aristotle states that “the median characteristic is in all fields the one that deserves praise.”[6] Greed, in this case, would be an excess, whereas apathy toward the self would be a deficiency. The mean that would deserve praise in this particular situation would be one of a more selfless nature, reached simply by realizing that there is utility outside of the self. Placing all utility in the self (greed) or no utility in the self (apathy) would be flaws in character for this case. Mr. Haberdasher places the relation of utility toward himself above all else and shows a flaw of character. For a source of guilt that stems from a recognized moral failing, as in this case, it seems as though consistent moral progress would be likely. Learning of our character flaws would often facilitate self-reflection and change; however, it is important to point out where this guilt is directed. The guilt is directed inward and concerns the harm done to one’s own character, rather than the harm to the community (which includes animal residents) that no longer benefits from use of the park. The utility the park had for Mr. Haberdasher was seen as more important than the overall utility granted to park-goers and animal residents.
If confronted again by a similar offer for another one of his parks, Mr. Haberdasher may choose not to sell in a reparative effort associated with the previous guilt. However, it would be an attempt to prevent further moral failings rather than a recognition of the utility of the park (and if so, only as an indirect duty[7]). In a case such as this, though a degree of moral progress is shown if Mr. Haberdasher refuses any future offers for his parks, it is unclear how sustainable this moral progress is. If the utility of the parks to their human and non-human community is not considered, it is not as plausible that the utility of things to communities outside oneself would be considered until there is yet more harm to one’s character. Guilt that stems from a recognition of moral failure, then, may not facilitate consistent moral change.

Guilt Stemming from Harm

A common source of guilt comes from causing harm to another individual. Consider a case in which you walk through the grocery store and accidentally run over another patron’s foot with your buggy. You no doubt will quickly apologize at the sight of your victim hopping up and down, holding their foot. Due to your guilt from harming the individual, it is likely that you would tread more carefully throughout your trip (and possibly future trips) to the grocery store. It is quite plausible for consistent moral change to come out of cases of direct harm. In cases of environmental ethics, however, it is quite often unclear who or what is being harmed. Take, for instance: buying a magazine whose paper comes from endangered rain forests, driving a gas-guzzling SUV (especially in cases that are not necessary, i.e., joy rides), buying factory-farmed meat, using pesticides that contain DDT, or buying concrete made from sand-mining operations in India. Though there are harms in these cases, it is unclear which you directly contribute to, if any at all.
By buying the magazine or going for joy rides in your gas-guzzling SUV, you are contributing (in some manner) to global warming. Reducing the number of CO2-absorbing trees and destroying the habitats of animals certainly cause harm in some way, but can the magazine you bought be attributed to one tree in particular? Can it be traced to an indigenous creature that was displaced from its habitat or died during the cutting down of a specific tree? Can you trace the CO2 from your SUV’s exhaust directly to its place in the atmosphere and the warming it contributes to? It is implausible to say yes. The same can be said for each of the examples given. It seems unclear, then, why guilt stemming from a harm principle would facilitate consistent moral change in these cases. Regarding the grocery store analogy, the direct cause and effect are seen by the perpetrator. The guilt comes from the ability to perceive the part played in the harming of another individual. Without seeing which factory-farmed cow your hamburger comes from or which penguin the DDT in your pesticides reached, it becomes easy to forget the part one plays in the harming of the environment or its many parts. It becomes too difficult to tie one’s actions to a particular case of suffering. Mishka Lysack states that “our awareness of the magnitude of the problems, combined with the inadequacies of our response, cedes to increases in our emotional experience of fear, grieving, and anxiety.”[8] This feeling of inadequacy or despair results simply from not seeing any effect of your actions. Environmental studies, according to Lysack, have shown that:

…Increasing numbers of people may simply withdraw from any advocacy or political involvement around ecological issues. As the gap between the scale of collective action needed to address the environmental crisis, and the actual response on the part of political and institutional leadership shrinks to disheartening proportions, our sense of immobilization is exacerbated.
This sense of disempowerment then doubles back on itself, further inhibiting the possibilities of timely and decisive action. As a result, a collective ecological fatigue sets in, effectively constraining our ability to collectively respond to the challenges that we are facing.[9]

In the absence of an immediate reaction to your own endeavors, it seems unlikely that any consistent moral change can be made. Charity organizations do often attempt to put a face with the harm. Organizations like the ASPCA have their commercials cycle through a menagerie of abused or hungry animals in order to make appeals to viewers at home. This is an instance where, despite not seeing the effects of your donations, you are still able to see, in some manner, the object that is being harmed. Some organizations will even send you pictures of a recovering animal or, at the very least, written updates. This would in some sense make one feel as if one is seeing the results of one’s charity, and perhaps it fixes the overall problem addressed in this section. However, this does not seem to translate into consistent moral progress. Out of the nine types of charity,[10] only 10.50 billion dollars (or roughly 3% of the total given) went to environmental or animal-based charities.[11] There is a low probability of these efforts working, despite the attempt to show the impact that you make. A study conducted by Guidestar also shows that 50.5% of organizations receive the majority of their donations during the holiday season (October through December).[12] Network for Good reports that 31% of all charitable giving happens in December.[13] Even if one is part of the roughly 3% that donates to environmental issues, regardless of whether the immediate results of one’s actions are seen, it is likely that charitable activity will not continue throughout the year. Rather, it will happen only in the last three months of the year, and more likely only in December.
This act of offsetting in only a minor portion of the year would not qualify as moral progress under the definition given in this paper.

Guilt Stemming from the Perception of Others

Guilt can also come from how your peers perceive you. Peer pressure is no doubt a powerful force when it comes to the way that we react to the world around us. It is not always the case that one would follow a bandwagon or be negatively influenced by peer pressure. People are quite often swayed by peer pressure in more positive ways. Consider a case where you are floating down the San Marcos River with a group of friends. You have brought a cooler filled with canned and bottled drinks for the journey, and over the course of the float you and your friends have drunk most or all of the beverages. Carrying the cooler with trash inside of it seems like a hassle, so rather than placing your empty cans back into the cooler, you attempt to sink your cans to the bottom of the river. A friend turns around to find you attempting to sink your first can, and seeing their look of judgment toward your littering, you refrain. Here we see a clear positive impact of peer pressure. The perception of yourself in the eyes of your friends is not worth compromising in order to sink a few cans in the river, no matter how convenient it seems at the time. Guilt is coming from external judgment in this case. The judgment is directed toward the act of littering rather than toward what characteristics you may or may not possess, and so it is distinct from shame. This is largely how we learn morality as children. Recall a time in which you didn’t want to wait until after dinner for your dessert, so you simply went into the refrigerator or climbed onto the counter to obtain a snack. All was well until you were caught by your parents, at which point you were taught very sternly about consequences. The eyes of your parents (and others) then become a deterrent for certain behaviors.
Your parents would be judging the action you committed, rather than judging you personally. It is, unfortunately, unclear how well these values translate into cases in which one is alone. This is because often “the goal is not to actually be moral, only to be seen by others—and to see oneself—as moral.”[14] Without the judgment of your actions by your parents, there seems to be little incentive to refrain from specific types of behavior. Daniel Batson conducted a study to show evidence of what he calls moral hypocrisy.[15] The experiment was simple: a participant is put in charge of assigning tasks to themselves and another participant. There is a task with positive consequences (each correct response in the task would grant the participant a raffle ticket toward a prize of $30) and a task with neutral consequences. The participant is told that the other person will not be aware of who was in charge of assigning the tasks. Without any prompts about the moral way to delegate the tasks, 80% of participants gave themselves the assignment that would reward them with the raffle ticket. However, when asked directly, only “1 of 20”[16] participants stated that this was the correct thing to do. Once left alone, half were asked to assign the tasks in private and the other half were asked to flip a coin to assign the tasks. In the absence of perception by others, 90% of the former group assigned themselves the raffle-oriented task. In the latter group, “of those that did flip, 90% also assigned themselves the positive-consequence task, a significant deviation from chance.”[17] This study shows that, while left alone, personal benefit will most often outweigh what one believes to be the right thing to do. It does not matter whether assigning oneself the better task is morally problematic or not. What matters is simply that the participants stated the correct thing to do was to give the other person the better task.
By stating that this was the more acceptable choice, the participants showed that they felt obligated only to say so during their interviews. Whether the participants were lying about what they believed the best course of action was, or were simply stating what they believed the questioner wanted to hear, the important point is that some sort of hypocrisy is being acted out. This hypocrisy becomes especially important to this case when it occurs in the absence of observation. In the case of littering in the San Marcos River, you are well aware of the negative consequences associated with littering before setting off. Perhaps a friend or an employee at the business that rents inner tubes warned you of the effects of litter in the river. In either case, if you were to find yourself separated from the group and floating all alone, the likelihood of sinking the cans in the river increases significantly. If the perception of others, or simply the system of reward and punishment based on peer pressure, is the driving factor of your action, then it “invites the inference that one does not value being moral as an ultimate goal.”[18] Acting morally or correctly in this situation would merely be a tool by which one obtains a reward, namely acceptance in the eyes of one’s peers.[19] Much like the case of guilt stemming from a self-recognized moral failing, the harm that befalls the river is only relevant in an indirect way. The direct harm to be avoided is one done to oneself in the eyes of one’s peers. Given Batson’s findings, if one places importance on the approval of onlookers only while being watched, it seems implausible that consistent moral progress would remain while alone. It is then implausible that guilt stemming from the perception of others would be enough to facilitate consistent moral progress.
Guilt Stemming from Inaction

Oftentimes one may feel guilt upon recognizing an area in which action could have been taken but wasn’t. A common response to this recognition is the desire to make up for the behavior. Imagine a case in which you are attending a public lecture at your university in order to obtain extra credit. A prominent scientist takes the stage after a short introduction and begins to make her way through a slideshow about global warming. As she presents more and more evidence about the worsening situation of the climate, you begin to realize that you simply have not been doing enough, if anything at all, to reduce your carbon footprint on the world. You quickly resolve to make up for your inaction and, after an internet search, you find and donate to a charity that plants trees. After your $100 donation vanishes from your bank account, your guilt is lifted and you feel relieved. This practice can be referred to as offsetting. John Broome characterizes offsetting in this way:

Offsetting your emissions means ensuring that, for every unit of greenhouse gas you cause to be added to the atmosphere, you also cause a unit to be subtracted from it. If you offset, on balance you add nothing…If you successfully offset all your emissions, you do no harm by emissions. You therefore do no injustice by them.[20]

It remains unclear how offsetting will contribute to consistent moral progress, even if no wrongdoing is involved. This is largely due to its after-the-fact nature. Offsetting is contingent upon there being an unacceptable action (or in this case an inaction), as well as the self-recognition of the unacceptable behavior. It is also too difficult to assess one’s carbon footprint in any precise fashion in order to adequately offset the damage later.
There is a possibility that one would make an attempt to offset one’s carbon footprint but simply fall short without ever knowing it, thus failing to make moral progress despite the act of offsetting. Offsetting is perhaps inadequate for consistent moral progress because it implies a system in which one can continue to act in an unacceptable manner so long as one later offsets. If the $100 donation to charity indeed “offsets” your previous behavior, what is to keep you from engaging in future actions that will require offsetting? The type of moral integrity needed for such a system to stave off such behavior is rare. Batson states that in situations where “it is possible to obtain the self-benefits without actually being moral,”[21] those chances are taken. It is likely that one would continue to act undesirably for self-benefit now and attempt to offset later, which would not halt the undesirable behavior and would show no moral progress. Broome does believe that one would be making moral progress by offsetting one’s carbon footprint; however, two objections can be raised here. One is that it is just too difficult to accurately assess one’s carbon footprint. This makes it equally difficult to calculate how much one needs to do in order to offset it. The second objection is that this paper is concerned with the act itself in this case. The definition of moral progress for the scope of this paper is one in which characteristics develop from certain activities. To counteract these activities, one must make a habit of the corrected behavior. If one acts in the undesirable manner more often than one performs the offsetting actions, the degree of moral progress (if any) can be called into question. Offsetting is also dependent on perception, or self-recognition, as with the previously outlined cases.
Your action would then stem from finding out that a wrong has occurred, or from perceiving yourself in a manner that does not seem adequate to the standard to which you would hold yourself. This again makes any harm done to the environment that is being offset merely an indirect duty.

Guilt Stemming from Empathy

The final source of guilt to be explored is empathy. Empathy is generally characterized as the ability to understand or experience the emotional state of others from their own point of view. It is something akin to saying, “I can see why this action would upset you or harm you in some way, so I should refrain from doing this action.” Empathy can be a powerful source of guilt due to the understanding of the relationship that one has with the victim or potential victim. When one is able to “not only categorize victims into groups but one can also categorize oneself as a member of a group,”[22] it becomes possible to manifest feelings of guilt at the recognition of a harm done to a member of the same social group or moral community. Suppose that you are a member of the 4H club. You learn to care for cattle from birth through the remaining stages of life. Throughout this process you gain a sense of respect for the cows that are under your care. It also becomes clear that this admiration for the cattle goes beyond mere recognition of their utility. After the necessary amount of time has passed, it becomes time to sell off the cattle to the appropriate slaughterhouses. You are well aware of the fate that awaits them, and you begin to feel guilty about the process. The guilt in this situation stems from the empathy that you have for the cattle. There is a recognition that the cattle are victims in some sense, as well as a recognition that the victim is a part of your own group (or at the very least a part of a nontrivial relationship).
Another dimension to this is one of empathic injustice.[23] It is clear from this case that you are in a privileged position in relation to the cattle. You have gained many benefits to your life and academic career from raising your cows; however, the cows themselves are highly disadvantaged in this relationship. This recognition of the injustice (good fortune for the self vs. misfortune for the cattle) in your relationship eventually manifests itself as guilt. The recognition of such a relationship is essential. It is often difficult to act ethically toward something we cannot “see, feel, understand, love, or otherwise have faith in.”[24] It is often difficult to love or understand things that one would recognize as an “other.” These ideals can more easily be cultivated in a relationship with something that you view as part of your social system or moral community. It isn’t enough that you are able to recognize that the cattle are part of a social system that you both reside in. In order to act ethically toward the cattle, you must also learn to discover their inherent value by cultivating love and understanding for them. Empathy is not simply a source of guilt under this model; it is also a motivation for certain ethical behavior. Hoffman states that:

Empathic affects are congruent with two of Western Society’s major moral principles—caring and justice—both of which pertain to victims and beneficiaries of human actions. Empathic affects may therefore provide motivation for the operation of these principles in moral judgment, decision making, and behavior.
The integration of empathy and moral principles may thus provide the heart of a comprehensive moral theory.[25]

It then becomes probable that the 4H student will cease his or her behavior after recognizing not only the empathic injustice but also the inherent value of a member of the same social system who has been harmed in some nontrivial way. It becomes increasingly plausible for consistent moral change to be achieved. Guilt stemming from empathy thus seems to possess a crucial difference from the other sources of guilt outlined in this paper. Guilt stemming from a self-recognized moral failing, harm, the perception of others, and inaction all seem to focus on the perpetrator. In the case of a self-recognized moral failing, the duty to the park’s animal residents in one’s social system is merely indirect. The motivation for one’s actions in that case is based in a harm to one’s character, not the harm done to those residents. In cases of harm to those we cannot see (as environmental issues often operate in this manner), there seems to be little motivation for consistent moral progress. It is too difficult to tie one’s actions to any particular case of suffering, and this makes the actions seem trivial. Where cases are based on the perception of your peers, it can be inferred that one would only act morally so long as the perception persists. In cases where the perpetrator is alone, it becomes unlikely that the moral change would continue; actions that facilitate one’s own benefit become more likely. In the case of inaction, though offsetting may effectively erase any injustice done, it seems implausible to say that the previous actions would cease if they can be erased at a later date. Moral progress cannot be achieved if the actions in question are allowed to continue. All the previous cases seem to be ones in which the inherent value of the victims is not considered, but more importantly, not recognized.
Empathy implies a recognition of the inherent value of potential victims, as they are members of one’s own moral community. Allowing things to be part of one’s moral community grants the ability to act ethically toward them. Empathy, as has been shown, is also a motivation for behavior regarding certain principles. It seems plausible that empathy is important in facilitating consistent moral progress regarding action toward nature.

Guilt is often the motivation behind ethical environmental action. However, when looking into the various sources of guilt, it is unclear how effective guilt is as a motivator for a sustained ethical lifestyle regarding the environment. It has been shown that in cases of guilt stemming from a recognized moral failing, harm, the perception of others, and inaction, it seems less plausible that one would undergo such long-term moral progress in one’s environmental action. In the case of guilt stemming from empathy, it seems more plausible that consistent moral progress will be seen, due to the recognition of the inherent value of the environment (or the parts of the environment being acted upon specifically) and its place in the same moral community as oneself.

End Notes

[1] It is also important to note that this essay will not be discussing whether guilt is a sufficient justification for action. I acknowledge that this may be an important distinction to make, but it is not within the scope of this essay.
[2] Aristotle, “Nicomachean Ethics,” trans. Martin Ostwald (Upper Saddle River: Prentice Hall, 1999), 1103b 21-22.
[3] Laura Barnard Crosskey, et al., “Role Transgressions, Shame, and Guilt among Clergy,” in Pastoral Psychology, vol. 64, no. 6 (2015), 785.
[4] Crosskey, et al., 789.
[5] Rosalind Hursthouse, “Virtue Theory and Abortion,” in Philosophy and Public Affairs, vol. 20, no. 3 (1991), 243.
[6] Aristotle, 1109b 23-24.
[7] Tom Regan, “The Case for Animal Rights,” in Defense of Animals, ed.
Peter Singer (Oxford: Basil Blackwell, 1985), 180-181.
[8] Mishka Lysack, “Environmental Decline, Loss, and Biophilia: Fostering Commitment in Environmental Citizenship,” in Critical Social Work, vol. 11, no. 3 (2010), 49.
[9] Lysack, 50.
[10] The types given include: religion, education, human services, health, arts/culture/humanities, environment/animals, public-society benefit, foundations, and international affairs. Giving USA, “Giving USA: Americans Donated an Estimated $358.38 Billion to Charity in 2014; Highest Total in Report’s 60-year History” (2015). http://givingusa.org/giving-usa-2015-press-release-giving-usa-americans-donated-an-estimated-358-38-billion-to-charity-in-2014-highest-total-in-reports-60-year-history/
[11] Giving USA.
[12] Chuck McLean and Carol Brouwer, “The Effect of the Economy on the Nonprofit Sector” (Guidestar, 2012), 4. http://www.guidestar.org/ViewCmsFile.aspx?ContentID=4781
[13] Charity Navigator, “Giving Facts.” https://www.charitynavigator.org/index.cfm?bay=content.view&cpid=519
[14] Daniel C. Batson, “What’s Wrong with Morality?” in Emotion Review, vol. 3, no. 3 (2011), 231.
[15] Batson, 231.
[16] Ibid.
[17] Ibid.
[18] Batson, 232.
[19] Ibid.
[20] John Broome, “Climate Matters: Ethics in a Warming World” (New York: W.W. Norton & Company, 2012), 80.
[21] Batson, 232.
[22] Martin L. Hoffman, “The Contribution of Empathy to Justice and Moral Judgment,” in Moral Development: Reaching Out, ed. Bill Puka (New York: Garland Publishing, 1994), 169-170.
[23] Hoffman, 170.
[24] Aldo Leopold, “A Sand County Almanac” (New York: Oxford University Press, 1949), 42.
[25] Martin L. Hoffman, “Empathy, Social Cognition, and Moral Action,” in Handbook of Moral Behavior and Development, ed. William M. Kurtines and Jacob L. Gewirtz (New York: Psychology Press, 1991), 275.

Works Cited

Aristotle. “Nicomachean Ethics.” Translated by Martin Ostwald. Upper Saddle River: Prentice Hall, 1999.
Batson, Daniel C.
“What’s Wrong with Morality?” In Emotion Review, Volume 3, No. 3, 230-236, 2011.
Broome, John. “Climate Matters: Ethics in a Warming World.” New York: W.W. Norton & Company, 2012.
Charity Navigator. “Giving Facts.” Accessed May 29, 2016. https://www.charitynavigator.org/index.cfm?bay=content.view&cpid=519.
Crosskey, Laura Barnard, John F. Curry, and Mark R. Leary. “Role Transgressions, Shame, and Guilt Among Clergy.” In Pastoral Psychology 64, no. 6: 783-801, 2015.
Giving USA. “Giving USA: Americans Donated an Estimated $358.38 Billion to Charity in 2014; Highest Total in Report’s 60-year History.” Last modified June 29, 2015. Accessed May 29, 2016. http://givingusa.org/giving-usa-2015-press-release-giving-usa-americans-donated-an-estimated-358-38-billion-to-charity-in-2014-highest-total-in-reports-60-year-history/.
Hill, Thomas E. “Ideals of Human Excellence and Preserving Natural Environments.” In Environmental Ethics 5, 211-224, 1983.
Hoffman, Martin L. “Empathy, Social Cognition, and Moral Action.” In Handbook of Moral Behavior and Development, edited by William M. Kurtines and Jacob L. Gewirtz, 275-281. New York: Psychology Press, 1991.
Hoffman, Martin L. “The Contribution of Empathy to Justice and Moral Judgment.” In Moral Development: Reaching Out, edited by Bill Puka, 161-175. New York: Garland Publishing, 1994.
Hursthouse, Rosalind. “Virtue Theory and Abortion.” In Philosophy and Public Affairs, Volume 20, No. 3, 223-246, 1991.
Leopold, Aldo. “A Sand County Almanac.” New York: Oxford University Press, 1949.
Lysack, Mishka. “Environmental Decline, Loss, and Biophilia: Fostering Commitment in Environmental Citizenship.” In Critical Social Work, Volume 11, No. 3, 48-66, 2010.
McLean, Chuck, and Carol Brouwer. “The Effect of the Economy on the Nonprofit Sector.” Guidestar, 2012. Accessed May 29, 2016. http://www.guidestar.org/ViewCmsFile.aspx?ContentID=4781.
Regan, Tom. “The Case for Animal Rights.” In Defense of Animals, edited by Peter Singer.
Oxford: Basil Blackwell: 13-26, 1985. I think, therefore I do? William Alexander Hernandez University of Houston In The Sources of Normativity, Christine Korsgaard argues that what people should do depends on what people think.  Korsgaard claims that “A view of what you ought to do is a view of who you think you are” (Korsgaard 1996, p. 117).  I will argue that there are possible problems with Korsgaard’s claim.  This paper has two objectives: 1. Analyze Korsgaard’s argument. 2. Argue that practical identity is not sufficient in order to address the normative problem. I will proceed as follows: first, I will lay out Korsgaard’s argument concerning practical identity and normativity.  Korsgaard makes the following claims regarding human beings.  Korsgaard claims that the problem of and the solution to normativity lies within human consciousness.   Human consciousness has a reflective structure that makes us think about whether or not we should act upon our desires.  The reflective structure forces us to make laws for ourselves.  People are autonomous.   People give themselves certain laws or commands concerning what they should do.  These commands are expressions of practical conceptions.  Our practical identity determines which of our desires we can take as reasons for acting on them.  In the second section of the paper, I will argue that Korsgaard’s claim is problematic for the following reasons.  First, there is a hermeneutical problem.  Different people can interpret their contingent and essential identities to mean different things and thus interpret what they ought to do differently.  Second, people might not be autonomous.  That is to say, an individual might not govern himself or herself. Section One: Korsgaard on Human Beings and Normativity Korsgaard makes the following claims regarding human beings and normativity.  People have normative problems.  
Korsgaard states, “And we have normative problems because we are self-conscious rational animals, capable of reflection about what we ought to believe and to do” (Korsgaard 1996, p. 46). People are rational animals with the ability to think about their desires and actions. The ability to think or, more precisely, to reflect on our desires is what brings about the normative problem. For example, I have a normative problem because when I am compelled to do X, I can still ask myself, must I really do X? So my reflection brings about a normative problem. However, Korsgaard also suggests that reflection is the solution to our normative problems because it “forces us to have a conception of ourselves” (Korsgaard 1996, p. 100). Reflection forces a person to have a general description of identity, that is to say, a practical identity. The reflective structure also forces us to make laws for ourselves. People can make laws for themselves because they are autonomous. People can choose certain practical identities. Korsgaard states, “Autonomy is commanding yourself to do what you think it would be a good idea to do, but that in turn depends on who you think you are” (Korsgaard 1996, p. 107). In other words, people give themselves certain commands or laws based on their notions of their identities. For example, what I should do depends on my practical identities or who I think I am. What I should do does not depend on an external factor, such as God’s commandments. A person gives herself her own laws. People are autonomous because their laws are not imposed by an external factor. The laws that people give themselves are connected to their practical identity. That is to say, what a person ought to do is connected to that person’s practical identity. Korsgaard suggests that people may have several conceptions of themselves. For example, a person can have a conception of herself as a student, employer, etc.
However, not every practical identity or conception of a person has practical force.  In other words, not all practical identities force a person to act.  Korsgaard states, “you may stop caring whether you live up to the demands of a particular role” (Korsgaard 1996, p. 120).  In other words, not all contingent identities command equal force upon a person.  Not all practical identities compel people to do X.  People can choose which of their contingent identities to subscribe to.   For example, a university student can stop subscribing to her conception of being a university student.  Consequently, the student would no longer be compelled to write papers and to read books such as Fyodor Dostoevsky’s The Brothers Karamazov or Apuleius’ The Golden Ass for a class.  Thus, not all contingent identities spur a person to perform certain actions. Moreover, a person has the ability to endorse or reject his/her desires depending on his/her identity and whether the desire passes a test.  People should act upon the desires that pass a certain test.  Korsgaard writes, “The test for determining whether an impulse is a reason is whether we can will acting on that impulse as a law. So the test is a test of endorsement” (Korsgaard 1996, p. 108).  In other words, I have the ability to endorse or reject certain desires depending on my identity and whether my desires pass the test. Moreover, a practical identity gives a person reasons to do certain actions.  For example, the reason why I should not do X is because not doing X is a command that I gave to myself in virtue of my practical identity.  If I do X, then I would lose my practical identity.  According to Korsgaard, practical identity can allow me to turn my desires into reasons.  For example, if I endorse my desires after reflecting on them, then I would have “reason” to act on my desires.  
On the other hand, if I reject my desires after reflection because they do not coincide with my identity, then I have an obligation not to act on my desires. Korsgaard writes, “Practical identity is a complex matter and for the average person there will be a jumble of such conceptions. […] all of these identities give rise to reasons and obligations. Your reasons express your identity, your nature; your obligations spring from what that identity forbids” (Korsgaard 1996, p. 101). In other words, my practical identities give me reasons or obligations for me to act on certain desires. Practical identities give rise to obligations and reasons. As such, practical identity is necessary in order for me to have reasons to do X. Korsgaard states, “It is necessary to have some conception of your practical identity, for without it you cannot have reasons to act. We endorse or reject our impulses by determining whether they are consistent with the ways in which we identify ourselves. Yet most of the self-conceptions which govern us are contingent” (Korsgaard 1996, p. 120). In other words, I endorse or reject certain desires depending on whether these desires coincide with my practical identities. However, Korsgaard claims that people have an essential practical identity. The essential identity is our identity as a human. Korsgaard states: If this is right, our identity as moral beings—as people who value themselves as human beings—stands behind our more particular practical identities […] Most of the times, our reasons for action spring from our more contingent and local identities. But part of the normative force of those reasons springs from the value we place on ourselves as human beings who need such identities. In this way all value depends on the value of humanity; other forms of practical identity matter in part because humanity requires them. [Korsgaard 1996, p. 121] We have an essential or necessary identity.
The identity of being part of humanity or a member of the Kingdom of Ends lies behind other contingent identities. For example, I have an identity as an employee. Such an identity may compel me to do certain things like show up to work on time. However, such an identity is contingent. But what is not contingent is the identity that I have as a member of humanity or as a member of the Kingdom of Ends. Korsgaard states, “Our other practical identities depend for their normativity on the normativity of our human identity—on our own endorsement of our human need to be governed by such identities—and cannot withstand reflective scrutiny without it. We must value ourselves as human” (Korsgaard 1996, p. 125). In other words, our moral identity, that is to say, our identity as a member of humanity or as a member of the Kingdom of Ends, is a necessary condition that enables us to possess other practical identities. So we take our identity as a member of the Kingdom of Ends as being normative in order for us to have reasons to act. The ultimate source of all reasons or values is humanity itself. Moreover, the value of the essential identity of being a member of the Kingdom of Ends is implicit in the contingent practical identities. Korsgaard adds, “And to the extent that we cannot act against them without losing our sense that our lives are worth living and our actions are worth undertaking, they obligate us” (Korsgaard 1996, p. 129). In other words, I cannot act against such an identity. Korsgaard states, “Again, in so far as we regard ourselves as Citizens of the Kingdom of Ends, those laws are ones we have reason to accept. Citizen of the Kingdom of Ends is a conception of practical identity which leads in turn to a conception of the right” (Korsgaard 1996, p. 115). In other words, our identity as members of the Kingdom of Ends plays a role in normativity.
Section Two: Arguments against Korsgaard I will argue that practical identity, which determines which of our desires we can take as reasons for acting on them, is not sufficient in addressing the problem of normativity.  Practical identity is not a sufficient answer to the normative question because people can have the exact same identity and can still interpret what they ought to do in radically different ways.  Different people can interpret their essential identity, of being a member of the Kingdom of Ends, to mean different things and thus to interpret what they should do radically differently.  This could lead to a contradiction within Korsgaard’s view.  The contradiction is that if person A and person B both have the same exact identity, but person A interprets his essential identity in a different manner than person B, then person A and person B should do different things even though their identities are exactly the same. I will present several examples in order to clarify that practical identity is not a sufficient condition to establish what people ought to do.  People can interpret their essential identity to mean different things.  People can interpret the “Kingdom of Ends” or “humanity” in several different ways.  For example, Adolf Hitler and Mahatma Gandhi both interpreted the meaning of humanity in different ways.  Hitler valued himself and humanity in a terrible manner because he interpreted “humanity” in a specific way.  Hitler did not treat Jewish people as ends, as Korsgaard suggests, because Hitler did not consider Jewish people to be people.  Hitler interpreted them as “devils,” that is to say, as evil entities.   Hitler viewed Jewish people not as human beings but as viruses of the nation.  In a letter to Herr Gemlich, Hitler refers to the Jewish population as the “tuberculosis of the nations” (Hitler 1919).  In Mein Kampf, Hitler refers to Jewish people as “rats” and “parasites” (Hitler 1998, p 278).  
In other words, Hitler did not conceive the Jewish population as people but as viruses and animals. As such, from his perspective, the Kantian maxim to treat people as ends does not apply to Jewish people because they are not real people. Moreover, Hitler believed that he was doing something good for the greater humanity and that everybody should act as he did. As such, Hitler would pass the Kantian test because he would want everybody to do what he was doing, that is, to get rid of certain people. On the other hand, Gandhi valued himself and humanity in a different manner. Gandhi did not exclude Jews and other people from his views of the population of humanity. Thus, Gandhi did radically different things from Hitler. People can have an essential identity, and interpret their essential identities in very different ways. Moreover, in the 1700s and 1800s in the United States, white men interpreted humanity in a specific manner. They interpreted minorities and women as property that belonged to their owners. Consequently, white men treated women and minorities in an inhuman manner because they believed that women and minorities were not on par with themselves. In modern times, some people interpret a fetus as a human. On the other hand, some people interpret a fetus as not being a human. Therefore, how we interpret the Kingdom of Ends or humanity can change from person to person. This suggests, I think, that practical identity is not the only source of normativity because people can interpret their essential identity to mean different things. Thus, practical identity is not sufficient in providing us with a story concerning normativity. The problem is that people can interpret “humanity” or the Kingdom of Ends to mean different things. If people can interpret humanity or the Kingdom of Ends in different manners, then people can also interpret their contingent identities in different manners as well.
For example, different people could interpret their identities as fathers in different manners. One person could interpret his identity as a father in a literalist biblical manner and another person could interpret his identity as a father in a non-literalist biblical manner. If a person interprets the Bible in a literal manner, then that person will interpret his identity as a father in a specific manner. He will believe that as a father he should beat his child because the Bible says so. The Bible says, “A rod and a reprimand impart wisdom, but a child left undisciplined disgraces its mother” (Proverbs 29:15). In other words, a father could believe that his identity as a father requires him to beat his child. This father would believe that he should beat his child with a rod because, according to the book of Proverbs, it would give the child wisdom. Moreover, this father would believe that this law should be universal because the Bible says so. On the other hand, a person can interpret his identity as a father in a non-biblical or non-literalist biblical manner, which could mean that as a father he should not beat his child with a rod. This person could interpret his identity as a father to mean that he should never do physical harm to his child. This suggests that how people interpret their identities can dramatically change and thus what people should do would have to be radically different. So it would be okay for the first father to beat his child with a rod and at the same time it would be okay for the second father not to beat his child. This suggests that two different actions that are radically different could both be okay. The first father would believe that every father should use the rod because the Bible says so and thus the law should be universal, while the second father would believe that using a rod to impose wisdom on a child should not be a universal law.
Thus, people’s interpretations of practical identities could lead to contradictions. Moreover, if Korsgaard is correct in her views of human consciousness as the source of normativity, then it would be difficult to prove to Hitler and the Nazis that what they should do is not kill the “unwanted” people of society. Hitler believed that he was doing something good for humanity. It would be difficult to prove to Hitler that he was doing something wrong because his interpretation of humanity was distorted. Hitler did not just have a contingent identity that was bad. Rather, he viewed the essential identity in a specific manner. So the problem lies in determining what exactly “humanity” means. What does it mean exactly to say that we are part of the Kingdom of Ends? Who is allowed to be part of the Kingdom of Ends? My second argument against Korsgaard is that people are not autonomous. People do not always impose laws upon themselves. Rather, cultures and societies can impose the laws upon people. In India, for example, society imposes the label of “untouchable” upon certain groups of people. Untouchables are thus not allowed to interact with so-called “ordinary people.” The identity of untouchable was not a law or an identity that was imposed by the individual. Rather, society or culture gives certain people their identities. Additionally, a few decades ago in the United States, women and minorities were not autonomous. Women and minorities were seen as second-class citizens and as such they were not allowed to do certain things. Women were not allowed to participate in the democratic process. As such, women could not command themselves to do what they thought would be a good idea to do. Moreover, in modern times, some people are still not autonomous. In some cases, people do not command themselves to do what they think is a good idea to do.
Rather, commercial corporations, such as McDonald’s and Nike, and the mainstream media tell people what they ought to do. These corporations spend millions of dollars on advertisements in order to tell people what they ought to do. So in some cases, corporations command people, so to speak, to do what they think it would be a good idea to do. As such, certain people are not autonomous. Some people do not impose laws upon themselves. Society and large corporations can be the law givers. Culture imposes laws upon people. Consequently, society and corporations can tell certain people what they ought to do. Section Three: Conclusion In conclusion, Korsgaard argues that people have a reflective structure within their consciousness. When a person has a desire to do X, that person can reflect on whether he should do X. We need reasons for acting on certain desires or impulses. The reflective structure forces us to make laws for ourselves. These laws are expressions of our conceptions. People could have several identities; however, they also have an essential and necessary identity. Our practical identity determines which of our desires we can take as reasons for acting on them. I have argued that Korsgaard does not offer a complete story when it comes to normativity. Her argument is incomplete because different people can interpret their essential and contingent identities in several ways. Korsgaard does not account for a hermeneutical problem concerning identities. Work Cited Hitler, Adolf. Jewish Virtual Library. 2016. http://www.jewishvirtuallibrary.org/jsource/Holocaust/Adolf_Hitler’s_First_Antisemitic_Writing. Hitler, Adolf. 1998. Mein Kampf. Houghton Mifflin Company. Korsgaard, Christine. 1996. The Sources of Normativity. Cambridge University Press. “A Foucauldian Postmodern World” Tyrell VanWinkle Austin Community College-Round Rock Michel Foucault provides an empirical account of power in regards to the shaping of discourse.
Historians often propagate their interpretation of history as an objective and universal understanding of the world. For Foucault, the knowledge peddled is that of the bourgeois or traditional interpretation. He sought to show that knowledge and power were intertwined, ultimately producing given interpretations of history. The information that is gained examining humankind as the subject of study is then used in order to form specific discourses that govern what is permissible through guiding practices. The institution of rules for what is the norm and what is deviant is called normalization. This process starts with the ‘subjectification’ of the human being, which is the placing of humankind at the center of institutional studies. Humankind becomes both the observer, the scientist responsible for analyzing said object, and the observed, which is the object about which the scientist seeks to gain information. This information is then utilized through power structures and disciplinary techniques to discern the formation of discourses and the rules that apply to them. Instead of adopting the view of an historian, Foucault looks at institutional events through the methodology of archaeology. Foucault proposes that those traditional views held by historians are not objective and unbiased, but rather are a result of the dance done between power and knowledge. The power discourses within a given period of time are examples of the credible views of those with expertise. Those discourses become knowledge based on the projection of views discerned as correct by institutions that are considered the experts. Archaeology seeks to pinpoint the different ideologies, prevailing perceptions and practices that surround the permeating knowledge or ‘truth’ constructed by institutions. It is important to know that this relationship between power and knowledge is symbiotic.
Just as power is able to claim expertise of a given subject, upholding and enforcing institutionalized interpretations as fact, knowledge is able to produce power. Through observation, information is received and produced about a subject, which is, for Foucault, tied to the creation of a praxis. Foucauldian theories are then bolstered by his use of genealogy: the analysis of history that examines former time periods in an effort to find turns or discontinuities that illuminate the origination of the thoughts and ideas responsible for the development of particular institutional practices. Practices of subjectification and normalization are noticeable when looking to Foucault’s view of the Victorian regime, where certain rules on the discourse of sex are created by the Christian Church. Questions of who may speak about sexuality, to what degree, and when it should be spoken of become established by this institution. Due to power relations, the Church was seen as the expert and the judge of such matters, and its view of how discourse was to take shape, instead of being an interpretation, became knowledge to be propagated by the state. It is important to allow the concept of subjectification to resurface when understanding the production of knowledge. With the example of the Church, the object of study was of course humankind; the institution sought to outline man’s participation in sex. Here man became an object under observation by man, which provided a space for the expert opinion to produce knowledge on the discourse of sexuality. With considerable work done by Foucault to uncover the role that power plays in society, especially in the formation of discourse, it becomes rather apparent that questions of utility might arise. This does not necessarily refer to the questioning of the ability for such knowledge to be useful, but instead how it ought to be incorporated in the political theatre.
Unfortunately, although in regards to this question Foucault does a masterful job of analyzing and producing knowledge about power and domination in its relation to discourse, he lacks a proposition as to what a world beyond might look like. That is to say, what a world that acknowledges the role of power might look like. Of course, it must be understood that Foucault does determine it necessary to analyze and uncover the powers at work that produce societal discourses or the rules upon those discourses. Yet, this still leaves much up to the imagination of the individual when attempting to positively change a community in such a way that it might allow Foucauldian theories to develop into connected practices. One finds a lack of direction when considering how to shape a society so that it may provide refuge from the negative implications of power or even earnestly attempt to rid the world of these issues altogether. Does one try to de-legitimize institutions’ ability to use power in such a way? Is society supposed to create a hospitable space that rids the possibility of deviance? Maybe there should be an acknowledgment of the way discourse and power are institutionally intertwined so we may attempt to discover the most justifiable rules for engagement. It is not the purpose of this paper to provide a meticulous political framework that outlines the exact rules that any community should hold. In fact, there will not be a conclusive political theory that ought to be adopted whatsoever by the end of this paper. Instead, this paper seeks to describe Foucault’s alternative interpretation of human institutions and the postmodern alternative set of goals for a society to endorse in acknowledgment of Foucauldian theories regarding power and its relation to discourse. Specifically, the two main subjects will be the subjectification and normalization of man.
Foucault’s methodologies, the existence of normalization, and his view of subjectification will not be defended in this paper, but rather will be utilized in the way Foucault articulated them. Furthermore, because the purpose of this essay is to speak about how to make use of the tools and information Foucault provided, there will be little discussion about the process of producing such knowledge about power. Information employed in “History of Sexuality” will serve to analyze what a society might value if it were to attempt to exist as a solution to potentially negative power discourses. Before the work is done to conjoin Foucauldian theories of power and knowledge with a possible conception of politics stemming from them, it is necessary to provide an empirical basis for Foucault’s political critique. That is, to produce an illustration accurately embodying Foucault’s description of power and knowledge and the relationship they share. For this purpose, his analysis of the Victorian regime stemming from the 1700s within “History of Sexuality” will be called forth. Foucault first calls into question the establishment of sex as a power discourse. As opposed to discourse engaged in by happenstance, that is, discourse that exists naturally and organically, the discourse of sex has become a separate, outlined praxis. This event marks the possibility of such a discourse becoming saturated with rules and regulations, that is, becoming repressed. Foucault himself explains: The seventeenth century, then, was the beginning of an age of repression emblematic of what we call the bourgeois societies, an age which perhaps we still have not completely left behind. Calling sex by its name thereafter became more difficult and more costly.
As if in order to gain mastery over it in reality, it had first been necessary to subjugate it at the level of language, control its free circulation in speech, expunge it from the things that were said, and extinguish the words that rendered it too visibly present. And even these prohibitions, it seems, were afraid to name it. Without even having to pronounce the word, modern prudishness was able to ensure that one did not speak of sex, merely through the interplay of prohibitions that referred back to one another: instances of muteness which, by dint of saying nothing, imposed silence. Censorship. (Foucault, 17) Foucault highlights authors such as Sanchez and Tamburini as a testament to writing that emphasized the increasing amount of discretion used when dispensing information about the activity of sex. (Foucault, 19) Such meticulous concern and observation of the individual in its relation to sex was produced by the continued need for confession. As Foucault articulates: This was partly because the Counter Reformation busied itself with stepping up the rhythm of the yearly confession in the Catholic countries, and because it tried to impose meticulous rules of self-examination; but above all, because it attributed more and more importance in penance, and perhaps at the expense of some other sins, to all the insinuations of the flesh: thoughts, desires, voluptuous imaginings, delectations, combined movements of the body and the soul; henceforth all this had to enter, in detail, into the process of confession and guidance. According to the new pastoral, sex must not be named imprudently, but its aspects, its correlations, and its effects must be pursued down to their slenderest ramifications. (Foucault, 19) Here marks the imposition of man into the seat of the object as well as the examiner of the self. This allows the abstract understanding of knowledge in its formation of power to be seen.
There is an institutionalization of the need for penance; theistic requirements are propagated that formulate the practice of confession and impose it as the traditional view of how individuals ought to act. Regardless of the reason as to why the act is necessary, that information contributes to the formation of practices accepted by a society. The state gains power from the knowledge it holds about subjects in their relation to spirituality. Conversely, it also introduces the relationship via power’s ability to obtain knowledge. The fact that the bourgeois knowledge of spirituality has been accepted as fact prompts a return to the notion of expertise. Instead of it being a belief that individuals hold as subjective interpretation, the Church was viewed as the expert and thus its opinion was propagated as fact or ‘truth’. Altogether, the new pastoral arrangement has become normalized; the articulation of sexual discourse has been situated into categories of permissible and impermissible, which allows man to subject himself to observation out of fear of deviance. In order to expand the understanding of how this knowledge created power that expanded from singular institutions within a society to the state itself, one must look to the movement from the seventeenth to the eighteenth century. The regulations around sex remained institutionalized by Christian institutions, even as they are today, but crucial consideration must be placed upon surrounding institutions and the practices developing around them. With Christianity, there was a specific power play enveloping humankind; increasingly, there became the need to adhere to such rules, and if one did not, one was seemingly refusing and disobeying God’s will. (Foucault, 23) Whether or not for justified and sensible reasons, the creation of such notions of deviance was consciously willed in order to create the need for constant confession.
Exposing oneself to observation and the entering of the private life into the realm of public concern becomes commonly accepted, all for the sake of acting in consistency with some notion of moralism. In the Christian’s case, to act as a good Christian. Of course the Christian pastoral doesn’t necessarily walk a path that will bleed into the power discourse of the state, although it’s considerably likely. Foucault, in examination of the eighteenth century, notices that this moralist demand for practices of observation explodes into even more institutions or cells of society. The economic and political concern of population articulated issues such as “life expectancy, fertility, state of health, frequency of illness, patterns of diet and habitation”. (Foucault, 25) Foucault goes further to explain the link between this growing concern over population and its relation to the discourse of sex: At the heart of this economic and political problem of population was sex: it was necessary to analyze the birthrate, the age of marriage, the legitimate and illegitimate births, the precocity and frequency of sexual relations, the ways of making them fertile or sterile, the effects of unmarried life or of the prohibitions, the impact of contraceptive practices, of those notorious “deadly secrets” which demographers on the eve of the Revolution knew were already familiar to the inhabitants of the countryside. (Foucault, 25-26) This is linked to the capability of power and wealth to be attained based on the population of a state; a great society was not created through the Socratic and Aristotelian notion of outstanding and virtuous individuals, but rather linked to rules governing the discourse of sex. (Foucault, 26) Again, the community demands observation of the individual, demands that they govern their private practices in accordance with calculated conceptions of the common good.
This fusion of the discourse of sex into other institutions and mechanisms within society produces the necessity of policing. Policing is the state's articulation of power in regard to snuffing out or eradicating deviance. Techniques are used by the state to encourage and discourage any practices inconsistent with or contradictory to normalized practices, in the name of common and greater goods. Looking back to both the Christian pastoral and the newfound political and economic power discourse of sex, it is an undemanding task to see the role that smaller institutions of society play in establishing the discourse, and thereby the practices, of the state. Painstaking analysis of any power discourse still leaves the prescriptive question: why does this matter? What valuable information is produced by understanding power relations between knowledge and the power of institutions as well as the state? Foucault extracts the implications of these power relations, proposing a positioning of humans within a sphere of biopolitics. Biopolitics, or biopower, is the power gained over the life of the individual; the power stemming from the influence that the state and institutions have over the regulation of life itself. This is opposed to a politics that gains power over death, which is the ability of a sovereign to exercise discretion in deciding whether one should die. Foucault observes that power over death was "exercised in an absolute and unconditional way, but only in cases where the sovereign's very existence was in jeopardy". (Foucault, 135) One can draw parallels to the Hobbesian notion of the Leviathan, where a state could act in any way to sustain survival, thus forming a reactive power that lived in a defensive paradigm. Power over life is much more proactive; where the right over death could only be exercised in certain instances, the power over life is always imposed in the name of a greater good and for the betterment of society.
The development of such a power exists in two forms:

One of these poles - the first to be formed, it seems - centered on the body as a machine: its disciplining, the optimization of its capabilities, the extortion of its forces, the parallel increase of its usefulness and its docility, its integration into systems of efficient and economic controls, all this was ensured by the procedures of power that characterized the disciplines: an anatomo-politics of the human body. The second, formed somewhat later, focused on the species body, the body imbued with the mechanics of life and serving as the basis of the biological processes: propagation, births and mortality, the level of health, life expectancy and longevity, with all the conditions that can cause these to vary. Their supervision was effected through an entire series of interventions and regulatory controls: a biopolitics of the population. (Foucault, 139)

Foucauldian politics constitutes such power negatively, based on its ability to bear ill fruit. This new form of power invigorates the power of the state, extraordinarily expanding its ability to shape itself into the horrors of the millennium. Racism, even Nazism, became possible with this new form of power over the individual:

Racism took shape at this point (racism in its modern, "biologizing," statist form): it was then that a whole politics of settlement (peuplement), family, marriage, education, social hierarchization, and property, accompanied by a long series of permanent interventions at the level of the body, conduct, health, and everyday life, received their color and their justification from the mythical concern with protecting the purity of the blood and ensuring the triumph of the race. Nazism was doubtless the most cunning and the most naive (and the former because of the latter) combination of the fantasies of blood and the paroxysms of a disciplinary power.
(Foucault, 149) When looking at the Christian pastoral described earlier, one may ask how such a conclusion is reached. The answer is plain and exposed if the processes of normalization and subjectification are observed anew. When humankind is positioned in the place of the object as well as the examiner, depending on what the successful power regimes embody, potentially any practice can be accepted. With racism, institutions such as the home, the school, and public social institutions propagate a layering of races into groups with different privileges, based on supposedly rational thought. These 'experts' relentlessly traditionalize their interpretations of the world and profess the common good. In terms of eugenics, the claim to fame was one of cleansing: attempting, roughly, to create the most pure and efficient race. If institutions in power propagate such views, they can become knowledge, as seen with the Christian pastoral. Once this is the case, given practices and techniques of self-examination, humankind begins to police itself into conformity. The sovereign is able to benefit as well, utilizing such 'expertise' in order to press its own policing of the body, yielding possibly horrendous consequences. With Foucauldian theories of power and knowledge fleshed out, the final job is to propose a conception of society that one might hold if one wants to exist in acknowledgment of them. Primarily, the goal of this proposition is to challenge biopolitical power. That is, to prevent the state and institutions from existing in a definite, solidified fashion; to attempt to provide immunity against biopower's more ominous possibilities. In order to challenge the biopolitical power a state and its institutions hold over a society, it is required to understand the origination of the norms propagated by institutions.
The process of power and knowledge is the process in which these norms are constructed, but at the same time they always reflect some sort of axiological value. The pastoral outlined within this paper, as well as the political problem of population, both align with propositions of value. What a society deems important outlines how individuals ought and ought not to act; these values are responsible for the policing techniques and practices that exist as a biopolitical power. Martin Hagglund, in his analysis of Jacques Derrida, another French philosopher of Foucault's generation wary of the institutionalization of cultural norms, displays the process of accepting certain conceptions and how that ultimately results in the exclusion of other interpretations, which in turn forms what is constituted as an accepted practice or deviant behavior:

Thus, a rigorous deconstructive thinking maintains that we are always already inscribed in an "economy of violence," where we are both excluding and being excluded. No position can be autonomous or absolute; it is necessarily bound to other positions that it violates and by which it is violated. The struggle for justice can therefore not be a struggle for peace, but only for "lesser violence." (Hagglund, 82)

This is exemplified by the relationship between norms and deviant behavior. Where norms are the constituted rules and regulations on particular discourses that breed a specific set of practices, deviance can be seen as the irreparably excluded position. Take the discourse of sex: when there is a rule such as not speaking about exploits of the flesh around children, this norm creates deviance, which is any position excluded by the practices endorsed. Here norms and deviance uncannily reflect the nature of axiological claims; Foucault would likely agree with this, given his own analysis of normalization as reflecting traditional moralism constructed by institutions and enforced by the state as well as by the individual.
The conclusion here is that there may not be a way to avoid biopower if a society wishes to create practices upon its axiological alignment. However, this does not mean a Foucauldian-inspired society is lost; in fact, this gives us the first step in realizing a construct that would demonstrate how one might discern between permissible and impermissible power discourses. For example, when looking at Nazism, it is a simple task to see Auschwitz as a reflection of a society that failed, an example of the horrendous atrocities that can prevail from statism. Regulations upon the discourse of race or speech, meant to prevent practices like Nazism or racism as a whole, are likely seen as justified. This is based on our conception of what is valuable and which practices are welcomed. Inversely, Foucault's animation of problematic power discourses, such as the repression of sexuality he traces from the seventeenth century through the Victorian regime, should not be forgotten. It highlights the importance of questioning traditional axiological 'truths'. A modern example of a power discourse that likely ought to be dissolved is the way in which gendered discourses are regulated within society. Many female scholars tirelessly work on illuminating the different institutional practices that exclude the incorporation of the female identity. Ultimately, it is important to realize that though not all axiological conceptions may be worthy of retaining, as long as communities hold on to them and seek to create practices in conformity to these values, they will exercise biopolitical power. Though acknowledging an axiological value's importance in the political theatre is key to dismantling the domination that states and institutions hold over discourse, there is more work to be done to ensure that bourgeois interpretations do not become the will to 'truth'. For this, the second step comes from the nature of power discourses: namely, they are subjective.
This is 'subjective' not in the sense of subjecting humankind to the center of observation, but rather in the sense that norms and knowledge produced within a society are relational and biased. As discussed previously, Foucault's work is based around the understanding that information or knowledge peddled by historians is not absolute fact. Instead, these are interpretations based on the interdependent relationship of power and knowledge. The importance that a society places on this relationship is seen in the role of 'expert' opinion, which drives a state's and institution's ability to create knowledge accepted as 'truth'. If it is acknowledged that these opinions and interpretations are biased, there is an opening for those discourses lost to specific power regimes. When thinking about normalization as a reflection of the axiological values of a given time period, accepting relativism is even more important. This is primarily the case because there is constant political and theoretical debate about the ethical. Even if one accepts a theory of morality that attempts to become transcendental, in the sense that it is superior to all other moral theories, the theory is in reality subjective. Until it becomes infallible, it is only, at best, justified as a single interpretation pitted against many others. Additionally, unless it is infallible, there is the possibility that this specific perspective is the result of specific power relations that render a particular philosophy reasonable. Simply put, a society must be open to accepting reasons to reject, and in some cases completely alter, power discourses. Finally, it is of utmost importance that institutions as well as the state become hospitable to subjective claims. Even if a society accepts interpretations as relational, the traditional or currently accepted interpretation can retain the throne based on exercised power.
There has to be a system, or articulation, of institutions along with the state that allows constitutively 'other' perspectives to enter. If given information about the subject is modified into practices, it marks the construction of a particular power discourse; in other words, it allows specific rules and regulations of a given discourse to be solidified. If an institution is open, any alternative perspective ideally would have the chance to justify itself without facing baseless exclusion. When examining gendered discourse it becomes apparent how problematic closed systems can be, as Sandy Langford-Mckinnon shows in her analysis of psychological institutions and their effects on female discourse:

The psychological combines with the physiological when the emotional nature of women is seen as leading to their being susceptible to the stresses of the competitive world - i.e., the male-dominated institutions. Freud's advice was that women should "withdraw from the strife into the calm uncompetitive activity of…home." Male doctors, by accepting and implementing the normalizing practices of the non-medical society, were among those who encouraged this exclusion. (Langford-Mckinnon, 31)

Here, information gained concerning the female body is used as a reason to regulate women's ability to participate in specific realms of society. If women or men wish to challenge the 'truth' concerning the alleged emotional nature of women, it becomes apparent that open institutions are increasingly important. This also illustrates the necessity of a society that acknowledges the relative and subjective nature of interpretations: the perception of women as emotional as opposed to rational, creating the dichotomy of reason versus emotion, becomes increasingly difficult to challenge if viewed as indisputable fact.
A society seeking to incorporate Foucauldian theories of power into societal practices will surely have its work cut out for it, as avoiding statism and the processes that lead to statism, such as subjectification and normalization, is a complex task to complete from within a state. This paper is not extensive enough to create such a framework; instead, it describes the postmodern archaeological methodology that is precursory to any successful assimilation of Foucauldian theories into practice. First, we acknowledge that as long as any society endorses certain values over others, there will necessarily be the creation of that which is accepted and that which is considered deviant. In order to avoid statism, we endorse accepting the subjectivity of conceptions on a given issue, since any axiological position creates normalization. We must heed the possibility that even if we accept that another subjective conception is equally or more justified, there is still the possibility of biased opinions becoming the objective standard. This danger exists when institutions become closed off to those with alternate conceptions and thus are not hospitable to the thought that alternatives are externally justifiable. Because of this, we find it necessary to create open institutions that accept criticism, let alternative conceptions plead their case fairly, adjudicate practices, and attempt to determine what knowledge ought to be accepted. Overall, although Foucault gives no articulation of the range of alternative conceptions a society may wish to adopt, he does guide us towards realizing a postmodern society informed by Foucauldian theories of power.

Foucault, Michel. The History of Sexuality, Vol. 1: An Introduction. Translated by Robert Hurley. New York: Vintage Books, 1980. Print.
Hagglund, Martin. Radical Atheism: Derrida and the Time of Life. Stanford, California: Stanford University Press, 2008. Print.
Langford-Mckinnon, Sandy.
Unmasking Foucault's Discourse: Foucault's Exclusion of the Exclusion of the Female In His Discourse On Power. Austin, Texas: The University of Texas at Austin, 1989. Print.

~ by texasphilosophical on August 6, 2016.
Green Herringbone Trousers - Holland Esquire

These plain twill straight leg trousers are effortlessly casual but have all the hallmarks of Holland Esquire's refinement, with touches such as exquisite hand-stitched detailing on the pockets, a luxurious viscose half lining, and wooden buttons made in Italy. They feature turn-up bottoms that can be altered to size, and an adjustable waistband to ensure you stay comfortable all day. For a two-piece suit that is both relaxed and refined, team with the matching jacket.
Probability - Spin the Spinner (Grades 6th - 8th, CCSS: Adaptable)

Learners solve and complete 20 various types of multiple choice problems. First, they use a spinner to find the probability for each number. Then, everyone finds the sample space when flipping a coin. They also determine the probability of getting an odd number and a tail.
Wednesday, September 28, 2011

What is a 'primary illness'?

We have a definition of health, from a health perspective, as opposed to the more traditional 'medical perspective'. We have an understanding of the primary causes of illness, from a health perspective as opposed to a more traditional medical perspective. We can define a 'primary illness' as an illness that is the result of a single primary cause. Scurvy is a primary illness caused by a deficiency of Vitamin C. Dehydration is a primary illness caused by a deficiency of water. Water intoxication is caused by drinking too much water. Broken bones can be caused by too much physical stress.

We have identified 6 primary illness factors and 12 primary causes of illness - deficiencies or excesses of genetics, nutrients, parasites, toxins, stress, growth and healing. Each of those may have hundreds of 'primary illnesses', many of which are not studied by our medical establishment. The primary cause of illness, and the causes of all primary illnesses, are simple imbalances. Once we realize this, we can see that many illnesses have no name and are not studied. In this article, I focus on primary nutritional illnesses:

a) we know of over 100 individual nutrients that are essential to health - we do not have scientific agreement on which nutrients are essential to health.
b) therefore, over 100 individual primary illnesses can be caused by deficiencies of individual nutrients
c) and, presumably, over 100 individual primary illnesses can be caused by excesses of individual nutrients

Some of these illnesses are clearly understood when the deficiency is severe, but not well studied when the deficiency is not severe. Severe deficiency of Vitamin A is known to cause blindness, poor immune system function, and poor bone growth. Mild Vitamin A deficiency? Not studied as far as I know. Is the name given to mild Vitamin A deficiency 'Vitamin A deficiency', or is there no name? What is the test for Vitamin A deficiency?
I believe there is no well-accepted, effective test for Vitamin A deficiency - I can only hope that someone proves me wrong someday. Vitamin B1 deficiency causes beriberi - or is beriberi the name of severe Vitamin B1 deficiency? What is the name of, and what are the symptoms of, low-level Vitamin B1 deficiency? What is a good test for minor Vitamin B1 deficiency?

Nutrition is the foundation of health. Nutritional deficiencies and excesses can, in many cases, be prevented by simple positive or negative actions: consume nutrients that are deficient, avoid or minimize nutrients that are in excess. Nutritional prevention is the most powerful tool of Personal Health Freedom, because it is most easily affected by personal choice. Personal choice is especially important in light of the poor quantity and quality of information about nutritional deficiencies and excesses. Nutritional illnesses are also very easy to acquire. Simple inattention to diet, dietary drift, or dietary simplicity (a meat-and-potatoes diet, a pizza diet) can easily result in minor nutritional deficiencies in a very short time period - and severe nutritional deficiencies over a long period. But are they detected? Health deficits (illnesses) are typically ignored until they become severe illnesses. We often think of ourselves as perfectly healthy; in truth we are probably all suffering from some health deficiencies that are not measured. Our bodies have amazing power to overcome the disadvantages of minor health deficits and even severe health deficits. The medical paradigm does not recognize a health deficit until it becomes an illness to be treated. By this time, many primary illnesses or health deficits may have combined to create a complex web of causes, symptoms and illnesses. Illnesses are mysterious entities, identified by symptoms, not by causes. Thus, the natural reaction is to treat the symptoms rather than search for causes. If you go to a doctor because you have a bad cold - the cold may be treated.
It is unlikely that your immune system will be tested, and less likely that you will be tested for any of the many primary causes of a weak immune system. From one point of view, this is understandable - the medical establishment is, as the saying goes, 'up to their ass in alligators'. But it is time to think about draining the swamp. We need a methodical, scientific study of illnesses and primary illnesses if we are to move our understanding of health beyond the current medical paradigm. And until we have the results of these studies, and maybe long after, we need the freedom to choose our nutrients, our foods. Many governments are actively working to limit the nutrients we can consume - and also to limit the foods that can be studied in scientific studies. These limits are put forth as 'consumer protection legislation' - they are often simply health freedom limitations and a recipe for a health-deficient future. Health be with you. Tracy is the author of two books about healthicine:

Tuesday, September 20, 2011

Health and Illness = Light and Darkness?

I like to think in parallels. Can we draw a parallel between the concepts of light and darkness and our understanding of health and illness? Illness is easily compared to darkness, or dark corners. It is often hidden away, difficult to discern or recognize. We can imagine a transition from light to darkness, corresponding to the similar transition from healthiness to illness. We often speak of the darkness of mental illness, the blackness of unhealthy tissue, and we know that black stools can indicate internal bleeding. Health is viewed as bright and alive with colours. Like most parallels, this comparison provides some useful insights. It closely matches the 'medical paradigm' where illness is bad and health is good. It is a useful metaphor - but its weakness is a focus on illness. We need a metaphor that has a focus on health.
And as we develop a metaphor for health, as we view illness and health through a different lens, we will see things that could not be seen through the medical paradigm. Healthiness is not measured by brightness, nor colour; it is measured by balance. The more our nutrition, cells, tissues, organs, etc. are in living balance, the healthier we are. Health is the living balance between deficiency and excess. Our bodies are always working to maintain the balance - when we lose balance, we become ill and may die. When we view health as thousands of balances, and illness as being severely out of balance, it is easy to imagine that we have many small illnesses all of the time. We cannot shine a light on illness to create or improve health. We cannot improve health by 'cutting out' the darkness (illness). We can only improve health by changing the balance. I often use Vitamin C as an example, but you can substitute many health factors and come to similar insights. Viewed through the medical paradigm, scurvy is illness, or darkness. Scurvy can be prevented by consuming sufficient Vitamin C. If scurvy is present, it can be treated with Vitamin C. The effects of severe scurvy cannot be 'cured' - if you lose teeth because of scurvy, Vitamin C will not grow them back. This is a useful paradigm for preventing or treating illness, but a poor paradigm for optimizing health. Viewed through the healthy balance paradigm, scurvy is not simply an illness, it is an imbalance - a prolonged deficiency of Vitamin C. If you are suffering from a deficiency of Vitamin C, you can improve your health by adding Vitamin C to your diet. It is useless and trivial to say "Vitamin C prevents scurvy", because it is the same as saying "Vitamin C prevents Vitamin C deficiency". Vitamin C does not 'cure' a Vitamin C deficiency. Vitamin C does not prevent, nor cure, scurvy. A deficiency of Vitamin C is scurvy. The medical paradigm only recognizes this when the deficiency is severe and prolonged.
It may well be that people who are exposed to prolonged minor deficiencies of Vitamin C, or severe short deficiencies of Vitamin C, develop conditions exactly like scurvy - but on a much smaller scale, not recognized as 'illness'. When we can accurately measure 'healthiness', as opposed to only measuring 'illness', we will see the effects of Vitamin C deficiency earlier and understand much more about healthiness. The challenge, even in the simplest situation, is to determine what is 'out of balance'. The symptoms of prolonged severe Vitamin C deficiency are well documented and easily recognized - and named scurvy. However, a minor Vitamin C deficiency has symptoms in common with many other problems. So, how can you know if you are suffering from a minor Vitamin C deficiency? Medical researchers do not attempt to define the 'healthiest' intake of Vitamin C. Medical researchers cannot even agree on the recommended 'minimum intake' of Vitamin C. The United Kingdom Food Standards Agency recommends a minimum of 40 mg per day. The World Health (so called) Organization recommends a minimum of 45 mg per day (although they claim to be a 'health organization', they do not make a recommendation for optimal health). Health Canada (so called; they also do not make a recommendation for optimal health) recommends 75 to 90 mg per day as a minimum. The National Academy of Sciences in the USA recommends 60 to 95 mg per day as a minimum. Although each of these numbers may be presented as the 'recommended healthy intake', the numbers are very specifically designed to be used by prisons, armies, schools, etc. to ensure that minimum nutritional needs are met. No official organization recommends a 'healthiest daily intake' of Vitamin C, or of any nutrient. How can you decide what the healthiest intake of Vitamin C is for you? This is your personal health decision. You need the right to decide for yourself - your personal health freedom.
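The disagreement among the official minimums can be made concrete. Here is a minimal sketch that uses only the figures cited above, stored as (low, high) pairs in milligrams per day; the variable names are illustrative, and a single recommended figure is stored as an equal pair:

```python
# Minimum daily Vitamin C intake recommendations cited above, in mg/day.
# Single figures are stored as (n, n); ranges as (low, high).
minimums_mg = {
    "UK Food Standards Agency": (40, 40),
    "World Health Organization": (45, 45),
    "Health Canada": (75, 90),
    "US National Academy of Sciences": (60, 95),
}

# The spread between the smallest and largest "minimum" makes the point:
# there is no agreed-upon figure, let alone a "healthiest" daily intake.
lowest = min(low for low, _ in minimums_mg.values())
highest = max(high for _, high in minimums_mg.values())
print(f"Recommended minimums span {lowest}-{highest} mg/day")  # 40-95 mg/day
```

A reader can swap in other nutrients' published figures to see the same pattern: official "minimums" disagree by more than a factor of two.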
Vitamin C is a health factor that has a very wide 'healthy balance' area. Our bodies can compensate for low or high consumption of Vitamin C quite effectively. Vitamin C is very well tolerated in excess, or as a deficiency for short periods of time. In contrast, a deficiency of oxygen can have rapid, severe consequences, while selenium and iron can have toxic effects at very low levels. In each case, health exists at the balance between deficiency and toxicity. If we try to understand this using the light vs darkness paradigm, it is as if there is only light in the centre - and darkness at both ends of the balance. A healthy body maintains thousands of health balances, as best it can. When one or more of the health components goes out of balance, your body does its best to compensate and to bring you back to balance. When you lose your ability to maintain the balance of life, you tip and die. Can you be perfectly healthy? Can a light be perfectly bright? Maybe, just before it destroys you. I love photography, but I can't take a picture unless I can see both light and darkness. Can your body be only 'healthy' with no 'unhealthy' components? No. Your body is composed of hundreds of different types of cells. Each cell type lives, divides and dies at a clearly understood rate. Some of your cells are young and vigorous. Some are dead or dying. This is a normal aspect of our healthy state. Our healthy balance does not just exist for a moment; it is a living balance, always adjusting and moving forward. Is health and illness like light and darkness? This useful metaphor is insufficient for a full understanding of health, and of health freedom. We need to move beyond it - to a new paradigm with a focus on health. A healthy balance.

Thursday, September 1, 2011

Toothpaste Rant
skip to content Primary navigation Video transcript Benefit Payment: How to Get Paid Slide 1 [Narrator speaks] In this video, I'll talk about requesting benefit payments. Slide 2 [Narrator speaks] Let's start with some basic information first. You can only receive unemployment benefits for weeks when you meet all eligibility requirements. When you request a week of benefits, you answer questions that help us determine whether or not you are eligible. Slide 3 [Narrator speaks] You request one week at a time, and you ALWAYS request benefits for a previous week; NEVER for the current week. You have only a limited amount of time to make a benefit request for any given week. You must make weekly benefit payment requests in a timely manner, even if you're waiting to find out if you are eligible. If you wait too long to make a request, benefits for that week may no longer be available. Slide 4 [Narrator speaks] To determine if you're eligible, we ask four types of questions: If you worked during the week you are requesting, If you had any other sources of income that you haven't already told us about, If you quit or were discharged from any jobs recently, if you've refused a job, or failed to apply for an available job. We'll also ask if you're available for work and actively seeking work. If you're unavailable for work due to training that's been approved by the Unemployment Insurance Program, we'll ask if you are making good progress in your training. It's important to answer these questions accurately, so let's look at each one. Slide 5 [Narrator speaks] The first question we ask is whether or not you worked or had a paid holiday during the week you are requesting. If you do not report all work, you will have to repay benefits that were overpaid. You can avoid misreporting your work and having to repay benefits by remembering these points: Answer "Yes" if you worked AT ALL during the week in any kind of paid OR unpaid work. 
For example if you worked at your old job, started a new job, temporary or part-time work, self-employment, or volunteer work. You should report the work for the week you did the work, and not when you were paid. Even if you think you already told us about it, report the work. If you're not sure whether you should report the work, call Customer Service BEFORE you answer the question. Slide 6 [Narrator speaks] If you did work during the week you requested, we'll ask you some additional questions: We'll ask for the total number of hours you worked between Sunday and Saturday of the week, Slide 7 [Narrator speaks] And for your gross wages for the week. Gross wages is the amount of your pay before subtracting deductions or withholding. [Screen text: Gross wages = hours x hourly pay rate] To calculate your gross wages, multiply the number of hours you worked by hourly pay rate. You don't need your paycheck to make this calculation. You just need to know your hourly pay rate and the hours you worked between Sunday and Saturday. If you're not sure how to calculate your hours and earnings for the week, call us and we can help. Slide 8 [Screen text: Other income?] [Narrator speaks] The next question asks about any income you haven't already reported to us. The most common types of income are severance pay, pensions, Workers' Compensation, and Social Security retirement or disability payments. If you're not sure how to answer this question, give us a call. Slide 9 [Narrator speaks] The third question asks about loss or refusal of employment. If you refused a job offer, quit, or were discharged from a job during the week for which you're requesting benefits, you'll need to tell us about it. If you're not sure how to answer this question, call Customer Service for assistance. Slide 10 [Narrator speaks] Finally, we'll ask about your search for work. To be eligible for benefits, you must be fully available for work. 
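The gross-wages arithmetic described above (Slide 7: hours worked between Sunday and Saturday, multiplied by the hourly pay rate, before any deductions or withholding) can be sketched as a small helper. The function name and the rounding to cents are illustrative assumptions, not part of the official process:

```python
def gross_wages(hours_worked: float, hourly_rate: float) -> float:
    """Gross wages for one benefit week: pay before deductions or
    withholding. hours_worked is the Sunday-through-Saturday total."""
    # Gross wages = hours x hourly pay rate, rounded to whole cents.
    return round(hours_worked * hourly_rate, 2)

# Example: 12 hours worked at $15.50 per hour.
print(gross_wages(12, 15.50))  # 186.0
```

As the transcript notes, no paycheck is needed for this calculation, only the hours for the week and the pay rate.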
This means that if you are offered a job, you could start work immediately. If you are on vacation, cannot be reached by employers, have no transportation, or have other barriers to starting work immediately, you probably are not available for work. You also have to be physically able to do your usual work. Slide 11 [Narrator speaks] Here is some information to help you meet this requirement: If you usually work full-time, you need to be available for full-time work. If your occupation commonly works different shifts, you need to be available to work those shifts. If you're in school, and NOT in a training program approved by the Unemployment Insurance Program, you cannot let your school schedule interfere with your search for work or your availability for work. Slide 12 [Narrator speaks] Each week, you also need to actively look for work. This means working on your resume, contacting employers, and networking with others who can help you connect to a job. When you are receiving unemployment benefits, looking for your next job should be your full-time job. It's a good idea to keep track of what you're doing to look for work in case we ask. Visit a WorkForce Center if you need help finding work. If you're not sure how to answer the work search questions, call us and we can help. Slide 13 [Narrator speaks] Now let's look at how to request a benefit payment. There are two ways to submit your request, online or by phone. You can log in to your online account from 6:00 A.M. to 6:00 P.M., Monday through Friday. You can request a payment by phone from 6:00 A.M. to 6:00 P.M., Tuesday through Friday. On Tuesdays and Wednesdays, you need to follow a call schedule based on the last digit of your Social Security number. Slide 14 [Narrator speaks] Now, let's go over the basic steps for requesting benefits online. First, you'll need to log in to your account. The easiest way to get to the login page is to start at, and click Applicants. 
Then, on the Applicant page, click Request Benefits. This will take you to the login page. Slide 15 [Background image: Applicant Self-Service System My Account Home page.] [Narrator speaks] Once you log in, you'll go to the Home Page of your Benefit Account. Next, click the option to Request Benefit Payment. You'll find this option listed in the left navigation, and also in the center of the page. If you do not see the Request Benefit Payment option, it means one of three things: You've already requested payment for all the weeks available to you at this time, Your account is inactive and needs to be reactivated, in which case you can click the Reactivate option, Or, you're past the end of your benefit year and you need to apply for a new account. In this case, choose the Apply for Benefits link. For this video, let's assume you see the Request Benefit Payment link. Slide 16 [Background images: Applicant Self-Service System Request Payment Home page and Address Verification page are displayed as mentioned in narration.] [Narrator speaks] Once you select Request Benefit Payment, you'll be reminded of the information you need, and asked to verify your address. Then, you can start the actual payment request process. Slide 17 [Background image: Applicant Self-Service System Initial Questions page] [Narrator speaks] When you get to the Request Benefit Payment page, make sure you pay attention to the period of time for which you are requesting benefits. Remember, it's always a one week period, Sunday through Saturday, and always in the past. Never the current week. Read each question carefully and think about your answers before checking a box. Remember, you are responsible for answering each question accurately. Depending on your answers, we may need to ask a few more questions. When you've answered all the questions, you'll confirm your answers, and then we will calculate your payment for the week. 
Slide 18 [Narrator speaks] If the week you are requesting is your first week, it's probably going to be your non-payable week. The first week in which you are eligible and have a benefit account is called your "non-payable week," and you do not receive a payment for that week. The non-payable week is required by law and everyone has to have one before they can receive benefits. Your payment amount for your non-payable week will display as zero dollars. Slide 19 [Narrator speaks] Once you complete your non-payable week, your payment amount will be between zero dollars and your Weekly Benefit Amount, depending on your answers to the eligibility questions. If you worked more than 32 hours, or had gross earnings greater than your weekly benefit amount, your payment amount will be zero dollars. The same is true if you are not eligible that week for any other reason. Slide 20 [Narrator speaks] If you have an issue on your account that needs to be resolved, your payment amount will show zero dollars until the issue is completed. At that time, we will send you any payments that you requested and for which you are eligible. Slide 21 [Narrator speaks] I hope you've found this video helpful in understanding how to request benefits. Here are some reminders: You always request benefits one week at a time, and it is always for a past week. Report ALL work. Tell us about any new sources of income. Be available for work and look for work. And request your benefit payments in a timely manner. Don't delay requesting benefits, even if you're waiting to find out if you are eligible, or you may lose them. Slide 22 [Narrator speaks] If you have questions, check our website, or call Customer Service. 
[Screen text: Website: | Call Customer Service: 651-296-3644 (Twin Cities area), 1-877-898-9090 (Greater Minnesota), 1-866-814-1252 (TTY for the hearing impaired) | Links for more information: Using Your Password, Information Handbook, How To Request Benefit Payments, Video Library]
Jane Eyre Beauty and the Representation of Authenticity: Women in Jane Eyre In the novel Jane Eyre, author Charlotte Bronte places great importance on the appearance of her characters, repeatedly evaluating their attractiveness through narrative descriptions and dialogue. Her heroine, Jane, is mentioned countless times as plain, small and unpleasant looking. Jane's rival, Blanche Ingram, is described as the opposite; she is beautiful and ornate, heavily adorned with jewels and bright colors. Rochester chooses to marry Jane over Blanche, and by doing so he emphasizes the importance of a heroine's female authenticity, or worthiness of trust, belief and reliance. In Jane Eyre, Bronte uses Blanche and Jane's differences in beauty to illustrate female authenticity, or lack thereof. Jane is unadorned by jewels and fancy colors, reflecting a more genuine, direct person. It is Blanche's construction of beauty that impairs her authenticity; her ample decorations, colors and even her way of speaking are conventionally beautiful, but are merely adornments that disguise her true self. Even in the beginning chapters when Jane is recalling her childhood, Jane's unattractiveness is clear. Jane is excluded from playing with Mrs. Reed's children unless she achieves " - a more attractive and...
There will be 200 fewer new buses, nine fewer renovations of subway stations and no more new double-decker cars on the Long Island Rail Road under a scaled-down five-year transit rebuilding plan announced yesterday by the Metropolitan Transportation Authority. The agency's chairman, Peter E. Stangl, said Monday that he had decided to cut back its proposal for renovating the New York region's subway, bus and commuter rail lines. Yesterday, Mr. Stangl detailed the specific cuts he was proposing. He told the M.T.A. board that he wanted to trim a total of $900 million, or 8 percent, from the $11.5 billion plan proposed in February. The reductions come after some state lawmakers argued that such a large expenditure would be difficult, given the state's budget problems, declining mass-transit ridership and what several legislators described as waste in current transit rebuilding efforts. Mr. Stangl said he had ordered the reductions after taking over as chairman in April, and not in response to the lawmakers' criticisms. Renovation plans were curtailed for each of the transit agencies overseen by the M.T.A. The Transit Authority, which runs the city's subways and buses, will buy 1,500 new buses during the five-year plan, not the 1,700 buses proposed in February. Rehabilitation of the station for the Grand Central shuttle has been dropped. So has a new $57 million bus depot proposed for Staten Island, which would have been the borough's third depot, transit officials said. The revised plan also eliminates a proposed $170 million reconstruction of a subway bottleneck, near the intersection of Nostrand Avenue and Eastern Parkway in Brooklyn, where the Nos. 2, 3, 4 and 5 lines intersect. Instead of 107 subway stations being renovated, as the February plan had proposed, 96 stations will get face-lifts. 
Installation of huge new fans, which are used to ventilate the subway during fires and other emergencies, will also be curtailed: 39 new fan plants, as they are called by transit officials, will be built, and not the 57 proposed earlier. New fans became a priority after criticism over the performance of existing fans during a fire in December near the Clark Street station in Brooklyn that killed two people. Some less expensive proposals, like the rehabilitation of the ceilings and walls of the Brooklyn Battery and Queens Midtown tunnels, have also been dropped. So has a proposal from the Metro-North Commuter Railroad, which serves Westchester and southern Connecticut, to reconfigure the interior of Grand Central Terminal to improve the flow of passengers. The Long Island Rail Road had to shelve plans to buy new double-decker cars, as well as new locomotives that can run on both diesel fuel and electricity. The railroad recently bought about 10 new bi-level cars and several of the advanced locomotives, transit officials said. Many of the most innovative proposals in the original rebuilding plan, such as electronic turnstiles and automatic fare cards for the subway, remain in the revised plan. The proposal, if approved by the M.T.A. board and the State Legislature, would be the agency's third five-year rebuilding effort. About $16 billion has been spent during the last decade to overhaul the subways, buses and commuter lines. The rebuilding plan, which would be paid for by the city, state and Federal governments, is to be presented to state legislative leaders early next month.
Wednesday, July 06, 2011 Inspired by Inness and Tennyson Experience is an Arch, 24x30, acrylic on linen, Jan Blencowe, copyright, 2011 view this painting in a sample frame on my website The Alban Hills, 1873, George Inness Ever since I first saw this George Inness painting with a central archway formed by trees in the near middle ground inviting the viewer to wander through, I have wanted to use this motif in a painting. I kept my eyes open for a situation like this one and was rewarded last fall. As soon as I came upon this line of trees in the marsh it instantly reminded me of Inness' painting The Alban Hills. I decided on a medium-large canvas, 24x30, and chose to continue my explorations of leaving portions of the watercolor-like underpainting showing in some areas and adding some wildlife to the piece. Can you see the Eastern Cottontail in the lower left? I wanted to say a word here about using another artwork or a photograph (that's not one you've taken yourself) as inspiration for a painting you create. This is a good example of what is actually permissible. I borrowed a concept: the concept of using trees that form an arch as a motif in my painting. If the Inness painting were a photograph I came across on the internet I could still have created the painting I did, borrowing the concept. BUT I could not have copied the photograph and simply changed a bit of the color or added buildings, etc., so that my painting was a close copy of someone's photo with just a few obvious changes. The same goes for using another artist's painting as inspiration: you can use the concept of the painting (trees that form an arch), but what you create has to be a totally new and original piece of work. Now Inness' work is in the public domain, like Van Gogh, Monet, etc., and artists can and often do make copies of masterworks for clients who want a hand-painted version rather than a print.
I wanted to give a good example of using another artist's work (painting or photograph) as inspiration in a legitimate way by further exploring a concept they use in your own work.

OK, new topic. This whole archway concept made me think of this portion of the poem Ulysses by Alfred, Lord Tennyson. I've given a nod to Tennyson's poem in the painting's title.

I am a part of all that I have met;
Yet all experience is an arch where-through
Gleams that untraveled world, whose margin fades
For ever and for ever when I move.
How dull it is to pause, to make an end,
To rust unburnished, not to shine in use!
As though to breathe were life. Life piled on life
Were all too little, and of one to me
Little remains: but every hour is saved
From that eternal silence, something more,
A bringer of new things; and vile it were
For some three suns to store and hoard myself,
And this grey spirit yearning in desire
To follow knowledge like a sinking star,
Beyond the utmost bound of human thought.
Erysipelothrix infection produces H2S gas and necrotizing infections W Lee Hand, MD Hoi Ho, MD This topic is updated as new information is published. The literature review for version 15.3 is current through September 2007; this topic was last changed on June 7, 2007. The next version of UpToDate (16.1) will be released in March 2008. INTRODUCTION — Erysipelothrix rhusiopathiae, a pleomorphic gram-positive bacillus, causes both a self-limited soft tissue illness and serious systemic infections. E. rhusiopathiae is widespread in nature and infects domestic animals, such as swine, which may be the major reservoir of the organism [1]. Erysipelothrix is also found in sheep, horses, cattle, chickens, crabs, fish, dogs, and cats. Infection in humans is usually due to occupational exposure. Thus, abattoir workers, butchers, fishermen, farmers, and veterinarians are at increased risk. PATHOGENESIS — Little is known about the pathogenesis of human E. rhusiopathiae infection. The following observations have been made in in vitro and animal studies: Virulent organisms have a capsule that is antiphagocytic and may contribute to intracellular survival (in the absence of opsonization with specific antibody) [6,7]. Intracellular survival of virulent organisms in macrophages is associated with a reduced stimulation of the oxidative respiratory burst [7]. The SpaA protein is a surface antigen of E. rhusiopathiae. The pathogenic significance of this protein was suggested in a mouse model in which antibody to SpaA was protective against a lethal challenge [8]. The enzyme neuraminidase may contribute to the pathogenicity of Erysipelothrix, as it caused inflammation and edema in a rabbit skin test model [9]. CLINICAL FEATURES — The clinical spectrum of human infection includes: Localized cutaneous infection Diffuse cutaneous disease Systemic bloodstream infection Localized cutaneous infection — The localized cutaneous form of illness, known as erysipeloid of Rosenbach, is the most common form of human infection due to E. rhusiopathiae [4].
Fingers and/or hands (the sites of exposure) are usually involved in this infection. As an example, erysipeloid has been described on the fingers or hands of fishermen or seafood packers who suffer minor trauma while handling contaminated shrimp, crab, or fish [10,11]. Erysipelothrix rhusiopathiae infection in humans: Infection with Erysipelothrix rhusiopathiae in humans is called "erysipeloid" and dates back to at least 1870. The disease that is referred to as "erysipelas" in humans is actually a form of cellulitis caused by streptococcal infection. Human infections occur primarily via direct contact with infected animals and are, thus, occupational diseases for people such as veterinarians, abattoir workers and fishermen. (In the latter case, erysipelas is called "fish handler's disease" - the organism is commonly carried subclinically in the mucoid slime covering the scales of fish.) Infection occurs via contamination of skin wounds (most commonly on the hands) and leads to a unique, raised, cellulitis lesion that is highly pruritic (intense burning sensation) and characterized by purplish-red discoloration and edema of the skin. A more diffuse cutaneous form can also occur when lesions progress from the initial site to other cutaneous sites on the body. Systemic symptoms (e.g., fever, malaise, muscle aches, headaches) accompany these cutaneous lesions more commonly than when only solitary skin lesions develop at a wound site. Occasionally the infection may also spread to deeper tissues, leading to arthritis in the joints of the fingers. There are two reports of septicemic Erysipelothrix infection in humans who ate undercooked pork. Chronic meningitis caused by Erysipelothrix rhusiopathiae A 47-year-old man presented with headache, nausea, vomiting and fever. Laboratory findings including analysis of cerebrospinal fluid suggested bacterial meningitis. Erysipelothrix rhusiopathiae was identified in cultures of cerebrospinal fluid.
The patient recovered without any neurological sequelae after antimicrobial treatment. It is interesting that intracranial infection by E. rhusiopathiae reappeared after scores of years and that it presented with absence of an underlying cause or bacteraemia. A Case of Multiple Brain Infarctions Associated With Erysipelothrix rhusiopathiae Endocarditis Sang-Bae Ko, MD; Dong-Eog Kim, MD; Hyung-min Kwon, MD; Jae-Kyu Roh, MD, PhD A 63-year-old woman was admitted to our hospital because of fever and altered mentality. Brain magnetic resonance imaging showed multiple infarctions at the basal ganglia, cerebellum, and subcortical white matter with petechial hemorrhage, which was more easily seen on gradient echo images. Erysipelothrix rhusiopathiae was cultured from her blood, and echocardiography showed septic vegetations in the mitral valve. She recovered fully after 6 weeks of appropriate antibiotic treatment. Arch Neurol. 2003;60:434-436 Necrotizing Fasciitis Caused by Erysipelothrix rhusiopathiae The paradox of this case is the recovery from the first-day culture of the distinctly uncommon E. rhusiopathiae as the dominant organism. It is a Gram-positive aerobic or facultatively anaerobic rod, isolated first by Robert Koch from mice and later by Louis Pasteur from swine.[3] Rosenbach isolated it from a patient with localized skin lesions and coined the term erysipeloid, implying a forme fruste of erysipelas. The human disease can manifest itself as a localized skin lesion (erysipeloid), a diffuse cutaneous eruption with systemic symptoms, or bacteremia sometimes associated with endocarditis.[3] The route of transmission of E. rhusiopathiae to humans is usually by direct contact between contaminated fish or fish products, animals or animal products, or soil and a break in the skin.[3] Our patient's pet goldfish is a likely source of infection through her touching the fish or the fish tank and then scratching her inner thigh. In a study in Sweden, E. 
rhusiopathiae was isolated from 60% of the cod and 30% of the herring tested,[4] but no studies were performed in exotic or pet fish.
On Mon, 2003-11-10 at 16:27, Adam Heath wrote: > On Mon, 10 Nov 2003, Joe Wreschnig wrote: > > benefit from compiler optimizations. Some CPU bound things just aren't > > going to be helped much by vectorization, instruction reordering, etc. I > > mean, integer multiply is integer multiply. > But if the target cpu supports pipelining, and has multiple multiplication > units, the compiler can unroll loops, and thereby have faster code, because of > more efficient parallelization. > (sorry, read Dr. Dobbs last week). I knew someone would chime in with this. :) AIUI this is only possible when there is no data dependency issue (i.e. multiply no. n+1 does not depend on no. n), otherwise you still have to serialize them. This is also a good example where optimizing for one chip might slow another one; say you've got 2 multiplication units on chip A, but only 1 on chip B. You unroll the loop partially when compiling. On A, this helps, because you can do both multiplies at once. On B, this may slow it down because of greater icache usage from the unrolled loop, or because B could be doing (e.g.) an add and a multiply but not two multiplies at once. This shows how complicated optimizing compilers can get, and why you can't just guess; the only way to tell is extensive, controlled benchmarking. Joe Wreschnig <[email protected]>
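The data-dependency point above lends itself to a small illustration. This C sketch is not from the original thread; the function names and the two-multiplier scenario are hypothetical, mirroring the chip A / chip B example:

```c
#include <stddef.h>

/* Independent multiplies: iteration n+1 does not depend on iteration n,
 * so a compiler targeting a chip with two multiplication units can
 * unroll this loop and issue pairs of multiplies in parallel. */
void scale(int *dst, const int *src, int k, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

/* Dependent multiplies: each product feeds the next, so the multiplies
 * must serialize no matter how many units the chip has.  Unrolling buys
 * little here and may even hurt via extra icache usage. */
int product(const int *src, size_t n)
{
    int acc = 1;
    for (size_t i = 0; i < n; i++)
        acc *= src[i];
    return acc;
}
```

Whether unrolling `scale` actually wins on a given chip is exactly the kind of question the post says can only be settled by controlled benchmarking.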
Goodnight Teddy bear As he gets ready for bed, Teddy Bear tells his Mama Bear all about the exciting experiences of his day. Teddy Bear's wild imagination and world of fantasy allows him to overcome any obstacles he has faced that day, allowing him to drift into a relaxed and peaceful sleep.
Dementia is a general term that is used to describe a group or range of symptoms that are a result of loss of brain function. Some common signs of dementia include: • Memory and thinking difficulties • Difficulty coping with daily tasks and functioning independently • Language & communication difficulties • Changes in mood, judgment or personality Dementia usually occurs among the elderly population (aged 65 and above). If an individual younger than 65 is diagnosed with dementia, it is called 'early-onset' dementia and is quite rare. Though memory loss generally occurs in dementia, memory loss alone doesn't mean you have dementia. For a diagnosis of dementia, at least one more of the following core mental functions must also be significantly impaired to be considered dementia: • Communication and language • Ability to focus and pay attention • Reasoning and judgment • Visual perception Each person is unique and experiences dementia in their own way. The way people experience dementia depends on many factors, including physical make-up, emotional resilience and the support available to them. On a general note, people with dementia may have problems with short-term memory (or remembering things that have happened in the recent past), keeping track of personal items and belongings, paying bills, preparing meals or carrying out other household activities, planning & remembering appointments, and in extreme cases, self-care and recognizing family and friends. The illness affects each person differently. It is important to remember that what is true of one Person with Dementia (PwD) may not necessarily be true of another PwD. In India it was shown that in 2010 there were 37,00,000 people with dementia and this number will grow to almost 80,00,000 in 2030. The prevalence of dementia among those aged 65 and above is approximately 5%.
One piece of copper jewelry at 114°C has exactly twice the mass of another piece, which is at 51.0°C. Both pieces are placed inside a calorimeter whose heat capacity is negligible. What is the final temperature inside the calorimeter (c of copper = 0.387 J/gK)?
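One way to work the problem: in the energy balance sum_i m_i * c * (T_f - T_i) = 0, the specific heat c (0.387 J/gK) cancels because both pieces are copper, leaving a mass-weighted average of the initial temperatures. A small C sketch (illustrative; the function name is my own, not from the problem page):

```c
/* Mass-weighted average of initial temperatures.  Valid when every
 * piece is the same material (here copper, so c = 0.387 J/gK cancels
 * out of the energy balance sum_i m_i * c * (T_f - T_i) = 0) and the
 * calorimeter itself absorbs no heat. */
double equilibrium_temp(const double *mass, const double *temp_c, int n)
{
    double sum_mt = 0.0, sum_m = 0.0;
    for (int i = 0; i < n; i++) {
        sum_mt += mass[i] * temp_c[i];
        sum_m  += mass[i];
    }
    return sum_mt / sum_m;
}

/* For the problem's numbers -- masses in the ratio 2:1 at 114 C and
 * 51.0 C -- this gives (2*114 + 1*51) / 3 = 93.0 C. */
```

Only the mass ratio matters, so the pieces can be given masses 2.0 and 1.0 in arbitrary units.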
Pursuant to the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”), covered entities (e.g. healthcare providers and health plans) must notify the Department of Health and Human Services (“HHS”) of breaches of unsecured protected health information (“PHI”).1 The information provided to HHS provides companies with a high level of insight concerning the types of breaches that occur in the healthcare industry. The data collected by HHS concerning breaches affecting 500 or more individuals in 2015 shows that unauthorized access or disclosure, such as misdirected mailings, break-ins of physical premises, and employees accessing PHI that is not necessary for their duties, is the most common form of data breach in the health sector – surpassing theft of hardware, which was the leading cause for health data breach in 2014. The unauthorized access mostly occurred on paper records. While hacking events tend to be publicized in media, it ranks only third in leading causes for health data breaches. The percentage of reported breaches caused by unauthorized access or disclosure.2 The percentage of unauthorized access or disclosure caused by paper records.3 The percentage of reported breaches caused by theft of hardware of all types.4 The percentage of reported breaches caused by hacking/IT incidents.5 Things to consider when reviewing your information security program in light of HHS data: 1. Implement different access levels for employees’ access to PHI based on their job duties; 2. Immediately stop access to PHI by terminated employees and escort them if necessary; 3. Require a two-step verification process to ensure that mail and email recipients’ information is correct before sending invoices or appointment reminders; 4. Transition from paper records to secure, encrypted computer databases; 5. 
Shred paper records when no longer needed; and 6. Prevent break-ins by implementing physical safeguards such as security alarms, security guards, and locks on windows and doors.
HeavyTopspin.com | tennisabstract.com Match Charting Project Data: Tommy Robredo This page displays the wealth of data available on Tommy Robredo as gathered by the volunteer contributors to the Match Charting Project. It comprises shot-by-shot records of 11 matches, including 7 on hard courts, 4 on clay, and 0 on grass. Click the links to see much more data on serve, return, rally length, net play, and shot tendencies. Move your cursor over any percentage to see comparisons to tour average, as well as surface-specific data. Contributors to this page: js (x5), Edged (x2), 1HandBH, DavidAurora, ChapelHeel66, Isaac. Find out how to chart matches yourself
Girls, We Can Do Better. Stop blaming men and get back to changing the world. So, here it is. Tuesday, March 8. International Women's Day. Project meeting at 4:30 PM, and until then I've nothing to do on my task list. You would think it would be a perfect time to write a nice little inspirational post about combating sexism and how far we've come. You would be wrong. I didn't want to write this essay. I really, really didn't. I was so desperate not to write this essay I got out the paint and brushes from the basement and covered the water stains on the stairway ceiling, from back before I got my roof replaced last fall. Ok, here goes. Time to stop stalling. A lot of online feminist discourse seems to consist of two things: 1.) Calling out men for sexism. 2.) Idealizing exceptional women who have "broken out" in one way or another. The first two headlines under International Women's Day on Medium are perfect examples: "She's 22, from rural Zimbabwe, and a mogul in the making "Meet The Youngest Female Billionaire The problem is, men aren't likely to change just because you tell them to. And ask either Lindewe or Elizabeth Holmes, and they will tell you that nobody achieves their goals without a lot of help and support along the way. Which puts women at a significant disadvantage. The myth is the "Mighty Girl," or the "Disney Princess." Beautiful, gifted, alone as she triumphs against the odds. The reality is that unless you want to be a ballerina or a diva, teams are what get things done. I would argue that this myth is actually harmful, rather than inspirational. Women are not taught to value or invest in relationships with other women. This makes it much more difficult to find partners and collaborators. Where would Jobs have been without Wozniak? Lennon without McCartney? We are taught to put children first, spouses second, self last. Are girlfriends left to do anything other than look good in the background of a selfie?
Feminists are not responsible for this attitude, but we of all people should be doing what we can to counter it. Because there is a very real possibility that we may be moving backwards, or at best standing still. A 2015 Harvard University study found a majority of teen girls still preferred males in positions of leadership to females. Said one respondent, "Girls wouldn't vote for themselves, so why would they vote for another girl?" I have been blessed with many remarkable women in my life: friends, relatives, and coworkers. I also lost my husband to a married female friend — someone I had vacationed with and invited to my home. I chose and choose not to blame or distrust all women based on the harm done by one. Over the past ten years I've been part of a variety of startup projects. Contrary to popular belief, women do exist in tech. Here are a few things I have learned over the years: Women often hold each other to impossible standards. There is behavior that we would let slide with a male coworker or boss, that we will not tolerate in another woman. And women have long memories. We need female co-founders. Why not build our own companies and institutions, and not just climb the ranks in male-dominated ones? Mean girls are not feminists. If you gossip about other women behind their back, you are perpetrating the patriarchy. Repeat. Repeat. You are part of the problem. Ask somebody what's going on. Wait until you know all the facts. Also, some stuff that's universal: Don't take on anyone on your founding team who isn't interested in equity, but simply wants a paycheck. Extra due diligence when hiring friends. If you don't feel like founding a company: All of the above are equally applicable to an artistic collaboration, a nonprofit, an open source project, or grassroots activism. The scale doesn't have to be huge. If childcare is an issue, ask your partner — or organize childcare within the group.
Not only will you be taking your own souls and lives and bodies and aspirations as women seriously, you will be giving other women permission to do the same. The important thing is learning to interface with other women and view them as equals and people. Whatever happened to consciousness raising? One hundred thousand women belonged to groups in 1973. The Second Wave had it going on. So, there you have it. I said it. Happy International Women’s Day! Men can’t save us. We’ve got to do it for ourselves. Feminism is an incredibly important concept, but we have to shift our attention toward each other. If we wait around for sexist and oppressive behavior to stop, we might just witness the extinction of our species and many others first. And there is so much to be angry and outraged about online, that it just sort of saps the life out of us. My advice is to take that energy and direct it toward connecting with the women you already know, who might actually need it.
Nausea and Travel Sickness

Motion sickness, or travel sickness as it is better known, occurs when the brain is confused about the motion that the body is experiencing. The eyes see stillness in the environment, but an organ in the inner ear, the labyrinth, which is an important part of our vestibular (balance) system, is experiencing motion. This mismatch… READ MORE

Headaches and Migraines

I started having migraines when I was about 7 years old and found them a very scary experience. There is something quite frightening about experiencing that degree of pain in the part of the body where the brain is stored! Luckily, my migraines would disappear after I had vomited and gone to sleep; painkillers wouldn’t… READ MORE

Dental Phobias

Are you someone who is so scared of going to the dentist that you will absolutely avoid going, even to the detriment of your own dental hygiene? If the answer is yes, the chances are you have a dental phobia. A phobia is more severe and intense than a fear, which happens when the body… READ MORE

Break Your Bad Habits

Having just looked at a list of bad habits posted on the internet, I can honestly say that whilst they are habitual repetitive behaviours, many of them are also disgusting! One site listed nail biting, throat clearing, lying, interrupting, chewing the end of a pen, smoking and swearing in its top 20 list of bad… READ MORE

I once heard that people are more motivated by pain than pleasure, and in many instances I believe this to be true. Pain provides us with the natural survival instinct to escape and can often get us moving away from a troublesome situation — whether we are in physical danger or suffering with emotional pain. Pleasure… READ MORE

Why Choose NLP?

NLP was created by observing and modeling people who were exceptionally talented within the field of therapy. NLP stands for Neuro-linguistic programming. “Neuro” relates to the brain, and “linguistic” refers to the language used, and how it is used.
“Programming” describes the patterns and habits you create, learn and persistently follow. As we experience the… READ MORE

NLP Training

NLP and Hypnotherapy training in Hertfordshire, Bedfordshire, Buckinghamshire and London is here to proudly present to you certified and approved NLP and Hypnotherapy training courses. As you are certainly aware, computers come with huge manuals that most of us have never read! We are sure you feel, as we do, that in the 21st century, training and… READ MORE

What to look for in an NLP and Hypnotherapy Trainer

NLP and Hypnotherapy are fast growing must-have skills these days. However, in a market where practically anyone can call themselves a practitioner, how do you know that the company or individual training you is professionally qualified and skilled in the area of expertise? Do you know how to tell the difference between Joe Bloggs off… READ MORE

The Challenges When Losing Weight

Losing weight can be a real battle, and until I became a hypnotherapist, I didn’t realise there were so many different reasons why people have challenges losing weight. Some of the reasons I have discovered are far from the obvious: eating too much, eating the wrong foods, eating the wrong quantities and lack of exercise.… READ MORE

Public Speaking

If you’ve ever had the experience of literally “feeling” people’s eyes on you, you can understand the power that a public speaker can have. A master of public speaking will appear calm, confident, charismatic and completely at ease in speaking with his audience. His voice is as level as if he were speaking to his… READ MORE

Improving Interpersonal Skills

If you ever have to communicate with another person, then you are using interpersonal skills. You have been using these skills your whole life already, and with varying degrees of success. Sometimes it is as easy as breathing to gain rapport and get your point across to others.
At other times you might be feeling… READ MORE

Boost Your Creativity

The Encarta dictionary defines creativity as “the ability to use the imagination to develop new and original ideas or things, especially in an artistic context.” Creativity and imagination are largely linked with right-brain functions, whereas analytic and rational thinking are more associated with the left brain. As individuals, we may tend to be more… READ MORE

Life Coaching

Life Coaching is an effective means of establishing where you are currently in your life, where you are going, and how you will get there, by focusing on specific areas: from self-confidence, to enjoying your work, creating time for yourself or living your dreams. Some people find it difficult to comprehend what life… READ MORE
Why Physics & Astronomy?

Our programs will challenge you to probe and solve the most thought-provoking and fundamental scientific concepts. With a strong emphasis on problem solving, computer literacy, and research skills valuable for employment, our goal is to provide a solid stepping stone to graduate school or a career in research. Choosing Waterloo will give you the opportunity to interact with outstanding teachers and researchers, including those at the Perimeter Institute for Theoretical Physics and at the Institute for Quantum Computing. Access to a strong cooperative education program will provide exposure to practical applications of your knowledge. Come; explore; find where you fit in!

Important information

How to apply to your program
Tuition and estimated fees
Financial aid, scholarships & bursaries
Housing and residences
What is co-op?
After graduation
Category Archives: writing

Side by Side

Tayla and Kira are sisters in crime,
They are both calico in design.
Books are knocked off coffee tables,
Plants are sent into upheaval.
The two cats have different personalities.
One is outgoing and wishes to be petted,
While the other is often wanting to be fed.
In other words, the second cat will imply by her
“Since I wish to have dinner,
I will allow myself to be petted . . .
If you must.”
There are times their owners use water pistols,
Instead of saying repeatedly the word, “NO!”
They hope to dispel ‘bad’ behaviors,
But often they are ignored.
The funniest part of this story to me is . . .
When my close friend, the kitties’ Mommy, is being ‘bossy’ to her mate,
Her dear husband, the kitties’ Daddy, squirts her while saying,
“Bad Kitty!”

Written by Robin O. Cochran

Tayla is a mainly brown, gray and white calico cat. Kira is mostly white with brown, tan and gray patterns on her. This includes one that looks like a butterfly tattoo. My friends, Jenny and Dave, were the subject of a love story post, “Love Found in a Video Store.” I’m the one who discovered him and ‘match made’ the two, back in 1993. It is 22 years since they met; 21 years since they married.

**Inspired by my friend Luanne Castle’s post written about visiting an animal shelter with her husband. While there, they played with the kitties and walked dogs, too. I admired how she gave us a serious reminder of one of the other activities that goes on there. She mentioned pit bulls and other breeds, including chihuahuas, are often put down first. This was to remind us of what happens when they are not adopted and which breeds are chosen first. Luanne has been having a hard time lately due to recent serious losses in her life. Maybe we can go visit her and shower her with good wishes and hopes for her cat (Pear Blossom) and daughter’s cat (Isabella Rose) to be better.
You may wish to order Luanne Castle’s fine collection of poetry, “Doll God.” You may be interested in reading her other creative stories, poetry with meaningful, intriguing subjects:

Lighthouses and Sailing Away: July, 2015

grocery shopping shortly upon my arrival. tent on the side yard and all the family present. on vacation.. After carefully looking over the bakery, rows of frozen desserts while edge of Ohio.

1. Vermilion Lighthouse.
2. Fairport Harbor West Lighthouse.
3. Port Clinton Lighthouse.
4. Huron Harbor Lighthouse.
5. Toledo Harbor Lighthouse. will be moved. evening on an island.
6. Ashtabula Lighthouse.
7. Marblehead Lighthouse.
8. Old Fairport Harbor Lighthouse.
9. Cleveland Harbor Lighthouse. along the “Flats” than on Cleveland’s downtown lake’s edge.
10. Conneaut Lighthouse.
11. South Bass Island Lighthouse. reasonably priced.
12. Lorain Harbor Lighthouse.

music, while she tried to sing the lyrics. dreaming by the Fire. One of Longfellow’s famous and beloved poems, with just three passages shared in this post, the opening, middle and closing one, below:

“The Lighthouse” by Henry Wadsworth Longfellow

“The rocky ledge runs far into the sea,
And on its outer point, some miles away
The Lighthouse lifts its massive masonry,” . . .

“And as the evening darkens, lo! how bright,
Through the deep purple of the twilight air,
Beams forth the sudden radiance of its light,
With strange, unearthly splendor in the glare!” . . .

the glare of the lighthouse, dying and the dramatic . . .

“‘Sail on!’ it says, Sail on, ye stately ships!
And with your floating bridge the ocean span.
Be mine to guard this light from all eclipse,
Be yours to bring man nearer unto man!”

The End.

beautiful passages, in my mind’s eye.) since 1979. radio, daily and on longer trips to Mom’s. along with innocence, with canvas dreams. Another passage near the end… “30 Rock.” Christopher Cross. tube, down a cool and easy river.
Christopher Cross singing his upbeat songs, using his fantastic, smooth voice… hear the newer songs. If only in my dreams… while you were by some form of water? back home. . . If not, hope you are having a wonderful weekend!

Levity in Brevity

Just sending some smiles and funny little jokes collected by my Mom over the past few weeks from her friend, “Pooky,” otherwise known as Joyce. Joyce is older than Mom, knows how to get on the computer and prints out all kinds of colorful emailed jokes, some illustrated by John Wagner, with “Maxine” comic strip pictures attached. She has tried to get my email address from my Mom, and I am eternally grateful to Mom that she hasn’t given it out. She has not asked me directly for my email address. Mom handwrites notes to Joyce, but sometimes they are very short. She repeats herself, and they may just talk about the weather and her dog, Nicki. I have read them and helped her out, adding a few details and saying, “Just an extra note from Robin.” This is a little silly, but remember my Mom is 86 and it didn’t embarrass her… “An elderly man goes into confession and says to the Priest, ‘Father, I’m 80 years old, married and have 4 kids. I have 11 healthy grandchildren. Last night, I had an affair or fling with two young women. I was able to perform with both of them. . .’ The Priest answered, ‘Well, my son, when was the last time you were in confession?’ His reply was, ‘Never, Father. . . I’m Jewish.’ The Priest asked, ‘So then, why are you telling me?’ The excited elderly man exclaimed, ‘I’m telling EVERYBODY!’” My Mom wrote at the bottom of this, just in case I didn’t get the joke: “He is so proud of himself!” My Mom put three ***’s by this one: **“I’m thinking of leaving my body to Science. Even scientists can use a good laugh now and then.” (This had the famous Maxine and her dog with his eyes crossed.) Another Maxine, to which my Mom gave two **’s: **“The older I get, the harder it is to find Mr. Right. Darn cataracts!” This one my Mom emphatically agrees with (usually!)
She gave this one 4 ****’s: ****“Sometimes I like to turn the TV off and just sit quietly, with my thoughts. Then, when I am sure the commercials are over, I turn it back on.” This picture has Maxine with a big bowl of popcorn and her television remote control in her hand. The dog has a bowl of some kind of food on his lap. It is cute. My Mom also enclosed a note which was full of x’s and o’s, as well as quick ‘sound bites,’ like: “Stay Warm!” “Take Your Vitamins!” “Wear gloves and warm socks!” “Tell everyone Great Grammie O. Loves them!” and last, but not least. . . “Please don’t send the jokes back!”

Oh, What a Night!

I believe we all have wonderful memories of particular musical moments, and some bring sorrow. Then there are the songs which transport us out of ourselves. I felt transfixed during each of the musical numbers in “Jersey Boys.” One of the members lost his daughter. I felt excited with each unique beat and message. Some of Frankie Valli and the Four Seasons. I had heard the songs but didn’t really know the way the group’s story began, nor what happened to the members. Only to meet again at the Rock and Roll Hall of Fame Induction performance. The director, Clint Eastwood, is known for wishing details to fit the situations and fulfilling the character of the times. One of the times he did NOT follow the book was when he filmed the movie “The Bridges of Madison County.” It is about an Italian homemaker and her adventures over one weekend, while her children and husband are at the state fair. Robert James Waller has the homemaker wearing jeans (possibly to emphasize her figure), while Clint explained in an interview that he felt this woman could have been his own mother, so she would wear a common house dress. There are more examples in many of his movies; in some, it is just the background sound behind the story. I liked finding out during the credits that his son, Kyle Eastwood, was a musical assistant and helped with the soundtrack. Also, Clint’s daughter, Francesca Eastwood, plays one of the wives in the film.
Frankie Valli’s character was played by John Lloyd Young; up close and personal in the movie, he captures your attention, and his voice is very similar to Frankie Valli’s. If you saw the musical play, you may know the characters each take turns talking directly to the audience. It is a very interesting technique for telling their individual stories. I felt sympathy for the way the real man became part of the underbelly of his neighborhood, by being pulled into the mob and illegal dealings by his friend and eventual member of the band, Tommy. You realize his gambling, drinking and other vices, such as trying to trade with stolen goods, would eventually ‘catch up’ with Tommy. As a viewer, you may possibly worry about his pulling his good friend, Frankie, down. Their musical career eventually helps them to get out of their neighborhood, but they could barely escape the ties. The raw emotions of a death and funeral of one of the members’ children, still just a teenager, rock their group to the very core. Christopher Walken’s scenes as the ‘benefactor’ and supposed friend among the mob members ‘steal the show.’ The executive producers are Frankie Valli and Bob Gaudio. The slow building of the band, its members and their story unfolds and is beautifully portrayed. In a semblance of order, the sequence and growth of the band’s body of work is shown in this list of songs: “Who Loves You, Pretty Baby?” “Big Girls, Don’t Cry” “Walk Like a Man” “Rag Doll” “Bye Bye Baby” “You’re Just Too Good to Be True” “My Eyes Adored You” “Can’t Take My Eyes Off of You” and repeating the title song, “Oh, What a Night.” The members of the band — Nick Massi, Tommy De Vito, Bob Gaudio (writer/lyricist) and Frankie — performed at the R and R Hall of Fame, after 24 years apart. * They were inducted in 1990 into the Rock and Roll Hall of Fame. They joked, saying singing together came naturally, even after all the years. They only had to lower the octave and sing in a lower key.
*They were inducted into the Vocal Group Hall of Fame in 1999.
*In 2012, they performed together in England at the Royal Albert Hall, honored for their body of music, which included 29 Top Ten hits (on American music charts).

My purpose in writing about this movie, which came out in 2014, is to persuade you to celebrate someday soon by listening to one or more of Frankie Valli and the Four Seasons’ lifetime of songs. They grew up together on the streets of New Jersey, sang and lived quite fantastic lives. The movie captured it nicely. Too bad it didn’t win any awards in competition. What is your favorite song from this group? Which is the one you played the most? If you never really liked their music, did one of your family members enjoy them?

Mark My Words

your creating a painting, taking a photograph, preparing a special what meanings it has, along with a few expressions that include various forms of the word, “mark” in them. The definition for ‘mark’:
2. A symbol, name or other identifier.
3. A name, logo or other indicator.
our legacy and how we helped make an impression upon another’s psychology or philosophy. They contain the current meaning and suggestions for leading a ‘purposeful life.’ daughter’s races. The excitement and anticipation of the races, into a wooden block to ‘mark’ their place. Then, an announcer says these dramatic words: “On your mark. . . Get ready, beginning of the race. their own parts of the woods. In concert and symphonic band, our musical teacher and director in the woodwinds area, with the clarinet section. appreciate my Grandpa Mattson who would call my clarinet, a songs and scales. or listening to the metronome during piano lessons, please share. The younger Mark Ruffalo, with Jennifer Garner, was one of my Boys’ song, “Good Vibrations.” There are countless other “Marks,” such as Mark Harmon, who well as his being a part of history. country’s literature. He shared remarkable stories of life upon the Mississippi and going out West.
His wry perspectives of the times upon my thoughts and writing, too. There are many who enjoy the dramatic colors and designs of a young teenager’s graffiti. They leave their own distinctive ‘mark’ a cemetery where respect should be displayed or designations of being a member of a ‘Gang.’ I enjoy when my grandchildren take colored chalk and leave their down on a square. make me weep. Are you guilty of this ‘bad habit?’ find a piece of paper or a classy bookmark. clip with a butterfly on the tip. I have marked many passages in my Bible, since I received underlined places. Tucked into the pages, there are several pieces of paper with scribbles made by my children during carefully by my youngest daughter at around 8 years old of Jesus on the Cross. this: “Matthew, Mark, Luke and John.” with something I really enjoy. after the ending of each season. These all say, “Mark Down Prices.” found in different departments. Now, even better than the ‘Markdowns’. . . are the ‘Slashing Prices!’

Join Me for a ‘Spell’

Sometimes, it would take a while but there would be words shared a few tendencies to use Kentucky or Tennessee expressions. Did word analysis. I hope you will add your favorite interpretation of “spell” in the comments’ section. The vastness and variation of the definitions for “spell” are amazing. I started this with only three actual uses and found out there is so much more dimension to the word.
2. a state of enchantment.
3. strong, compelling influences or attraction.
4. a short indefinite period of time.
5. a period of weather of a particular kind.
6. one’s turn at a task or work.
3. to allow someone to rest awhile.
used, ‘watching the grass grow.’ I enjoy movies where you can see, through the director’s guidance calling their first meeting, ‘magical.’ There have been many love songs, where there are descriptions finding this memorable.
Details during the meeting come back and “Strange Magic” was a fantastic and beautiful animated children’s film. It contains purple blossoms from the flowers on the edge of the Carrie and I saw this on Sunday evening. this film. There were several adult couples holding hands and giggling looking king of the “dark forest.” who performs it. I was a guest and enjoyed watching the ladies working together. and watched me, giving me suggestions and compliments on my This math knowledge skill my oldest daughter had inherited from her father, I tried to promote and encourage. The balancing out the ones are not successful in either case. children and my ‘clients’ caught up during the summertime really always rewarded everyone or would just encourage clapping for ones who were successful. promoting self-worth, too. writing poetry to enchant our fellow bloggers. in my life. but create important references for the weather man or woman. using it with snow. Here is a serious “pause” in my blog: concerned for this huge snowstorm you are weathering.” because their parents are Buckeye fans. Let me hear the first two letters: “O – H!” always answered with, “I – O!” the letters for “Daddy.” this shouting or chanting game, you finish with the letters and “What does that spell?” Their reply is yelled, “______!” short drive to gymnastics, the park or pool. the word, ‘spell’ included. it describes what one is.) Como singing this would make you smile and reflect on love. her again and again.” You may hear someone’s laugh and “the sound of her laughter will sing in your dreams.” captured your interest or meant something to you.

Sharing a Mystery about a Sister

Be prepared to read about a woman’s story, one which may or may not have been relevant and meaningful to the musical world. I feel there is a true basis and possibility that she made a big difference in how her famous brother became who he was. I have to admit, I was on my own personal “movie fest” over the weekend.
Originally I was thinking I would just post some of my favorites and give short film reviews. Somehow, this evolved into something ‘bigger’ than I expected. It was time-consuming, and yet I felt like a private investigator with her mind open and ready for understanding and analyzing the facts. I looked things up, using different sources, to find out more about this fascinating woman. Now that I may, or may not, have your attention, I will tell you about the riveting movie that led to my research: “Mozart’s Sister,” a French film which requires you to read the subtitles. In the movie, which came out in 2011, Rene Feret is the director, and a young actress who is his daughter, Marie Feret, plays the sister to her character’s famous younger brother. Historical details that were discerned through research shall follow this summary of this fine movie. First, here are three splendid comments from famous reviewers, starting with one who’s deceased. Roger Ebert, of the “Chicago Sun-Times,” was always one of my favorite reviewers. He was such a trustworthy man to recommend movies. (Of course, many of you will recognize his name and the television show which I used to enjoy, “Siskel and Ebert at the Movies.”) Here is what Roger Ebert said of “Mozart’s Sister”: “Marie Feret is luminous” (in this role). David Noh, “Film Journal,” says: “A triumph!” Ronnie Scheib, “Variety” magazine: “A treat for classical music lovers and cinephiles alike.” What was a turning point in this movie which motivated me to investigate and research? What happened to make me seek the truth?
When Leopold Mozart, father of Maria Anna (also referred to as Marianne and affectionately known as “Nannerl”), tells his only daughter, when she is interested in writing musical compositions, “Harmony and counterpoint are not understood by women.” Of course, this caused me to say indignantly to my television screen, which was innocently displaying the film, “That’s outrageous!” Big sister “Nannerl” is helpful to toddler brother “Wolfie” and helps him practice his keyboard lessons on a harpsichord. This baroque instrument is lovely sounding. The scales and other early beginning lessons are closely supervised by their father. At age 5 or 6, “Wolfie” is paraded in front of wealthy families and is also given an audience with royalty. He is a cute boy and shows great potential and musical aptitude. The film shows Wolfgang using creative interpretation of the music and dramatic arm flourishes. He was supposedly beginning to write his own musical compositions at age 4 or 5. In the beginning of the movie, their coach’s wheel breaks after going over a rut in the country road. It is late, and the Mozart family stays in a nearby nunnery. It is interesting to note that there are two sisters living there. Their story emphasizes the difference in the way male and female genders were treated in this period of time. The two girls have been shuffled and taken away from the palace, being raised by nuns. At one point, the name of the two girls’ brother is mentioned; he is being raised to be a ‘Royal.’ The charade Maria Anna is asked to carry out is to transport a letter to their brother, if the Mozart family should ever happen to appear at Court. Maria Anna treasures this new friendship and promises to keep the letter safe and take it to their estranged brother.
This movie would engage someone who has been enjoying the inner workings of the staff and upper class levels or tiers of British society on the PBS show “Downton Abbey.” Although this is a whole other period of time, there are still the ideas of class structure and family expectations being expressed. Definitely, it is an eye-opener in both the film about the late 1700’s and the television series of the 1900’s. Traditions and historical details about clothing, customs and the roles women and men played also are featured in both of these storylines. At the end of the film, there is not much said about Nannerl’s being anything but helpful to her brother. There are no illusions that she may have helped Wolfgang Amadeus Mozart compose his greatest works. In the movie’s middle, there is a nice romantic interlude, where Maria Anna disguises herself as a boy, in a white-haired wig, to give the handwritten letter to the young Monarch from his sister. They use the young man’s title in the film as ‘Louis XV.’ This story becomes a very sweet part of the movie. I will not tell you about how it unfolds, hoping you will someday pursue viewing this one. I will say it depicts Nannerl’s character as having spunk, showing independence and also her romantic side. Before the credits roll, there are a few sparse details given. The written lettering after the movie ends mentions Maria Anna helped to write some of her own sonatas as a young woman. It mentions she helped Wolfgang transcribe his first writings, since he scribbled them. There is a subtle undertone of the possibility that she was his ‘muse.’ As his sister, she may have written (created) some of his early works. The movie has places that explain the traditional upbringing of “fine young ladies.” The women are encouraged to wait on men, not to further their education. Maria Anna tries to ‘rock the establishment.’ Her mother shows disappointment and her father anger at her independent streak.
She doesn’t wish to follow the social order of the period. I was rooting for her, all the way! If you enjoy history and reading about a famous person’s family, you may enjoy this part of the post. . . Wolfgang Amadeus Mozart lived from January 1756 until December 1791. There is confusion about why he died at the early age of 35. He was the son of a musician and teacher of music, Leopold Mozart. His mother was named Anna. He was born in Salzburg, which later became part of, or known as, Austria. Wolfgang’s father and mother had seven children, only two of whom lived beyond infancy. The oldest living child was a daughter named Maria Anna, nicknamed “Nannerl.” There were four years between the two children, sister and brother. When Wolfgang was 3 years old, his sister was learning her lessons, which included language, music and reading. She was practicing with her brother close by her side. Later, she would be by his side, while he was the one leading the lessons. This relationship lasted probably all of their childhood. “Wolfie” was her little shadow, trying to do everything she did. There is a notebook that Leopold made for Maria Anna, which is known as “Nannerl’s Notenbuch,” also written as “Notenbuch für Nannerl.” In English, this was “Nannerl’s Music Book.” This amazing composition book demonstrated the first lessons that Leopold gave to her, along with her brother. It originally consisted of 48 pages; now only 36 pages remain. This book has her father’s exercises for her practicing beginner harpsichord pieces. This also included anonymous minuets and some of her father’s original works. Two composers, Carl P. E. Bach and Georg C. Wagenseil, had their pieces transcribed as passages in this musical exercise book. In 1982, a man (just a coincidence) named Wolfgang Plath studied the handwriting within the Notebook and attributed the variety to five different handwriting samples or sources.
There is evidence of the collaboration between Leopold, the father, and his son, “Wolfie.” Leopold took his family touring around countries and the cities of Vienna, Austria and Paris, France. Maria Anna Mozart was born in 1751 and lived 78 years, until 1829. When she became a young lady, it was considered inappropriate for her to continue to publicly play the harpsichord or piano, or sing. Up until she was 18, Maria Anna was part of her musical touring family. A biographer considered her to be a great singer and an “excellent harpsichord player and fortepiano player.” Sadly, there is no mention of Nannerl being a conduit, or letter transporter, between the sisters raised in a nunnery and a member of Louis XV’s “Court” or “Royalty.” This was the main part of the plot I enjoyed in the movie I reviewed earlier. At age 18, Maria Anna went home to Salzburg with her mother, to teach musical lessons and stay at home. The following reason was mentioned in one source: “This was due to her being of marriageable age.” Wolfgang and his father both wrote letters to Maria Anna, some of which have been saved. During the 1770’s, Wolfgang was touring in Italy; he mentioned Nannerl’s writing musical compositions and went so far as to praise her musical works. There are no references to any of her own musical compositions in the multiple letters from her father. An interesting note (and slightly salacious fact) is mentioned in some of the biographers’ notes about Maria Anna’s and Wolfgang’s close, intimate relationship. When they were young, they developed a “secret language” and they had an “imaginary kingdom.” They pretended they were married and carried out their positions while playing together, as “Queen” and “King.” There are a few indications, and there is evidence, of Wolfgang using the same sexual wordplay he used in letters to his lovers or girlfriends.
This can also be found in the words he chose when writing to his sister. One historian considers this to be a ‘strange relationship’ for a sister and a brother. As an aside, my two brothers and I would play ‘house,’ but we would not have myself be the “mother” and one of my brothers be the “father.” We would instead play that one of the brothers was the “father” and the other brother and I were his “children.” Like the old television show “Family Affair,” where the uncle has teenage “Cissy” and twins “Buffy” and “Jody.” (I used to love this show, with Sebastian Cabot playing the butler/nanny and Brian Keith playing the bachelor uncle. Did you know it ran from 1966 until 1971?) Or I would play the ‘mother’ role and the brothers were my ‘kids.’ We usually had company or neighbors over. Once in a while, they would ‘marry’ one of my girlfriends, or once in a while, I would ‘marry’ one of their guy friends. I mention this to confirm that I would also think it strange that the siblings played ‘Queen and King’ together over a Kingdom. A sad note about Maria Anna’s independence is shown in the movie “Mozart’s Sister.” This is not to be found anywhere in any biographies or any letters. She is shown to be subservient to her father, allowing him to forbid her to marry a man named “Franz d’Ippold.” They were both young; he was a Captain and a private tutor. When he proposed, there is an implication she would have liked to say, “Yes.” There is a letter in the family’s collection where her brother, Wolfgang, tried to persuade her to stand up to her father. Ultimately, Maria Anna was ‘forced’ to turn down Captain Franz d’Ippold’s proposal. Years went by; Maria Anna was allowed to marry at age 32, when asked by a man named Johann Baptist Franz von Berchtold zu Sonnenburg. They were married in 1783. Listen to the “fun” life Maria Anna participated in: She became the wife of a widower with five children she helped to raise.
She had three more of her own children with Johann. When she had her first-born son, she named him Leopold. Her father insisted on taking her only son to raise him in Salzburg at his home. The biography doesn’t mention her mother’s role in this drama. From 1785 until he died in 1787, Leopold Sr. wrote letters and kept a journal telling about toilet training the boy and teaching him how to talk. There was no mention of the boy’s illness, nor any reason why he should not have been raised by his own mother until age 2. There is some speculation that her father thought he would raise another musical prodigy, since he felt he was the reason Wolfgang A. Mozart turned out the way he did. After all, Leopold Mozart, Sr. did write and publish a violin music textbook. Wolfgang Amadeus Mozart was known for his classical musical compositions, which included over 600 works. They include symphonies, concertos, operas and choral works. Beethoven, while young, lived in the shadow of Mozart. During his early years composing his own original music, he was constantly compared to Mozart’s body of work. Composer Joseph Haydn said of Mozart’s legacy: “Posterity will not see such a talent in another 100 years.” Wolfgang A. Mozart married Constanze and had two sons. He died at the early age of 35 years old. His magnificent “Requiem” was never completed. His music is still revered and considered the best in classical music. Maria Anna was never given any credit (that I could find out about) for her influence on her brother’s music, nor were any of her musical compositions published. The book, “Nannerl’s Notenbuch,” is not considered to be anything but her lesson book to practice and play music using the handwritten notation. I need to see the movie, “Amadeus,” (again) to see if there are any musical or notable references to his sister. If you have a good memory or have recently seen this, let me know in the comments whether there is mention of Maria Anna Mozart please. 
I strongly recommend “Mozart’s Sister” as a film to savor and enjoy, while wishing the story line really happened. Truthfully, being an older sister myself, how could “Nannerl” NOT have had an influence upon her little brother, “Wolfie?” Either way you look at this famous musician’s life, Wolfgang Amadeus Mozart made a huge impact on the musical world.
"Understanding how germs spread" About: Rotherham Hospital / General surgery (as a relative). The ward my wife is in had a bug detected, so the staff and visitors were told to put on aprons and gloves, which they complied with. The doctors came in on their ward rounds and did not bother to put the gloves and aprons on. As my wife has just had a big operation, I am concerned. Is it that they are excluded from the precautions? They will probably go on to other wards. I can understand now why some bugs spread over the hospital. Dear brickman, thank you for your posting on the Patient Opinion web site. We were most concerned to hear of your observation that any of our staff are failing to comply with prevention of infection precautions, and would be grateful if you could contact us directly on 01709 424461 to identify the relevant area so we can take appropriate action. Our Infection Control team have advised that when a patient is found to have an infected wound, the bay in which they are nursed is treated as ‘in isolation’ until the other patients in the bay can have their status clarified (through swabs). The precautions we require our staff to take are that they must wear aprons and gloves for any contact with the patients in that bay (or the environment, such as bed making etc.) and that these gloves and aprons must be changed between each patient. Those not providing direct care, e.g. passing a cup of tea or asking a question, do not need to wear gloves or an apron, nor is it necessary for visitors to use gloves and aprons unless they are visiting more than one person, or if they are providing personal care. We do, of course, request that upon arrival and departure all visitors clean their hands, either by washing them or by using the alcohol gel provided. 
Providing the doctors were not making physical contact with any of the patients in the relevant bay, they would not have needed to put gloves and aprons on when coming into that bay, but we appreciate how worrying it is when you have been given to understand differently. To that end, we have asked our Infection Control team to ensure that staff in any areas that currently have precautionary isolation bays understand their responsibility to consistently communicate the required precautions to all staff, relevant patients and their visitors. As a hospital with a strong track record in minimising infection transmission, we are never complacent and continue to audit all staff compliance with hand washing on a monthly basis. We would like to thank you for alerting us to your concern. With best wishes,
This study explored metabolic mechanisms of future (delay) discounting, a choice phenomenon where people value present goods over future goods. Using fluctuating blood glucose as an index of body-energy budget, optimal discounting should regulate choice among rewards as a function of temporal caloric requirement. We identified this novel link between blood (…)

This study presents a domain-specific view of human decision rationality. It explores social and ecological domain-specific psychological mechanisms underlying choice biases and violations of utility axioms. Results from both the USA and China revealed a social group domain-specific choice pattern. The irrational preference reversal in a hypothetical (…)

Central to research on human reasoning and decision making over the last few decades has been the idea that human choices and decisions are governed by a few rational principles or heuristics. However, various empirical findings have shown that human reasoning and decision making behaviors often violate a small set of rational principles or utility axioms (…)

Event-related potentials (ERPs) and behavioral ratings were collected from 30 female subjects who were exposed to picture slides. The slides belonged to five affective categories whose content was babies, dermatological cases, ordinary people, male models, and female models. Based on the day of testing relative to their menstrual cycle, the subjects were (…)

The Monty Hall problem (or three-door problem) is a famous example of a "cognitive illusion," often used to demonstrate people's resistance and deficiency in dealing with uncertainty. 
The authors formulated the problem using manipulations in 4 cognitive aspects, namely, natural frequencies, mental models, perspective change, and the less-is-more effect. (…)

This study examined the neural basis of framing effects using life-death decision problems framed either positively in terms of lives saved or negatively in terms of lives lost, in large group and small group contexts. Using functional MRI, we found differential brain activations to the verbal and social cues embedded in the choice problems. In large group (…)

The tri-reference point (TRP) theory takes into account minimum requirements (MR), the status quo (SQ), and goals (G) in decision making under risk. The 3 reference points demarcate risky outcomes and risk perception into 4 functional regions: success (expected value of x ≥ G), gain (SQ < x < G), loss (MR ≤ x < SQ), and failure (x < MR). The psychological (…)

Behavioral ratings on several affective scales (non-erotic/erotic, unpleasant/pleasant, simple/complex and low arousal/high arousal), and electrophysiological responses (event-related brain potentials) to emotional pictures, were collected from 30 female subjects at different phases of their menstrual cycle. The pictures belonged to 5 emotional categories (…)
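The Monty Hall intuition discussed in the abstracts above is easy to check numerically. The sketch below is my own illustration, not code from any of the listed studies; it simulates the three-door game and shows that switching wins about two-thirds of the time while staying wins about one-third:

```python
import random

def monty_hall(trials=10000, switch=True, seed=0):
    """Simulate the three-door problem and return the contestant's win rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)   # door hiding the prize
        pick = rng.randrange(3)  # contestant's initial choice
        # Host opens a door that is neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

if __name__ == "__main__":
    print("switch win rate:", monty_hall(switch=True))
    print("stay win rate:  ", monty_hall(switch=False))
```

Running it shows why the problem is a "cognitive illusion": switching wins whenever the initial pick was wrong, which happens two times out of three.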
Common Signs and Symptoms of Multiple Myeloma

A Complex Cancer of the Plasma Cell in the Bone Marrow

Multiple myeloma can lead to bone fractures.

Multiple myeloma is a cancer of the plasma cells (a type of infection-fighting cell) in a person's bone marrow. When the initial symptoms of multiple myeloma begin to appear, they often go unnoticed because they are vague and non-specific. In fact, symptoms are similar to those of other, less serious illnesses and conditions, often causing a delay in diagnosis. Also, some people with multiple myeloma never experience any symptoms, but signs show up on blood work (for example, a decline in kidney function).

Signs and Symptoms of Multiple Myeloma

The signs and symptoms of multiple myeloma, like bone pain, fatigue, and numbness and tingling, can be described as complications of the disease. While a combination of these symptoms can suggest multiple myeloma, the actual diagnosis of multiple myeloma is made through blood tests and other medical tests, like a bone marrow biopsy.

Bone Pain and Bone Loss/Fractures in Multiple Myeloma

Bone pain is a commonly experienced symptom of multiple myeloma. Low back pain is frequent, but pain can occur in other areas, like the ribs, hips, and skull. Bone pain can be caused by other non-malignant conditions, so it isn't exactly an initial red flag for multiple myeloma among doctors. Bone loss, called osteoporosis, and unexplained fractures are also symptoms of multiple myeloma. The spine, ribs, and pelvis are common sites of bone breaks, or fractures, caused by bone weakening due to multiple myeloma.

Elevated Calcium Levels in the Blood in Multiple Myeloma

Elevated levels of calcium in the blood, termed hypercalcemia, are also a sign of multiple myeloma. As myeloma cells break down bone, calcium is released into the bloodstream. 
Hypercalcemia can cause:
• nausea
• loss of appetite
• fatigue/weakness
• excessive thirst/urination
• constipation
• mental confusion
• kidney failure

Anemia in Multiple Myeloma

As myeloma cells begin to replace red blood cells in the bone marrow, normal cells are crowded out. This leads to a decline in the number of healthy red blood cells. When the amount of red blood cells decreases in the body, the resulting condition is anemia, which causes paleness, dizziness, weakness, fatigue, and shortness of breath. Other blood cell counts may similarly be affected by multiple myeloma, including infection-fighting cells, called white blood cells, and platelets, which are involved in the clotting process. A low number of white blood cells can make a person prone to infections, especially pneumonia, urinary tract infections, and kidney infections. A low number of platelets can predispose a person to bleeding.

Kidney Damage in Multiple Myeloma

High levels of calcium and excess myeloma proteins in the blood are filtered through the kidneys, causing damage. As kidneys begin to fail, they lose their ability to regulate fluids and electrolytes in the body and remove waste products from the body. Swelling, especially in the legs, is a symptom of kidney failure, along with weakness and itching. Kidney failure tends to occur most often in people with advanced cases of multiple myeloma, but is occasionally the first sign of multiple myeloma.

Hyperviscosity in Multiple Myeloma

Thickened blood, or hyperviscosity, can also be a sign of multiple myeloma and occurs from the excessive amount of proteins made by the cancerous plasma cells. With hyperviscosity, the blood is resistant to flow, and symptoms may include:
• nose bleeds
• blurred vision
• numbing or tingling sensation in arms or legs
• heart failure

Unexplained Weight Loss in Multiple Myeloma

While losing weight without effort may be welcomed by many, it can be a symptom of many types of cancer, including multiple myeloma. 
Numbness and Tingling Sensations in Multiple Myeloma

When the bones of the spine are weakened or fractured by multiple myeloma, they can collapse onto nerve roots and compress them. This may cause a condition called radiculopathy, leading to symptoms like numbness, tingling, pain, and/or muscle weakness. If the spinal cord is affected, a condition called spinal cord compression occurs, which is a medical emergency. In spinal cord compression, there is severe back pain, sensory disturbances or weakness of the legs, and/or loss of control of the bowel and bladder.

What Should I Do If I Have These Symptoms?

If you are experiencing any of the above symptoms, check in with your doctor. Multiple myeloma is a complex disease, and its symptoms mimic many other health conditions.

Sources:
American Cancer Society. (2015). Signs and Symptoms of Multiple Myeloma.
US National Library of Medicine, PubMed Health. About Multiple Myeloma.
Since the establishment of the Republic of Slovenia’s International Trust Fund for Demining and Mine Victims Assistance (ITF) in 1998, the United States has provided more than $52 million (U.S.) in humanitarian demining assistance to the countries in southeast Europe. This includes the recent expansion of funding assistance to countries in the Caucasus region. Together, the United States, the ITF, the mine-affected countries in the region and an impressive number of donors have demonstrated the success of regional cooperation.
Thursday, October 8, 2015

Conflict Resolution Techniques For Awakening Times | 8 ways to find Peace in Conflict and Find Resolution...

Jeremy McDonald helped inspire Julian and me to do the outward work of this blog back in 2013. He wrote an interesting article on conflict resolution that I think is very poignant in this time of awakening. As a people we are fundamentally interconnected to all life and other beings. The ability to empathize with other life is hardwired into our biology. Mirror neurons are cells within the brain that react to observed stimuli as if we ourselves were being affected. In these awakening times, where the realities of interconnectedness are finally being rediscovered, we have access to knowledge that can help us heal ourselves and our relationships with others. For it is the collective power of each person's consciousness that manifests the world in which we all live. Instead of wasting our creative energy fighting amongst ourselves, why not work together to create prosperity for all? But what makes us get into conflict in the first place? The answer to this question penetrates to the core of what makes us who we are: that we are ultimately aspects of a singular and all-encompassing consciousness, experiencing life in a dream of illusory separation. Other people are essentially reflections of ourselves, and as such, when we perceive another person rejecting our point of view, this causes us to feel hurt and harmed, because literally an aspect of who we are is not in harmony with us. In this space of emotional instability a storm of conflict can grow, fueled by a primal desire to be accepted and loved. Since love and acceptance is ultimately what we seek from others, finding a way to first give ourselves love so as to heal the inner discord, and then acting compassionately with others, is the foundation for all conflict resolution. For how can we hope to have another accept our point of view if we cannot find a way to accept theirs? 
Discourse is a term referring to a mutual exchange of ideas: a discussion. In most cases an argument is created because this flow of ideas, the metaphysical currency of consciousness, is being blocked by a defensive reaction: fear. Usually there are beliefs, worldviews or perspectives that are being challenged by another, and this threatens our sense of self (the ego), potentially causing an emotional reaction of defensiveness. The following excerpt is from an article discussing a diagram called the Argument Pyramid. Since the flow of ideas is an expression of our unique perspective, an aspect of who we are, the goal of discussion is to communicate and receive concepts. And when this flow stops, when we fall away from a central point of discussion, this is where argument tends to develop. Here is the excerpt:

The goal of any discussion about ideas, beliefs or points of view is ideally achieved by sharing information completely, as it relates to a central point. This is not to force another to accept our beliefs; in fact, a good discussion should challenge our accepted truths, expanding them with insights and enriching our point of view in the process.

I am hardly a perfect example of objective discussion and diplomacy, but ideally, when someone is diverging away from the central point, I try to address their emotional charge using compassion and understanding. Rejecting someone's point of view usually creates further argument, as they try to justify their position. By accepting their position as is, and then building from there using compassionate questioning, we can attempt to create an emotionally safe space for vulnerability. This acknowledges the fact that the central point is no longer being discussed and attempts to address the emotional needs of others. We get all sorts of incendiary comments on the blog and social media. 
Here is a good example from a Facebook share of the Summary of Cosmic Disclosure Episode 10: "You know, I can't watch these guys cause their so interested in making money." I responded with the following. In law this is called conditional acceptance, where one party's perspective is acknowledged by the other party, while at the same time citing inconsistencies in an inquisitive way. Free will beings have different points of view as an inherent property of existence; therefore, accepting their current position will help create a space for vulnerability while focusing on the critical points of discussion. In my view, we have far more to unify on than to let divide us. In our present day and age, it is perfectly acceptable to argue and conflict with others, and it is even encouraged by society to polarize into groups which are focused only on hating or rejecting something. But the cost of fighting amongst ourselves is socially destructive and causes us internal pain from rejection. By developing the ability to use skills of compassion and empathy, by honoring the free will and points of view of others, we can resolve conflict and finally recognize the advantages of cooperation. For our world could be a paradise of abundance if we only set aside petty differences and worked together to ensure a prosperous future for all life.

- Justin

Source - Jeremy McDonald

Each one of us in our daily lives runs across many situations of conflict, and these situations pop up over and over again to help us grow. So we can choose to take them as something happening to us, or we can choose to navigate through them with the idea that each situation has been presented to us as an opportunity. In short, we can either choose to be a victim and look at the world as beating us down, or we can flip that idea around and see it as always working in our favor. Even the times that are tough become positive when we become grateful for opportunities that help us grow. 
When it comes to conflict between people, we could consider the following ideas to help us navigate through these situations and come out on the other end with a win-win for both parties.

1. Give up the need to be "RIGHT"

I've honestly found that in most conflicts, arguments and disagreements there is a serious breakdown in communication. Oftentimes I work with people and I ask them, have you spoken to the person you are upset with? I usually get a very aggressive response stating they have tried but no one is listening! As I probe deeper into the topic, I typically find the individual is upset because the person they are upset with will not "change" so they themselves can feel better. It's very interesting to watch (we have all done it): we think to ourselves, if "THEY" would just listen to me, everything would be fine! Or if they would just stop and listen to me, they would see how I am feeling! I can tell you, in these scenarios the person who is upset is also not listening to the person they are upset with. As you can probably imagine, they are also not listening to themselves. Overall, they want the other person to see their point of view, but at the same time they reject the other person's point of view. This leads me to think about Stephen Covey and his writing on one of the Habits of Highly Effective People, which is: Seek First to Understand, Then to Be Understood. Basically, if you stop and LISTEN (get still and listen not only to the other person's needs but also stop and get honest about what you are truly wanting), you would be able to gain a clearer perspective. Oftentimes we are just upset; we are not thinking about what we are saying and not fully aware of what we are feeling. Which leads me to the next step...

2. Process your thoughts and understand them before you address them with another person.

Typically what I find is that people allow things to build up and never really think about whatever is upsetting them. 
As they let their reactions build up and keep pushing the causes aside, this poison festers inside of them and leaves their mind to wander and think things that are either not true or completely distorted. This is why you should take some time to think about what is truly bothering you, and while you are at it, try to think about the other person's perspective while you are processing your own. Once you have processed what is going on within you, if you still feel like you need to talk about your feelings, then do so calmly.

3. Speak your truth and do it with compassion...

I can guarantee you nothing comes from yelling at another person or telling them what they are doing wrong. If you spend your time speaking your truth by just listing how many things another person is doing wrong, then I can tell you it will push on their ego, and most likely it won't be received well... Try saying this as you speak to others about what is bothering you: "I need to talk to you about something that has been bothering me, and let me start by saying my emotions in this situation are about me, not about you..." Then go on by saying: "I wanted to know what you mean when you say ___________________ because I received it this way ________." These two statements make sure that you take ownership of your feelings and are not projecting them onto the other person, and you are also asking for clarification on what their intention was. Which leads me to the next step...

4. We want to pay attention to another person's intentions

You have no idea how many times I work with couples, friends and co-workers who look at another and say "YOU ARE MAKING ME FEEL THIS WAY!" and the truth is, no they are not. No one can make you feel emotions; only you can create that. No one has this kind of power over you. YOUR REACTIONS ARE YOUR OWN! This is why asking for clarification on what another's intentions are helps you understand what they intended before you express how you feel. 
[Even though a good intention can have negative effects, knowing the intention allows us to empathize with the other person, helping to calm reactionary emotions within.] Recently, I had a friend accuse me of being rude on a social media site. While she was doing this, I was expressing to her what my intention was. As we continued speaking, she insisted I needed to correct how I was treating her. I continued to ask her, how is it that I can make you feel this way when my intention was coming from a place of curiosity and love? I finally had to back away from the conversation because the more I attempted to talk to her, the more she got upset. In this instance she was basing her reactions on her perceptions of the situation and not on what my true intentions were. A person's intention is the most important part of communication, and understanding it helps us really break down and streamline conflict resolution. This is because if you understand where someone is coming from, then you can look at your own reaction and assess whether you need to look within yourself or set a boundary with your friend. [In other words, did they really intend to harm us, or did we misperceive their behavior towards us?]

5. Communicate clear boundaries with people

There is nothing wrong with saying to people, this is what I like or do not like; nothing at all. How we deliver this message is very important: after we have gained clarification and expressed how we feel, we can calmly say what we do and do not like. Once we have processed our own reaction and taken the time to understand another's perception, we can then share our boundaries from a place of mutual respect and love.

6. It's important to not feel the need to have the last word

Are you a person who has the desire or need to have the last word all the time? Then stop and listen to what is going on inside of you. Why do you need to have the last word? 
If you begin thinking, "well, they need to hear my perspective and they are not listening to me," then it's probably because your desire to be accepted is based on fear and not love. [If having the last word is about feeling like 'you've won the argument,' then it shows you that your need for acceptance is probably co-dependent. Everyone at some level simply wants others to receive their point of view openly, and when people in our lives refuse to accept us, it can cause us to go on the attack. But by going within and realizing we can't force another to accept us, we can have patience and allow another to become still as well. Developing the ability to be patient helps avoid the feeling of urgency that can cause us to try and force others to accept our point of view.]

7. Use these tools for "SELF-ASSESSMENT," not for analyzing another person...

I know some of you are reading this and thinking about someone right now, thinking "if they only followed this, they would not have the problems they are having." Come on, admit it! It's ok because we have all done this! Now stop and take a moment to turn the reflection inward and process the conflict for yourself, not another! [Our perspective on the conflict and how it makes us feel are the things that create emotional turbulence. By focusing on ourselves we can heal the inner angst and give ourselves the best chance to resolve the conflict with another by not maintaining a consciousness of reactivity. This helps us develop an appreciation for finding a mutually beneficial solution, instead of trying to force another to accept only our terms, conditions and perspectives. And if no resolution can be found at the moment, from this space of inner calm we can let it go to resolve the conflict another day; ideally after the other person has processed their own perspective.]

8. Last but not least and most important... You cannot, and I repeat, CANNOT change anybody! 
I know we have all heard this, but as our world changes and the dynamics of our relationships change, our fearful mind can lash out and try to fulfill the need to feel better by controlling our surroundings; attempting to control others. This again is a time to stop and take a look at ourselves and use these steps as a guide to help us become more self-aware. [Self-knowledge, understanding why we react, why we need to feel accepted, why what another person thinks is so important to us, is essential to resolving conflict because in truth the only real power we have is to change from within. Once we gain inner stillness, we can have compassion with others and share our perspectives without the need for them to be accepted. Honoring another person's perspective is honoring their free will choice, and the deep psychology behind all conflict is a perception that our free will is not being honored. Ultimately the cause of, and solution to, all conflict comes from the ability to empathize with another, to see with their eyes and in doing so honor their free will. And if another person's desire is to cause harm to us, we can set a boundary that honors ourselves as well as the other person. This is the basis of morality, ethics and harmonious co-creation; the goal of justice and the golden rule: the ability to respect another's point of view completely, while at the same time ensuring they do not harm us or others.] Once again, realize that the whole world is presenting you with many opportunities to grow and develop yourself into a greater person. This is done so you can find a greater sense of peace, happiness and understanding of yourself and others. Happy Self-Awareness! Also check out this article, which goes into co-dependency as well as control dramas: Much love to you!
Right energy mix elusive despite rich resources

Massive exploration was under way in both the coal and uranium industries 60 years ago as electricity demand soared after World War II. In the early 1950s, Australia embarked on a revolutionary plan to build and operate a nuclear power plant at Mt Isa in north-west Queensland. Around the same time, state governments piled money into boosting exploration and development of coal deposits in a bid to supply the growing steel industry. Within a decade, the industrialisation and urbanisation of Japan, and the Asian tigers in the 1970s and 1980s, provided a ready market for high-quality black coal while nuclear power was stymied by a combination of government opposition and wider community concerns. The decisions (and dithering) made in that decade have gone a long way to shaping Australia’s resources industry. Six decades on the nuclear industry remains at a standstill, with no facilities in operation despite the country holding the second largest uranium reserves in the world. By the 1960s, as pastoral exports began to decline, so too did Australia’s relationship with Britain. The ramping up of large-scale exports of iron ore, coal and aluminium resulted in a geographical change in our most important export markets. By 1967 Japan, previously a bit-part player in commodities, had overtaken Britain as Australia’s largest export destination. By the 1980s, it was taking a third of all exports, with China and Korea also turning into important partners. Coal, although eclipsed by oil in the 1960s as the world’s most widely used energy source, remains a bedrock of Australia’s export sector to this day. Japan continues to import around 40 per cent of Australia’s black coal supplies, followed by South Korea, Taiwan and China. 
While Japan’s industrial cartels still hold plenty of sway for both Australian and global commodity markets, China’s insatiable hunger for Australia’s high-quality iron ore has fundamentally reshaped Australia’s mining sector. China’s metamorphosis into the world’s largest steel maker has fed an extraordinary iron ore boom for dozens of multinational and local miners. Some 70 per cent of Western Australia’s Pilbara iron ore is shipped directly to Chinese buyers. Today, booming mining exports to Asia mean Australian miners and oil and gas firms are on the cusp of splurging a whopping $90 billion on new mines, roads and equipment, marking the biggest investment boom in the nation’s history. Capital expenditure on minerals and energy projects that are committed to or already under construction is estimated to have reached a record $173.5 billion in April, a massive 31 per cent increase from October. With global consumption of natural resources expected to triple to 127 billion tonnes a year by 2050, the future mix of Australia’s resources industry will be driven by coal and iron ore along with one relatively new but growing entrant to the country’s export mix: liquefied natural gas. Australia is poised to become the world’s second largest gas exporter after Qatar, with plans to increase production fourfold to 100 million tonnes a year by 2015. The development of unconventional gas sources is also tipped for massive growth given the International Energy Agency expects global gas demand to double by 2050. The US Department of Energy has estimated Australia has the potential to hold almost 400 trillion cubic feet of recoverable shale gas, with the potential to emulate the gas production boom under way in North America. The total includes about 85 trillion cubic feet in the Cooper Basin, twice as much gas as at Chevron’s huge Gorgon liquefied natural gas project in Western Australia. 
Renewables also have a major part to play, though much of the success of solar and wind initiatives will hinge on the federal government’s ambitious pledge to reduce the nation’s greenhouse gas emissions by 80 per cent by 2050. Treasury’s long-term modelling, released as part of the carbon tax package, predicts the renewable sector could grow to 18 times its current size by 2050. Yet it is difficult to see how Australia will hit such a target without a contribution from nuclear power. While more expensive than both gas and coal, nuclear remains a cheaper alternative to renewables and is a reliable producer of baseload power. Sixty years on, Australia is still struggling to find the right energy mix. The Australian Financial Review
What goes up must come down, and not the other way around, right? Wrong. Drive somewhere like Confusion Hill in California or Magnetic Mountain in Canada, put your car in neutral, and watch one of the most widely accepted laws of physics turn on its head. Your car will seem to defy gravity, slowly rolling uphill. Science Channel Put your car in neutral at the bottom of a gravity hill, and it will appear to roll uphill. Gravity hills, also known as magnetic hills, mystery spots, and spook hills, have been popping up by the hundreds all over the world. Visitors are flocking to these sites, even paying small fees to experience the eerie, seemingly supernatural effects of what has been referred to as “antigravity.” So what's really going on? There are many possible explanations for what could make an object break one of the sacred laws of science: Mysterious magnetic sources beneath the earth’s surface could be slowly pulling you towards them. A glitch in spacetime could cause the laws of physics to unravel into backwards chaos. An army of angry ghosts could be muscling your car to the top of the hill with nothing but their bare ghost hands. Or maybe it’s just an optical illusion. All of these sites have one thing in common (other than their apparent disregard for gravity): the horizon is either curved or obstructed from view. This is key. Horizons provide us with a very useful reference point when we're trying to judge the slope of a surface. A study published in Psychological Science in 2003 found that false horizon lines can be deceiving to observers perceiving landscapes. Phenomenal Travel Videos A ball rolling on a gravity hill appears to stop and roll back up the hill. Without a true horizon in sight, objects such as trees and walls — which your eyes use as visual clues to determine perpendicularity — can play tricks on you.
If these objects are leaning slightly, they might make you think you're looking at a downward slope, when in actuality you may simply be looking at a flat (or even uphill!) surface.
Rainier Christian Schools
Superintendent of Schools
Rainier Christian Schools, Renton, Washington, United States
Date Posted: 03/12/2014
Categories: Education
Job Type: Full-Time

Job Description:
Rainier Christian Schools (RCS) is accepting resumes for its Superintendent position. RCS has three elementary schools, a middle school, and a high school located throughout South King County, Washington. It has a vision to “develop the whole person for the glory of God” by serving 675 students and their families.
The Superintendent is the spiritual, educational, financial, and administrative leader of RCS. The Superintendent's chief responsibility is to promote the continuous Christian walk and growth of students, faculty, and staff. The Superintendent is to provide visionary, pastoral, and biblically sound spiritual leadership to RCS. As the education leader, the Superintendent is responsible for improving student achievement; setting high learning expectations; and developing, communicating, and implementing a vision for comprehensive school improvement. The Superintendent is to provide leadership in the development of the financial and physical resources of RCS. Candidates need a proven ability in financial stewardship and management. The Superintendent is to lead the administrative team to ensure unity, cooperation, and problem resolution; shepherd and support the administration and staff spiritually at RCS; provide biblical accountability and discipleship; and ensure that RCS policies are implemented at all levels. The Superintendent's leadership enables the staff and faculty to serve the needs of the students whom God has entrusted to RCS. The Superintendent is accountable to the RCS Board of Directors for the continued operation of RCS in accord with Biblical principles and the policies adopted by the Board.
This is a twelve-month position with a salary range of $65,000 to $79,000 DOE.
Benefits include medical paid 100 percent by the employer, a 50 percent tuition discount, sick leave, and vacation. Resumes may be sent to the RCS Board of Directors at

Job Duties:
The Superintendent carries the full obligation of implementing the policies of the RCS Board of Directors. In addition, there are specific tasks that have been assigned by the Board to the Superintendent. These tasks include but are not limited to:
1. Identifying, hiring, evaluating, disciplining, and/or terminating Administrators and other supervisory personnel.
2. Developing and proposing the salary schedule for the Administration, faculty, and staff.
3. Overseeing and providing guidance in the hiring, training, discipline, and dismissal of faculty and staff by the Administrators.
4. Acting as the focal point for the administrative team in preparation of the annual budget.
5. Providing leadership in the development of the educational programs, including but not limited to curriculum enrichment and integration, accreditation, teacher skills enhancement, and resource acquisition and maintenance.
6. Providing leadership in the development and use of facilities, including acquisition, building, and maintenance.
7. Acting as the lead representative of RCS in educational, governmental, civic, community, and church forums.
8. Providing leadership in managing RCS finances to ensure that expenditures are in line with the approved budget.
9. Handling serious discipline issues.
Josué Michels
- Advancing Blockchain Technologies and the World’s New Banking System: Will technological advancements once again change the course of the world?
- The seemingly impossible is suddenly reality: The German people have found an alternative to Merkel! What now?
- Terrorist attacks on Christmas markets and New Year’s Eve celebrations once again raise the question: Where is God as humanity suffers?
- With the euro crisis, the Russian threat, the refugee crisis and Brexit, many leading European politicians are expressing doubt in the continuity of the EU. Did the dream of the United States of Europe fail?
- Faced with the unfathomable, Europe unites in fear.
- Europe’s identity crisis as seen from its diary
- Are you concerned about the present state of the world?
- Bitcoin, fintech, blockchain—will it affect me?
- German bishops remind politicians of their Christian obligation.
- Remembering millions who lost their lives in World War I
- The Vatican stretches forth its hand to rescue a tumbling Latin American nation.
- Fintech, Blockchain, Start-ups, Banking Revolution … but what about me?
- It might sound paradoxical, but there is good news despite today’s horrific headlines.
- Striking Bavaria means striking the Catholic heart of a sleeping lion.
- A new white paper for the German Army and a new interpretation of the Basic Law
- Political shifts in the southern hemisphere are offering a major economic opportunity for Germany.
- Many blame the huge influx of refugees for increased violence in Germany—but there is more to it.
- Germany’s reluctance to take on more leadership has ended.
Getting the right roof replacement for an aging roof
Figuring out how to find the most versatile replacement for a failing roof – or identifying the best choice for a new building – is no easy task. The perfect roofing solution for one building may be the worst option for another just down the street. That’s because no two buildings are precisely alike, even if they closely resemble each other. So how do you choose a new roof, given all the choices in the marketplace? You can start by asking a series of questions, before you choose the roof, the roofing contractor or the manufacturer.
1. What is this building’s mission statement?
Before calls are made to roofing contractors, the first item to address is the company’s mission statement as it relates to the building. For example, as more businesses move toward operating 24 hours daily, seven days a week to satisfy global customers, the data center must never spring a rooftop leak. Water on computer systems generally spells disaster. A special set of concerns arises for cooling-dominated climates. Does the roof contribute to air conditioning savings and address other key issues? Is it part of a total energy program? There is a growing concern about urban heat islands. Reflective, white roofs have become of interest in those areas for a few reasons. They keep the building cooler, reduce air conditioning costs and also minimize the heat-loading of the surrounding environment.
2. What physical and other elements influence the roofing system selection?
When it comes to choosing a roof, you need to list the attributes of the roof area itself.
A good roofing contractor will detail the roof’s size, shape, slope, deck construction, edge detailing, protrusions, rooftop access and existing roofing system. Along with this basic information, you need to find out why the original roof is no longer adequate.
3. What flexible-membrane roofing options are available?
Modified bitumen membranes incorporate the formulation and prefabrication advantages of flexible-membrane roofing with some of the traditional installation techniques used in built-up roofing. Modified bitumen sheets are factory-fabricated, composed of asphalt which is modified with a rubber or plastic polymer for increased flexibility, and combined with a reinforcement for added strength and stability.
6. Does the system require a wind uplift rating?
7. How much does the completed system add to the dead load weight of the roof structure?
In choosing any roofing option, the facility executive should be aware of the load-bearing capacity of the roof deck to make sure the right flexible-membrane option is chosen. In new construction, savings in structural steel can often be achieved by installing one of the lighter flexible-membrane systems.
8. What are the expertise and financial strengths of the roofing contractor you are considering?
Roofing contractors need to be chosen with great care. The introduction of new roofing materials and application techniques within the past 10 years has led to many changes. A professional roofing contractor should be familiar with different types of roofing systems, to help you make the best decision for your facility, based on your budget. The installation of different roofing systems varies considerably.
Education and training are the most important elements in the installation of roofing systems. Make sure the contractor you choose has had detailed and ongoing training on the system being installed.
9. What is warranted and by whom?
There are two basic categories of roofing warranties. The contractor’s warranty typically covers workmanship. The manufacturer’s warranty covers at least the materials, though many cover additional items. Even if the manufacturer’s warranty is broad, it will not completely protect you if the roof is improperly installed.

Payments Accepted: Cash, Credit Card, Paypal, Financing Available
Area Serviced: Chattanooga Metropolitan Area
Price Range: $$
Business Phone: (423) 822-6993
Business Email:
Address: 5200 Lantana ln Chattanooga, TN 37416
Jainism History
Lord Mahavir's preaching was orally compiled by his disciples into many texts. This knowledge was orally transferred from acharyas (gurus) to their disciples over the course of about one thousand years. In olden times, monks strictly followed the five great vows of Jainism. Even religious scriptures were considered possessions, and therefore knowledge of the religion was never documented. Also, during the course of time, many learned acharyas (elder monks) compiled commentaries on the various subjects of the Jain religion. Around 500 A.D., which was one thousand years after Lord Mahavir's nirvana (death), Jain acharyas realized that it was extremely difficult to keep memorizing the entire Jain literature compiled by the many scholars of the past and present. In fact, significant knowledge was already lost and the rest was polluted with modifications and errors. Hence, they decided to document the Jain literature as known to them. In this time period two major sects, namely Digambar and Swetambar, were already in existence. A thousand years later (1500 A.D.), the Swetambar sect divided into three subsects known as Swetambar Murtipujak, Sthanakvasi, and Terapanthi. Differences exist among these sects in their acceptance of the validity of the documented Jain scriptures and literature.
Agam Literature
This consists of original scriptures compiled by Gandharas and Srut-kevalis. They are written in the Prakrit language.
Non-agam Literature
This consists of commentary and explanation of Agam literature and independent works, compiled by elder monks, nuns, and scholars. They are written in many languages such as Prakrit, Sanskrit, Old Marathi, Gujarati, Hindi, Kannada, Tamil, German, and English.
Jainism Principles
Every soul is divine and has the potential to achieve God-consciousness. Any soul which has conquered its own inner enemies and achieved the state of supreme being is called “jina”. There is no overarching supreme being, divine creator, owner, preserver or destroyer. Every living soul is potentially divine, and the Siddhas, those who have completely eliminated their karmic bonds to end their cycle of birth and death, have attained God-consciousness.
24 Tirthankaras
“Jiyo Aur Jine Do” (“Live and let live”) - Mahavir Bhagwan
Bhagwan Rishabha (Adinath) Ji
Bhagwan Ajitnath Ji
Bhagwan Sambhav Nath Ji
Bhagwan Abhinandan-Nath Ji
Bhagwan Sumatinath Ji
Bhagwan Padmaprabha Ji
Bhagwan Suparshvanath Ji
Bhagwan Chandra-Prabha Ji
Bhagwan Pushpadanta Ji
Bhagwan Shitalnath Ji
Bhagwan Shreyamsanath Ji
Bhagwan Vasupujya Ji
Bhagwan Vimalnath Ji
Bhagwan Anantanath Ji
Bhagwan Dharmanath Ji
Bhagwan Shantinath Ji
Bhagwan Kunthunath Ji
Bhagwan Aranath Ji
Bhagwan Malinath Ji
Bhagwan Munisuvrata Ji
Bhagwan Naminath Ji
Bhagwan Neminath Ji
Bhagwan Parshvanath Ji
Bhagwan Mahavira Ji
Windows/Mac/Linux: Gobby is a free, cross-platform collaboration tool that makes it easy to collaborate on text documents over the internet with anyone. Every time you start a session with Gobby, you choose a highlight color that Gobby uses to indicate which sections of the text are being edited by which users. Gobby works like a charm for any text document, but with support for syntax highlighting, it really shines for collaboratively editing source code. It's not quite as simple to get started with as something like Google Docs—for example, you'll need to be able to send other collaborators your IP address—but it's an incredible tool in its own right. In fact, it's the collaboration tool that Ubuntu founder Mark Shuttleworth told us he uses. Gobby [via MakeUseOf]
MetStat® has teamed with Weather Decision Technologies (WDT) to produce the Extreme Precipitation Index (EPI), a new product that objectively conveys the rarity of precipitation in real-time. More often than not, extreme precipitation results in flooding – the most frequent severe weather threat and the costliest natural disaster facing the U.S. (90% of all natural disasters in the U.S. involve flooding). The EPI is a real-time measure of the Average Recurrence Interval (ARI) of precipitation; when the EPI is high, the likelihood of flooding is high. Often referred to as the “return period”, the ARI represents a precipitation event (amount per unit time) as the average number of years (climatologically) between equivalent events for a specific location. An ARI of 100 years is the same as a 1% probability of an event occurring in any given year (“a 100-year event”). The general public as well as hydrologic engineers and emergency managers often have a better sense of the consequences of a 100-year storm versus an absolute amount of precipitation, making the EPI a powerful way to convey the magnitude of occurring or forecast precipitation events. Precipitation frequencies have been calculated in terms of amount and period (e.g., how often 10 inches of rain may fall in a 24-hour period). These frequencies are provided in precipitation frequency atlases such as NOAA Atlas 2 and Technical Paper 40, but are undergoing revision at the NWS Hydrometeorological Design Studies Center (HDSC) as part of NOAA Atlas 14. In 2009, MetStat® first demonstrated a real-time operational ARI product based on observed precipitation, but now MetStat® and WDT have teamed to provide forecast EPI maps. Using WDT’s gridded national quantitative precipitation forecasts (QPF), EPI forecast maps are created for 6- and 24-hour time increments.
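The ARI-to-probability relationship described above (a 100-year ARI equals a 1% chance in any given year) can be sketched in a few lines of Python. This is an illustration only; the function names are not part of MetStat's or WDT's products.

```python
def annual_exceedance_probability(ari_years: float) -> float:
    """Chance (0-1) that the ARI event is met or exceeded in any given year."""
    return 1.0 / ari_years

def exceedance_within(ari_years: float, n_years: int) -> float:
    """Chance of at least one exceedance over n_years, assuming independent years."""
    p = annual_exceedance_probability(ari_years)
    return 1.0 - (1.0 - p) ** n_years

# A "100-year event" has a 1% chance of occurring in any single year:
print(annual_exceedance_probability(100))  # 0.01
```

Note that over a 30-year span, a 100-year event has roughly a 26% chance of occurring at least once, which is why an ARI is an average and not a guarantee of spacing between events.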
WDT’s Weather Research and Forecasting model (WRF) features an objective analysis system with the WRF Four Dimensional Data Assimilation (FDDA) scheme. The objective analysis system employed is the Local Analysis and Prediction System (LAPS), developed and maintained by the Global Systems Division of the NOAA Earth Systems Research Laboratory. With LAPS, WDT is able to assimilate the IR, water vapor, and visible satellite image channels from geostationary satellites as well as WDT’s quality-controlled three-dimensional radar mosaics. These two data sources, combined with traditional in situ observations, provide a more accurate initialization of the model moisture field through a three-dimensional cloud analysis. This technique has been shown to improve forecasts of precipitation and reduce model spin-up time.

[Figure: 24-hour QPE for Hurricane Irene (ending Aug. 28, 2011)]
[Figure: 24-hour Extreme Precipitation Index Forecast]
[Figure: 24-hour Extreme Precipitation Index Observed]
[Figure: WDT’s WRF domain and terrain. The domain is updated four times per day and provides a 5-day forecast on an 11.7 km grid.]

The Forecast Package includes EPI maps, updated four times a day (0300, 0900, 1500 and 2100 UTC), based on the Quantitative Precipitation Forecasts (QPFs) from the Weather Research and Forecasting (WRF) mesoscale numerical weather prediction model. Advances in the science of numerical weather prediction have significantly increased the skill and resolution of QPF over the last decade, and mesoscale models such as WRF provide excellent forecast guidance, particularly for strongly forced events typical of those that lead to widespread heavy rainfall and flooding. The maximum products make it easy to identify areas of potential flood risk for the next five days by looking at a single map.
The EPI is a color-shaded map of the average number of years between the recurrence of a similar precipitation event, otherwise known as the Average Recurrence Interval (ARI) or “return period.” The EPI allows users to quickly ascertain areas with the most unusual precipitation and potential for flooding rather than using simple precipitation amounts, since what is deemed heavy rain in one part of the U.S. may be typical in another. EPI maps provide an objective, timely and accurate depiction of the magnitude and extent of high-impact precipitation and allow users to make appropriate decisions. The conversion of precipitation to a EPI removes the distraction of heavy, but not abnormal, precipitation thereby highlighting only the high-impact, most unusual precipitation. ARI is defined as the average, or expected, period of time between exceedances of a given rainfall amount over a given duration. For example, suppose five inches of precipitation at a location is equivalent to an ARI of 100 years. This means five inches of precipitation is only expected to occur, on average, every 100 years at this location. Since the ARI is an average, a similar or even larger precipitation amount could occur again this year, next year or any other year. It does NOT mean an event of 5 inches will not occur again for 100 years. The ARI can also be described as a probability or percent chance of occurring in any given year. The table below converts the different terminologies and provides some potential flooding consequences. It is important to understand that the ARI of precipitation does not necessarily equate to a flood of the same ARI. Floods can be caused by heavy rain, spring snowmelt, dam/levee failure and/or limited soil absorption. The degree of flooding from heavy precipitation depends on the precipitation intensity, storm duration, topography, antecedent soil conditions, ground cover, basin size and infrastructure design. 
Precipitation associated with ARIs as low as 1 to 5 years can cause significant urban flooding, since most urban storm water systems are designed for 1- to 10-year ARI precipitation events, yet this may not equate to any flooding in well-drained rural areas. ARIs for the design of highway and other transportation infrastructure typically vary from 10 to 25 years. However, it is a near certainty that rainfall associated with ARIs greater than 100 years will cause major flooding, regardless of anything else. Dams and levees are generally designed for rainfall ARIs much larger than 500 years, but can be compromised during 100- to 500+-year events.

Categorical description of potential flooding consequences (when EPI is rainfall):

| EPI/ARI | Probability of occurrence in any given year | Percent chance of occurrence in any given year | Potential flooding consequences |
|---------|---------------------------------------------|------------------------------------------------|---------------------------------|
| 500 yr  | 1 in 500 | 0.2%  | Rivers at all-time peaks, potential dam/levee over-topping, catastrophic flooding possible |
| 100 yr  | 1 in 100 | 1%    | Rivers near all-time peak flows, devastating flooding possible |
| 50 yr   | 1 in 50  | 2%    | Rivers above flood stage, major flooding possible |
| 20 yr   | 1 in 20  | 5%    | Rivers at/near bankful, low-lying flooding |
| 10 yr   | 1 in 10  | 10%   | Streams at bankful, high river flows |
| 5 yr    | 1 in 5   | 20%   | Street flooding and small streams near bankful |
| 2 yr    | 1 in 2   | 50%   | Minor flooding |
| 1 yr    | 1 in 1   | 100%  | Little or no flooding |

For a free demo of EPI forecast maps please contact us.
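As a rough illustration of using these categorical thresholds in code, the lookup below maps an ARI to its consequence description. The thresholds and wording come from the categorical table, but this helper is hypothetical and not part of any MetStat/WDT API.

```python
# Illustrative lookup based on the categorical flooding table; thresholds
# are the published ARI categories, ordered from rarest to most common.
FLOOD_CATEGORIES = [
    (500, "Rivers at all-time peaks, potential dam/levee over-topping, catastrophic flooding possible"),
    (100, "Rivers near all-time peak flows, devastating flooding possible"),
    (50, "Rivers above flood stage, major flooding possible"),
    (20, "Rivers at/near bankful, low-lying flooding"),
    (10, "Streams at bankful, high river flows"),
    (5, "Street flooding and small streams near bankful"),
    (2, "Minor flooding"),
    (1, "Little or no flooding"),
]

def flood_category(ari_years: float) -> str:
    """Return the most severe category whose ARI threshold the event reaches."""
    for threshold, description in FLOOD_CATEGORIES:
        if ari_years >= threshold:
            return description
    return "Little or no flooding"
```

For example, an estimated ARI of 120 years falls into the 100-year category ("Rivers near all-time peak flows, devastating flooding possible"). Real severity also depends on the factors noted above: storm duration, antecedent soil conditions, topography and infrastructure design.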
Town Hall Renovation
THE STORY OF A CURTAIN AND A STAIRCASE. Renovation of Benavente Town Hall, Spain.
The project is a complete renovation of the interior of an old neoclassical building from the middle of the nineteenth century. Only the outer shell of the building is conserved: a Neo-Tuscan-style structure consisting of a façade and a portico. In the seventies, renovations were carried out which completely destroyed the whole interior. After some years in disuse it recovered its function as a Town Hall. The intervention revolves around very few materials and two pieces proposed in the project: a red curtain and the new staircase. With these two pieces the guidelines were set for the intervention. The space is defined by the flows of changing views and movements which its visitors will subsequently produce; i.e., the new space does not depend on the perimeter structure formed by the walls of the old building.
a. The perception of both pieces depends on movement: in the case of the staircase, its perception is only comprehensible while using it, going up and down it and penetrating it from the lift. Its flows vary not only in accordance with the program to which it responds but also with its obvious exposure to the changing light from the east and west. Climbing it permits perceiving the vertical component of the space. As opposed to a process of horizontal spatial sequences, in a process of continuous narration, the climb to another space permits a discontinuity which causes the space to escape vertically, to leak. The staircase ceases to be merely an element of connection and becomes a vertical space.
b. The curtain was designed based on two memories. The most evident one is a reinterpretation of one of the least-known pieces of the Barcelona Pavilion by Mies van der Rohe: its red curtain, which is always closed over the access wall and has a height of 3.10 m. Its current location in the Barcelona Pavilion has caused its initial function to disappear.
It is placed at the beginning of all the routes. This prevents the visitor from seeing the shadows of the wall, where the curtain is protected, and draws the eye towards the brightness of the courtyard-lake where the statue of Georg Kolbe stands. The second memory, and the one which gives the curtain meaning, refers to the mythical Penelope, who wove and unravelled material as a metaphor of the changing narratives of history. The curtain envelops a space for meetings and decision-making for the local governing body, a space of contradictions and of impossible attempts to achieve a coherent meaning of the life of the town. With the curtain the space becomes a place. The mobility of the curtain makes it opaque and transparent, converting it not into a box for administrative decisions but rather the box of some nightmares and a lot of dreams which come true.
Loneliest Young Star Seen By Spitzer And WISE Alone on the cosmic road, far from any known celestial object, a young, independent star is going through a tremendous growth spurt. The unusual object, called CX330, was first detected as a source of X-ray light in 2009 by NASA’s Chandra X-Ray Observatory while it was surveying the bulge in the central region of the Milky Way. Further observations indicated that this object was emitting optical light as well. With only these clues, scientists had no idea what this object was. But when Chris Britt, postdoctoral researcher at Texas Tech University in Lubbock, and colleagues were examining infrared images of the same area taken with NASA’s Wide-field Infrared Survey Explorer (WISE), they realized this object has a lot of warm dust around it, which must have been heated by an outburst. Comparing WISE data from 2010 with Spitzer Space Telescope data from 2007, researchers determined that CX330 is likely a young star that had been outbursting for several years. In fact, in that three-year period its brightness had increased by a few hundred times. Astronomers looked at data about the object from a variety of other observatories, including the ground-based SOAR, Magellan, and Gemini telescopes. They also used the large telescope surveys VVV and the OGLE-IV to measure the intensity of light emitted from CX330. By combining all of these different perspectives on the object, a clearer picture emerged. “We tried various interpretations for it, and the only one that makes sense is that this rapidly growing young star is forming in the middle of nowhere,” said Britt, lead author of a study on CX330 recently published in the Monthly Notices of the Royal Astronomical Society. The lone star’s behavior has similarities to FU Orionis, a young outbursting star that had an initial three-month outburst in 1936-7. But CX330 is more compact, hotter and likely more massive than the FU Orionis-like objects known. 
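The brightening reported above ("a few hundred times" between 2007 and 2010) can be expressed on the astronomical magnitude scale using the standard flux-ratio formula. This back-of-the-envelope conversion is an illustration, not a calculation from the study itself.

```python
import math

def magnitude_change(flux_ratio: float) -> float:
    """Magnitude difference for a given brightening factor (smaller mag = brighter)."""
    return 2.5 * math.log10(flux_ratio)

# A few-hundred-fold brightening corresponds to roughly 6 magnitudes:
print(round(magnitude_change(300), 2))  # 6.19
```

By comparison, a factor-of-100 change in flux is exactly 5 magnitudes by definition of the magnitude scale.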
The more isolated star launches faster “jets,” or outflows of material that slam into the gas and dust around it. “The disk has probably heated to the point where the gas in the disk has become ionized, leading to a rapid increase in how fast the material falls onto the star,” said Thomas Maccarone, study co-author and associate professor at Texas Tech. Most puzzling to astronomers, FU Orionis and the rare objects like it—there are only about 10 of them—are located in star-forming regions. Young stars usually form and feed from their surrounding gas and dust-rich regions in star-forming clouds. By contrast, the region of star formation closest to CX330 is over a thousand light-years away. “CX330 is both more intense and more isolated than any of these young outbursting objects that we’ve ever seen,” said Joel Green, study co-author and researcher at the Space Telescope Science Institute in Baltimore. “This could be the tip of the iceberg—these objects may be everywhere.” In fact, it is possible that all stars go through this dramatic stage of development in their youth, but that the outbursts are too short in cosmological time for humans to observe many of them. How did CX330 become so isolated? One idea is that it may have been born in a star-forming region, but was ejected into its present lonely pocket of the galaxy. But this is unlikely, astronomers say. Because CX330 is in a youthful phase of its development—likely less than 1 million years old—and is still eating its surrounding disk, it must have formed near its present location in the sky. “If it had migrated from a star-forming region, it couldn’t get there in its lifetime without stripping its disk away entirely,” Britt said. CX330 may also help scientists study the way stars form under different circumstances. One scenario is that stars form through turbulence. In this “hierarchical” model, a critical density of gas in a cloud causes the cloud to gravitationally collapse into a star. 
A different model, called “competitive accretion,” suggests that stars begin as low-mass cores that fight over the mass of material left in the cloud. CX330 more naturally fits into the first scenario, as the turbulent circumstances would theoretically allow for a lone star to form. It is still possible that other intermediate- to low-mass stars are in the immediate vicinity of CX330, but have not been detected yet. When CX330 was last viewed in August 2015, it was still outbursting. Astronomers plan to continue studying the object, including with future telescopes that could view it in other wavelengths of light. Outbursts from a young star change the chemistry of the star’s disk, from which planets may eventually form. If the phenomenon is common, that means that planets, including our own, may carry the chemical signatures of an ancient disk of gas and dust scarred by stellar outbursts. But as CX330 is continuing to devour its disk with increasing voracity, astronomers do not expect that planets are forming in its system. “If it’s truly a massive star, its lifetime is short and violent, and I wouldn’t recommend being a planet around it,” Green said. “You could experience some pretty intense heat for a few centuries.” Author: Mitch Battros
Materials Innovation Survey How are leading companies enabling their teams to be more innovative? How are they taking advantage of the latest material advancements? Please share your experience. We are conducting new research on product innovation. This survey is targeted at those who either develop new materials or look to use new materials to support their innovation goals. We are
Downtown Dining. Neighborhood Location. La Grange’s Newest Restaurant New. American. Craft. None of these words are defined by a single other word or action, and 1416 aspires to keep that consistent. New may mean a twist on a classic, it may be something you would have never expected. American is a melting pot of cuisine and culture, and expect nothing less here. Craft can be the art of the cuisine to the syrup to the house made shrub that finished your cocktail. 1416 is industrial, chic, and rustic all at the same time; and so is everything that is on our menus, even when on the rooftop deck. Monday - Saturday  |  5pm - Close Sunday  |  Closed
Mauresque (Woman in a Moorish Costume), 1869 by Frédéric Bazille Information about painting Bazille was a member of the group of painters who evolved to become the Impressionists. Here the artist displays an interest in one of the Oriental themes that fascinated nineteenth-century painters and writers. These popular themes included exotic subjects that they imagined derived from French colonies in North Africa; women in harem costumes were among the most popular. This composition is a contrast of textures, as the woman’s colorfully patterned costume stands out against the smoothness of the wall and floor. Bazille died shortly before his twenty-ninth birthday during a battle in the Franco-Prussian War.
How to accelerate growth in meditation practice? by Gurumaa-Ashram Q: I am practising meditation for the past few years but the coming and going of thoughts still continues. I am not able to become thoughtless. This is sure that during meditation I do not feel like breathing at all as only 25% of breath is left. After getting up from meditation, I do not feel like speaking also. As per your opinion, is this meditation or something else? What should I do? Gurumaa Answer: The first thing Sandhya, to have complete thoughtlessness in meditation, for that you must have continuous tolerance and patience and keep the process of meditation active. When the thought comes, you must act as a witness of the thought instead of starting to fight about why the thought is coming. It has come, be a witness to it and be impartial. If you remain impartial, the thoughts slowly fade out. If you would like to compare, if we say that in a normal way if in a minute one has 10 thoughts, at the time of meditation the count goes down to 5 only. You have to see whether the thoughts have become less or not. If they have become less, even this is an achievement. If you do not want thoughts to come, you have to do just one thing i.e. stop insisting that thoughts should not come. Be a witness and whatever thoughts may come, good or bad, remain impartial, unbiased, then your thoughts will definitely start to reduce. Look, when you sit in meditation, only then you come to know that these thoughts are there. For example, if we look at this stadium or the mat on the floor, it will seem to be very clean. But if we were to sweep on it with a broom, the dust will start flying. In the same way, when you sit for meditation and you see your thoughts, actually meditation is the broom which sweeps up the hidden thoughts. In this situation, you have to be content and a witness. We get attached to things in two ways: through love and through hate.
You remember your friends and you also remember your enemies. You are tied by the attachment of remembrance and the truth is that you remember your enemies more. Like, you have a party in your house and you invite people. Everyone came, but one nephew of yours did not come. Whom will you remember in the whole party? Hasn't he come? Miser, to save the price of a gift, he has not come. Everyone is sitting, but he cannot be seen. Ideally, you should welcome those who have come, but no, you are only concerned about the one who has not come. Your mind is running after him. Our mind remembers enemies more and friends less. The more the number of desires, expectations and wants you have, the more will be the thoughts. So if you do not want any thoughts to come, then get rid of your wants, expectations and desires. Kabir Saheb says in a bhajan of his, No taking and no giving, be concentrated. We are not going to be friendly with anybody nor are we going to have enmity with anyone. Friends and foes bind the mind and I have to keep my mind free. The body is made from five elements namely - earth, water, fire, air and ether. This cage is made out of the five elements and inside it stays my myna bird that speaks. From inside it, this myna bird of mine - the soul - speaks a lot. You will say, we are householders and that we have to give and take. But my brother, Kabir was also married. He did not give and take even after being married. Meet all with love and say goodbye to those who are going and do not remember them after that. But what do we do? When someone comes, we stick to him and meet him, asking details and digging a pit of his life, you do not feel content even till then. Once he leaves, we keep talking about his goodness or badness. You stick to him like a chewing gum. Worldly relations are like chewing gum, the sweetness goes fast but you still go on chewing.
Taxonomy is the practice and science of classification. The word comes from the Greek taxis = 'order' and nomos = 'law' or 'science'. Taxonomies, or taxonomic schemes, are composed of taxonomic units known as taxa (singular taxon), or kinds of things that are frequently arranged in a hierarchical structure, typically related by subtype-supertype relationships, also called parent-child relationships. Meristics is an area of ichthyology which relates to counting quantitative features of fish, such as the number of fins or scales. A meristic (countable trait) can be used to describe a particular species of fish, or used to identify an unknown species. Meristic traits are often described in a shorthand notation called a meristic formula. A full description of a karyotype may include the number, type, shape and banding of the chromosomes, as well as other cytogenetic information. DNA Research mtDNA sequence analysis is a valuable tool for determining whether individuals are biologically related through their mothers’ side of the family. For this reason, it is commonly referred to as a maternal lineage test. Osteology is the scientific study of bones. A subdiscipline of anthropology (US) and archaeology (EU), osteology is a detailed study of the structure of bones, skeletal elements, teeth, morphology, function, disease, pathology, the process of ossification (from cartilaginous molds), the resistance and hardness of bones (biophysics), etc. The adoption of a system of binomial nomenclature is due to Swedish botanist and physician Carl Linnaeus (1707 – 1778), who attempted to describe the entire known natural world and gave every species of mineral, plant or animal a two-part name. However, binomial nomenclature in various forms existed before Linnaeus, and was used by the Bauhins, who lived nearly two hundred years before Linnaeus. Before Linnaeus, hardly anybody used binomial nomenclature. After Linnaeus, almost everybody did.
by Ralph Brauer | 7/15/2008 11:33:00 AM brain drain Part One explored the impact of the brain drain from public service to private profit. However, there is one area where the brain drain thesis does not hold true--at least in part--and that is government employment. But as we shall see, there are even ominous developments there. A study by Kenneth J McDonnell of the Employee Benefit Research Institute showed: As of September of 2007, overall total compensation costs were 51.4 percent higher among state and local government employers ($39.50 per hour worked) than among private-sector employers ($26.09 per hour worked). McDonnell found one major reason for this is: Benefit participation rates are higher for state and local government employees and the costs of providing these benefits are higher. However, McDonnell's study and others with similar findings need to be contrasted with the continuing privatization of government jobs. The centerpiece of this has been the controversy over private contractors in the Iraq War followed by the privatization of Veterans Administration hospitals, but in fact privatization has taken place from the federal to the local level and involved everything from prisons to insurance. Curiously some of the most interesting data on privatization comes from a right wing think tank, the Reason Institute. It has released an annual privatization report for two decades. In its 2007 report it noted: Since 2003, 12 percent of the federal workforce has faced a competition, winning 83 percent of them and generating savings of $6.9 billion. It doesn't take a Harvard degree to conclude that the federal employees "winning" these competitions were in part doing so because of wage or work place concessions. In essence privatization has become the new hammer to pound government employees into submission. The one thing keeping them from doing that is that government employees have the strongest unions in the country. 
The Bureau of Labor Statistics notes: Within the public sector, local government workers had the highest union membership rate, 41.8 percent. The Aging Federal Workforce Still all is not well for government employees. One trend should give us pause and add to concerns about the brain drain: the age of government workers. First, the average government employee is older than the average "civilian" employee. According to the Bureau of Labor Statistics: About 75 percent of government employees were 35 and older, while about 60 percent of private sector workers fell into that range. In addition the average age of federal employees has slowly been creeping upward from 44.1 in 1994 to 46.8 in 2004. Remember, this is the average age. Essentially a goodly number of the most experienced government workers are nearing retirement, a trend that could have huge systemic implications for the brain drain and keeping the playing field level. The Impact of Aging on Other Public Sector Jobs As someone who formerly was involved in national school reform, I used to remark in speeches that the biggest unrecognized problem in American education was our aging teacher workforce. The Statistical Abstract reports that over one third of our public school teachers are over 50 and almost 60% are over 40. The percentage of teachers under 30 is less than half that of those over 50. Even more daunting are the figures on teacher experience. An astounding 72% of teachers with ten to 20 years of classroom experience are over 40. A quarter of those are over 50. Two-thirds of our teachers with over 20 years experience are over fifty. In essence our teaching force exhibits many of the same demographic characteristics as our federal workforce. If we move to social work, we see a similar pattern. According to the National Association of Social Work, 39% of all social workers are over 53 and 73% are over 43! Three-fourths of them were born before John Kennedy was President.
More than a third of the social workers had more than 20 years experience. In short, the public sector is facing a brain drain that will be caused by an aging workforce, a workforce that is not being replaced by younger workers. The Brain Drain and Race There remains one more issue to add to the brain drain equation--race. How well is the public sector performing in attracting people of color? Federal government employment statistics are not encouraging. Employment of all people of color gained only 1.2% between 1998 and 2004, while the civilian labor force made a slightly higher gain at 1.3%. Yet after those eight years the percentage of people of color employed by the federal government still remained behind that of the civilian labor force. The total percentage of people of color employed by the federal government in 2004 was 14.6% as compared with a national percentage of people of color of 25%. In other words, people of color are being employed by the federal government at only 58% of their total population percentage--this after decades of affirmative action! It is interesting to compare these data with those of other public sector jobs. In education, for example, people of color make up 16% of the nation's public school teaching force which is 10% better than the federal government but still below what it should be based on population percentages. The national social work survey shows the most dismal results, which are especially discouraging considering the disproportionate number of people of color served by these workers. According to their data, people of color compose only 13% of all social workers. The dismal employment statistics of people of color are, of course, no secret, but that they should be so in the public sector which at least purports to support the ideals of equal opportunity is a national embarrassment. Yet, few have connected these statistics to the issue of the brain drain. 
Not only is the public sector failing to employ young college graduates, it is failing in employing people of color. The bottom line of this is that people of color have less impact on public policy if they form such a small percentage of public sector workers. While much is made by politicians of both parties about high level appointments of people of color, the day-to-day operations and the decisions that come with them are made by everyday workers further down the chain of command. If social work, education and the federal work force are all staffed by over 80% whites, what happens to the average person of color who is a client of those agencies? From a policy perspective, what kind of decisions are made by a largely white work force as opposed to one that might have more representation from people of color? Some might argue with calling this part of the brain drain, but in fact it is a central part of it. Any management textbook will tell you that decision-making is best accomplished when managers, whether in the public or private sector, have access to a diversity of opinions. If all management hears is a white, middle class perspective, then we will get white, middle class policies. Some might retort that aren't we a white middle class society, which may be true from a demographic point of view, but won't be for long in many states which will have a majority of people of color in the next generation. More important is the fact that we live in a diverse world. If we fail to recognize diverse perspectives, we will fall behind in that world. A Perfect Storm? This series started out with the example of how large corporations have a huge advantage over the private sector in terms of their use of systems thinking and system dynamics modeling. 
While some of that discussion may have seemed a bit dense and even out in left field or further, there was a reason for leading with that example because the best way to understand the brain drain is from a systems perspective. First, we have the trend of increasing college loan debt which forces graduates away from public sector jobs. Then we have the increasing privatization of government and the age of government workers, particularly in the federal government. Finally, there is the issue of the racial brain drain. All are interrelated and feed on each other. College loan debt makes it less likely young people will enter the public sector. Privatization means there are fewer jobs for them and puts pressure on those who still have the jobs to make work and policy concessions. The aging government workforce means that within the next decade we will face the retirement of a large number of our most experienced workers. The racial brain drain means our public sector lacks diversity in both people and perspective. In systems terms, the perfect storm could emerge when fewer young people will be able to enter the public sector at a time when we will need them more than ever to replace those who are retiring and fewer people of color will be staffing a public sector work force at a time when the percentage of people of color in the United States is growing. The growing use of privatization leads to the inevitable conclusion that just like our undermanned forces in Iraq, we will turn to private contractors like Halliburton and Blackwater to do those jobs for us. The Social and Political Implications of the Brain Drain If America's once-vibrant public sector succumbs to privatization, fails to attract talented young people to replace an aging public sector work force, and continues to falter in its commitment to attract people of color this nation is headed for trouble. 
When you connect the brain drain with budget cuts imposed on the public sector because of tax cuts for high rollers, it doesn't take a conspiracy nut to ask the question as to whether the Counterrevolution is intent on forcing the public sector into becoming a corporate fiefdom. If you think this scenario fanciful, ask the people in Richland, New Jersey. Situated on Route 40, the town serves as the headquarters for Dalponte Farms, which grows mint for Bacardi rum. Both Bacardi and Richland had mint green on their minds when they cut a deal that would rename Richland "Mojito" in honor of a cocktail being publicized by Bacardi. In exchange, the rum manufacturer offered the township $5,000 for a park gazebo, playground equipment, and a revitalization project for Route 40. Systems people like to talk about gaps and their impact on the system and there may be no bigger gap facing America than the reality of the brain drain and the ideals of this country. The brain drain already threatens to profoundly change the lives of our children by further tilting the playing field toward the private sector. Yet have you heard either Presidential candidate talk about this issue? Do either of them have plans that will remedy it? Or are we headed for another election in which the choice is between one candidate who promises to take us back to 1900 and another that promises business as usual? In the end the choice is clear: do we want a United States of Mojito or a nation that fulfills the ideals of our Declaration of Independence and Constitution? Do we want a level playing field or no playing field at all? Coda: Academia: Since the main readers of this blog reside in academia, a special section is needed to describe the brain drain's impact on our colleges and universities. The loan crisis impacts students who want to be professors just as surely as it does those who want to be lawyers, except lawyers can at least graduate with the hope of landing a six figure job. 
There may be a coming perfect storm in the public sector, but our nation's graduate schools may already be experiencing that storm as student loan costs increase while government aid to education decreases. This formula promises a severe decline in our nation's most important intellectual capital--its research colleges and universities. Some might argue there is a tuition difference between law school and graduate school, plus grad students can apply for assistantships that are not available to law students, but typically it takes longer to get a PhD than an LL.D. Then there are the expenses of writing a dissertation which depending on the topic can involve extensive travel to research libraries or sites. There is a certain irony in the fact that America's colleges and universities are part of the problem in that even at well-endowed institutions, large loan debts are common. My son graduated from a prestigious private college with one of the highest endowments in the nation [that's what happens when Warren Buffet is on your Board of Trustees], public or private, an endowment that according to a friend who was approached to serve on the Board of Trustees would allow the college to essentially offer each student a free ride. Currently as far as I know, of the high-prestige, high endowment schools, only Princeton has a policy that approaches this ideal. Yet there are very few Princetons and a lot of prestigious public universities that have been hit by both state and federal budget cuts that have them reducing hiring, freezing salaries, and cutting programs. Some of them have succumbed to the academic equivalent of privatization, hiring part-time, non-tenure-track "private contractors" who manage to cobble together jobs at several institutions in order to make ends meet. 
There is also another systemic dimension to academic budget cuts: as colleges and universities fail to--or cannot afford to--enroll more graduate students, the teaching burden of regular faculty increases, or at some point departments must contract. This produces a negative reinforcing loop that can lead to what we in systems thinking call a "death spiral"--a negative loop that functions like a whirlpool. In a desperate search for money to replace these budget cuts virtually every college and university has cut deals with corporate vendors to give them exclusive rights to everything from what goes in campus vending machines to the computers sold at the book store. The University of Minnesota even entertained an offer from none other than Victoria's Secret. The desperate situation of America's P-12 public schools is well-known, the desperate situation of America's institutions of higher education is less publicized. But if our colleges and universities continue to suffer where will those P-12 teachers come from? or the needed public sector workers? In a system dynamics model one of the symbols is a faucet, which stands for a flow--something that regulates the movement between two quantities called stocks. Your bank account is a stock; the interest rate for it and your withdrawals and deposits all impact the inflows and outflows from that account. In higher education there is an increasing feeling someone has turned off the faucet.
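The bank-account illustration of stocks and flows can be made concrete with a few lines of code. This is a hypothetical sketch, not part of any published model; the function name and numbers are ours. The balance is the stock, while deposits, withdrawals, and interest are the flows that change it each period.

```python
# Minimal stock-and-flow sketch of the bank-account example:
# the balance is a stock; deposits, withdrawals, and interest are flows.
def simulate_balance(balance, rate, deposits, withdrawals, years):
    """Advance the stock one step per year: balance += inflows - outflows."""
    history = [balance]
    for _ in range(years):
        inflow = deposits + balance * rate  # deposits plus interest earned
        outflow = withdrawals
        balance += inflow - outflow
        history.append(round(balance, 2))
    return history

# Hypothetical numbers: $1000 stock, 5% interest, $200 in, $100 out, 3 years.
print(simulate_balance(1000.0, 0.05, 200, 100, 3))
```

Raising the withdrawal flow above the combined inflows turns the loop negative and the stock drains away, which is the "someone has turned off the faucet" dynamic described above.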
Kino Lorber
A jarring contradiction lies at the heart of Josef von Sternberg's mesmeric WWII-period curio Anatahan. The setting of the film is a flagrantly artificial jungle environment photographed, as voluntarily clarified by a title screen, “in a studio constructed for this purpose in Kyoto,” while at the same time Sternberg models the film like a documentary record, going out of his way to neutralize the sensationalism of his ripped-from-the-headlines premise. To the extent that makers of narrative cinema generally fall on either side of a line separating unabashed illusionism from the willful acknowledgment of all that lies outside the screen, Sternberg plays not so much in the middle as on both margins at once, reveling in the cognitive dissonance that it entails, and Anatahan is all the more radically destabilizing as a result. The film's starting point is the real historical incident of a Japanese squadron found stranded alive on the titular island years after the defeat of their army. What Sternberg freely imagines are the seven years of toil and hardship endured by these men while separated from their homeland, which constitutes an act of speculative empathy that puts the project squarely in the realm of storytelling. Complicating this understanding, however, is the filmmaker's decision to narrate the tale himself in a droll tone that pinballs between Job-like questioning, poetic musing, and impartial reportage, including the use of such documentary-tinged phrases as “we can only reconstruct the events” and “we can only surmise what happened.” Sternberg the narrator uses the first person to situate himself as a member of the dramatic ensemble, though it's never addressed whose consciousness he's supposed to represent, or if indeed he even has a physical counterpart.
Atop this ambiguously Jungian narrator, Sternberg then opts to leave the characters' Japanese dialogue unsubtitled, which only further impedes emotional identification with the beached soldiers. What we're left with is an implicit directive to observe as though assembling a field report, even as it's demonstrably clear that what we're watching is a scenario that's been carefully staged and presented to us in a highly specific manner—namely, Sternberg's lush, overripe mise-en-scène, which tends toward theatrical lighting, painted backdrops, and stringently arranged tableaux. All of these formal quirks make Anatahan quite the cutting-edge accomplishment for 1953, though the film hardly gives the impression of a work made for aesthetes alone. On the contrary, Sternberg's distancing maneuvers imply humility in the face of a story far outside his lived experience, and it's a vantage point that enables the director to very directly probe existential questions that long consumed him. What are the benefits and pitfalls of community? What are the perils of manhood? How is power assumed and abused? When one flirts with barbarism, can dignity and self-worth be restored? And does nature give a damn about any of this? The last question is answered early and often in recurring cutaways to waves crashing on imposing black rocks, which throw into relief the comparatively diminutive trials of the castaways—and, as some of the only location shots in the film, call to mind the unforgiving Irish coastlines of Robert Flaherty's Man of Aran. The other thematic prompts, meanwhile, aren't so emphatically resolved. Sternberg instead funnels them through his spare dramatic scenarios, on top of which he lays inquisitive voiceover counterpoint.
In spacious group framings, we see men fawn, bicker, and kill over Keiko (Akemi Negishi), the lone female of the group and the fragile soul of the film. We see them develop their homely living quarters, composed of bamboo and covered by the unwieldy overgrowth of the jungle itself. We see them weather storms, carry out prayers and songs, and gaze desperately at the sea with the kind of posed stillness that Guy Maddin would pay tribute to in The Forbidden Room. Through it all, Anatahan's abundant foliage is exploited for myriad pictorial effects, whether to cast noirish slices of shadow across actors' faces or to conceal and reveal certain details within the frame. Other times, provocative statements are smuggled in by Sternberg through framing alone, such as when one soldier's pistol, placed at phallus level, enters the top half of a shot of Keiko kneeling on the ground, a juxtaposition that conflates sexual assault with the rise to political power (a timeless congruence indeed). Akira Ifukube's anxious, dirge-like music, a near-constant whine of tremolo violins and descending piano melodies, coats these fraught visuals in an additional layer of gloom, setting a mood best summed up by Sternberg himself: "The only real enemy most of us ever have is lonesomeness."
Distributor: Kino Lorber | Runtime: 92 min | Director: Josef von Sternberg | Screenplay: Josef von Sternberg, Tatsuo Asano | Cast: Akemi Negishi, Tadashi Suganuma, Kisaburo Sawamura, Shôji Nakayama, Jun Fujikawa, Hiroshi Kondô, Shozo Miyashita
Cha-am Jamal, Thailand Archive for July 2010 The process of national reconciliation does not stand a chance even of getting started. It called for a ban on the use of shackles in order to conform with the United Nations’ principle on human rights. Home care from migrant women perfectly answers their needs. Where will they live the last days when the hospital bills have exhausted their life long savings? When they insist on staying underground they get arrested. The legal wage and health welfare do not come with their legal status. Death has to stare you in the face in order to understand her feelings. Thailand is quickly becoming an ageing society with little support for things to come. Most elderly women cannot dream to have this. Cha-am Jamal Pyongyang backhanded the United Nations. International observers determined the events that sank the Cheonan. The government has repeatedly slapped the UN and other diplomatic groups. North Korea is a long-time security threat. Pyongyang should stop its rabble-rousing threats. North Korea denied the act. That was a serious act of war. The dynamics of glaciers that feed rivers are best understood in terms of a mass balance which states essentially that input – output = accumulation. The input term is usually the amount of precipitation in the glacial basin that flows to the glacier. The output term is the meltwater that feeds the river plus evaporation. If the accumulation is positive the glacier is growing and if it is negative it means that the glacier is shrinking. If it is shrinking it could mean one of two things. Either the amount of precipitation is declining relative to the melt rate; or the melt rate is increasing relative to precipitation. There is a big difference between these two scenarios in terms of water flow in downstream rivers. 
In the former case, the water flow in downstream rivers would remain unchanged while in the latter case, there would be an increase in flow possibly associated with rising river levels and flooding. In the absence of rising river levels downstream, it is not possible to conclude that the glacier is retreating because of an increase in the melt rate. Yet, all instances of glacial retreat are presented by the IPCC as an effect of increased melt rate caused by global warming without providing the necessary data on changes in the flow rates of downstream rivers that the glacier feeds. Cha-am Jamal According to Nature News ( , melt water supplies 1.5 times other sources to the Indus and 0.25 times other sources to the Brahmaputra and glacial melt water constitutes 40% of the total melt water with the balance coming from seasonal snowfall. The melt water percentage is 1.5/2.5 = 60% for the Indus and 0.25/1.25 = 20% for the Brahmaputra. The glacial melt water portion of the total melt water is 40% for both rivers. Therefore, the glacial melt water percentage is 0.6*0.4 = 24% for the Indus and 0.2*0.4 = 8% for the Brahmaputra. Cha-am Jamal Reference: Our beaker is on the boil, Bangkok Post, July 21, 2010 In its 2007 assessment of climate change, the IPCC had warned that global warming is causing Himalayan glaciers to melt and recede and that this process, unchecked by their prescribed intervention of carbon emission reduction, would dry up Asia’s great rivers including the Yellow, the Yangtze, the Mekong, and the Ganges and leave more than a billion people without water (Himalayan glacier melts to hit billions of poor, Bangkok Post, December 7, 2009). Skeptics were quick to point out that glacial meltwater plays a very minor role in feeding these rivers and that therefore the loss of glaciers would not affect these rivers in the way postulated by the IPCC. The IPCC was forced to make a full retraction of this assessment. 
Soon thereafter they started looking for rivers in the region that do depend on meltwater from Himalayan glaciers in order to resurrect their glacial-melt agenda. They came up with the Indus and Brahmaputra rivers as possible candidates on the  basis of their dependence on glaciers (Our beaker is on the boil, Bangkok Post, July 21, 2010). The Brahmaputra does receive a greater portion of its water from glacial melt than the Ganges, but at about 8% or so it is still too small a fraction to cause the river to “dry up” without glacial meltwater. The Indus, however, is a different story for there the complete loss of glacial meltwater would cause a 24% decline in flow and that would indeed be a catastrophic impact. There is a small problem with geography, however. The source of these rivers is not in the region where the receding glacier is identified. In particular, the source of the Indus is in the Karakoram range where most glaciers – including the Siachen glacier that feeds the Indus – are growing and advancing and certainly not receding. The IPCC’s case that global warming will cause the Indus and Brahmaputra to run dry is based on data from the wrong glacier and is therefore not valid. It is yet another example where the IPCC has attempted to generalize local data when such generalization is not possible. All glaciers in the Himalayas are not receding. Many are advancing and many more are at steady state – neither advancing nor retreating; but you won’t hear about them from the IPCC because they cannot be used to evoke fear and loathing of carbon dioxide. Cha-am Jamal Such questionable censorship seems to have gained ground.
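As a sanity check, the meltwater-fraction arithmetic in the Indus/Brahmaputra letter above can be reproduced in a few lines. The sketch below is ours; the ratios are the letter's own figures (meltwater 1.5 times other sources for the Indus, 0.25 times for the Brahmaputra, with glacial melt 40% of total meltwater).

```python
# Share of a river's total flow that comes from glacial melt.
# melt_ratio: meltwater relative to other sources (1.5x Indus, 0.25x Brahmaputra).
# glacial_share: glacial portion of total meltwater (40% in the cited figures).
def glacial_fraction(melt_ratio, glacial_share=0.40):
    melt_fraction = melt_ratio / (melt_ratio + 1.0)  # meltwater share of total flow
    return melt_fraction * glacial_share

print(round(glacial_fraction(1.5), 2))   # Indus: 0.6 * 0.4
print(round(glacial_fraction(0.25), 2))  # Brahmaputra: 0.2 * 0.4
```

The 24% figure for the Indus is why the letter treats a complete loss of glacial meltwater there as catastrophic, while the 8% for the Brahmaputra is too small for the river to "dry up" on that account.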
__label__1
0.623116
1. Meteopool 2. Webcams / Europe / Germany / Imbringen (L) Webcam Imbringen The webcam is located inland at a height of 379 m NHN. The ground is visible, allowing clear identification of precipitation. The sky is clearly visible for observing clouds and weather. Sunrise: UTC 05:23 CEST 06:23 Sunset: UTC 17:58 CEST 18:58 Directed to 8 km 8 km 10 km 13 km 31 km
__label__1
0.985995
Hyponoetics - Essays Paranoetic Knowledge Abstract: The basic distinction between rational and Paranoetic Thinking, also called Transrational Thinking, leads to the double aspect of knowledge: rational or acquired knowledge vs. Paranoetic or Transrational Knowledge. The latter is grounded in Hyponoesis (Universal Mind) and is the product of a higher faculty of our mind: Paranoesis (Transrational Thinking). Paranoetic Knowledge needs to be distinguished from mystic experience. The crucial distinction between rational and Paranoetic Thinking (or Transrational Thinking) is the following linguistic aspect: Normally we declare: It is 'I' that thinks, 'I think this or that', 'I use my thinking faculty', etc. We tend to emphasize our person as the agency of thinking. Thinking as a faculty belongs to every human being; it is the specific characteristic of the human being, and therefore we attribute thinking (intellectuality, rationality) to each individual. Furthermore, we recognize the fact that our thinking is distinct from the thinking of another person, not formally but materially; that is, the knowledge I have acquired and have at my disposal is different from the knowledge someone else possesses. Because knowledge is believed to have been acquired or learned in some way or another, it is not astonishing at all that we indubitably relate our thinking to us as individual beings. But this very notion of our thinking being part of our individuality restricts our faculty of thinking considerably. That is why most philosophers observe that our knowledge of the world is limited by nature. This 'by nature' means either the lesser degree of perfection of the mind compared to God's mind and knowledge or, in modern science, the natural constraint of the brain as a bodily organ. 
Since modern neurobiology holds that all knowledge is acquired somehow, as John Locke already asserted, the amount of knowledge a person is able to accumulate is limited, not to mention the fact that we tend to forget acquired knowledge over time. If we consider, on the other hand, the paranoetic aspect of knowledge, which is illimitable and all-encompassing, we come to realize the following: instead of 'I think', as in rational and individual thinking, there is 'IT thinks'. IT stands for a supra-rational (not irrational) and supra-individual unlimited thinking. It is not a particular person with all its natural and inner limitations (the limitations of individuality), but the 'I' becomes one with the universal 'IT', i.e. Hyponoesis (Universal Mind). It is not 'I' as a person who thinks, but it is the whole universe, the Absolute and supreme Mind. The 'I' is extended infinitely to the 'IT'. My individuality ceases to be dominant over my thinking. It is Universal Thinking. The 'I' does not exist anymore, only the 'IT'. The individual form recedes into the original form, the original Mind. Therefore, Paranoetic Knowledge is all-knowledge, omniscience in its purest form. If 'IT' thinks and not 'I', every limitation that is pertinent to my individual form as a human being and to my individual form of thinking is cast off, and my mind becomes Hyponoesis (Universal Mind); my thinking blends into the Universal Thinking and becomes one with it. But this state of thinking should not be confused with states of mystical experience. The latter comprises empirical experience, the experience of becoming and being one with the Godhead or the Absolute. But if we extend our egocentric thinking to a cosmocentric thinking, we do not experience oneness in the sense the mystic does. Only our thinking has completely changed. We are individual by our body, but universal by our mind. 
In our mind we suddenly have a vast and inexhaustible repository of information and knowledge right at our fingertips. It is a different kind of knowledge, not comparable to our rational and acquired knowledge. This knowledge is not within the frame of time and space, not impregnated and restrained with empirical and sensualistic data from our sense perceptions. Paranoetic knowledge does not think in terms of discrete parts of knowledge but rather in terms of so-called Hologemes: the whole, not the parts. The part, however, is not lost in the whole; it is contained in the whole; we possess it through the whole and not, as with rational thinking, through other parts or more complex parts. In rational thinking we never think the whole (see Kant: The apprehension of the manifold of appearance is always successive (KrV, A189/B234)), but only parts (analytical thinking) or complexities (synthetical thinking). The whole, however, with all its parts is only accessible to Transrational Thinking or Paranoetic Thinking. Mystic Experience and Paranoetic Knowledge Both mystic experience and Paranoetic Knowledge transcend the realm of ordinary consciousness and rational thinking. But there are some fundamental differences between these two modes of transcendence: 1. Mystic experience, as the word already states, is an experience of a higher dimensional state of consciousness. It implies empirical data, such as emotions, feelings, and sense perception, which all transcend everyday experience, especially concerning perspicuity and range of perception. The act of Paranoesis (Transrational Thinking), however, does not imply any kind of emotions or psychical experience, but is reduced to cognition and thinking only. Paranoetic Knowledge is not an experience but a cognitional or supra-intellectual act, transcending rational and logical thinking. 2. 
Whereas the mystic experiences oneness physically and psychically by being one with everything and the Godhead, the Transrational Thinker knows that everything is one in its essence or substance, but she does not experience it actually. It is a mental act of understanding, but an understanding that by far surpasses the comprehension of rationally minded people. Whereas the mystic is predominantly enshrouded in an overwhelming experience and thereby does often not understand what she experiences, the Transrational Thinker always knows and understands the higher dimension without being overwhelmed by distracting emotions of experience. 3. The mystic can describe her experience only by means of symbols, metaphors and analogies, but rarely is she capable of articulating what she experienced with the necessary acuity of the mind. She experiences the oneness, but how can she explain it in rational terms, which themselves are restricted to our everyday experience and not to higher states of consciousness? The mystic has no appropriate vocabulary at her disposal. Language is too limited, too rational, too ordinary. But despite this fact, the Transrational Thinker is more able to translate her superior knowledge into rational language, although we cannot make the assumption that what she expounds in rational form will be understood by the community of philosophers or scientists. A philosopher or scientist will rather listen to a "rationalized" account than to a metaphorical and ornate one by a mystic. 4. The knowledge the mystic obtains about what she experiences is not particular, but general or universal knowledge. It does not deal with singular truths but with universal and holistic truth. She understands the truth without being able to give a systematic account of it or to describe it rationally. 
The Transrational Thinker, however, by virtue of her highly developed thinking faculty, is able to render a detailed and systematic description of the truth she has come to know by means of Paranoesis or Transrational Thinking. The Transrational Thinker is omniscient, all-knowing, because by means of Intuition she has access to the cosmic or universal repository of information, that is, to Hyponoesis (Universal Mind). 5. The mystic experiences oneness, and thereby becomes what she experiences; that is, she becomes one with everything she perceives. This is an existential experience. The Transrational Thinker does not experience the oneness existentially but only through her supreme capacity of thinking. She paranoetically grasps the oneness by understanding why everything has to be fundamentally one, why we experience and perceive duality and a plurality of things, and so on. Thus the mystic's experience is a grassroots experience, ontologically changing the world of reality. This expansion of consciousness to a higher reality, this extended spectrum of perception, gives the mystic a more thorough and encompassing experience than the Transrational Thinker. For the latter, there exists only an idealistic or noetic oneness in the comprehension of her extended or higher form of thinking. 6. The mystic transcends all human limitations, those of the body as well as those of her psyche and her life as a whole. The Transrational Thinker only transcends her thinking, whereas the mystic only partially transcends thinking, because thinking is only a minor and negligible part of her comprehensive experience. More often than not, the mystic believes that thinking only obstructs the experience of the Godhead; therefore it has to be relinquished completely so as to surrender oneself wholly to the sweeping experience of oneness. The mystic is not aware of the mind-transcending power of Paranoesis (Transrational Thinking). 7. 
From the above it follows that the mystic does not think or know consciously in this higher state of experience. That, however, is just what the Transrational Thinker does. She is fully aware of her thinking capacity and of the possibilities that are open to her by hooking up to the infinite repository of knowledge of Hyponoesis. 8. That leads me to the final point of difference, maybe the most important one. If we consider the accounts of mystics concerning their experiences, we inevitably find that they cannot attain this experience by will. They describe it as a feat of God's grace that only happens a few precious times in their life. It is a supernatural gift of the heavens, of inestimable value. But here the Transrational Thinker has a great advantage. Once she has developed Paranoetic Thinking by applying a particular method, she can attain this state of thinking by volition, whenever she wants and needs to. Moreover, she could even think for the rest of her life employing this supra-rational mode of thinking without losing contact with rationality or other people. But the mystic, assuming she could keep up the mystic state of experience, would have problems with her life thereafter, because the supreme bliss of this state would have her forget about the daily petty sorrows and problems occupying the minds of normal people.
__label__1
0.510887
Saturday, September 6, 2008 Admitting you made a mistake is difficult for anyone. However, refusing to face up to being wrong or causing a problem can take a toll on your reputation, relationships, and work life. As difficult as it can be, admitting the mistake can allow you and others to move on. Let it go. Don't pressure someone for forgiveness. More serious conversations should be in person. Less serious issues can be handled via phone or email. You can't control others' responses. However, you can make sure YOU move on. Have you noticed how depressed many people around you are? Well, this page is to help you create a happier work environment and make everyone happier, whether it is a friend or a total stranger. Smile, smile, smile. Whenever someone sees a smiling face, it reminds them that they may have reasons to smile themselves, and it will create a happier mood. Greet everyone you see. This will make people feel that they have been noticed and accepted in their environment. Surprise people with small cheery gifts. Homemade cupcakes or a box of doughnuts in the early morning are sure to bring a smile to someone's face. Compliments go a long way. People like it when you notice things about them because, more than likely, they put a lot of thought into it. Promote optimism. If something is going wrong for someone, twist it and show them the positive. Tell them how it would be worse if it went the other way. Small things go a long way when making people happy. Make sure the people you are cheering up want to be in a good mood. Make sure not to kill the mood by saying something really negative or sad. Don't overdo it, or people will think that you are up to something and get the wrong impression. Humulus Lupus
__label__1
0.514574
The arts we believe in and enjoy sharing with you. Group or private instruction available. Mantis Boxing 'Hook in', to the world of Mantis Boxing. Expose yourself to the ancient Chinese Martial Art that has survived the annals of time. Hook, Grapple, and Pluck your way through a journey of radical, highly effective techniques, and skills.  Brazilian Jiu-Jitsu A challenge you can be proud of. Build strength, speed, and agility. Work together in team oriented, high intensity training. Grow stronger, and bond through challenging workouts.
__label__1
0.692011
Scientists from Harvard Medical School claimed that they have finally found the ‘elixir of youth’, which can not only slow down the aging process but also reverse it. “We worked on understanding the mechanisms of aging and conducted experiments on mice. For this purpose we selected mature animals that were older than two years, and began to introduce the experimental drug. To our surprise, in just a week the mice got younger. They became vigorous and energetic, as if they were six months old,” said Prof David Sinclair of Harvard Medical School. The muscles of the rejuvenated mice were in tone, and their cardiac muscles were as new. It’s as if a 60-year-old man turned into a 17-year-old. The coenzyme nicotinamide adenine dinucleotide (NAD) has played a major role in this incredible transformation. In a living organism, NAD serves as a conduit between the nucleus and mitochondria. The latter generates energy, and NAD delivers it to the cell nucleus. At low concentrations of NAD, communication between mitochondria and the nucleus of cells weakens. “This substance is present in our body. However, with age, its concentration becomes lower and lower. We hypothesized that this might be the cause of aging, and introduced a drug to increase the level of this substance in mice,” said Prof Sinclair. Moreover, young mice, which were also given a dose of NAD, became more alert and energetic. At the moment, the scientists have the following tasks to accomplish: to determine how long the rejuvenating effect will hold, to identify side effects of the drug, and to find out whether it may be used in humans. The study’s authors hope to test the drug on humans this year. However, it is worth noting that the cost of a single drop of NAD – which is a daily dose for a mouse – is about $1,000. Therefore, in the case of a successful outcome of further testing, it will be necessary to find a way to reduce its cost to launch mass production of the drug.
__label__1
0.989381
Kate Tinworth Founder and Principal ExposeYourMuseum LLC Kate Tinworth is the founder and Principal of ExposeYourMuseum LLC— a boutique consultancy delivering the tools and data required to better understand current and potential visitors, teams, communities, and audiences. Kate’s approach prioritizes making connections, facilitating conversations, elevating voices, engaging creatively, and strong, clear communication to inspire innovation, inform strategy, and drive decision-making.
__label__1
0.971368
What rhymes or sounds like the word population shift? (noun) a change in the relative numbers of the different groups of individuals making up a population more on Definitions.net » We couldn't find any rhymes for the word population shift. Maybe you were looking for one of these terms? popularly, populate, populated, populating, population, populations, populism, populist, populists, populous Find a translation for population shift in other languages: Select another language: Discuss this population shift rhyme with the community: Use the citation below to add this rhymes to your bibliography: "population shift." Rhymes.net. STANDS4 LLC, 2017. Web. 28 Mar. 2017. <http://www.rhymes.net/rhyme/population%20shift>. Know what rhymes with population shift? Have another rhyming word for population shift? Let us know! Is population shift wrong or has spelling mistakes? Alternative searches for population shift:
__label__1
0.979884
206 synonyms found [vˈɔ͡ɪsləs], [vˈɔ‍ɪsləs], [v_ˈɔɪ_s_l_ə_s] Synonyms for Voiceless: dumb (adjective) dumb, mum, mute, silent, speechless, taciturn, tongue-tied. Other synonyms and related words: Operose, accented, affricate, alveolar, anaudic, aphasic, aphonic, aphonous, apical, apico-alveolar, apico-dental, arduous, articulated, aspirate, aspiration, assimilated, assonance, atonic, back, backbreaking, bad, bare, barytone, bereft, bilabial, blank despondency, blue devils, blues, breathed, breathless, broad, cacuminal, central, cerebral, checked, close, close-mouthed, concentrated, consonant, consonantal, continuant, deaf, deaf person, deafmute, dental, denuded, deprived, despondency, destitute, difficult, disconsolateness, disenfranchised, disfranchised, dismals, dissimilated, doldrums, dorsal, dumb, dumbfounded, dumbstricken, dumbstruck, dumps, empty, flat, front, glide, glossal, glottal, grueling, gruelling, guttural, hard, heavy, high, hope deferred, horrors, hypochondriasis, il penseroso, inarticulate, inaudible, incommunicative, innocent, intemperate, intonated, invalid, involving surds, irrational, knockout, labial, labiodental, labiovelar, laborious, lachrymals, lateral, lax, light, lingual, liquid, low, megrims, melancholia, melancholy, mid, monophthongal, motionless, mumps, muted, muzzled, narrow, nasal, nasalized, noiseless, null and void, occlusive, open, oxytone, palatal, palatalized, partially hearing, pessimism, pharyngeal, pharyngealized, phonemic, phonetic, phonic, pitch, pitched, placid, posttonic, punishing, quiescent, quiet, radical, reserved, reticent, retroflex, rounded, sadness, scant, semivowel, severe, sharp, short, shy, soft, sonant, soundless, spleen, stopped, stressed, stricken dumb, strong, surd, syllabic, tacit, tense, thick, throaty, tight-lipped, toilsome, tonal, toneless, tongueless, tonic, tough, twangy, unaccented, unarticulate, unarticulated, unexpressed, unhearable, unheard, unpronounced, unrounded, unsaid, unsounded, unspoken, 
unstated, unstressed, unsung, unuttered, unverbalised, unverbalized, unvocal, unvocalized, unvoiced, vapors, velar, vocalic, vocoid, voiced, voteless, vowel, vowellike, weak, whispered, wide, wordless, words. Quotes for Voiceless:
__label__1
0.797493
We gratefully acknowledge support from the Simons Foundation and member institutions New submissions [ total of 32 entries: 1-32 ] New submissions for Tue, 28 Mar 17 [1]  arXiv:1703.08595 [pdf] Title: Low Precision Neural Networks using Subband Decomposition Comments: Presented at CogArch Workshop, Atlanta, GA, April 2016 Subjects: Learning (cs.LG) Large-scale deep neural networks (DNNs) have been successfully used in a number of tasks from image recognition to natural language processing. They are trained using large training sets on large models, making them computationally and memory intensive. As such, there is much interest in research on faster training and test times. In this paper, we present a unique approach using lower-precision weights for a more efficient and faster training phase. We separate imagery into different frequency bands (e.g. with different information content) such that the neural net can better learn using fewer bits. We present this approach as a complement to existing methods such as pruning network connections and encoding learning weights. We show results where this approach supports more stable learning with a 2-4X reduction in precision and a 17X reduction in DNN parameters. [2]  arXiv:1703.08667 [pdf, ps, other] Title: Exploration--Exploitation in MDPs with Options Subjects: Learning (cs.LG) While a large body of empirical results shows that temporally-extended actions and options may significantly affect the learning performance of an agent, the theoretical understanding of how and when options can be beneficial in online reinforcement learning is relatively limited. In this paper, we derive upper and lower bounds on the regret of a variant of UCRL using options. 
While we first analyze the algorithm in the general case of semi-Markov decision processes (SMDPs), we show how these results can be translated to the specific case of MDPs with options and we illustrate simple scenarios in which the regret of learning with options can be \textit{provably} much smaller than the regret suffered when learning with primitive actions. [3]  arXiv:1703.08774 [pdf, other] Title: Who Said What: Modeling Individual Labelers Improves Classification Data are often labeled by many different experts with each expert only labeling a small fraction of the data and each data point being labeled by several experts. This reduces the workload on individual experts and also gives a better estimate of the unobserved ground truth. When experts disagree, the standard approaches are to treat the majority opinion as the correct label or to model the correct label as a distribution. These approaches, however, do not make any use of potentially valuable information about which expert produced which label. To make use of this extra information, we propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. Here we show that our approach leads to improvements in computer-aided diagnosis of diabetic retinopathy. We also show that our method performs better than competing algorithms by Welinder and Perona, and by Mnih and Hinton. Our work offers an innovative approach for dealing with the myriad real-world settings that use expert opinions to define labels for training. 
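As a rough illustration of the averaging idea in the "Who Said What" abstract above, the sketch below combines per-expert class probabilities using softmax-normalized reliability weights. The weights and probabilities here are made-up values for illustration; the paper learns per-expert models and their combination weights jointly rather than fixing them by hand:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def combine(expert_probs, reliability_scores):
    """Weight each expert's predicted class probabilities by a
    (softmax-normalized) reliability score, then sum per class."""
    w = softmax(reliability_scores)
    n_classes = len(expert_probs[0])
    return [sum(w[i] * p[c] for i, p in enumerate(expert_probs))
            for c in range(n_classes)]

# Three hypothetical experts grading the same case; expert 2 is deemed most reliable.
probs = [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]]
combined = combine(probs, reliability_scores=[0.0, 0.0, 2.0])
```

The combined distribution leans toward the prediction of the most reliable expert, which is the intended effect of learned averaging weights.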
[4]  arXiv:1703.08816 [pdf, other] Title: Uncertainty Quantification in the Classification of High Dimensional Data Comments: 33 pages, 14 figures Subjects: Learning (cs.LG); Machine Learning (stat.ML) Classification of high dimensional data finds wide-ranging applications. In many of these applications equipping the resulting classification with a measure of uncertainty may be as important as the classification itself. In this paper we introduce, develop algorithms for, and investigate the properties of, a variety of Bayesian models for the task of binary classification; via the posterior distribution on the classification labels, these methods automatically give measures of uncertainty. The methods are all based around the graph formulation of semi-supervised learning. We provide a unified framework which brings together a variety of methods which have been introduced in different communities within the mathematical sciences. We study probit classification, generalize the level-set method for Bayesian inverse problems to the classification setting, and generalize the Ginzburg-Landau optimization-based classifier to a Bayesian setting; we also show that the probit and level set approaches are natural relaxations of the harmonic function approach. We introduce efficient numerical methods, suited to large data-sets, for both MCMC-based sampling as well as gradient-based MAP estimation. Through numerical experiments we study classification accuracy and uncertainty quantification for our models; these experiments showcase a suite of datasets commonly used to evaluate graph-based semi-supervised learning algorithms. [5]  arXiv:1703.08840 [pdf, other] Title: Inferring The Latent Structure of Human Decision-Making from Raw Visual Inputs Comments: 10 pages, 6 figures The goal of imitation learning is to match example expert behavior, without access to a reinforcement signal. 
Expert demonstrations provided by humans, however, often show significant variability due to latent factors that are not explicitly modeled. We introduce an extension to the Generative Adversarial Imitation Learning method that can infer the latent structure of human decision-making in an unsupervised way. Our method can not only imitate complex behaviors, but also learn interpretable and meaningful representations. We demonstrate that the approach is applicable to high-dimensional environments including raw visual inputs. In the highway driving domain, we show that a model learned from demonstrations is able to both produce different styles of human-like driving behaviors and accurately anticipate human actions. Our method surpasses various baselines in terms of performance and functionality. [6]  arXiv:1703.08933 [pdf, other] Title: Multiple Instance Learning with the Optimal Sub-Pattern Assignment Metric Subjects: Learning (cs.LG) Multiple instance data are sets or multi-sets of unordered elements. Using metrics or distances for sets, we propose an approach to several multiple instance learning tasks, such as clustering (unsupervised learning), classification (supervised learning), and novelty detection (semi-supervised learning). In particular, we introduce the Optimal Sub-Pattern Assignment metric to multiple instance learning so as to provide versatile design choices. Numerical experiments on both simulated and real data are presented to illustrate the versatility of the proposed solution. [7]  arXiv:1703.08970 [pdf, other] Title: Multimodal deep learning approach for joint EEG-EMG data compression and classification Comments: IEEE Wireless Communications and Networking Conference (WCNC), 2017 Subjects: Learning (cs.LG) In this paper, we present a joint compression and classification approach of EEG and EMG signals using a deep learning approach. 
Specifically, we build our system on the deep autoencoder architecture, which is designed not only to extract discriminant features in the multimodal data representation but also to reconstruct the data from the latent representation using encoder-decoder layers. Since an autoencoder can be seen as a compression approach, we extend it to handle multimodal data at the encoder layer, reconstructed and retrieved at the decoder layer. We show through experimental results that exploiting both multimodal data inter-correlation and intra-correlation 1) significantly reduces signal distortion, particularly at high compression levels, and 2) achieves better accuracy in classifying EEG and EMG signals recorded and labeled according to the sentiments of the volunteer. [8]  arXiv:1703.09068 [pdf, other] Title: Automatic Decomposition of Self-Triggering Kernels of Hawkes Processes Subjects: Learning (cs.LG) Hawkes Processes capture self- and mutual-excitation between events, where the arrival of one event makes future ones more likely to happen in time-series data. Identification of the temporal covariance kernel can reveal the underlying structure and better predict future events. In this paper, we present a new framework to represent time-series events with a composition of self-triggering kernels of Hawkes Processes. That is, the input time-series events are decomposed into multiple Hawkes Processes with heterogeneous kernels. Our automatic decomposition procedure is composed of three main steps: (1) discretized kernel estimation through a frequency-domain inversion equation associated with the covariance density, (2) greedy kernel decomposition through four base kernels and their combinations (addition and multiplication), and (3) automated report generation. We demonstrate that the new automatic decomposition procedure performs better at predicting future events than the existing framework on real-world data. 
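For readers unfamiliar with Hawkes processes, the conditional intensity with the common exponential self-triggering kernel can be sketched as below. The exponential kernel and all parameter values are illustrative only; the abstract's framework decomposes events into several heterogeneous base kernels rather than assuming this single form:

```python
import math

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity of a Hawkes process with an exponential
    self-triggering kernel:
        lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Each past event temporarily raises the rate of future events,
    and the excitation decays exponentially with elapsed time."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in events if ti < t)

events = [1.0, 1.5, 3.0]
lam = hawkes_intensity(3.1, events)  # high: an event just occurred at t=3.0
```

Before the first event the intensity is just the baseline `mu`, and it relaxes back toward `mu` as time passes without new events.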
[9]  arXiv:1703.09146 [pdf, other] Title: GPU Activity Prediction using Representation Learning Comments: Proceedings of the 33 rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 48. Copyright 2016 by the author(s) Subjects: Learning (cs.LG) GPU activity prediction is an important and complex problem. This is due to the high level of contention among thousands of parallel threads. This problem was mostly addressed using heuristics. We propose a representation learning approach to address this problem. We model any performance metric as a temporal function of the executed instructions with the intuition that the flow of instructions can be identified as distinct activities of the code. Our experiments show high accuracy and non-trivial predictive power of representation learning on a benchmark. [10]  arXiv:1703.09197 [pdf, other] Title: Deep Architectures for Modulation Recognition Comments: 7 pages, 14 figures, to be published in proceedings of IEEE DySPAN 2017 Subjects: Learning (cs.LG) Cross-lists for Tue, 28 Mar 17 [11]  arXiv:1703.08581 (cross-list from cs.CL) [pdf, other] Title: Sequence-to-Sequence Models Can Directly Transcribe Foreign Speech Comments: Submitted to Interspeech 2017 We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. 
A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points. [12]  arXiv:1703.08612 (cross-list from cs.RO) [pdf, other] Title: Jointly Optimizing Placement and Inference for Beacon-based Localization Subjects: Robotics (cs.RO); Learning (cs.LG) The ability of robots to estimate their location is crucial for a wide variety of autonomous operations. In settings where GPS is unavailable, range- or bearing-only observations relative to a set of fixed beacons provide an effective means of estimating a robot's location as it navigates. The accuracy of such a beacon-based localization system depends both on how beacons are spatially distributed in the environment, and how the robot's location is inferred based on noisy measurements of range or bearing. However, it is computationally challenging to search for a placement and an inference strategy that, together, are optimal. Existing methods decouple these decisions, forgoing optimality for tractability. We propose a new optimization approach to jointly determine the beacon placement and inference algorithm. We model inference as a neural network and incorporate beacon placement as a differentiable neural layer. This formulation allows us to optimize placement and inference by jointly training the inference network and beacon layer. We evaluate our method on different localization problems and demonstrate performance that exceeds hand-crafted baselines. 
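As context for the beacon-localization abstract above, a classical baseline infers position from range measurements by linearizing the range equations; a minimal 2-D sketch with three hypothetical beacons is below. The paper replaces this kind of hand-crafted inference with a learned neural network, optimized jointly with the beacon placement:

```python
import math

def trilaterate(beacons, ranges):
    """Estimate a 2-D position from ranges to three fixed beacons.
    Subtracting the first range equation from the others yields a
    linear 2x2 system, solved here by Cramer's rule. This is a
    classical baseline, not the paper's learned inference network."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = (x2**2 + y2**2 - x1**2 - y1**2) - (r2**2 - r1**2)
    b2 = (x3**2 + y3**2 - x1**2 - y1**2) - (r3**2 - r1**2)
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
truth = (3.0, 4.0)
ranges = [math.dist(truth, b) for b in beacons]  # noise-free ranges
estimate = trilaterate(beacons, ranges)
```

With noise-free ranges the estimate recovers the true position exactly; with noisy ranges the quality of the estimate depends on the beacon geometry, which is precisely what joint placement-and-inference optimization exploits.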
[13]  arXiv:1703.08710 (cross-list from cs.CV) [pdf, other]
Title: Count-ception: Counting by Fully Convolutional Redundant Counting
Comments: Under Review
Counting objects in digital images is a process that should be replaced by machines. This tedious task is time consuming and prone to errors due to fatigue of human annotators. The goal is to have a system that takes as input an image and returns a count of the objects inside and justification for the prediction in the form of object localization. We repose a problem, originally posed by Lempitsky and Zisserman, to instead predict a count map which contains redundant counts based on the receptive field of a smaller regression network. The regression network predicts a count of the objects that exist inside this frame. By processing the image in a fully convolutional way, each pixel is accounted for some number of times: the number of windows which include it, which is the size of each window (i.e., 32x32 = 1024). To recover the true count, take the average over the redundant predictions. Our contribution is redundant counting instead of predicting a density map in order to average over errors. We also propose a novel deep neural network architecture adapted from the Inception family of networks called the Count-ception network. Together our approach results in a 20% gain over the state of the art method by Xie, Noble, and Zisserman in 2016.

[14]  arXiv:1703.08836 (cross-list from cs.CV) [pdf, other]
Title: Learned multi-patch similarity
Estimating a depth map from multiple views of a scene is a fundamental task in computer vision. As soon as more than two viewpoints are available, one faces the very basic question how to measure similarity across >2 image patches. Surprisingly, no direct solution exists; instead it is common to fall back to more or less robust averaging of two-view similarities.
Encouraged by the success of machine learning, and in particular convolutional neural networks, we propose to learn a matching function which directly maps multiple image patches to a scalar similarity score. Experiments on several multi-view datasets demonstrate that this approach has advantages over methods based on pairwise patch similarity.

[15]  arXiv:1703.08838 (cross-list from cs.DC) [pdf, other]
Title: Distributed Voting/Ranking with Optimal Number of States per Node
Considering a network with $n$ nodes, where each node initially votes for one (or more) choices out of $K$ possible choices, we present a Distributed Multi-choice Voting/Ranking (DMVR) algorithm to determine either the choice with maximum vote (the voting problem) or to rank all the choices in terms of their acquired votes (the ranking problem). The algorithm consolidates node votes across the network by updating the states of interacting nodes using two key operations, the union and the intersection. The proposed algorithm is simple, independent from network size, and easily scalable in terms of the number of choices $K$, using only $K\times 2^{K-1}$ nodal states for voting, and $K\times K!$ nodal states for ranking. We prove the number of states to be optimal in the ranking case; this optimality is conjectured to also apply to the voting case. The time complexity of the algorithm is analyzed in complete graphs. We show that the time complexity for both ranking and voting is $O(\log(n))$ for given vote percentages, and is inversely proportional to the minimum of the vote percentage differences among various choices.

[16]  arXiv:1703.08961 (cross-list from cs.CV) [pdf, ps, other]
Title: Scaling the Scattering Transform: Deep Hybrid Networks
Authors: Edouard Oyallon (DI-ENS), Eugene Belilovsky (CVN, GALEN), Sergey Zagoruyko (ENPC)
We use the scattering network as a generic and fixed initialization of the first layers of a supervised hybrid deep network.
We show that early layers do not necessarily need to be learned, providing the best results to-date with pre-defined representations while being competitive with Deep CNNs. Using a shallow cascade of 1x1 convolutions, which encodes scattering coefficients that correspond to spatial windows of very small sizes, permits obtaining AlexNet accuracy on the imagenet ILSVRC2012. We demonstrate that this local encoding explicitly learns invariance w.r.t. rotations. Combining scattering networks with a modern ResNet, we achieve a single-crop top 5 error of 11.4% on imagenet ILSVRC2012, comparable to the Resnet-18 architecture, while utilizing only 10 layers. We also find that hybrid architectures can yield excellent performance in the small sample regime, exceeding their end-to-end counterparts, through their ability to incorporate geometrical priors. We demonstrate this on subsets of the CIFAR-10 dataset and by setting a new state-of-the-art on the STL-10 dataset.

[17]  arXiv:1703.09185 (cross-list from cs.DC) [pdf, other]
Title: Private Learning on Networks: Part II
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Learning (cs.LG); Optimization and Control (math.OC)
Widespread deployment of distributed machine learning algorithms has raised new privacy challenges. The focus of this paper is on improving privacy of each participant's local information (such as dataset or loss function) while collaboratively learning the underlying model. We present two iterative algorithms for privacy preserving distributed learning. Our algorithms involve adding structured randomization to the state estimates. We prove deterministic correctness (in every execution) of our algorithm despite the iterates being perturbed by non-zero mean random variables. We motivate privacy using privacy analysis of a special case of our algorithm referred to as the Function Sharing strategy (presented in [1]).
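The recovery rule behind the Count-ception entry [13] above is simple arithmetic: with suitable padding, every pixel falls inside exactly as many windows as the window size, so summing all per-window counts and dividing by that coverage returns the total. A 1-D sketch (our construction; the paper's learned per-window regression network is replaced here by an exact count):

```python
# 1-D sketch of the redundant-counting recovery rule: pad so that every
# original position is covered by exactly w sliding windows, count per
# window, then divide the grand total by w.  (Toy construction: the paper's
# learned per-window regression network is replaced by an exact count.)
def redundant_count(objects, w):
    n = len(objects)
    padded = [0] * (w - 1) + list(objects) + [0] * (w - 1)
    # one window per start position; each original pixel lies in w windows
    window_counts = [sum(padded[s:s + w]) for s in range(n + w - 1)]
    return sum(window_counts) / w

objects = [0, 1, 0, 0, 1, 1, 0, 0, 0, 1]   # four objects
print(redundant_count(objects, w=5))        # 4.0
```

Errors made by a real regression network on individual windows would be averaged out by the same division, which is the stated motivation for predicting redundant counts rather than a single density map.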
[18]  arXiv:1703.09194 (cross-list from stat.ML) [pdf, other]
Title: Sticking the Landing: An Asymptotically Zero-Variance Gradient Estimator for Variational Inference
Subjects: Machine Learning (stat.ML); Learning (cs.LG)
We propose a simple and general variant of the standard reparameterized gradient estimator for the variational evidence lower bound. Specifically, we remove a part of the total derivative with respect to the variational parameters that corresponds to the score function. Removing this term produces an unbiased gradient estimator whose variance approaches zero as the approximate posterior approaches the exact posterior. We analyze the behavior of this gradient estimator theoretically and empirically, and generalize it to more complex variational distributions such as mixtures and importance-weighted posteriors.

[19]  arXiv:1703.09202 (cross-list from stat.ML) [pdf, other]
Title: Biologically inspired protection of deep networks from adversarial attacks
Comments: 11 pages
Subjects: Machine Learning (stat.ML); Learning (cs.LG); Neurons and Cognition (q-bio.NC)
Inspired by biophysical principles underlying nonlinear dendritic computation in neural circuits, we develop a scheme to train deep neural networks to make them robust to adversarial attacks. Our scheme generates highly nonlinear, saturated neural networks that achieve state of the art performance on gradient based adversarial examples on MNIST, despite never being exposed to adversarially chosen examples during training. Moreover, these networks exhibit unprecedented robustness to targeted, iterative schemes for generating adversarial examples, including second-order methods. We further identify principles governing how these networks achieve their robustness, drawing on methods from information geometry. We find these networks progressively create highly flat and compressed internal representations that are sensitive to very few input dimensions, while still solving the task.
Moreover, they employ highly kurtotic weight distributions, also found in the brain, and we demonstrate how such kurtosis can protect even linear classifiers from adversarial attack.

Replacements for Tue, 28 Mar 17

[20]  arXiv:1610.06276 (replaced) [pdf, other]
Title: Modeling Scalability of Distributed Machine Learning
Comments: 6 pages, 4 figures, appears at ICDE 2017

[21]  arXiv:1611.03158 (replaced) [pdf, other]
Title: Using Neural Networks to Compute Approximate and Guaranteed Feasible Hamilton-Jacobi-Bellman PDE Solutions
Comments: Submitted to IEEE Conference on Decision and Control, 2017
Subjects: Learning (cs.LG)

[22]  arXiv:1703.05840 (replaced) [pdf, other]
Title: Conditional Accelerated Lazy Stochastic Gradient Descent
Comments: 33 pages, 9 figures
Subjects: Learning (cs.LG); Machine Learning (stat.ML)

[23]  arXiv:1703.06182 (replaced) [pdf, other]
Title: Deep Decentralized Multi-task Multi-Agent Reinforcement Learning under Partial Observability
Subjects: Learning (cs.LG); Artificial Intelligence (cs.AI)

[24]  arXiv:1703.06959 (replaced) [pdf, other]
Title: CSI: A Hybrid Deep Model for Fake News

[25]  arXiv:1606.00611 (replaced) [pdf, other]
Title: Recursive Autoconvolution for Unsupervised Learning of Convolutional Neural Networks
Comments: 8 pages, accepted to International Joint Conference on Neural Networks (IJCNN 2017)

[26]  arXiv:1611.01540 (replaced) [pdf, other]
Title: Topology and Geometry of Half-Rectified Network Optimization
Comments: 19 Pages (10 main + Appendices), 4 Figures, 1 Table, Published as a conference paper at ICLR 2017
Subjects: Machine Learning (stat.ML); Learning (cs.LG)

[27]  arXiv:1611.01708 (replaced) [pdf, other]
Title: Detecting Dependencies in Sparse, Multivariate Databases Using Probabilistic Programming and Non-parametric Bayes

[28]  arXiv:1611.06534 (replaced) [pdf, other]
Title: Linear Thompson Sampling Revisited
Subjects: Machine Learning (stat.ML); Learning (cs.LG)

[29]  arXiv:1612.01086 (replaced) [pdf, other]
Title: Deep Learning of Robotic Tasks without a Simulator using Strong and Weak Human Supervision
Subjects: Artificial Intelligence (cs.AI); Learning (cs.LG); Robotics (cs.RO)

[30]  arXiv:1701.08716 (replaced) [pdf, other]
Title: Does Weather Matter? Causal Analysis of TV Logs
Comments: Companion of the 26th International World Wide Web Conference
Subjects: Computers and Society (cs.CY); Learning (cs.LG)

[31]  arXiv:1702.05043 (replaced) [pdf, other]
Title: Unbiased Online Recurrent Optimization
Comments: 11 pages, 5 figures

[32]  arXiv:1703.01253 (replaced) [pdf, other]
Title: Machine Learning on Sequential Data Using a Recurrent Weighted Average
Subjects: Machine Learning (stat.ML); Learning (cs.LG)

[ total of 32 entries: 1-32 ]
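The "Sticking the Landing" estimator of entry [18] above can be checked by hand on the simplest Gaussian case (a toy construction, not the paper's experiments): with p(z) = N(0, 1) and q(z) = N(mu, 1), dropping the score-function part of the total derivative collapses the per-sample gradient to a constant, i.e. zero variance, and that constant vanishes once q matches the exact posterior.

```python
import random

random.seed(1)

# Toy setting: p(z) = N(0, 1), q(z) = N(mu, 1), reparameterized as
# z = mu + e with e ~ N(0, 1).  ELBO = E_q[log p(z) - log q(z)].
mu = 0.7
eps = [random.gauss(0, 1) for _ in range(10000)]

# Standard reparameterized estimator: d/dmu [log p(mu+e) - log q(mu+e; mu)].
# Here the -log q term contributes a path piece (+e) and a score piece (-e)
# that cancel, leaving -(mu + e): unbiased, per-sample variance 1.
std = [-(mu + e) for e in eps]

# "Sticking the landing": drop the score piece (-e) and keep only path
# derivatives, giving -(mu + e) + e = -mu for every single sample.
stl = [-(mu + e) + e for e in eps]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(mean(std), var(std))  # close to -0.7, variance close to 1
print(mean(stl), var(stl))  # -0.7 for every sample, variance ~ 0
```

At mu = 0, where q equals the exact posterior, the STL estimator is identically zero for every sample, which is the asymptotically-zero-variance behavior the title refers to.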
From rules to skills

Becoming a better software engineer means you need to go beyond just following rules. That means learning principles, like the fact you only have a limited ability to keep details in your head. But you also need to learn new skills. Some of these skills are tied to particular technologies: applying new tools, learning new languages. But there are other, less obvious skills you will need to learn. One that I've already talked about is knowing when to ask for help, part of the larger skill of learning. And if you read that post closely you'll notice the reliance on other skills, like estimation and planning. Another critical skill for any programmer is writing: taking a vague notion and turning it into a reasoned argument. Writing, as William Zinsser said, is thinking on paper, though these days it's much more likely to be on a computer. There are many books on writing, so to kick off what I hope to be a series of book reviews I'm going to be reviewing a particularly excellent one next week.
Thermus aquaticus is a species of bacteria found living in the hot springs in Yellowstone National Park. Hot springs typically are areas characterized by low oxygen levels, low light levels and high concentrations of sulfur and calcium carbonate. Based on the way bacteria live in or near hot springs, which of the following would be the most reasonable explanation for how they create food?

Through the process of photorespiration by using oxygen and light energy to produce carbon dioxide.
Through the process of radiosynthesis by using gamma radiation and melanin to create energy for growth.
Through the process of photosynthesis by using energy from the sun, water, and carbon dioxide to make glucose.
Through the process of chemosynthesis by combining hydrogen sulfide with oxygen and carbon dioxide to make carbohydrates.
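As background (an addition for context, not part of the original question item): the textbook sulfur-based chemosynthesis reaction, in which hot-spring bacteria combine hydrogen sulfide, oxygen, and carbon dioxide into carbohydrate, balances as

```latex
\mathrm{CO_2} + 4\,\mathrm{H_2S} + \mathrm{O_2} \longrightarrow \mathrm{CH_2O} + 4\,\mathrm{S} + 3\,\mathrm{H_2O}
```

Energy released by oxidizing hydrogen sulfide substitutes for the sunlight that photosynthesis would otherwise require.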
Sentence Examples with the word leeward side

Limnaea and Planorbis); the existence of belts of dead poplars, patches of dead and moribund tamarisks, and vast expanses of withered reeds, all these crowning the tops of the jardangs, never found in the wind-scooped furrows; the presence of ripple-marks of aqueous origin on the leeward side of the clay terraces and in other wind-sheltered situations; and, in fact, by the general conformation, contour lines, and shapes of the deserts as a whole.

Over the whole width of the country from coast to coast, or of the Welsh mountain ranges only, this is so; but it is nevertheless true that the leeward side of an individual valley or range of hills generally receives more rain than the windward side.

From this cause also, therefore, the leeward side of the valley receives more rain than the windward side.

The sugar farms are mostly on the islands of Hawaii, Oahu, Maui and Kauai, at the bases of mountains; those on the leeward side have the better soil, but require much more irrigating.

Or leeward side of the Olympics.
UN meets to discuss new global restrictions on economic activity

United Nations bureaucrats are meeting in Doha, Qatar this week for a new global summit seeking to impose global taxes and caps on carbon dioxide emitted through economic activity. The Obama administration has kicked off the event by promising aggressive new actions to limit economic activity in the United States. The Guardian reports that Obama administration representative Jonathan Pershing told delegates “the Obama administration has taken a series of steps, including sharply increasing fuel efficiency standards for cars and trucks, and made good on promises of climate financing for poor countries.” The Guardian also notes Obama’s Cap and Tax scheme was killed in the U.S. Senate. That was due to the efforts of ATP supporters and our allies in contacting and challenging wavering senators. The UN has taken past actions to limit carbon dioxide and curb economic activity, but none of these have affected global climate. Rather than accept the scientific proof that human activity is not altering the global climate, UN bureaucrats are instead using their failure to suggest they don’t have enough power, and are now pushing even more draconian restrictions on economic activity.
Why Many From-Scratch Cities Are Failing to Thrive

Rome. Paris. New York City. Beijing. Some of the most powerful and beautiful cities are centuries old, with some having been built up and outwards for thousands of years. Many new, built-from-scratch cities may not be destined for the same success — in fact, hundreds of shiny, modern metropolises lay dormant in spite of high hopes and intelligent design. With global urban population expected to rise to six billion by 2050 from 3.9 billion today, new cities will be essential for growing communities to form and thrive. Public and private projects around the world are aiming to design and build these idealistic cities of the future, but many have proven flawed in both approach and outcome.

Flawed planning, hazy results

A Hologram for the King, a Dave Eggers novel recently made into a film starring Tom Hanks, portrays a fictionalized example of how new cities can fail to get off the ground. In it, the protagonist Alan takes a trip to Saudi Arabia to present his company’s IT system, which he hopes will be adopted by the in-the-works King Abdullah Economic City. Throughout the course of the story the city remains in a state of limbo, likely never to live up to the King’s exaggerated claims or marketing promises. King Abdullah Economic City (KAEC, pronounced “cake”) is real: though perhaps not as bleak as rendered in the book, the desert megacity has been underway since 2005 as one of four new cities proposed to control sprawl and congestion. The city is supposed to become larger than Washington DC, but remains only 15 percent complete, having gained only several thousand inhabitants in the decade since its conception. Other Middle Eastern countries have traversed this road already to minimal success. The United Arab Emirates’ carbon-neutral smart city Masdar, though completed, remains virtually empty to this day.
Built to house 50,000, Masdar has a population of only 1,000, consisting mostly of students attending the Masdar Institute of Science and Technology there. China is perhaps the poster child for from-scratch cities: the nation has used more cement in the last three years than the US in the entire 20th century, and invested more time, money, and space into brand new cities than any other country. Called “overnight cities” by some, a fleet of metropolises has popped up across rural land in anticipation of rapid population growth. Roughly 600 new cities have been built since 1949, many in the 80s and early aughts. Hundreds of these remain empty or under-populated, often referred to as “ghost cities” or compared to dystopian fantasy settings. The buildings shine, but the streets echo; they are eco-friendly, but until they are people-friendly their sustainability is a moot point. China’s top-down approach to urban planning is smart in theory but lacking in practice: you can plan a city all you want, but over half the appeal of cities is their community and culture, which are inherently organic. Even the best marketing plan can’t disguise the deficit. In spite of China’s struggles with the matter, down in India, Prime Minister Narendra Modi has made smart cities a major goal of his administration. One such city is Lavasa, a completely privately run metropolis. Like the others, it is unfinished. Visitors remark that it is more suitable for vacation than for living, as it lacks adequate hospital and schooling systems. And like many other new cities, it doesn’t solve problems like poverty because it’s too expensive to live there. Other high-tech, eco-friendly city projects — like Portugal’s fully sensor-embedded PlanIT Valley — have halted construction over economic concerns. Whether public or private, from-scratch cities face similar issues: attracting people, solving problems, and investing large sums into an uncertain fate.
People bringing promise

Why is it, exactly, that historic cities flourish while new ones flounder? Old cities face troubles of all sorts, especially as older infrastructure crumbles, populations fluctuate and architecture becomes outdated. Systems fail, buildings overcrowd, and yet they remain resilient in their complexity. They say Rome wasn’t built in a day. This is obviously true: modern Rome was built over the course of two and a half thousand years, and is thus one of the oldest continuously occupied sites in Europe. Even newer cities like New York have had a couple of centuries to develop character, layers and intricacy. New cities may never have the chance to make it to that status, and it doesn’t help that they lack the appeal and vibrancy of old cities. As the architect of the still-budding city Lavasa said in an interview, “soul is something that the city develops over time.” The older the city, the stronger the soul. The location of age-old cities also reflects the cream of the crop in real estate: they are natural and ideal spots with the best weather conditions and access to water supply and trade. Their economies and communities have thus developed both financial and cultural capital over the ages, not out of a goal of perfection but out of necessity. That said, not all recent from-scratch cities have faltered or necessarily will. South Korea’s Songdo is among the most successful so far: whether by design or chance, it has reached a population of 70,000, which is expected to triple by 2018. Though it’s largely a blank slate still, Koreans already accustomed to the high-tech slickness of cities like Seoul may be ideal adopters to fill it out with the color and energy it needs. Songdo may be an outlier. Either way, the planet needs more cities. Perhaps people simply don’t want to live in cities of the future — not yet anyway. What is the solution?
Unfortunately, I don’t think there is a hard and fast rule — it may be that time and population growth alone do the trick. Even so, the territory is uncharted and the variables are many. Knowing what we know about people, about culture, and about urban planning, better practice should at least be in reach. As Jane Jacobs wrote in The Death and Life of Great American Cities, “Cities have the capability of providing something for everybody, only because, and only when, they are created by everybody.” Perhaps this is the core issue: city architects should plan and build alongside people rather than simply for them. There must be room to go off the books, room for the unexpected, room for a little bit of chaos to seep between the buildings. If urban planners have to compromise their ideals to get there, it may just be worth it: without people, a city is simply a glorified diorama. With them — flaws and all — it can develop a spirit along with its skyscrapers.

6 Green Innovations That Could Revolutionize Urban Planning

The world’s cities are in constant flux, being built or built upon to accommodate fast-growing urban populations. The standard approach to urban design has changed, as it has countless times throughout the ages. Today, a new trend has dawned: sustainable design for greener cities, and ultimately a greener planet. Sustainable design boils down to two main goals: conserving energy and reducing waste. There are many innovations that deliver these goals that are worth paying attention to. After all, they could be staples in the future of urban design, construction, and renovation. As a seasoned real estate entrepreneur, I find it useful to watch these developments carefully. Here are six innovations that, though not necessarily new, I believe may explode in popularity as cities are designed and developed with sustainability in mind.
1. Green roofs

Not unlike the habitats of the Hobbit’s Shire, green roofs are structures either partially or fully covered by vegetation. Why? The benefits are extraordinary. Green roofs limit the need for heating and cooling, filter pollutants from the air, insulate buildings, and offer extra square footage for agriculture. They also help mitigate the “heat island” effect in urban spaces — when cities are hotter than the surrounding land — by lowering temperatures. The modern trend started in Germany in the 1960s and spread to other European cities, many of which are known for their sustainable initiatives. It’s an effective use of space that reduces energy consumption and adds new functionality to formerly barren roofs. North America also has a growing market for green roofs and other types of eco-friendly “living architecture.”

2. High-speed transport

Elon Musk’s future-forward Hyperloop has been in talks for years, and recently the much-hyped high-speed rail had its first public test. It could take many years to be fully realized, but it’s not the first of its kind: also known as bullet trains, high-speed railways can be found in Japan, China, France, Germany, Russia, South Korea, and the US, among other countries. Though expensive to build, high-speed rails are generally eco-friendly and save on greenhouse emissions by providing a speedy alternative to more fuel-intensive transport. Just look to California, where high speed rails run on electricity and reduce the need for cars. In future cities, mitigating the need for vehicular and air travel by implementing high speed rails instead will save dramatically on energy usage and cut pollution, too.

3. Floating buildings

With so much land overtaken by human activity, expanding onto the water could preserve greenery for agriculture and other uses. There are various types of floating architecture designed precisely to be eco-friendly urban solutions.
Self-sustaining floating house units already exist: for example, the WaterNest 100 by EcoFloLife is made of 98 percent recyclable materials, with photovoltaic panels embedded in the rooftop for solar energy. This type of innovation could work for an entire city, in theory. In fact, various eco-friendly floating cities have been designed, including Belgian architect Vincent Callebaut’s Lilypad, proposed as a city for climate refugees, and Silt Lake City, which would float atop the Nile River.

4. Water recycling

Speaking of water, droughts and shortages are predicted to become increasingly common as populations grow, supplies dwindle and the planet warms. Americans in particular use 100 gallons per day, 95 percent of which is wasted. Certainly there is a more efficient way to conserve water, in and out of cities. Greywater refers to water in sinks, showers, washing machines, etc.: essentially all streams but toilets. Greywater, along with stormwater and even wastewater, can be better managed and recycled for reuse in urban settings through large-scale systems and smaller-scale innovations like this high-tech shower that recycles water as you wash.

5. Solar solutions

Solar is one of the fastest growing forms of sustainable energy and one of the most promising too. But for all the hype, it faces some key problems: mainly, the fact that sunlight isn’t always a guarantee. Even so, solar technology is becoming better and more efficient, and more easily stored for not-so-sunny hours. One Swedish company is able to power 24 homes with one dish, and Tesla now offers a solar battery at the most affordable price yet. Individual solutions like these could be scaled up for implementation in urban areas, like this solar road used in the Netherlands that generates enough power for a year’s worth of electricity.

6. Microgrids

Solar energy is great for easing reliance on the electrical grid, but grids will likely be necessary in powering cities for a long time.
However, there are less wasteful ways to provide electricity, like the use of microgrids for example. Microgrids are small, decentralized energy systems that collect, store and distribute electricity in an even and balanced way. Where large-scale power plants are often powered by fossil fuels, localized grids are better suited to sustainable energy sources like solar and wind, and can act as backup in case of blackouts. For cities, a network of microgrids would waste less energy and derive it from a variety of renewable sources. The market for microgrids is expected to grow to $40 billion by 2020. These are just several of the innovations that are likely to inform sustainable design in cities as we move into a more environmentally-conscious future. Whether all at once or a little at a time, the urban greening trend shows no sign of sunsetting anytime soon — which is why it’s smart for those in the real estate industry to take note and adapt.

ULI’s UrbanPlan: Creating Informed Citizens, In and Out of Classrooms

As a real estate industry insider, I’ve felt compelled and delighted to follow the Urban Land Institute and its endeavors closely. An independent, global nonprofit, ULI dedicates time and resources to supporting the entire spectrum of real estate development and land use disciplines in order to strengthen communities across America. The real estate industry as of late has been striving for sustainability and local empowerment; ULI is representative of the space where the real estate business meets private and public community betterment. One ULI initiative I feel has a unique potential is UrbanPlan, which brings hands-on curriculum into high school and college classrooms to help students learn about — and participate in — the forces that shape community development. UrbanPlan has been servicing youth for over a decade in schools across the country, and recently even further.
Since its founding in partnership with the Fisher Center for Real Estate and Urban Economics at the University of California, Berkeley in 2003, it’s reached over 27,000 high school and university students in about 140 classrooms every academic year. This latest year, UrbanPlan operated in 36 high schools, ten universities, and even a pilot program for 90 pupils in the United Kingdom. UrbanPlan is run by 13 ULI District Councils, which deliver the ULI mission at a local level to provide industry expertise to community leaders. In 2010, UrbanPlan was selected by the George Lucas Educational Foundation (yes, that George Lucas) as one of 20 programs running in the US to spread awareness on innovative and effective educational programs, emphasis mine. Since kids are the future of our cities and communities, this is exactly the type of program that influences future leaders to care about intelligent land use. How it works What I find most interesting about UrbanPlan is that it’s not your everyday lecture; there are no textbooks, powerpoints or pop quizzes. Instead, the curriculum is immersive, allowing students to fill various roles and negotiate to solve problems — and turn profit — in fictional communities. I think most of us can agree that the most fun and interesting classes back in the day were the ones that let us learn in the act of doing, rather than simple note-taking. What better way to get students interested in real estate development than to let them experience it for themselves? Here’s a description of the program, from ULI’s website: Student development teams respond to a “request for proposals” for the redevelopment of a blighted site in a hypothetical community. Each team member assumes one of five roles: finance director, marketing director, city liaison, neighborhood liaison, or site planner. Through these roles, students develop a visceral understanding of the various market and nonmarket forces and stakeholders in the development process. 
They must reconcile the often-competing agendas to create a well-designed, market-responsive, and sustainable project. Once again, the emphasis is mine. With students taking on different roles, conflict, collaboration, and creativity can organically unfold. All the while, team members work together to design a project that fits the needs of the community and the market. Teams present their final projects to “city council” of ULI members, who question teammates, deliberate, and award a “contract” to the winning proposal as a council would in reality. The course is typically six weeks and a total of 15 hours long. And while it’s not designed to create a future generation of real estate developers — the skills go far beyond that, into general teamwork, marketing and economics — for those that may choose professions in this area, a robust understanding of the industry will be instilled. Real results While this sounds pretty interesting in theory, you may be wondering how it comes together in reality. The press has covered several cases, which report on how the UrbanPlan curriculum operated in real classrooms. According to Paula Blasier director of San Francisco-based UrbanPlan, the program allows students that may not be top performers under traditional education models to excel. “All of a sudden, a kid discovers a whole new world, maybe even a possible profession, that requires a skill set they thought had no value,” Blasier said. This is because the activities require human skills that aren’t always used in classrooms. Berkeley High School student Sofia Haas noted that the program helped her understand the complexity of development and the political trade-offs involved. “It was definitely challenging to have to make a profit on our product and try to keep true to our beliefs,” she said. 
“But those are the problems that face people who do this in the real world.” Now, Haas is careful to take note of the little things in neighborhoods that were likely implemented carefully behind the scenes.

In Colorado, high school students found UrbanPlan was helpful for team building. It was also a great fit for Littleton High School’s curriculum, because it fit into the economic portion of students’ social studies requirements. “When we first started, none of us really liked each other, but as time went on, we all stepped up and took on our roles,” student Ashley Winters said of the experience. “Everyone helped everyone else know what they were doing and what they were supposed to be talking about.”

Why it’s important

Since the US population is forecast to grow by 60 million people in the next two decades, programs like UrbanPlan are critical in educating the next generation to be informed citizens able to handle population and community demands. Blasier admits that one of the key benefits of UrbanPlan is preparing young people to be called upon for active roles in their communities — and that the forces that come into play in these decisions aren’t often taught in schools. “We wanted to provide the most realistic experience possible, but we also wanted a model that could be embraced by public schools nationwide,” she said. “We knew we had to make it not only engaging for the kids but teacher friendly as well.” Ultimately, I believe that UrbanPlan is an innovative and highly useful educational approach that helps students understand the reality of the public, private, and political aspects of real estate and land development. The town in question may indeed be hypothetical, but the lessons are not. At the end of the day, immersive projects like this are the ones that stick and teach kids some of the most important responsibilities of adulthood.
Featured image: Sony Abesamis via Flickr

Dying Sustainably: What Greener Burials Mean For Big Cities

The world’s cities have a grave problem. With limited space and growing populations, the dead outnumber the living in packed cemeteries that occupy valuable real estate, cost families exorbitant fees, and strain the environment to boot. While land is abundant in more rural areas to respectfully bury the deceased, in cities like London and New York City space is increasingly scarce. Because cemeteries aren’t inherently profitable — the dead do not pay rent, after all — existing sites must grapple with an influx of demand without much chance at horizontal expansion. So what is the solution? Pushing bodies further underground? Building mausoleum towns? Creating floating cemeteries and skyscrapers? As radical as these ideas may seem, they have all been explored and implemented in cities attempting to make room for the dead.

With 50 million people passing away every year, afterlife accommodation is as much a real estate issue as it is an environmental one. Just as the real estate industry has moved toward a more ethically and environmentally conscious ethos, the funeral business is doing the same for deceased tenants. Some say as many as one in four older Americans are likely to opt for sustainable burial options in the future, given the growth of environmental awareness. Though challenges lie ahead, especially in cities, many sustainable and space-saving burial options exist. It may take an extra dose of creativity — and maybe even some cultural change — but new earth- and community-friendly burial solutions could do the world a great good.

A costly problem

Even if they want to, many city residents can no longer bury their loved ones the traditional way. Inground plots in Manhattan are in the single digits with six-figure costs.
Even a burial outside of cities can cost upwards of $10,000, considering the price of coffins and other funeral services. In US cemeteries alone, 30 million board feet of hardwood caskets are buried, along with 90,000 tons of steel caskets, 14,000 tons of steel vaults and over 2,500 tons of copper and bronze. That’s a huge wealth of trees and minerals buried beneath the earth, unable to be recycled or put to use. Embalming chemicals can also be incredibly toxic to humans, animals and wildlife. Even cremation takes its toll: it’s an energy-intensive process that emits mercury from burnt teeth fillings.

With baby boomers aging, 76 million Americans are projected to reach life expectancy between 2024 and 2042. To give each a standard burial, an area about the size of Las Vegas would be required. This won’t be an issue for the many people living in rural locations, but with city populations growing there will no doubt be problems among denser populations. In fact, there already are. City residents and urban planners are in perhaps the perfect positions to pursue sustainable alternatives, for the sake of space, money and the planet.

The Green Burial Movement

The concept of green burials is not a new one — in fact, it was once the norm, with burials often occurring at home in wooden boxes. At the turn of the 20th century, when deaths moved from homes to hospitals and funeral parlors, the post-death rituals we practice today became widely adopted. Embalming began during the Civil War to help preserve the bodies of soldiers during their transport, and though not legally mandated it continues to be the standard practice. The green burial movement, which began in the early 90s, seeks to return to the style of natural burial. Biodegradable caskets made of bamboo, cardboard, or wicker are less expensive and easier on the earth; for those that want to go the cremation route without the detriments, an alternative method called resomation is less toxic and energy-intensive.
Today, people who want green burials need only consult with the Green Burial Council (in North America) to find a certified green burial provider, the number of which has increased from just one in 2006 to over 300 today. Unlike other services bearing the “organic” label, green burials tend to be even cheaper than traditional ones. The Green Burial Council estimates that about one-quarter of older Americans want green burials — an opportunity to take the trend from niche to mainstream. Because city residents face the biggest dilemma and tend toward progressive social leanings, it’s no surprise that New York City boasts great green options like Brooklyn’s Greenwood Heights Funeral & Cremation Services.

Saving space and memories

Just making the switch from steel to straw caskets won’t solve space issues, however green they may be. With the last open cemetery in Manhattan selling vaults for $350,000, it’s worth wondering if there’s a better way to die without shipping yourself to faraway fields a day-trip away from family. Other cities have tackled this problem, some to great success. Countries like Belgium, Singapore and Germany practice grave recycling, through which families get a free public grave for the first 20 years or so, after which they can either pay for renewal or allow the cemetery to move the body to make space for another. Locations without this practice balk at the idea of disturbing the dead.

Some Asian cities have decided upon large, mechanized columbariums, which store thousands of urns that can be retrieved with an electronic card. Hong Kong has plans for a columbarium island called “Floating Eternity,” and other cities are considering vertical cemeteries. A Norwegian student won a design contest with his vision of such a skyscraper, which would house coffins, urns, and a computerized memorial wall. As our virtual selves gain credence during life, digital memorializing has become more popular.
A Japanese company offers virtual cemeteries for descendants to tour, while Hong Kong’s government created a virtual social network for families unable to visit in person.

Designing for the future

How do we negotiate respect for the dead with respect for the planet? And how do we negotiate these with cemetery real estate deficits and cost concerns? We don’t want to do away with cemeteries, after all. Like schools and hospitals, graveyards add a layer of emotional and cultural intelligence to neighborhoods. In cities, they are more akin to history museums and monuments — housing century-old skeletons instead of people more recently warm. Moving forward, city residents will have to make tough choices, and urban planners will have to make smarter ones. As the number of people living in cities grows, the number of those dying there will too. Real estate developers may not be directly responsible for accommodating the dead, but on a larger scale urban planners may be wise to do so. Grave as the situation may seem, so long as there are both private and public efforts to solve space and environmental issues, cities and their residents will grow to adopt the most efficient and green burial processes possible.

How Urban Planning Shapes Art, Music and Culture

The structure and design of a city are integral to the way in which communities come to use its space. This is true of the basics — like recreation, work and residential life — but it’s also true of art: where it thrives, where it’s created, and how it’s consumed. From music venues to street art and theater districts, there are reasons why certain cities and spaces spawn cultural renaissances. You see, the size and layout of homes and public places impact what people choose to do there. So if a building is conducive to creative activities, the area will develop to reflect it.
Then, as the location begins to adopt creativity into its ethos, the art has its own impact, and may affect urban planning moving forward. This cycle of environment and art is something rarely considered overtly. For people interested in art, culture, urban design or all of the above, the cause and effect of environment on cultural innovation is a topic worth exploring. With this knowledge, urban planners can design spaces that organically spark creation, and artists can choose spaces that serve their needs based on both historic accomplishments and future goals.

Art & culture

Artistic movements are induced by time, attitude, persons and place. The importance of environment in art-making cannot be overstated — just think about the classic pastoral, impressionism, and expressionism, all inspired by nature. The natural world has always been impactful for artists, but so have cities, in a different way. Urbanization has birthed its own art movements based on factors like the city layout, population, and the social and political realities of the times. The symmetry and tight design of cities contrasts greatly with nature, so it makes sense that the art that comes from them characterizes more grit than greenery; more angles than angels.

For example, modernism as an art form was shaped by the rapid growth of cities and industrialization in the late 19th and early 20th century. When people began to move into cities to seek their fortunes, the environment inspired a new outlook that rejected the certainty of enlightenment and the limits of traditionalism. Modern cities, defined by order and the promise of enlightenment, inspired art, literature and philosophy that affirmed human power to create with the aid of science, technology, and practical experimentation. In this way, modernism was a reflection of urban environments: their stoicism, their nihilism and their promise.
It’s a testament to the functionality of city design during the industrial revolution: when machines and manufacturing were introduced via factories and plants, economic growth and innovative thinking flourished. In fact, modernism inspired a new era of urban design that followed the logic of mass production by implementing large-scale solutions citywide. Around the same time, city design that prioritized industrialization faced artistic pushback in the form of the Arts and Crafts movement. Due to anxieties over the prominence of industrial life, handcraftsmanship surged in the early 20th century to prove the value of human creativity and design.

But in the wake of two world wars, cities and cultural attitudes would change. Times of hardship and the inability of cities to keep urban spaces safe and clean made clear that even modernism had its faults: in city planning, architecture, and in philosophy. Postmodernism emerged as a result of these failures, exemplified by the decline of American cities. It rejected the totality of modernism in favor of a more contextual and skeptical approach less devoted to perfection. But even in times and places in strife, art has thrived: as it turns out, abandoned warehouses served the needs of artists as well as they did mass production. In Williamsburg, Brooklyn, large abandoned factories proved ideal for artists seeking cheap rent and ample, light-filled spaces. The town was revived as a hub for creative types, and remains one of the hippest neighborhoods in the city.

Music making

Another key example of how our designed environment shapes culture is music, which has developed in garages, living rooms and churches. The distinct personality of different genres has in part to do with the history of the space through which they first emerged.
1960s Detroit, it’s argued, gave birth to Motown for several reasons: first, the northern migration of African Americans from the South for factory jobs, and second, because single-family houses had ample space for pianos. A musical heyday unfolded even in the face of the city’s imminent decline because African American residents used this extra space to gather and make music. “What this suggests is that cities shouldn’t despair too much about their existing built form, even if in many cases they are struggling with it,” wrote Aaron Renn on this phenomenon. “The question might be, what does that form enable that you can’t get elsewhere?”

The Detroit-Motown connection prompted journalist Ian Wylie to explore how urban planning shaped other music scenes in an article for the Guardian. When considering the grunge scene in Seattle, Wylie writes, both architecture and weather contributed to its cultivation. The inclusion of garages in local homes gave musicians a place to practice, and the damp, moderate climate convinced them to spend more time there making music, regardless of season. Wylie also writes of how tower blocks in London made ideal transmission spots for illegal radio stations promoting grime music: the towers were fortress-like labyrinths that concealed the pirate stations from police, allowing the widespread broadcasting and popularization of grime music.

Then there are Berlin’s abandoned warehouses, which gave rise to electronic music. Not unlike Williamsburg’s abandoned factories, these large spaces provided ample room for experimentation and dance parties. “DJs enjoyed the liberation of making music in places where previously they might have been jailed or even shot for trespassing,” Wylie writes.
“The large warehouses of cold war-era Berlin also became spaces for artists and musicians to convert into studios.” Lastly, New York City’s community centers played a part in the growth of hip hop: DJs and other hip hop artists performed at sponsored community talent shows and dances more often than they did on the streets. The existence of these centers allowed a blending of generations and cultures, which contributed to hip-hop’s unique and eclectic sound.

What it all means

Where there are cities there are humans, and where there are humans there is art. Whether the city as a whole inspires a political and artistic movement, or simply certain elements of its design, environment will always play an important role in cultural evolution. Many of the world’s cities tell the stories of the complex relationships between buildings, communities, and art. When the space informs the artist, the art informs the community, and then the community informs the future of the space. It’s a known phenomenon that when art flourishes, it invigorates neighborhoods, and invites the further development of both art and business.

Both urban planners and artists can carry this insight into their future endeavors. It’s a common goal of developers to build spaces that are hospitable to young, artistic individuals and communities. Building spacious community centers, rooms, and public spaces where collaboration can happen could prompt the artistic renaissances of the future. For young creatives, cities continue to offer the inspiration and flexibility to kickstart new projects, movements, genres, and works of art. Whether or not urban planners intend it — and as evidenced by abandoned spaces, perhaps especially when they don’t — creativity finds a way to fill in both cracks and canals.
Featured image: Vincent Anderlucci via Flickr

How The World’s Most Eco-Friendly Cities Pull Off Sustainable Transport

With every population, urban or otherwise, carbon footprints collect like dirty clouds in the wake of human movements. Ironically, these footprints are often more like wheel tracks: roughly a third of America’s emissions are caused by the transport of people and goods, 80 percent of which can be attributed to cars and other road vehicles. The world’s cities are far from immune to travel’s impact on the planet, as many densely populated metropolises are crammed with vehicles. These cars, which zip between skyscrapers and line narrow streets, often belong to commuters that travel significant distances each day and emit CO2 all the while. Even idling cars are problematic, as emissions increase the more time is spent on the road accelerating and decelerating.

With urban populations growing, the opportunity to make city transport greener is one worth a deep dive for urban planners. Cities already have a leg up when it comes to sustainability: with robust public transportation systems in place, cities have lower footprints than their suburban counterparts. As downtown revitalization attracts a greater number of residents into cities who can live, work, and shop locally, cars may be rendered an unnecessary luxury in due time.

The rise of eco-friendly transport

The US is home to several cities considered eco-friendly; because of its public transportation and commitment to green initiatives, New York City is one of them. But it wasn’t always this way — in fact, many of America’s cities were influenced by the 1939 New York World’s Fair imagining of an ideal, car-based city. This utopian roadway concept took the States by storm post-war, after which car ownership skyrocketed and roadways sprawled.
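As a quick back-of-envelope check, the two shares quoted above imply that road vehicles account for roughly a quarter of total US emissions. Here is a minimal sketch of that arithmetic; both input fractions are the article's approximations, not official inventory figures:

```python
# Back-of-envelope check using the article's figures (both shares are
# approximations quoted in the text, not an official emissions inventory).
transport_share = 1 / 3          # ~a third of US emissions come from transport
road_share_of_transport = 0.80   # ~80% of transport emissions are road vehicles

# Multiplying the two fractions gives road vehicles' share of the total.
road_share_of_total = transport_share * road_share_of_transport
print(f"Road vehicles account for roughly {road_share_of_total:.0%} "
      f"of total US emissions")  # roughly 27%
```

Small as the calculation is, it makes the scale of the opportunity concrete: cutting urban car use attacks more than a quarter of the national footprint.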
Cities in Europe, on the other hand, were never designed based on the assumption that private cars are the pinnacle of urban mobility. Countries far older than the US boast more compact and walkable city streets. We know now that the World’s Fair was wrong, and author Jane Jacobs was right: “Traffic congestion is caused by vehicles, not by people in themselves,” she wrote in The Death and Life of Great American Cities. Cities that invested in car-based infrastructure have proven to be less environmentally sustainable, among other issues. As a result of this knowledge, cities across the world, no matter their original design, are striving to decrease their reliance on cars. The financial incentive for this is also clear: city residents without cars save money on automobile-related costs, and have a heightened ability to boost the local economy.

The planet likes bikes

Some cities are ahead of others in this regard, and when it comes to transportation innovations, it’s worth looking to some of the planet’s greenest urban spaces for solutions and ideas. One prime example is Copenhagen, Denmark, sometimes called the bike capital of the world. 55 percent of citizens here ride a bike every day — it’s estimated that for every kilometer cycled, society enjoys a net profit of 23 cents. Bikers also save the city 90,000 tons of CO2 emissions annually. Copenhagen’s bicycle culture has been over a century in the making; photographs show Danes in the early 20th century biking to and fro enthusiastically. This culture was challenged mid-century by cars, but in the 1960s it became apparent that their prominence was leading to traffic accidents and congestion. To counter this, Copenhagen put energy and investment into extensive cycling infrastructure and branding campaigns. With over 390 kilometers of bike lanes, Copenhagen’s cycling is not only a healthy, green alternative to driving, but a well-defined symbol of freedom and personal energy.
The Danish city continues to expand its bike culture with new policies, marketing initiatives, projects, pathways and more. Other cities look to Copenhagen as an example, and the idea is certainly catching on — and not only in Europe. Buenos Aires in Argentina has become the latest poster child for urban biking, and the Chinese city of Hangzhou boasts the largest bike share in the world with a fleet of 78,000 bikes.

Have feet, will travel

Cities that are pedestrian friendly also tend to be much greener than car-centric ones. When urban space is made walkable — a task that may take development and beautification — residents rely less on cars and more on their feet. Some cities have even closed down roads to cars to create public walking spaces. You can see examples of this in New York City, where Times Square has been transformed into a shiny, commercial pedestrian paradise. There are many big cities cutting down on cars, including London, Madrid, and Hamburg, in addition to already mostly car-free cities like Venice, Freiburg, and Groningen. Cities like Istanbul and Mexico City are also embracing the importance of people-oriented mobility to incredibly promising results. Pedestrianization has been touted as a great necessity in urban design: it preserves the health and safety of city residents, reduces pollution and noise while improving tourism, and heightens retail income and community involvement.

Air, land and sea

Part of reducing reliance on cars means finding alternative ways for people to travel. This goes much further than biking and walking: there are many other complex systems of transport including travel via bus, tram, boats and more. Perhaps the quintessential example of a carless city is Venice, which was built with canals instead of streets. Though some boats there are indeed pollutants, the city is accessible by foot and gondola.
Medellin, Colombia takes the concept of gondolas to a new level, literally: the city implemented gondola lifts called metro cable that go up and down the city’s steep mountainside. This is part of a greater metro system called Metro de Medellin, which saves 175,000 tons of CO2 every year along with saving $1.5 billion in respiratory health costs. Areas that were once violent and dangerous have been utterly transformed due to the modern ease of mobility.

Share or beware

Lastly, the share economy is a growing trend in transportation and travel that has proven to be a green alternative to cars and hotel rooms. By renting space in existing cars and homes, travelers don’t contribute to excess energy consumption. Carsharing in particular is on the rise everywhere from the US to China, India, Brazil and Mexico. If people share rides, theory has it, car ownership decreases and complements a growing array of public transportation options. Car ownership is already declining, and the Ubers and Googles of the world are knee-deep in plans for a future of automated ridesharing. Gilles Vesco, a politician who switched the city of Lyon to a sustainable model, agrees that sharing is the future: “Sharing is the new paradigm of urban mobility,” he said. “Tomorrow, you will judge a city according to what it is adding to sharing. The more that we have people sharing transportation modes, public space, information and new services, the more attractive the city will be.”

But non-sharing car-owners should perhaps beware, because the other side to encouraging alternative transportation is discouraging car use. Tolls, gas taxes, high occupancy fast lanes, no-car days and other measures that bar cars can make driving unpleasant and expensive. Maybe this isn’t the fairest way to push sustainable transport, but it seems to be working in cities like Portland, where congestion charges have been implemented to cut down on traffic.
100 percent green transport may still be impossible, and the organic dissuasion of excess car use certainly won’t happen overnight. But as we all become more aware of our collective carbon footprints, it wouldn’t hurt to gradually relax our wheels. Instead, we can push for a future of mobility that elevates both community and the environment. The trick is prioritizing these elements over the luxury of plush leather interiors.

Smarter Urban Planning in an Age of Extreme Weather

Historically, cities around the world have practiced urban planning methods to improve communities for both residents and governing bodies. Those methods differ by region, but are consistent in the initiatives they support — land use, environmental protection, and public welfare. However, weather patterns have recently become unusually extreme, sometimes disastrous, due to global warming. In order to protect our urban environments, we need to account for these dramatic weather fluctuations through smarter, preemptive urban planning.

The Current Landscape

Altering a city’s physical infrastructure is not fast or easy, which means cities must prepare beforehand to defend against certain weather conditions. Natural disasters like Hurricane Katrina and Hurricane Sandy are unfortunate reminders of the potential consequences. Of course, no two locations feel environmental effects in quite the same way. California has succumbed to extreme droughts while the Northeastern region of the country is visited regularly by severe storm weather. Meanwhile, rising temperatures melt polar ice, warm ocean bodies, and lessen mountain snowpacks. As a result, coastal areas are feeling the consequences of rises in sea level. Weather patterns may differ by location, but their massive impact on city life and urban planning is undeniable. The cost of a city’s unpreparedness can multiply in damage repairs.
The Effects of Extreme Weather In Urban Places

One of the more dangerous and consistent results of global warming and inadequate urban planning is flooding. Not only can water disasters lead to loss of life, but their effects on agriculture can also be substantial, often with international ripple effects. In urban environments, flooding causes damage to residential properties, businesses, subway infrastructure, and roads. In order to prevent those massive repair costs, urban planning must be approached as a preventative measure. The largest global disaster of 2012 was Hurricane Sandy, with an astounding cost of $65 billion. Hurricane Sandy and the year-long Midwest/Plains drought accounted for almost half of the world’s economic losses, according to USA Today. Even when the damages of extreme weather don’t amount to this astronomical figure, flooding still poses both economic and environmental problems on a smaller scale. Snowfall and drought can also be damaging in extreme cases.

Finding New Methods

Just as weather patterns are changing, so should preparation efforts. Self-sufficiency — a common community approach when it comes to anticipating severe weather — is an element that urban planning can easily incorporate. That is, urban planners will help make it easier to retain the resources that become scarce in times of natural disasters. For example, many buildings are beginning to harvest their own rainwater, and residential households are filtering their rainwater for utility usage. With cities allowing designated space for water storage or tools to help with water and energy conservation, communities are in a much better place if their resources are ever strained. Architects are also looking to implement responsive living materials into traditional buildings so that they are more adaptive and environmentally suitable.
It is not uncommon to find corporate buildings using green energy and filtered air, and architects are now seeking to make residential buildings just as environmentally conscious. Engineers have started to create amazing materials, such as self-repairing concrete, a substance that uses sunlight and bacteria to repair any cracks that appear in the concrete to prevent water infiltration. To persuade cities to be more proactive outside of their traditional urban planning methods will take some time. But just as urban planning helps us adapt to busy environments, it needs to adapt to extreme weather conditions as well. Finding new ways to approach the urban planning process can help minimize the damaging effects of severe weather conditions and create a better, more prepared city environment.

Have Smartphones Made Us More or Less Charitable?

In the minds of many, smartphones and narcissism go hand in hand, in pocket. From selfies to social media, mobile technology has become a digital extension of the physical self, or a means through which to carefully curate one’s personality and values. For some, this means mirror shots at the gym, brunch photographs, or vaguely political articles and memes. For others, it’s photos of missionary work, Kickstarter campaigns and philanthropy apps. Almost always, the aim is the same: to promote and enable an identity to be admired. Whether that identity is generous or conceited is up to the choices we make with the power of a million apps at our fingertips.

Where does charity fit into the smartphone experience, and does mobile technology actually encourage people to be more giving? It would be easy to assume that smartphones promote vanity above all else, an argument many have made while sneering at Snapchatting youngsters. Just consider the breadth of tools dedicated to selfies, the weight of likes and upvotes, and the shallow mentality of viral news trends.
At the same time, smartphones make donating easier than ever, to more causes than ever. This ability has been wildly impactful, if only because of the availability and simplicity of the tools that enable it. The truth is complex — a little of this, a little of that, and a lot of speculation. It may just be that humans are both more charitable and more self-absorbed, and that the two aren’t mutually exclusive. It may just be that the effects of smartphone ubiquity can steer us toward a greater good.

Cell phones and selfishness

We all can picture the stereotype of a grumpy teenager texting at the dinner table, oblivious to both conversation and food. This negative image isn’t completely without warrant: studies have shown that those who spent more time with smartphones were less socially minded. They are also less inclined to volunteer for community service compared to those without their noses in screens. The feeling of connectivity a phone provides, in theory, may in the moment feel like enough to replace physical connection. And so, when we’re digitally connected to our closest friends and family, the impulse to engage with or help outsiders diminishes. We remain glued to Instagram, instead of donating gifts to homeless shelters or building schools for kids.

This effect is called “virtual distance” by some analysts, and it has measurable impacts. The greater the distance, researchers have found, the more separated we become from those around us. We are less inclined to share our ideas, especially in the workplace. Smartphone usage also may prevent us from engaging with and helping others, whether out of distrust or isolation.

Photo: Brandon Warren via Flickr

The point is, when technology is used as a default for human relations, it can be harmful to their ability to produce real-world benefits. The key is to step back from the digital distance and re-engage with the world, including strangers.
For children growing up in the golden age of smartphones, this may be a more difficult task than it is for today's adults, who grew up in a less tech-driven era. Because mobile apps let users create their own worlds with like-minded individuals and friends, people have a tendency to become more insular in their beliefs and interests. Within a carefully curated social group, there is little room for discovery or new ideas. This can make people less empathetic, and block access to new social causes. While the opportunity to expand our minds and hearts is greater with smartphones, there's also just enough information to reinforce complacency. That's not to mention the instant gratification of smartphones, which can breed mindsets unwilling to embark on new and potentially difficult journeys without immediate returns. The ease of mobile technology makes delaying gratification harder; the gratification of charity, in comparison to the tangibility of a Seamless delivery, appears more abstract in value. Whatever the cause, interest in volunteering and charity has seen a precipitous decline in America. Volunteering hit its lowest rate in a decade in 2013. The case for mobile charity Every factor that might contribute to a perceived selfishness and detachment among smartphone users can also be used to the benefit of charities worldwide. The trick is in the delivery and expression. Giving is a social act. Mobile technology can fuel anti-social behavior, as detailed above, but in the digital realm it begets a whole new type of social behavior: social media. Online networks can have enormous reach, and facilitate the constant sharing of thoughts, ideas, and content. When this content is philanthropic in nature, the reach alone can prompt awareness and drive donation. Researchers say there are three reasons people give: one, because they want to; two, because they think it's valuable to do so; and three, as a form of showing off.
The third reason is a significant one when it comes to mobile donation, which has in many ways become a form of social performance. Think of the ALS Ice Bucket Challenge, which had participants pour icy water on their heads, or the no-makeup selfies for cancer awareness. Neither explicitly required donation, but both raised huge funds while stroking the egos of the people who jumped on the bandwagon. Instant gratification works in the favor of charity, too: mobile technology makes donating as easy as one or two taps. When Facebook implemented a "Donate Now" button for nonprofit pages, it positioned itself to become a hub for online fundraising. Mobile-responsive crowdfunding also lets users get involved in peer-to-peer donation projects. These easy social options have been hugely successful so far on all digital platforms, including mobile. Text message donations can also be made by smartphone owners; for example, in 2010, the Red Cross launched a donate-by-text initiative, generating $46 million in relief funds after a deadly earthquake in Haiti. Now, there are various SMS processes and campaigns people can use to text pre-set micro-donations to different causes. Lastly, mobile technology opens completely innovative ways for people to donate to causes that catch their interest. The mobile app Instead lets users make small donations in lieu of daily expenses like coffee; One Today showcases a different cause every day; and Tinbox donates money from corporate sponsors in exchange for ad placement on your phone. What's the verdict? By numbers alone, people are more charitable than ever. As the planet's overall wealth amasses and spending increases, statistically more of this money goes to charitable causes (and every year, more of these donations are made on mobile phones). Americans give about 3 percent of their income to charity, and this figure has not changed in the decades before or after the smartphone's rise.
Generally speaking, then, it's rather dubious to claim that smartphones have changed our charitable virtues for better or worse. More accurately, they have changed not what we do, but how we do it. It just so happens that smartphones have the ability to amplify our actions ad infinitum. That's not to say that mobile technology doesn't have its pitfalls in terms of the behavior it can bring out in users. It sometimes seems as if people have traded in working to help their actual communities in favor of their digital ones, at a loss to physical spaces in need of extra hands. I think that we should all be careful to engage more with real people and their needs than we do with front-facing camera phones; this said, there is a lot of value behind screens in terms of reach and convenience. Nonprofits should definitely lean into mobile donations and campaigns that jibe with what's shown to drive progress. It's simple: Make it a selfie. Make it instant. Make it social. Heck, make it a game. These are the new vehicles of charity, and though they may never replace soup-kitchen-style volunteering, they do work. We can only keep faith that the desire to give back remains part of the human DNA, not to be overwritten by iPhone coding. The Massive Impact of Effective Altruism Programs The nonprofit industry has been growing exponentially over recent years, but effective altruism programs have found a way to add a new element to the historic practice of charitable giving. The movement focuses on charitable giving guided by data, meaning its adherents measure where the most impactful organizations are and which causes need the most attention from a statistical standpoint. The effective altruism approach has reached a number of milestones that have improved lives and helped establish communities globally.
Accomplishments like raising over $10 million in funds for direct cash transfers to individuals living in impoverished countries are just one example of this movement's collective impact. Effective altruist programs work in concert to deliver on the values of this movement through their philanthropy. All programs that support this vision practice core beliefs like open-mindedness, critical thinking and global empathy en route to scaling change to a larger level and measuring the difference being made. A few organizations under the effective altruism movement include: GiveWell Started in 2007, GiveWell specializes in identifying the most promising causes and charities to donate to. GiveWell is a part of the effective altruism movement, and it has been highly effective in putting finances and useful attention towards notable causes with productive organizations. In August 2014, GiveWell announced the "Open Philanthropy Project" for exploration of more speculative causes. The Open Philanthropy Project is the collaborative bridge between GiveWell and Good Ventures, a philanthropic foundation founded by Facebook co-founder Dustin Moskovitz and his wife. Giving What We Can Founded in November 2009, Giving What We Can is a community of people interested in maximizing the good they can do, with a specific focus on global poverty. GWWC largely banks on the research done by organizations like GiveWell that evaluate the actual effectiveness of nonprofits. 80,000 Hours 80,000 Hours is a UK-based organization that conducts research on careers with positive social impact. The name represents the 80,000 hours a healthy person will work in their career lifetime. The group emphasizes that the positive impact of choosing a certain occupation should be measured by the amount of additional good that is done as a result of this choice.
It considers indirect ways of making a difference, such as donating via your job's salary, but direct giving is still a focus of theirs as well. 80,000 Hours is run by the charity the Centre for Effective Altruism. There are more effective altruist organizations to fill the roster, and each has contributed to the amazing accomplishments that this movement stands by. It's hard to believe that such a community of nonprofits only began a few years ago, but the positive impact has already reached monumental goals. Over $350 million has been pledged to evidence-based global poverty interventions via GWWC. GWWC, in efforts with SCI, also helped deworm over 4 million school children by way of funding through their platforms. GiveWell and the Against Malaria Foundation provided financing for over 1 million bed nets to protect against malaria in places of need. These causes have been identified by effective altruist supporters as global problems that need positive change, and this movement allows such organizations to address them at a massive level. With such large donations for direct impact, the change being implemented is clearly being scaled to a more macro approach. These achievements are thanks to the many effective altruism teams across the world. The community includes the founders of Paypal and Skype, over 20 companies, and a global network of prestigious educational communities, including students and professors from institutions such as Harvard, Cambridge, Yale, Oxford and Stanford. Effective altruist organizations have done a spectacular job standing behind a vision of impactful giving to the neediest causes, and collaboratively they've done that at a bigger scale. How Charities Use Data to Save Lives Philanthropic organizations have been put under the microscope lately, as people who actively donate want more transparency into where their donations are going.
Direct giving is a practice that approaches philanthropy with that exact purpose, and, naturally, more and more foundations are starting to participate. The premise behind GiveWell and other similar organizations is that they provide transparency to the donation process. GiveWell vets the charities that may receive its grants through a strict application process. Not only has GiveWell created a library of worthy charities, it also helps donors decide where they'll contribute based on GiveWell's analysis. Effective altruism doesn't only create a positive trend for impoverished communities; it also does so for the entire business of philanthropy. How GiveWell got started and established GiveWell is arguably the most prominent nonprofit involved in the development of emerging markets and a notable proponent of effective altruism. GiveWell specializes in giving contributors the best opportunities and details so that they can donate where they see the best fit. The organization has put a priority on conducting deep research into the impact of programs, measured by the lives they've saved and the communities they've changed per dollar spent. But GiveWell only deals with the most notable charities, so that donors give to the best of the best. GiveWell's intent is to analyze charities by quantifying their effectiveness so that donors have the most insight when giving. Started as a group of donors, this nonprofit began with giving founders who wanted their charitable efforts to be put towards good use. The organization's top charities influence some of the most effective initiatives; the Against Malaria Foundation, GiveDirectly, and the Deworm the World Initiative are just a few under GiveWell's direction. These charities allow GiveWell donors to contribute towards different causes or through different methods of donation. Take GiveDirectly, for example: the charity enables cash transfers to households in developing countries through a mobile phone payment service.
But GiveDirectly is only one of the many effective programs under the umbrella. Although GiveWell is one of the founding fathers of this type of nonprofit, it supports a much bigger cause in effective altruism. Effective Altruism The quickly growing social movement of effective altruism is a household name in the nonprofit community. The ideology focuses on charitable giving guided by data. Effective altruist organizations and projects have the goal of making a massive and efficient impact, something that GiveWell and other nonprofits perpetuate well. The effective altruism approach has reached a number of milestones that have improved lives and communities worldwide. Over $10 million in funds has been raised for direct cash transfers to individuals living in impoverished countries via GiveWell and GiveDirectly's services. That accomplishment is matched by five others of that caliber, attending to funding for the deworming of school kids, global poverty interventions and other honorable causes. EA organizations fight to make charitable giving more effective and impactful. Bringing a data analysis approach to charities shows donors where their money is actually impacting the causes at hand. The philanthropic landscape is moving more towards this method, as it presents a better use of funds in a more streamlined process.
Sunday, 14 April 2013 Brain, Universe, Internet governed by same fundamental laws, suggests supercomputer simulation By performing supercomputer simulations of the Universe, researchers have shown that the causal network representing the large-scale structure of space and time is a graph that shows remarkable similarity to other complex networks, such as the Internet, as well as social and biological networks. A paper describing the simulations[1] in the journal Nature's Scientific Reports speculates that some as-yet unknown fundamental laws might be at work. "By no means do we claim that the Universe is a global brain or a computer," said paper co-author Dmitri Krioukov, at the University of California, San Diego. "But the discovered equivalence between the growth of the Universe and complex networks strongly suggests that unexpectedly similar laws govern the dynamics of these very different complex systems." For the simulations, the researchers found a way to downscale the space-time network while preserving its vital properties, by proving mathematically that these properties do not depend on the network size in a certain range of parameters, such as the curvature and age of our Universe. After the downscaling, the research team performed simulations of the Universe's growing causal network. By parallelizing and optimizing the application, the researchers were able to complete in just over one day a computation that was originally projected to require three to four years. "We discovered that the large-scale growth dynamics of complex networks and causal networks are asymptotically [at large times] the same, explaining the structural similarity between these networks," said Krioukov, who believes the findings have key implications for both science and cosmology. "The most frequent question that people may ask is whether the discovered asymptotic equivalence between complex networks and the Universe could be a coincidence," he explained.
"Of course it could be, but the probability of such a coincidence is extremely low. Coincidences in physics are extremely rare, and almost never happen. There is always an explanation, which may be not immediately obvious." Such an explanation could one day lead to a discovery of common fundamental laws whose two different consequences - or limiting regimes - are the laws of gravity (Einstein's equations in general relativity) describing the dynamics of the Universe, and some yet-unknown equations describing the dynamics of complex networks. SKELETON KEY: Researchers develop method that shows diverse complex networks have similar skeletons The worldwide air transportation network. Each grey link represents the traffic of passengers between more than 1,000 airports worldwide; the entire network has more than 35,000 links. The red lines represent the network's skeleton, a tree-like structure of only 1,300 links that represents the core structure of the network. Links in the skeleton are the most important connections in the network. Northwestern University researchers are the first to discover that very different complex networks -- ranging from global air traffic to neural networks -- share very similar backbones. By stripping each network down to its essential nodes and links, they found each network possesses a skeleton and these skeletons share common features, much like vertebrates do. Mammals have evolved to look very different despite a common underlying structure (think of a human being and a bat), and now it appears real-world complex networks evolve in a similar way. The researchers studied a variety of biological, technological and social networks and found that all these networks have evolved according to basic growth mechanisms. The findings could be particularly useful in understanding how something -- a disease, a rumor or information -- spreads across a network.
This surprising discovery -- that networks all have skeletons and that they are similar -- was published this week by the journal Nature Communications. “Infectious diseases such as H1N1 and SARS spread in a similar way, and it turns out the network’s skeleton played an important role in shaping the global spread,” said Dirk Brockmann, senior author of the paper. “Now, with this new understanding and by looking at the skeleton, we should be able to use this knowledge in the future to predict how a new outbreak might spread.” Brockmann is associate professor of engineering sciences and applied mathematics at the McCormick School of Engineering and Applied Science and a member of the Northwestern Institute on Complex Systems (NICO). Complex systems -- such as the Internet, Facebook, the power grid, human consciousness, even a termite colony -- generate complex behavior. A system’s structure emerges locally; it is not designed or planned. Components of a network work together, interacting and influencing each other, driving the network’s evolution. For years, researchers have been trying to determine if different networks from different disciplines have hidden core structures -- backbones -- and, if so, what they look like. Extracting meaningful structural features from data is one of the most challenging tasks in network theory. Brockmann and two of his graduate students, Christian Thiemann and first author Daniel Grady, developed a method to identify a network’s hidden core structure and showed that the skeletons possess some underlying and universal features. The networks they studied differed in size (from hundreds of nodes to thousands) and in connectivity (some were sparsely connected, others dense) but a simple and similar core skeleton was found in each one. “The key to our approach was asking what network elements are important from each node’s perspective,” Brockmann said. “What links are most important to each node, and what is the consensus among nodes? 
Interestingly, we found that an unexpected degree of consensus exists among all nodes in a network. Nodes either agree that a link is important or they agree that it isn't. There is nearly no disagreement." By computing this consensus -- the overall strength, or importance, of each link in the network -- the researchers were able to produce a skeleton for each network consisting of all those links that every node considers important. And these skeletons are similar across networks. Because of this "consensus" property, the researchers' method does not have the drawbacks of other methods, which have degrees of arbitrariness in them and depend on parameters. The Northwestern approach is very robust and identifies essential hubs and links in a non-arbitrary, universal way. The Volkswagen Foundation supported the research. Journal Reference: 1. Daniel Grady, Christian Thiemann, Dirk Brockmann. Robust classification of salient links in complex networks. Nature Communications, 2012; 3: 864. DOI: 10.1038/ncomms1847 Tuesday, 9 April 2013 Idea networking Idea networking is a qualitative means of undertaking a cluster analysis or concept mapping of any collection of statements. Networking lists of statements acts to reduce them into a handful of clusters or categories. The statements might be sourced from interviews, text, web sites, focus groups, SWOT analysis or community consultation. Idea networking is inductive as it does not assume any prior classification system to cluster the statements. Rather, keywords or issues in the statements are individually linked (paired). These links can then be entered into network software to be displayed as a network with clusters. When named, these clusters provide emergent categories, meta themes, frames or concepts which represent, structure or sense-make the collection of statements. An idea network can be constructed in the following way: • 60 to 200 statements are listed and assigned reference numbers.
• A table is constructed showing which statements (by reference number) are linked (paired) and why. For example, statement 1 may be linked to statements 4, 23, 45, 67, 89 and 107 because they are all about the weather (see table).

Statement   Is Linked To              Because They Are About
1           4, 23, 45, 67, 89, 107    the weather
…           16, 29, 46, 81            …
…           23, 45, 67, 89, 107       …
…           13, 16, 34, 78, 81        …

The number of links per statement should be from 1 to 7; many more will result in a congested network diagram. This means choosing why the statements are linked may need grading as strong or weak, or by subsets. For example, statements linked as being about weather conditions may be further subdivided into those about good weather, wet weather or bad weather, etc. This linking is sometimes called 'coding' in thematic analysis, which highlights that the statements can be linked for several different reasons (source, context, time, etc.). There may be many tens of reasons why statements are linked. The same statements may be linked for different reasons. The number of reasons should not be restricted to a low number, so as not to pre-empt the resultant clustering. • The reference numbers are input into network diagramming software, usually in the form of a matrix with the reference numbers along the top and side of the matrix. Each cell will then have a 1 or 0 to indicate whether its row and column reference numbers are linked. • The software is instructed to draw a network diagram using maximum node repulsion. This encourages cluster formation. Around 5 clusters are identified in the network diagram, both visually and using the cluster identification algorithms supplied with the software (e.g. Newman-Girvan sub-groups). • A descriptive collective adjective name is determined for each cluster of statements (a meta narrative, classification name or label). • The list of statements is then reported as being clustered into these five or so cluster names (themes, frames, concepts).
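The statements-to-clusters procedure above can be sketched in a few lines of Python. This is only an illustration, not the dedicated network software the article has in mind: the statement numbers and link reasons below are invented, and plain connected components stand in for the Newman-Girvan sub-group detection mentioned above.

```python
# Sketch of the idea-networking procedure: link statements (by reference
# number) with a recorded reason, then recover clusters from the network.
from collections import defaultdict

# Hypothetical link table: (statement A, statement B, reason for the link).
links = [
    (1, 4, "weather"), (1, 23, "weather"), (1, 45, "weather"),
    (2, 16, "cost"), (2, 29, "cost"),
    (3, 13, "safety"), (3, 78, "safety"),
]

def build_adjacency(links):
    """Undirected adjacency list of the idea network."""
    adj = defaultdict(set)
    for a, b, _reason in links:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def clusters(adj):
    """Connected components, a crude stand-in for Newman-Girvan sub-groups."""
    seen, out = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        out.append(comp)
    return out

adj = build_adjacency(links)
print(clusters(adj))  # one cluster per linking reason: weather, cost, safety
```

Each resulting cluster would then be given a collective name (the "theme" or "frame") for reporting, as the procedure above describes.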
For example, one might report that the analysis of the statements shows that those at the community meeting were using the concepts of exposure, interaction, safety, light and inspiration in their responses. Underlying philosophy • In his book Notes on the Synthesis of Form, the pragmatist Christopher Alexander suggested networking the ideas of clients as a means of identifying the major facets of an architectural design. This is still used in modern design work, usually via cluster analysis. Modern social network analysis software provides a useful tool for networking these ideas. This simply adds ideas to the list of computers, power stations, people and events that can be networked (see Network theory). The links between ideas can be represented in a matrix or network. Modern network diagramming software, with node repulsion algorithms, allows useful visual representation of these networks, revealing clusters of nodes. • When networking people's statements or ideas, these become the nodes, and the links are provided by an analyst linking those statements thought to be similar. Keywords, synonyms, experience or context might be used to provide this linking. For example, the statement (1) That war is economics progressed by other means might be considered linked to the statement (2) That progress unfortunately needs the innovation which is a consequence of human conflict. • Linguistic pragmatism argues we use our conceptions to interpret our perceptions (sensory inputs).[5] These conceptions might be represented by words as conceptual ideas or concepts. For example, if we use the conceptual idea or concept of justice to interpret the actions of people, we get a different interpretation (or meaning) compared to using the conceptual idea of personal power. Using the conceptual idea of justice makes certain action ideas seem reasonable. These may include due process, legal representation, hearing both sides, and having norms or regulations for comparison.
Therefore there is a relationship between conceptual ideas and related, apparently rational, action ideas. • If the statements gathered at a consultative meeting are considered action ideas, then clusters of these similar action ideas might be considered to be examples of a meta idea or conceptual idea. These are also called themes and frames. Modern research extending Miller's magic number of seven, plus or minus two, to idea handling suggests a five-part classification is appropriate for humans. Notable applications and uses Using networking to cluster statements is considered useful because: • It provides a multi-dimensional alternative to post-it notes in clusters (YouTube example). • It offers a convenient graphic which can be presented in reports and analysed using network metrics (see Computer assisted qualitative data analysis software). • It is an auditable process where each step taken can be explained in supporting documentation. • It is a qualitative alternative to NVivo, Thematic Analysis, Cluster analysis, Factor analysis, Multidimensional scaling or Principal component analysis, and thus more subtle and transparent. This subtleness includes enabling the analyst to deal with metaphor, synonyms, pronouns and alternative terminology generally. No variables (variation in numerical data) are necessary. References 1. Alexander, C. (1964). Notes on the Synthesis of Form. Harvard University Press, Mass. 2. Metcalfe, M. (2007). Problem Conceptualisation Using Idea Networks. Systemic Practice and Action Research. pp. 141–150. 3. Alexander, C. (c. 1964). Notes on the Synthesis of Form. Harvard University Press, Mass. 4. Inkpen, A.C., E.W.K. Tsang. (2005). Social Capital, Networks, and Knowledge Transfer. Academy of Management Review. pp. 146–165. 5. Rorty, R. (1982). Consequences of Pragmatism. Minneapolis: University of Minnesota Press. 6. Miller, G.A. (1956). The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information.
The Psychological Review. pp. 81–97. Monday, 8 April 2013 Analyzing financial interactions/markets as a complex network Since most of the posts till now have carried the characteristic trademark of the so-called 'techies', I thought I would combine a new field with the analysis of complex networks. Financial interactions/markets in the world surely are a source of arousing curiosity for most tech-school students, though ephemerally. So here it goes... Economic research on network formation usually starts from the observation that social structure is important in a variety of interactions, including the transmission of information and the buying and selling of goods and services. Over the past decade, economists have started to investigate the interactions between network topologies and game-theoretic notions of stability and efficiency using the tools and jargon of graph theory. It is noteworthy that in a recent survey of economic models of network formation, the term degree distribution does not appear once, and it seems that the economics profession is only very recently becoming aware of the research by statistical physicists and computer scientists. As a consequence, it becomes very hard to evaluate whether the network structures emerging from various processes are efficient with respect to either the involved microscopic motives, or to the resulting and possibly unintended macroscopic regularities, or both. Interestingly, the particular economic (game-theoretic) setup does not seem to be crucial for such a trade-off between efficiency and stability.
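To make the degree-distribution contrast concrete, here is a stdlib-only Python sketch (the sizes and seed are illustrative, not from any study). A preferential-attachment process, a minimal version of the Barabási-Albert "rich get richer" mechanism, grows hubs with very high degree, while an Erdős-Rényi random graph of the same size and edge count does not:

```python
# Toy comparison of degree distributions: preferential attachment vs. a
# classic random graph. Pure stdlib; real work would fit the tail exponent.
import random

def preferential_attachment(n, seed=0):
    """Grow a graph one node at a time, attaching to an existing node
    chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    targets = [0, 1]          # node i appears deg(i) times in this list
    degree = {0: 1, 1: 1}     # start from a single edge 0-1
    for new in range(2, n):
        t = rng.choice(targets)   # endpoint picked proportionally to degree
        degree[new] = 1
        degree[t] += 1
        targets += [new, t]
    return degree

def erdos_renyi_degrees(n, p, seed=0):
    """Degrees of an Erdos-Renyi random graph: each edge present w.p. p."""
    rng = random.Random(seed)
    degree = {i: 0 for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                degree[i] += 1
                degree[j] += 1
    return degree

pa = preferential_attachment(1000)
er = erdos_renyi_degrees(1000, 1.0 / 1000)
# Hubs (very high-degree nodes) emerge only under preferential attachment.
print(max(pa.values()), max(er.values()))
```

The skewed, heavy-tailed degree distribution of the first process is exactly the kind of structure that classic random-graph models fail to reproduce.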
Chemical engineers have argued that generic network topologies, including polar cases like 'stars,' 'circles,' and 'hubs,' can be understood as the outcome of adaptive evolutionary processes that optimize the trade-off between efficiency and robustness in uncertain environments. The efficiency and robustness results were extended to show how power-law, exponential, and Poisson degree distributions can be understood as manifestations of the trade-off between efficiency and robustness within a unified theoretical framework based on the maximum entropy principle. Although degree distributions, small world effects, clustering measures, or other concepts from the statistical physics community have been ignored to a large extent by economists, I take the position that economically or behaviorally plausible mechanisms in the generation and evolution of empirically relevant network structures should play an important role. Financial time series are characterized by a number of statistical regularities in both their unconditional and conditional properties that are often termed 'stylized facts.' One property is the scaling law observed for large returns r:

P(|r| > x) ∼ x^(−α),

where α denotes the tail index. The ubiquitous nature of this power-law decay has been documented in numerous financial data, covering different markets (stock markets, foreign exchange markets, commodity markets), different frequencies (from minute-to-minute to monthly data), and different countries. Moreover, empirical estimates lead to a universal value of α ≈ 3, with a small interval of statistical variability between 2.5 and 4. Taking into account the long tradition in the analysis of complex systems in statistical physics, scaling laws suggest looking at financial data as the result of social processes of a large ensemble of interacting sub-units. I start from the observation that multi-fractal models (i.e.
models with a hierarchy of volatility components with different stochastic life times) describe the stylized facts of financial returns very well, and combine this with our aforementioned insight that the very large number of traders (called clients subsequently) in financial markets should be grouped into K structures, where K is moderate relative to the total number of clients. The multi-fractal model corresponds to a tree graph with a moderate K compared to the total number of agents. In social nets the average geodesic path length seems to be universal and relatively small (the so-called "small world effect"). In our nets this distance is related to K, which determines the tail index. Thus I expect to explain the universality of the latter by that of K. The prediction of a hyperbolic decay of correlation in the multi-fractal model is only valid in a time window. Since this window changes with the time resolution, I hope to explain the transition from a power law for high-frequency data to the complicated behavior of daily returns. The improvement of time behavior by including features of the multi-fractal model should lead to substantially improved predictive power. It might help to understand whether the intermittent character of financial returns as a signature of the 'critical' nature of market fluctuations serves to balance a lack of efficiency with the robustness to absorb large exogenous shocks (like 9/11) along the lines of the above literature. From an interdisciplinary point of view, such insights should be applicable as well to other phenomena that are characterized by qualitative differences among nodes in a network, like supply chains in industrial organization, or food webs in biological systems. References: (2) V. Bala and S. Goyal. A non-cooperative model of network formation. Econometrica, 68:1181–1230, 2000. (3) E. Egenter, T. Lux, and D. Stauffer.
Finite-size effects in Monte Carlo simulations of two stock market models. Physica A, 268:250–256, 1999. (4) M.I. Jordan. Learning in Graphical Models. MIT Press, Cambridge, 1998. (5) T. Lux. Turbulence in financial markets: The surprising explanatory power of simple cascade models. Quantitative Finance, 1:632–640, 2005. The application of graph theoretical analysis to complex networks in the brain Traditionally, neuroscientists correlate ‘focal’ brain lesions, for instance brain tumors, with ‘focal’ clinical deficits. This approach gave important insights into the localization of brain functions; a classical example is the identification of the motor speech center in the lower left frontal cortex by the French neurologist Paul Broca at the end of the 19th century. Particularly during the last decades of the 20th century, this essentially reductionistic program led to significant progress in neuroscience in terms of molecular and genetic mechanisms. Despite the impressive increase of knowledge in neuroscience, however, progress in true understanding of higher-level brain processes has been disappointing. Evidence has accumulated that functional networks throughout the brain are necessary, particularly for higher cognitive functions such as memory, planning, and abstract reasoning. It is more and more acknowledged that the brain should be conceived as a complex network of dynamical systems, consisting of numerous functional interactions between closely related as well as more remote brain areas (Varela et al., 2001). Evaluation of the strength and temporal and spatial patterns of interactions in the brain and the characteristics of the underlying functional and anatomical networks may contribute substantially to the understanding of brain function and dysfunction. A major advantage of this approach is that a lot can be learned from other fields of science, particularly the social sciences, that are also devoted to the study of complex systems.
In the last decade of the 20th century, considerable progress has been made in the study of complex systems consisting of large numbers of weakly interacting elements. The modern theory of networks, which is derived from graph theory, has proven to be particularly valuable for this purpose (Amaral and Ottino, 2004; Boccaletti et al., 2006). Historical background The modern theory of networks has its roots both in mathematics and sociology. In 1736 the mathematician Leonhard Euler solved the problem of ‘the bridges of Königsberg’. This problem involved the question whether it was possible to make a walk crossing exactly one time each of the seven bridges connecting the two islands in the river Pregel and its shores. Euler proved that this is not possible by representing the problem as an abstract network: a ‘graph’. This is often considered the first proof in graph theory. Since then, graph theory has become an important field within mathematics, and the only available tool to handle network properties theoretically. An important step forward occurred when ‘random’ graphs were discovered (Solomonov and Rapoport, 1951; Erdős and Rényi, 1960). In random graphs, connections between the network nodes are present with a fixed and equal likelihood. Many important theorems have been proven for random graphs. In particular it has been shown that properties of the graphs often undergo a sudden transition (‘phase transition’) as a function of the increasing likelihood that edges are present. However, despite the success of random graph theory, most real-world networks, particularly those found in social or biological systems, have properties that cannot be explained very well by classic random graphs. These properties include high clustering and power-law degree distributions. One empirically observed phenomenon in many real networks is the fact that the ‘distances’ in sparsely and mainly locally connected networks are often much smaller than expected theoretically.
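The ‘phase transition’ of random graphs mentioned above is easy to observe numerically. The sketch below (plain Python, no graph library; the graph size, seed, and mean-degree values are illustrative choices, not from the text) samples Erdős–Rényi G(n, p) graphs on either side of the critical mean degree np = 1 and measures the largest connected component:

```python
import random

def erdos_renyi(n, p, rng):
    """Sample an undirected G(n, p) random graph as an adjacency list."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def largest_component(adj):
    """Size of the largest connected component (iterative DFS)."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

rng = random.Random(42)
n = 1000
for mean_degree in (0.5, 2.0):   # below and above the np = 1 threshold
    g = erdos_renyi(n, mean_degree / n, rng)
    print(mean_degree, largest_component(g))
```

Below the threshold the largest component stays tiny relative to n; above it, a ‘giant’ component containing a large fraction of all vertices appears — the sudden transition the text refers to.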
This phenomenon was probably first observed by the Hungarian writer Frigyes Karinthy in a short story called ‘Chains’. In this story he speculates that in the modern world the distance between any two persons is unlikely to be more than five persons, a phenomenon later studied and described in more detail by Stanley Milgram, and referred to as the ‘small world phenomenon’, or ‘six degrees of separation’ (Milgram, 1967). Recent advances The publication of a landmark paper in 1998 by Watts and Strogatz (Watts and Strogatz, 1998) provided a simple and elegant way of modeling small world networks. These authors proposed a very simple model of a one-dimensional network. Initially each node (‘vertex’) in the network is only connected to its ‘k’ nearest neighbors (k is called the degree of the network), representing a so-called ‘regular’ network. Next, with likelihood ‘p’, connections (‘edges’) are chosen at random and connected to another vertex, also chosen randomly. With increasing p, more and more edges become randomly re-connected and finally, for p = 1, the network is completely random (Fig. 1). Thus, this simple model allows the investigation of the whole range from regular to random networks, including an intermediate range. The intermediate range proved to be crucial to the solution of the small world phenomenon. In order to show this, the authors introduced two measures: the clustering coefficient ‘C’, which is the likelihood that neighbors of a vertex will also be connected, and the path length ‘L’, which is the average of the shortest distance between pairs of vertices counted in number of edges. Watts and Strogatz showed that regular networks have a high C but also a very high L. In contrast, random networks have a low C and a low L. So, neither regular nor random networks explain the small world phenomenon. However, when p is only slightly higher than 0 (with very few edges randomly rewired) the path length L drops sharply, while C hardly changes (Fig. 2).
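The Watts–Strogatz construction just described can be reproduced in a few dozen lines. The following sketch (plain Python; the parameter values n = 200, k = 8, p = 0.05 are illustrative, not from the paper) builds the ring lattice, rewires each edge with probability p, and computes the clustering coefficient C and path length L:

```python
import random
from itertools import combinations

def ring_lattice(n, k):
    """Regular ring: each vertex connected to its k nearest neighbors (k even)."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for d in range(1, k // 2 + 1):
            adj[v].add((v + d) % n)
            adj[(v + d) % n].add(v)
    return adj

def rewire(adj, p, rng):
    """Watts-Strogatz-style rewiring: move one endpoint with probability p."""
    n = len(adj)
    for u, v in [(u, v) for u in adj for v in adj[u] if u < v]:
        if rng.random() < p:
            choices = [w for w in range(n) if w != u and w not in adj[u]]
            if choices:
                w = rng.choice(choices)
                adj[u].discard(v); adj[v].discard(u)
                adj[u].add(w); adj[w].add(u)
    return adj

def clustering(adj):
    """Average clustering coefficient C."""
    total, counted = 0.0, 0
    for v, nbrs in adj.items():
        if len(nbrs) < 2:
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += links / (len(nbrs) * (len(nbrs) - 1) / 2)
        counted += 1
    return total / counted

def avg_path_length(adj):
    """Average shortest-path length L via BFS from every vertex."""
    total, pairs = 0, 0
    for s in adj:
        dist, frontier = {s: 0}, [s]
        while frontier:
            nxt = []
            for v in frontier:
                for w in adj[v]:
                    if w not in dist:
                        dist[w] = dist[v] + 1
                        nxt.append(w)
            frontier = nxt
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(0)
regular = ring_lattice(200, 8)
print("regular:", clustering(regular), avg_path_length(regular))
small_world = rewire(ring_lattice(200, 8), 0.05, rng)
print("p=0.05:", clustering(small_world), avg_path_length(small_world))
```

With only a few percent of edges rewired, L drops sharply while C stays close to the regular-lattice value — exactly the intermediate ‘small world’ regime described above.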
Thus networks with a small fraction of randomly rewired connections combine both high clustering and a small path length, and this is exactly the small world phenomenon to be explained. The authors demonstrated the existence of such small world networks in the nervous system of Caenorhabditis elegans, a social network of actors, and the network of power plants in the United States. Furthermore, they showed that a small world architecture might facilitate the spread of infection or information in networks (Watts and Strogatz, 1998). A second major discovery was presented in 1999 by Barabási and Albert (Barabási and Albert, 1999). They proposed a model for the growth of a network where the likelihood that newly added edges connect to a vertex depends upon the degree of this vertex. Thus, vertices that have a high degree (large number of edges) are more likely to get even more edges. This is the network equivalent of ‘the rich getting richer’. Networks generated in this way are characterized by a degree distribution which can be described by a power law: P(k) ∼ k^(−γ). Networks with a power law degree distribution are called ‘scale free’. It has been shown that many social and technological networks, such as for instance collaborative networks of scientists, the World Wide Web, and networks of airports, are likely to be scale free (Newman, 2003). Basics of modern network theory The discovery of small world networks and scale free networks set off a large body of theoretical and experimental research, which has led to increasing knowledge on various aspects of network properties in the last decade. Before we move on to the application of network theories to experimental neural networks, and healthy and diseased brain, we will provide some basic knowledge on several aspects of network properties. As mentioned before, more detailed mathematical descriptions can be found in Albert and Barabási (Albert and Barabási, 2002) and Stam and Reijneveld (Stam and Reijneveld, 2007).
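The ‘rich get richer’ growth rule of Barabási and Albert is equally simple to simulate. In the sketch below (plain Python; the network size, m, and seed are arbitrary illustrative choices), each new vertex attaches to m existing vertices with probability proportional to their degree, implemented by sampling from a list holding one entry per edge endpoint:

```python
import random
from collections import Counter

def barabasi_albert(n, m, rng):
    """Grow a preferential-attachment graph: n vertices, m edges per new vertex."""
    # Start from a small complete core of m + 1 vertices.
    targets = []          # one entry per incident edge endpoint => degree-weighted
    adj = {v: set() for v in range(m + 1)}
    for u in range(m + 1):
        for v in range(u + 1, m + 1):
            adj[u].add(v); adj[v].add(u)
            targets += [u, v]
    for new in range(m + 1, n):
        adj[new] = set()
        chosen = set()
        while len(chosen) < m:            # sample m distinct, degree-proportional
            chosen.add(rng.choice(targets))
        for t in chosen:
            adj[new].add(t); adj[t].add(new)
            targets += [new, t]
    return adj

rng = random.Random(7)
g = barabasi_albert(5000, 3, rng)
degree_counts = Counter(len(nbrs) for nbrs in g.values())
print("largest degree:", max(len(nbrs) for nbrs in g.values()))
```

In a graph grown this way, a few heavily connected hubs emerge whose degree far exceeds the average of roughly 2m, the signature of the heavy-tailed, scale-free degree distribution P(k) ∼ k^(−γ); an Erdős–Rényi graph of the same density has no such hubs.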
Core measures The degree distribution, clustering coefficient, and path length are the core measures of graphs. The degree distribution can be described as the likelihood p(k) that a randomly chosen vertex will have degree k. The clustering coefficient C is an index of local structure. It is the likelihood that neighbors of a vertex will also be connected to each other, and has been interpreted as a measure of resilience to random error (if a vertex is lost, its neighbors remain connected). The path length L is a global characteristic; it indicates how well integrated a graph is, and how easy it is to transport information or other entities in the network (Fig. 3). On the basis of the abovementioned three measures, four different types of graphs can be distinguished: (i) regular or ordered; (ii) small world; (iii) random (see Fig. 1); (iv) scale free. We should stress that neither regular, small world, nor random networks are scale free. Scale free networks can have very small path lengths of the order of lnln(N), but the clustering coefficient may also be smaller than that of small world networks (Cohen and Havlin, 2003). 1. Achard S, Bullmore E. Efficiency and cost of economical brain functional networks. PLoS Comput Biol 2007;3:e17. 2. Achard S, Salvador R, Whitcher B, Suckling J, Bullmore E. A resilient, low-frequency, small-world human brain functional network with highly connected association cortical hubs. J Neurosci 2006;26:63–72. 3. Aertsen AM, Gerstein GL, Habib MK, Palm G. Dynamics of neuronal firing correlation: modulation of “effective connectivity”. J Neurophysiol 1989;61:900–17. 4. Albert R, Barabási AL. Statistical mechanics of complex networks. Rev Mod Phys 2002;74:47–97. 5. Amaral LAN, Ottino JM. Complex networks: augmenting the framework for the study of complex systems. Eur Phys J B 2004;38:147–62. 6. Artzy-Randrup Y, Fleishman SJ, Ben Tal N, Stone L.
Comment on “Network motifs: simple building blocks of complex networks” and “Superfamilies of evolved and designed networks”. Science 2004;305:1107. 7. Astolfi L, De Vico Fallani F, Cincotti F, Mattia D, Marciani MG, Bufalari S, et al. Imaging functional brain connectivity patterns from high-resolution EEG and fMRI via graph theory. Psychophysiology 2007;44:10.1111. Increasing the Complications A lot has been studied in complex networks for the past few months: non-trivial topological features in real-world networks such as the citation network, the movie actor network, the web graph, and many more. All these graphs share some common features: they are simple, undirected, and un-weighted. Network topological features have been studied on lots of other types of graphs too. (I am not saying that the course was less, so please don't increase the syllabus.) Some other interesting graphs which show unique behavior or characteristic properties worth studying are weighted graphs, directed graphs, Eulerian graphs, and Hamiltonian graphs. Weighted graphs are not that complicated if seen superficially, but they can be a useful representation of complicated real-world networks which are not covered by simple graphs, or are only partially covered in terms of information, so that some vital results seen in the real systems cannot be studied. Let's take an example. The movie actor graph can be made weighted, with the weights showing the number of times an actor was on screen or the time for which he/she was on screen. This graph will give a better representation of the actual weightage of an actor in the movie, rather than giving equal weight to all the actors (lead/subordinate/extras). Another example can be the Airport Network (Fig 1). This graph in a simple way can represent all the connected airports.
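To make the airport example concrete, here is a toy weighted version of such a graph (the airport codes and seat counts are invented for illustration), together with the per-airport sum of edge weights that measures the total traffic each airport handles:

```python
# Toy weighted airport network: edge weights are seats per year
# (illustrative numbers and airport codes, not real data).
flights = {
    ("JFK", "LHR"): 2_100_000,
    ("JFK", "CDG"): 1_400_000,
    ("LHR", "CDG"):   900_000,
    ("CDG", "FRA"):   600_000,
}

def vertex_strength(edges):
    """Sum of the weights of all edges incident to each vertex:
    s_i = sum_j a_ij * w_ij (undirected, so each edge counts at both ends)."""
    strength = {}
    for (u, v), w in edges.items():
        strength[u] = strength.get(u, 0) + w
        strength[v] = strength.get(v, 0) + w
    return strength

print(vertex_strength(flights))
# JFK handles 3.5M seats, LHR 3.0M, CDG 2.9M, FRA 0.6M in this toy example
```

The per-vertex totals computed here are exactly the kind of traffic measure an un-weighted connectivity graph cannot provide.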
However, this graph can be extended to a weighted graph in order to represent the total number of seats available between two airports across all flights. The extended information can be used to study traffic between two places, rather than just the connectivity of places as in the un-weighted graph. Fig 1. The Airport Network (edge weights denote the number of seats (million/year)). However, this new graph will also need new definitions for the old terms, and some new terms may even be coined. To study this new attribute, the weight of an edge, alongside the older studies on undirected graphs, a new definition coined in the paper by A. Barrat is the vertex strength. The vertex strength (s_i) aptly covers the degree distribution along with the edge weights. It is defined as the summation of the weights of all edges of a vertex: s_i = Σ_j a_ij w_ij, where a_ij denotes the presence of an edge and w_ij the weight on that edge. In the case of the Airport Network the vertex strength simply accounts for the total traffic handled by each airport. This is certainly a more useful measure than just having the connectivity of airports. The strength also gives the degree centrality of the vertex, since in a weighted graph the degree centrality is no longer just the degree: it also has to account for the weights on the edges. Another type of graph is the directed graph. Here one can think of unbalanced relations in the undirected graphs that have been studied; in those cases a directed edge will provide a better and more accurate view. One example that fits here is the citation graph (Fig 2): the graph of research papers with each out-edge denoting a citation by the source node. Here again it would be interesting to observe the clusters where a group of papers is cited by most other papers. A more important set of papers can easily be identified by the number of papers that cite a certain paper. Fig 2.
Citation graph (Research papers from the field of biology) One simplification of a directed graph, for certain calculations, is breaking each vertex into a pair of vertices, one denoted as the in-vertex and the other as the out-vertex. The in-vertex has the edges that ended at the original vertex. Similarly, the out-vertex has the edges that originated from, i.e. went out of, the original vertex. All the edges in this graph are undirected. This is quite a useful representation for graph attributes; more can be done to find out whether it is useful for complex-network attributes or not. A lot more is there to explore in complex networks w.r.t. the types of graphs, properties, interpretations etc. I have tried to present a couple of them here. The more we discover, the more interesting it is going to get. Though this blog did not cover much, it was intended to give a brief overview (some of my views) of these topics. Thank you. • The architecture of complex weighted networks. A. Barrat, M. Barthélemy, R. Pastor-Satorras, A. Vespignani • Albert, R. & Barabási, A.-L. (2002) Rev. Mod. Phys. 74, 47-97. • Pastor-Satorras, R. & Vespignani, A. (2001) Phys. Rev. Lett. 86, 3200-3203. • Callaway, D. S., Newman, M. E. J., Strogatz, S. H. & Watts, D. J. (2000) Phys. Rev. Lett. 85, 5468-5471. Saturday, 6 April 2013 Connecting the Dots: Linked Data Not so long ago, Tim Berners-Lee came up with the notion of a World Wide Web, revolutionizing the way people exchange information and documents. We now live in an economy where data plays a central role in decision making. As the implications of utilizing large data sets in research and enterprises are being realized, there is also a certain degree of underlying frustration when it comes to acquiring quality data sets. The Internet has proved to be a phenomenal source of information. However, most of it is unstructured and scattered.
Linked Data provides a publishing paradigm in which not only documents, but also data, can be a first-class citizen of the Web, thereby enabling the extension of the Web with a global data space based on open standards - the Web of Data. Partial snapshot of the Linked Data graph. The difference between the current web and a web of data is best explained with an example. Nowadays a typical online store would consist of pages describing different products. Such a page contains all the information a human reader requires. But this information is represented in a way that makes automatic processing of it hard. In a web of data, the online store would actually publish data resources to the web, which represent the products. These resources can be retrieved in different representations, such that a browser would allow you to view a product page while a web crawler would get a machine-understandable representation of the product. Now comes the part which makes it even more exciting for research. Consider the case of a researcher developing a drug to treat Alzheimer's. Suppose she wants to find the proteins which are involved in signal transduction AND are related to pyramidal neurons. (The question probably makes sense to her.) Searching for the same on Google returns about 2,240,000 results, not one of which leads to an answer. Why? Because no one has ever had that idea before! There exists no single webpage on the web with the result. Querying the same on the Linked healthcare database pulls in data from two distinct data sets and produces 32 hits, EACH of which is a protein which has those properties. The vision of Linked Data is the liberation of raw knowledge and making it (literally) accessible to the world. Indeed, a Web of Ideas. Let us now take a deeper look into the principles involved and how the same can be applied in a Complex Network domain. Broadly, it entails four basic principles: 1. Use URIs as names for things.
These may include tangible things such as people, places and cars, or those that are more abstract, such as the relationship type of knowing somebody, the set of all green cars in the world, or the color green itself. 2. Use HTTP URIs, so that people can look up those names. 3. When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL). 4. Include links to other URIs, so that they can discover more things. For example, a hyperlink of the type friend of may be set between two people, or a hyperlink of the type based near may be set between a person and a place. The power here lies in the links. There are three important types of RDF links: • Relationship Links: point at related things in other data sources • Identity Links: point at URI aliases used by other data sources to identify the same real-world object or abstract concept • Vocabulary Links: point from data to the definitions of the vocabulary terms that are used to represent the data. Having a good idea of the basic concepts, we are now ready to see how we can harness Linked Data for research involving Complex Networks. As I see it, there are two interesting areas here. As described in this old article, Linked Data has been growing at an exponential rate since its modest beginnings back in 2007. Over the years, different data sets have started linking to the global database and a clear emergence of genres can be seen. The Linked Data graph may offer a good opportunity to study the temporal behavior of the largest structured knowledge database ever witnessed by the world. Then again, as mentioned before, the data itself has huge potential in terms of network research. A good question therefore is how to get our hands on this data. One approach is to use existing systems to get data. Linked data is accessible through browsers like the Disco Hyperdata browser, the Tabulator browser or the more recent LinkSailor. We can also make use of search engines such as Falcons and SWSE.
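The protein query described earlier gives a feel for what conjunctive queries over merged data sets do. The toy sketch below (plain Python; all names are invented, and this is a stand-in for a real RDF store queried with SPARQL, not an actual one) represents statements as (subject, predicate, object) triples, as RDF does, and intersects two property filters:

```python
# Toy triple store: (subject, predicate, object) statements, as if merged
# from two hypothetical Linked Data sets. All identifiers are invented.
triples = [
    ("proteinA", "involvedIn", "signal transduction"),
    ("proteinA", "relatedTo",  "pyramidal neurons"),
    ("proteinB", "involvedIn", "signal transduction"),
    ("proteinC", "relatedTo",  "pyramidal neurons"),
]

def subjects_with(triples, predicate, obj):
    """All subjects having the given (predicate, object) statement."""
    return {s for s, p, o in triples if p == predicate and o == obj}

# Conjunctive query: involved in signal transduction AND related to pyramidal neurons.
hits = (subjects_with(triples, "involvedIn", "signal transduction")
        & subjects_with(triples, "relatedTo", "pyramidal neurons"))
print(sorted(hits))   # ['proteinA']
```

A real Linked Data application would express the same intersection as a SPARQL query against HTTP-dereferenceable URIs rather than local tuples, but the join-across-datasets idea is the same.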
For example, research on Citation Analysis may be supplemented with data available on authors (example) or subject areas (example). Linguistic research stands to gain from the relationships relating words and objects with actions, events, facts, etc. (example, example). As with all generic applications, however, the problem is that deployed applications offer little control over how the data is accessed. For specific research, therefore, it is best to develop applications and crawlers from scratch. This website would be a good place to start. It explains in detail RDF and the architectures involved in linked data applications. To conclude, Linked Data provides a more generic, more flexible publishing paradigm which makes it easier for data consumers to discover and integrate data from a large number of data sources. Though still in its infancy, it has come a long way since its inception. Coupled with the onset of an Internet of Things, Linked Datasets will encompass the physical world, identifying social relationships, behavioral patterns and events. What we do with this information is entirely up to us. Regarding indexing of text and metadata information, the Solr system:
Cutting-Edge Insulation Technology High Tech Insulators uses a state-of-the-art process to insulate crucial engine components such as exhaust tubes, exhaust manifolds, and turbo chargers. Reduce Surface Temp The HTI insulation technology can reduce engine surface temperature up to 440%, resulting in cooler engine rooms and safer working conditions. Extend Engine Life Proper engine insulation creates a consistent operating temperature for all the engine’s components, extending the life of your machinery and reducing the amount of time and money spent on repairs. Boost Power Maintaining proper temperature during intake and compression allows engines to reach optimal horsepower levels. Tried and True High Tech Insulators has been a leading provider of baked-on engine insulation since 1985. They specialize in using state-of-the-art techniques to insulate vital engine components such as turbo chargers, exhaust tubes, and exhaust manifolds. HTI has provided thousands of parts to clients in extremely demanding industries such as marine, mining, and defense. By using the highest quality materials coupled with rigorous industry standards, their products excel in the harshest of conditions. No matter what the situation, parts insulated by HTI go above and beyond their clients’ demands. For more information on how HTI can solve your engine insulation needs, Contact Us.
The last country Romania is politically detoxifying. The diet is a «play by ear», without a set of rules, administered by disunited «technicians» and «doctors» without a clear concept for the cure. Having already been dissatisfied with previous treatments, the patient lacks confidence in the possible success rate of what familiar specialists offer and appears ready to subscribe to more extreme methods. Romania may just be the last country in the European series to show the signs of fostering a mainstream nation-centric, anti-Western political movement. Having so far averted the institutionalisation of extreme nationalism by including populist, ethnocentric elements in their own rhetoric, mainstream parties no longer appear able to contain this phenomenon. A disenchanted, unsatisfied society is looking for answers outside the seemingly panting political establishment. So where did it all go wrong for the country that consistently ranks among the states with the highest trust in the European institutions, according to the Standard Eurobarometer? The Agent What can be so worrisome about the emergence of party outsiders in central decision making? Nothing. Unless they are too many. Parties remain the uncontested core of democratic rule. At least some outstanding outsiders should be included in already established parties. Without understanding the need to woo them once their proliferation is noticeable at the central or local level, parties increase the gap between themselves and the society they should represent. With few notable exceptions, such as the support provided by the Liberal Party to outsider and current president Klaus Iohannis, Romanian parties have had a faulty human resources policy.
The all-technocrat government ruling the country as of November 2015 is another first in the country’s history. The Tools When you don’t have something to believe in, you can be easily manipulated to believe anything. Ideology has mostly been a ceremonial asset for mainstream parties, and all political turnovers have been the result of “anti” campaigns on broad temporary agendas. Three such phases of polarization can so far be identified: communism vs. anti-communism (1990-2000), corruption vs. anti-corruption (2000-2008), pro-President Basescu vs. anti-President Basescu (2008-2014). Once this final, highly personalised period in Romanian politics came to an end, confusion set in. Romania now has two mainstream parties which call themselves socialist and centre-right liberals. The scene is set for an ideological division. If the ‘detox period’ is successful, it should end with the 2016 local and parliamentary elections. To this end, the lists of candidates should be balanced between the priorities of their activist base and different social categories. If unsuccessful, the tension between parties and society continues and provides fertile ground for extremist parties. The message Why are parties to blame for the possible questioning of the pro-European, pro-American sentiment? Besides the general flow of ideas in Europe that echoes even in the more pro-European Eastern members, parties have diluted or altogether ignored the meaning of the European project. After the fall of communism, the one thing that united all mainstream Romanian parties has been the rhetoric on the country’s European path of integration. There was little debate needed, as society was in favour of breaking with the Russian-influenced past and heading West. In time, no additional effort was made to explain that EU integration also imposed certain conditions; instead, integration was used as a scapegoat for difficult decisions.
During my brief political campaigning experience it was common to hear that “campaigns are not won with issues of foreign affairs. What the people care about are internal matters.” Not taking the trouble to explain the interdependence of all internal matters with everything external - be it changes in the security order or matters of European integration - led to the false belief that the two could simply work in parallel. This is not a uniquely Romanian sin, but it rings differently for a nation that continues to be taught in schools about a glorious history in which it withstood the conquering efforts of empires. Parties never took it upon themselves to differentiate between the conditionality of European integration and US partnership - which we wanted - and the rules of conquering empires - which we did not want. Reacting to the lack of inclusiveness of parties, civil society is strengthening. And after spending the last 25 years deploring its weakness, we may just start swallowing our words. Unorganised activism is an open gate for manipulation. Add the weakness and inadequacy of mainstream parties and one has the recipe for political chaos. Parties are strong as long as they understand how to be inclusive institutions. The real competitors, those outside the Euro-Atlantic family, seek not to conquer but to weaken. From this perspective, defence is no longer a matter of taking up arms, but of ensuring governance.
Survival Guide: Recognize Which Clouds Mean Danger By Meghan Evans, AccuWeather.com Meteorologist May 18, 2015, 5:34:15 AM EDT During severe weather outbreaks, conditions can change rapidly and the weather can turn volatile quickly. It is crucial to follow severe weather and tornado-related watches and warnings during episodes of severe storms. Keeping a weather radio nearby, with extra batteries handy, is a must. If you are out on the open road, staying tuned to severe weather alerts and being able to read the clouds for severe weather can help save your life. The following is a breakdown of ominous-looking clouds and whether there is imminent danger associated with them. Cumulonimbus Clouds Rapid vertical growth in these cauliflower-looking cumulonimbus clouds shows that there is a mature thunderstorm, likely producing heavy rain. Abundant moisture and instability due to cool air aloft and heating at the surface set the stage for cumulonimbus to develop. A lifting mechanism, such as a cold front, can help trigger these clouds to form. Heavy rain, frequent lightning, strong winds and hail can be threats associated with cumulonimbus clouds. Scud Clouds Scud clouds may appear to be ominous as they hang vertically below a cumulonimbus cloud. Sometimes, scud clouds are mistaken for funnel clouds. However, these clouds are benign and non-rotating. They often have a ragged appearance that sets them apart from the often smooth funnel clouds. Shelf Clouds Shelf clouds often form at the leading edge of a gust front or outflow boundary from a thunderstorm, or strong winds flowing down and outward from a storm. The outer part of a shelf cloud is often smoother, with a notable rising motion exhibited by a tiered look (hence the name shelf cloud).
Underneath, a turbulent, unsettled appearance is often the case. A shelf cloud should be seen as a harbinger of strong winds, so take caution. Wall Clouds A wall cloud is a cloud that is lowered from a thunderstorm, forming when rapidly rising air causes lower pressure below the storm's main updraft. "Wall clouds can range from a fraction of a mile up to nearly 5 miles in diameter," according to the National Weather Service. Wall clouds that rotate are a warning sign of very violent thunderstorms. They can be an indication that a tornado will touch down within minutes or even within an hour. Funnel Clouds A funnel cloud is a rotating column of air (visible due to condensation) that does not reach the ground. If a funnel cloud reaches all the way to the ground, it is then classified as a tornado. When out on the road, funnel clouds should be treated as tornadoes, since they could touch down. A tornado is a rotating column of air, reaching all the way to the ground. Strong tornadoes are one of the most destructive forces of nature on a small scale, the strongest of which can level entire towns. A roaring noise, often compared to that of a train, can be heard in many cases when a tornado touches down. Vehicles are NOT a safe place to be if there is a tornado nearby. Thunderstorm Anvil Clouds Anvil clouds are the flat top of a thunderstorm, or cumulonimbus cloud. They can spread up to "hundreds of miles downwind from the thunderstorm itself," according to the National Weather Service. Lightning can strike from anvil clouds, even far away from a thunderstorm. Lightning described as striking "from out of the blue" is usually from an anvil cloud that has drifted away from a thunderstorm. Mammatus Clouds Striking mammatus clouds can sometimes be seen below thunderstorm anvil clouds. The rounded and smooth look of mammatus clouds captivates onlookers.
They are often found underneath anvil clouds of severe thunderstorms; however, they can form underneath clouds associated with non-severe thunderstorms as well. Asperatus Clouds An abundance of heat in the atmosphere is needed to produce enough energy for the dramatic, rolling formations of asperatus clouds. Another factor is the interaction of very moist air (often on the fringes of thunderstorm complexes) with very dry air. The darkness of the clouds is likely due to the large amount of water vapor. Asperatus clouds are not necessarily accompanied by stormy weather. In fact, they have often been observed without the development of thunderstorms.
Active Release Technique

What is Active Release Techniques (ART) to individuals, athletes, and patients?

How do overuse conditions occur?

Over-used muscles (and other soft tissues) change in important ways:
• accumulation of small tears (micro-trauma)
• not getting enough oxygen (hypoxia)

What is an ART treatment like?

These treatment protocols - over 500 specific moves - are unique to ART. They allow providers to identify and correct the specific problems affecting each individual patient. ART is not a cookie-cutter approach.

What is the history of Active Release Techniques?

ART has been developed, refined, and patented by P. Michael Leahy, DC, CCSP. Dr. Leahy noticed that his patients' symptoms seemed to be related to changes in their soft tissue that could be felt by hand. By observing how muscles, fascia, tendons, ligaments, and nerves responded to different types of work, Dr. Leahy was able to consistently resolve over 90% of his patients' problems. He now teaches and certifies health care providers all over the world to use ART.
Eying the ultimate technology and systems

For 50 years, the industry has been scaling chip technology following Mooreʼs Law, with scaling targets that addressed cost, area, power, and performance, more or less at the same pace. The systems built from those chips followed suit, upgrading features and performance as new chip generations became available. But in recent years, the explosive growth of data traffic has led to a demand for processing power beyond what is possible with traditional transistor scaling. Additionally, the Internet of Things is growing into a veritable system of systems, demanding highly specialized functionality for, e.g., low-power sensing, security, or high-performance computing. In the following vision, imecʼs An Steegen, Diederik Verkest and Ingrid Verbauwhede explain how imec and its partners are developing the chips and systems for this new reality. An Steegen talks about the pipeline of materials, device architectures and advanced techniques for a number of new technology generations. And Diederik Verkest and Ingrid Verbauwhede look at higher system functions and examine how we may create an optimized technology to implement these as efficiently as possible.

Technologies to extend semiconductor scaling

The explosive growth of data traffic fuels the demand for ever more processing power and storage capacity. Mooreʼs Law continues to be necessary, but innovations are needed beyond this law to help manage device power, performance, area and cost. An Steegen reveals some of the secrets of semiconductor scaling - a pipeline full of materials, device architectures and advanced techniques that promise to further extend semiconductor scaling.

The end of happy scaling?

The data traffic explosion, fueled by the Internet of Things, social media and server applications, has created a continuous need for advanced semiconductor technologies. Servers, mobile devices, IoT devices... they drive the requirements for processing and storage.
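Mooreʼs Law as commonly stated - transistor count doubling roughly every two years - compounds quickly, which is what makes these scaling discussions so consequential. A quick back-of-envelope sketch; the starting count is an illustrative assumption, not an imec figure:

```python
# Moore's Law compounding: transistor count doubling roughly every two
# years. The baseline of 1 billion transistors is an illustrative
# assumption, not a figure from the article.

def transistor_count(start_count: float, years: float,
                     doubling_period: float = 2.0) -> float:
    """Project transistor count after `years` of doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Ten years of doubling every two years is five doublings: a 32x increase.
projected = transistor_count(1e9, years=10)
print(f"{projected / 1e9:.0f} billion transistors")  # prints "32 billion transistors"
```

The exponential form makes clear why even a few delayed nodes matter: each missed doubling period halves the expected gain.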
An Steegen: “At the same time, this trend is also creating more diversification. IoT devices, for example, will need low-power signal acquisition and processing, and embedded non-volatile memory technologies. For mobile and server applications, on the contrary, further dimensional scaling, continuous transistor architecture innovations and memory hierarchy diversification are among the key priorities.” But will we be able to continue traditional semiconductor scaling, as initiated by Gordon Moore more than 50 years ago? An Steegen: “For a long time, we lived in the happy scaling era, where every technology node again shrinks dimensions and doubles the number of transistors per area, for the same cost. But for the last 10-12 years, we have not been following that happy scaling path. The number of transistors still doubles, but device scaling provides us with diminishing returns. Weʼve seen these dark periods of ‘dark siliconʼ before, but, fortunately, weʼve always managed to get out of them. Again, the technology box will provide new features to help manage power, performance and area node by node as we move to the next generation.”

The technology box for dimensional scaling

On the dimensional scaling side, extreme ultraviolet lithography (EUVL) is considered an important enabler for continuing Mooreʼs Law. An Steegen: “Ideally, we would need it at the 10nm node, where we will start replacing single exposures with multiple exposures. More realistically, it will hopefully be ready to lower the costs of the 7nm technology. At imec, we already showed that EUVL is capable of printing 7nm logic dimensions with one single exposure.” Still, issues need to be resolved, related, for example, to line-edge roughness. An Steegen: “At the same time, to enhance dimensional scaling, we increasingly make use of scaling boosters, such as self-aligned gate contacts or buried power rails.
These tricks allow a standard cell height to be reduced from 9 to 6 tracks, leading to a bit density increase and a large die-cost reduction - a nice example of design-technology co-optimization.”

Improving power/performance in the front-end-of-line

FinFET technology has been the killer device for the 14 and 10nm technology nodes. But for 7-5nm, An Steegen foresees challenges: “At these nodes, FinFET technology canʼt meet the 20% performance scaling and 40% power gain anymore. Going beyond 7nm will require horizontal gate-all-around nanowires, which promise better electrostatic control. In such a configuration, the drive current per footprint can be maximized by vertically stacking multiple horizontal nanowires. In 2016, at IEDM, we demonstrated for the first time the CMOS integration of vertically stacked gate-all-around Si nanowire MOSFETs. Vertical nanowires, although requiring a more disruptive process flow, could be a next step. Or junction-less gate-all-around nanowire FET devices, which, as shown at the 2016 VLSI conference, appear to be an attractive option for advanced logic, low-power circuits and analog/RF applications.” Further down the road, from the 2.5nm node onwards, fin/nanowire devices are expected to run out of steam. An Steegen: “Sooner or later, we will need to find the next switch. Promising approaches are tunnel-FETs, which can provide a 3x drive-current improvement, and spin-wave majority gates.” Spin-wave majority gates with micro-sized dimensions have already been reported. But to be CMOS-competitive, they must be scaled to handle waves with nanometer-sized wavelengths.
An Steegen: “In 2016, imec proposed a method to scale these spin-wave devices into nanometer dimensions, opening routes towards building spin-wave majority gates that promise to outperform CMOS-based logic technology in terms of power and area reduction.”

Extending or replacing Cu in the back-end-of-line

Looking ahead, it might well be the interconnect that threatens further device scaling. Therefore, the back-end-of-line (BEOL), and the struggle to keep scaling it, needs attention as well. “We look at ways to extend the life of Cu, for example with liners of ruthenium (Ru) or cobalt (Co). In the longer term, we will probably need alternative metals, such as Co for local interconnects or vias,” says An Steegen.

The future memory hierarchy

Besides a central processing unit, memory to store all the data and instructions is another key element of the classical Von Neumann computer architecture. The ever-increasing performance of computation platforms and the consumerʼs hunger for storing and exchanging ever more data drive the need to keep scaling memory technologies. Beyond this scaling trend, the existing memories that make up todayʼs memory hierarchy are challenged by the need for new types of memory. An Steegen: “STT-MRAM, for example, is an emerging memory concept that has the potential to become the first embedded non-volatile memory technology on advanced logic nodes for advanced applications. It is also an attractive technology for future high-density standalone applications. It promises non-volatility, high-speed, low-voltage switching and nearly unlimited read/write endurance. But its scalability towards higher densities has always been challenging.
Recently, we have been able to demonstrate a high-performance perpendicular magnetic tunnel junction device as small as 8nm, combined with a manufacturable solution for a highly scalable STT-MRAM array.” The future memory landscape also requires a new type of memory able to fill the gap between DRAM and solid-state memories: storage class memory. This memory type should allow massive amounts of data to be accessed with very short latency. Here, imec is working on MRAM and resistive RAM (RRAM) approaches.

Beyond classical scaling – towards system-technology co-optimization

A challenge for traditional Von Neumann computing is to increase the data-transfer bandwidth between the processing chip and the memory. And this is where 3D approaches enter the scene. An Steegen: “With advanced CMOS scaling, new opportunities for 3D chip integration arise. For example, it becomes possible to realize different partitions of a system-on-chip (SoC) circuit and heterogeneously stack these partitions with high interconnect densities. For the smallest partitions, chips are no longer stacked as individual dies, but as full wafers bonded together.” Increased bandwidth is also enabled by optical I/O. In this context, imec continues its efforts to realize building blocks (e.g. optical modulators, Ge photodetectors) with a 50Gb/s channel data rate for its Si photonics platform. Mooreʼs Law will continue, but not only through the conventional routes of scaling. An Steegen: “We have moved from pure technology optimization (involving novel materials and device architectures) to design-technology co-optimization (e.g. the use of scaling boosters to reduce cell height). And we are already thinking ahead about a next phase, system-technology co-optimization. And to keep computing power improving, we are exploring ways beyond the classical Von Neumann model, such as neuromorphic computing, a brain-inspired computer concept, and quantum computing, which exploits the laws of quantum physics.
There are plenty of creative ideas that will allow the industry to further extend semiconductor scaling...”

An Steegen is imecʼs Executive Vice President Semiconductor Technology & Systems. In that role, she heads the research hubʼs efforts to define and enable next-generation ICT technology and to feed the industry roadmaps. Dr. Steegen is a recognized leader in semiconductor R&D and an acclaimed thought leader and speaker at the industryʼs prominent conferences and events. She joined imec in 2010 as senior VP responsible for imecʼs CORE CMOS programs in logic and memory devices, processing, lithography, design, and optical & 3D interconnects. Before that, she was a director at IBM Semiconductor R&D in Fishkill, New York, responsible for bulk CMOS technology development. While at IBM, Dr. Steegen was also host executive of IBMʼs logic International Semiconductor Development Alliance and responsible for establishing collaborative partnerships in innovation and manufacturing. Dr. An Steegen holds a Ph.D. in Material Science and Electrical Engineering, which she obtained in 2000 at the KU Leuven (Belgium) while doing research at imec. She has published more than 30 technical papers and holds numerous patents in the field of semiconductor development.

Optimizing technology for IoT systems – adding fingerprints and brains

The IoT is fast becoming a multilevel system of systems spanning the globe. But to realize the growth path that is forecast, weʼll need optimized and specialized hardware, capable of, amongst other things, sensing at ultra-low power, guaranteeing a systemʼs security during its full lifetime, and learning from huge amounts of data. Imecʼs Diederik Verkest and Ingrid Verbauwhede talk about the next step: how technology can be further optimized to solve specific system and application demands. As examples, Ingrid proposes hardware-entangled security and Diederik explains imecʼs efforts in neuromorphic processing.
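As a concrete example of the design-technology co-optimization An Steegen described, the 9-to-6-track standard cell reduces to simple area arithmetic: cell height is the track count times the metal pitch. A back-of-envelope sketch; the pitch and width values are illustrative assumptions, not imec figures:

```python
# Back-of-envelope standard-cell area from track count.
# Cell height = tracks x metal pitch. The 36nm pitch and 200nm cell
# width are illustrative assumptions, not figures from the article.

def cell_area(tracks: int, metal_pitch_nm: float, cell_width_nm: float) -> float:
    """Standard-cell area in nm^2 for a given track count."""
    return tracks * metal_pitch_nm * cell_width_nm

old = cell_area(9, metal_pitch_nm=36.0, cell_width_nm=200.0)
new = cell_area(6, metal_pitch_nm=36.0, cell_width_nm=200.0)
print(f"area reduction: {1 - new / old:.0%}")  # prints "area reduction: 33%"
```

At a fixed pitch and cell width, going from 9 to 6 tracks cuts cell area by a third, which is where the cited bit-density and die-cost gains come from.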
A heterogeneous chip future (Moore on steroids)

“Until recently,” says Diederik Verkest, “we concentrated almost all of our scaling effort on the smallest unit of a chip, the transistor, whatever that unit was used for afterwards. Next, to stay on the course predicted by Mooreʼs Law, we co-optimized technology with lower-level design units such as memory cells. Now weʼre working our way up in the system hierarchy, always looking at how we can optimize technology to better implement a function. So naturally, we also arrive at the key functions needed for the future IoT, such as failsafe security. And we are also eying specialized processors for e.g. neuromorphic computing - complete subsystems to tackle specific, hard problems.” Verkest adds that he is excited about imecʼs recent merger: “This is a great opportunity for both sides. My new colleagues are application experts in domains such as bio-informatics or security. They can help us screen technology and direct us to the solutions that are best fit to solve the hard problems in their domains. And conversely, as application experts, they will learn from all the hardware opportunities that we are considering. What I see happening going forward is a much more intimate, structured interaction between hardware and application R&D, greatly speeding up this system/technology co-optimization.”

Secure chips with unclonable fingerprints

Today already, electronics are embedded in many objects in our environment. Think of your carʼs keys, security cameras, smart watches, or even implanted pacemakers. “This makes security considerably more complex than it used to be,” says Ingrid Verbauwhede. “Existing cryptographic algorithms demand a lot of compute power, so they run mainly on high-end platforms. But most microchips in the IoT are small, lightweight, low-power and have limited functionality. So traditional cryptography doesnʼt fit well.
Our ambition is to make chips that are inherently more secure through the way they are designed and processed.” In 2016, Ingrid Verbauwhede and her team received a prestigious European ERC research grant for their Cathedral project. “This grant is at once a recognition of what we have been doing, and a great support going forward - a support that will allow us to independently look for the best solutions.” Ingridʼs team is exploring various ways of doing that: “In the past, R&D looked at dedicated design methods for, e.g., low-power chips. We now want to do the same to better secure chips - chips, for example, that donʼt leak information while they are computing, so they are more resistant to side-channel attacks. Another of our focus points is implementing future-proof cryptography: algorithms that will protect a system during its long lifetime, even if it is attacked by future quantum computers.” Asked about her plans for 2017, Ingrid Verbauwhede points to the direct access her team now has to technology processing in imecʼs fabs: “One of the characteristics of todayʼs chip scaling is process variation: each chip is slightly different from all the others. From a reliability perspective, that is a nuisance. It requires engineers to take extra measures so that computations remain predictable. But there is also an upside that we want to exploit: the variations are like a fingerprint, a way to uniquely identify each chip without expensive calculations. This is what we call a physically unclonable function (PUF). And if you tie that function to the software running on the processor, you have another layer of security which is well suited to IoT devices.”

Smart chips with brain power

Our brains are formidable computing wonders, using only a fraction of the power of traditional computers to obtain comparable results. Therefore, engineers are eager to mimic the brain on chip to speed up deep learning from massive amounts of data, or low-power image recognition.
“But to do so,” says Diederik Verkest, “we have to replicate the brainʼs architecture: a tight interconnection of an enormous number of relatively primitive processing nodes (the neurons) and their interconnections (the synapses). That is usually done with some type of crossbar architecture - wires laid out in a matrix (or cube), so that each input line connects with all outgoing lines. At a crossing of two lines, there is a switch that implements the synapse. The synapses contain the intelligence of the system: the ability to hold data, process, and learn from experience. So they should be made programmable and self-adaptable. Work on this emerging domain at imec started some two years ago, partly embedded in the EC Horizon2020 project NeuRAM3. In 2016, we selected an architecture and screened options to implement the self-adapting synapses. We are convinced that our concept is uniquely suited to tackle the problem, so weʼve taken out a patent and are now building a proof-of-concept. In 2017, we will tape out a first chip and package it into a neuromorphic computing system that we can test against neuromorphic application simulators with growing numbers of neurons.” These brain-on-chips may not be exact copies of our brain circuits, but nature teaches us that it is physically possible to build much better computers than we do today: computers that we need to make sense of the enormous amounts of data that the IoT will generate, but also for the intelligent sensors and robots of the connected world - small, low-power, long-lasting devices that have to stand their ground amid an ever-growing stream of data, continuously adapt themselves to their environment, and even learn and become smarter over their lifetime.

Diederik Verkest is director of imecʼs INSITE program. After earning a Ph.D. in micro-electronics engineering from the KU Leuven, Diederik joined imec in 1994, where he has been responsible, amongst others, for hardware/software co-design.
In 2009, he started imecʼs INSITE program, focusing on co-optimization of design and process technology for sub-14nm nodes. The program offers the fab-less design community insights into advanced process technologies and provides a platform for foundries and fab-less companies to discuss directions for next-generation technologies. Diederik Verkest has published over 150 articles in international journals and at international conferences. Over the past years he has been involved in numerous technical conferences; he was the general chair of DATE, the Design, Automation, and Test in Europe conference, in 2003. Verkest is a Golden Core member of the IEEE Computer Society.

Ingrid Verbauwhede is professor at the KU Leuven (Belgium) in the imec-COSIC research unit, where she leads the embedded systems and hardware group. She is also adjunct professor at the electrical engineering department of UCLA, Los Angeles (USA). Ingrid Verbauwhede is an IEEE fellow, a member of IACR, and she was elected a member of the Royal Academy of Belgium for Science and the Arts in 2011. Her main interest is in the design and design methods for secure embedded circuits and systems. She has published around 70 papers in international journals and 260 papers at international conferences. She is also an inventor on 12 issued patents.
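The crossbar architecture Diederik Verkest describes above - every input line crossing every output line, with a programmable synapse at each crossing - is, computationally, a matrix-vector multiply. A minimal NumPy sketch with toy weights (an illustration of the principle, not imecʼs design):

```python
import numpy as np

# Toy crossbar: rows are input lines (pre-synaptic neurons), columns are
# output lines (post-synaptic neurons). The weight at each crossing plays
# the role of the programmable "synapse"; the values are arbitrary.
synapses = np.array([
    [0.5, 0.0, 1.0],
    [0.2, 0.9, 0.0],
])

inputs = np.array([1.0, 2.0])  # activity on the two input lines

# Each output line sums the weighted contributions of all crossings on it,
# which is exactly a vector-matrix product.
outputs = inputs @ synapses
print(outputs)  # output line activities: 0.9, 1.8, 1.0
```

The appeal of doing this in hardware is that the whole multiply-accumulate happens in one step across the array, instead of being serialized through a conventional processor; learning then amounts to adjusting the weights at the crossings.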
The Endgames that never end in Afghanistan

Real Great Afghanistan from Indus River to Oxus River

Since Afghanistan first became known as a buffer state, it has been a weak state barely held together by a central government in Kabul. The modern boundaries of Afghanistan were defined by European powers to benefit foreign interests, not those of the people living there. Due to its critical location, Afghanistan has for centuries been the victim of foreign invaders. But every time, the consequences of invasion were disastrous for both Afghanistan and the invaders, especially in the 19th century, when the British Empire invaded Afghanistan first in 1839 and then in 1878. In 1839, Britain moved to pre-empt Russian influence by invading Afghanistan, but this First Anglo-Afghan War was a disaster for Britain, as was the Second Anglo-Afghan War of 1878. The Great Game was the strategic, economic and political rivalry between the British Empire and the Russian Empire for supremacy in Central Asia, at the expense of Afghanistan, Persia and the Central Asian khanates and emirates. Both Russia, the expansionist empire, and Britain, the colonial empire, pursued their own agendas; unwilling to confront each other directly, they made a poor country, Afghanistan, their battlefield. The British then found that the Afghans could never be defeated on the battlefield: they might lose at first, but soon after a defeat they would re-emerge as a strong resistance and drive the invaders out. So first, in 1849, to destroy Afghanistan's economy, the British occupied its economic hub, Peshawar.
Then, in 1893, using their famous formula of divide and rule, the British divided Pashtun territory by drawing an imaginary line - the line of hatred, the Durand Line - across the heart of the Pashtuns to destroy their unity and power. From the British perspective, the Russian Empire's expansion into Central Asia threatened to destroy the "jewel in the crown" of the British Empire, India. The British feared that the Tsar's troops would subdue the Central Asian khanates (Khiva, Bokhara, Khokand) one after another. The Emirate of Afghanistan might then become a staging post for a Russian invasion of India. By 1865 Tashkent had been formally annexed; Samarkand became part of the Russian Empire in 1868, and the independence of Bukhara was virtually stripped away in a peace treaty the same year. Russian control now extended as far as the northern bank of the Amu Darya river. The Anglo-Russian Entente, signed on August 31, 1907, in St. Petersburg, recognized Britain's influence over Afghanistan. The classic Great Game period is generally regarded as running approximately from the Russo-Persian Treaty of 1813 to the Anglo-Russian Convention of 1907, in which polities like the Emirate of Bukhara fell. A less intensive phase followed the Bolshevik Revolution of 1917, causing some trouble with Persia and Afghanistan until the mid-1920s. Afghanistan's foreign policy, too, was controlled by the British by force. The Great Game of 1856-1907 marked the period when Russo-British relations in Central and East Asia were most tense.

Amir Amanullah Khan

In 1919, King Amanullah Khan declared war against the British to win Afghanistan's sovereignty. Even when an agreement was reached in Rawalpindi, the British were not ready to negotiate the Durand Line issue, leaving it for future negotiation as a bone of contention.
After this Third Anglo-Afghan War and the signing of the Treaty of Rawalpindi in 1919, King Amanullah Khan declared Afghanistan a sovereign and fully independent state. Afghans celebrate Independence Day every August 19. Afghanistan was never a part of British India. From 1919 to 1973, the government of Afghanistan was mostly in charge of its own internal affairs while receiving significant amounts of military and financial aid from foreign powers. After the Second World War, the superpowers played out their rivalries in Afghanistan in more subtle ways than fighting wars of conquest and drawing borders. After the war, the British Empire was unable to keep India as its colony any longer, so it decided to leave, because the immense growth of nationalist sentiment in India throughout the Second World War effectively guaranteed that immediate Indian independence was a fait accompli. Fundamentally, Britain's position in India after 1945 was untenable. The Second World War had left Britain exhausted both militarily and economically. Britain, put simply, had no other choice. It would therefore be quite inaccurate to state that India was voluntarily 'given up'. Apparently, however, Britain presented decolonization as a voluntary process, with many ifs and buts. This hurried retreat, unsurprisingly described by Churchill as a 'scuttle', was completed by August 1947. British subordination to the United States - the so-called special relationship, as it is optimistically known in London - is so taken for granted that it is seldom subjected to critical scrutiny. Why is it that the British ruling class and its agents have, since 1945, come to embrace a junior partnership in the U.S. empire so wholeheartedly?
When the Labour government came to power in 1945, it found itself confronted by widespread colonial unrest, and at the same time dependent on the United States, an imperial rival intent on replacing British influence throughout the world with its own. The British had neither the economic nor the military strength to hold onto their empire and were forced into an unwilling retreat. Within months of the end of the war, it was glaringly obvious that Britain lacked the means to defeat a renewed mass campaign by the Congress. Its officials were exhausted and troops were lacking. But the British still hoped that a self-governing India would remain part of their system of 'imperial defence'. For this reason, Britain was desperate to keep India (and its army) united. These hopes came to nothing. The loss of India was a massive blow that seriously damaged British imperial pretensions. Britain was now overshadowed by the United States and the Soviet Union, its domestic economy had been seriously weakened, and the Labour government had embarked on a huge and expensive programme of social reform. Britain was finding it too costly to protect its remaining colonies; it did not have the military strength and economic resources to defeat the resulting freedom fighters. Moreover, Washington would not finance such an imperial endeavor: it wanted to replace the British, not prop them up. In May 1947, two responsible American officials - Ronald A. Hare, head of the Division of South Asian Affairs, and Thomas E. Well, Second Secretary of the US Embassy in India - visited Mohammad Ali Jinnah at his house in Bombay (now Mumbai). Jinnah promised them that Pakistan would protect the interests of America in the region and that, being a Muslim country, Pakistan together with other Muslim countries would stand against the USSR. What the British hoped for at this time was a partnership with the United States on relatively equal terms. Hopes of an equal relationship were soon eclipsed, however.
Now America was an emerging power on the world stage, and the British state no longer had the power to protect its global interests. The Cold War played an important role in legitimizing the U.S. empire. Despite the collapse of the foundations of Britain's rule in British India, London remained committed to a colonial presence in the region in one way or another. So the idea of Pakistan was developed in 1939 to keep this country as a client state of Britain or its allies, completely dependent upon them strategically, financially, militarily and economically. The British Empire had been sucking the blood of the Indian people to feed its own for centuries - from 1775 to 1857 through the East India Company, and then directly from 1857 until 1947. But when they decided to quit India, they wanted to cut the motherland of the Indian people into pieces as the reward for the land that had fed British families for centuries. And the British thirst to destroy Afghanistan never ended: continuing it, in 1947 the British handed over the land of Afghanistan from the Durand Line to the Indus river to the unnatural state of Pakistan, which came into being on the lands of Afghanistan, Baluchistan, Bengal and Sindh under the hegemony of Punjab through its army, civil-military establishment and judiciary. Thus the bone of contention was deliberately left between Afghanistan and Pakistan, because the British knew that Afghanistan would never recognize its territory as part of Pakistan and would keep demanding it forever. As a result, there would be a permanent clash between Afghanistan and Pakistan, which suited the British interest in keeping both countries dependent on them or on their allies.
In this way the British and their allies would stop the expansionist policy of the former USSR in the region and protect their oil interests in the Middle East through the so-called Islamic state of Pakistan. The other objectives were to weaken India by dividing its army, the food basket of Punjab, and Bengal. The irony is that the Muslims who were used by the British Empire for the slogan of Pakistan were from Lucknow, Madras, Bombay and Nagpur - territories that had nothing to do with the lands where Pakistan was to be made, and where Muslims were in the minority. None of these territories became part of Pakistan. As a result, the Muslims who betrayed their motherland India migrated to Pakistan; in the army and the civil and military establishment those Muslims were given the highest posts, while the Pashtuns, Baloch, Sindhis and Bengalis were delivered into the slavery of Punjab, with the help of the Punjab-dominated Pakistan army, establishment and judiciary. Pakistan came into being in 1947, and from day one, to keep the Afghan territory from the Durand Line to the Indus river, Pakistan continued the same British policy of destroying and weakening Afghanistan, a policy that continues to this day. Afghanistan had been a buffer state between Britain and Russia, and after 1947 it was considered a buffer between US-backed Pakistan and Iran on one side and the Soviet Union on the other. After the Second World War, another game started - the Cold War between the USSR and the USA with its allies. As the Cold War reached full swing, Afghanistan became the perfect battleground for the world's two superpowers to fight a proxy war that would decide once and for all who would dominate global politics. Pakistan was allowed to destroy and weaken Afghanistan from 1947 onward.
Since 1973, Pakistan had been inviting radical elements from Afghanistan, training, equipping and financing them for militancy in Afghanistan. One Washington policymaker has claimed that US support for Islamic extremists was not a consequence of the December 1979 Soviet invasion, but predated it. President Jimmy Carter's National Security Adviser, Zbigniew Brzezinski, told an interviewer in 1998 that the U.S. began covert support for Afghan rebel groups based in Pakistan by mid-1979. The Soviet Union invaded Afghanistan on 24 December 1979, but it was on July 3, 1979, that President Carter signed the first directive for secret aid to the opponents of the pro-Soviet regime in Kabul. When the former USSR invaded Afghanistan in 1979, Pakistani dictator and president Zia-ul-Haq called it a God-gifted opportunity for Pakistan - an opportunity to destroy and weaken Afghanistan. On the other hand, some in the United States saw the Soviet invasion as a gift. To Brzezinski, the Russian move was a blunder that the U.S. had to use to its fullest advantage: "The day that the Soviets officially crossed the border, I wrote to President Carter: We now have the opportunity of giving to the USSR its Vietnam war." The Afghan resistance - the seven Islamist "Mujahideen" or "Jihadi" groups based in Afghan refugee camps in Pakistan - received the bulk of US and Saudi monetary, military and logistical support via the Pakistani Inter-Services Intelligence Directorate (ISI). Dictator Zia-ul-Haq insisted that support for the Mujahideen from the outside world had to flow exclusively through Pakistani hands, principally through the ISI, and the ISI jealously maintained its exclusive access to the Mujahideen.
Continuing the British policy, Pakistan had been destroying and weakening Afghanistan since its birth in 1947, but now Pakistan was doubly happy: first to destroy Afghanistan, and second to build an atomic bomb with the help of US and Saudi money. It was not only the Afghan Mujahideen who were harnessed by the US and its allies (Western Europe, Saudi Arabia, Pakistan, Egypt and China) to fight the Soviet Union, but "Jihadis" from around the globe, recruited to fight a "holy war" against the atheistic communist invaders. The primary concern in Washington was the Soviet presence and its threat to US interests; the price paid by the Afghan people was never considered. Milton Bearden, CIA station chief in Pakistan from 1986 to 1989, said: "The United States was fighting the Soviets to the last Afghan." James Nathan, a political scientist at the University of Delaware, criticized the politicians for wanting to put Afghan lives on the line to fight Washington's war. But Afghanistan was destroyed, and it is easier to destroy a state than to build a new one; US planners exempted themselves from the latter while concentrating on the former. As a result, 2 million Afghans were killed, 5 million were disabled, 5 million were compelled to leave their country, and 5 million were displaced internally. In Afghanistan, America's beneficiaries turned out to be America's worst enemies. The repercussions of US policy in recruiting, financing, training and arming the most fanatical elements in the Muslim world to fight the great jihad against the Soviets still echo everywhere today, especially in Afghanistan. Yet this US policy is almost never admitted.
After the withdrawal of the Soviets from Afghanistan, Soviet Foreign Minister Eduard Shevardnadze told the Supreme Soviet in 1989 that the invasion of Afghanistan had involved the most serious violations of Soviet laws and party and public norms. As Shevardnadze apologized for his country's role in the destruction of Afghanistan, US President Ronald Reagan authorized an increase in funding to the Mujahideen and rejected moves toward peace. A ceasefire offer from Soviet President Gorbachev was declined by the US, which wanted to prolong the bloodshed until the last trace of Soviet influence was expunged. The varying degrees of fundamentalism espoused by the seven Pakistan-based parties were taken to represent the opinions of all Afghans. While denouncing Dr. Najibullah, the United States offered no political alternative to him other than the seven groups, few of which were aligned politically with most Afghans. Washington's blanket endorsement of the extremists had the effect of "weakening and undermining secular nationalist leadership, and fragmenting power in every locality" of Afghanistan. According to Barnett Rubin, "ISI- and CIA-sponsored military activity increasingly took the form of encouraging individual commanders to fire rockets into Kabul city, where they mainly killed civilians," and the commanders were paid for each rocket fired. According to New York Times correspondent John F. Burns, at least a thousand civilians were killed in Kabul during the first year after the Soviet withdrawal. Kabul was burning, and those who had burnt it were paying for it. Pakistan had no intention of withdrawing support from the Mujahideen, so the shelling of Kabul continued. Najibullah fled his post on 16 April 1992. Almost immediately, Washington lost all concern for the deepening crisis for which it was in large part responsible.
The truth is that the Bush administration no longer cared what happened in Afghanistan. Unfortunately for the Afghan people, it was the beginning of another era of devastation, fueled by Pakistan. From April 1992 to February 1995, before the Taliban took power, at least 25,000 civilians were killed in Kabul by indiscriminate shelling by all seven Mujahideen groups, according to Western agencies. Approximately 80 percent of Kabul had been reduced to rubble by early 1995, when the Pakistan-backed Taliban first started bombing the city. John F. Burns wrote, "Whole neighborhoods look like Hamburg or Dresden after the Second World War bombing raids." According to Michael Griffin, "No city since the end of the Second World War, except Sarajevo, had suffered the same ferocity of jugular violence as Kabul from 1992 to 1996. Sarajevo was almost a side-show by comparison and, at least, it wasn't forgotten." Intervening in Bosnia was considered important to US national interests, whereas Afghanistan was ignored. In 1992 the Bush administration promoted a court to try war crimes committed by parties to the war in Bosnia; in doing so, the administration said it was sending "a clear message that those responsible for the atrocities and gross violations... must be brought to justice." But no such court was supported for Afghanistan, where, according to the New York Times, "the onus for fueling a murderous war in Afghanistan falls on the U.S. and Pakistan." Amnesty International blamed the foreign powers that had originally supplied rockets and other weapons to the Mujahideen against the Soviets for catalyzing the destruction: "Afghan civilians have paid a terrible price for international involvement in their country's affairs." After the fall of the Najibullah government, the government of the Afghan Mujahideen was established in Peshawar under the supervision of the government of Nawaz Sharif, then prime minister of Pakistan. And Pakistan started another game in Afghanistan.
Overtly Pakistan was supporting that government, but covertly it was preparing another group against the Mujahideen government, because Pakistan has never wanted Afghanistan to be a stable state. Due to the civil war fueled by Pakistan, Afghanistan was descending into another anarchy. Essentially, the US invested in terrorism in the 1980s, and the Afghan people reaped the "dividends" most horribly after the withdrawal of the Soviets. Between 1992 and 1996, Western governments and the Western media turned their backs on Afghanistan. They had got what they wanted; there was no need for Afghanistan any more. In late 1994, with the full backing of Pakistan, a new generation of fundamentalists, now called the Taliban, appeared, bringing a new era of terror for the Afghan people, especially Afghan women. A January 1995 secret cable from the US Embassy in Islamabad described the Taliban as "well-armed, militarily proficient and eager to expand their influence." The mentors of the Taliban were the Pakistan Army, the ISI and the Pakistani establishment; that is why they were fully armed, financed and professionally trained by the Pakistan Army. With the help of the Pakistan Army and the intelligence organization ISI, the Taliban came to power in 1996, and thus another era of destruction of Afghanistan and atrocities against the Afghan people began. The brutalities of the Taliban government, its support of terror, its oppression of women and its iron grip on the people continued. The Pakistani intelligence organization ISI had full control over the Taliban government; it was just a puppet government. The Clinton administration appeared eager to build ties with the Taliban government and showed interest in opening an embassy in Kabul. The Clinton State Department, like the Carter, Reagan and Bush departments before it, was unconcerned with the human-rights consequences of having armed fundamentalists in charge of Afghanistan.
Within hours of taking control of Kabul, at the direction of the ISI, the Taliban hunted down former president Najibullah, tortured him and hanged him in public, and began dismissing women from work and confining them to their homes. The possible stabilization of war-torn Afghanistan by the Taliban was expected to yield a windfall for the US-based oil company Unocal, which wanted to build a natural gas and oil pipeline from Turkmenistan to Pakistan via Afghanistan. US responsibility for the Taliban's rise to power includes direct support in the form of a pipeline deal between the Taliban and a US corporation (Unocal). The situation in Afghanistan was standard for Unocal, a company notorious for having abusive governments as business partners. During the Taliban regime, Pakistan wanted to take Afghanistan back to the stone age. The camps of all terrorist organizations, including Al Qaeda, were transferred to Afghanistan, and terrorist activities not only in Afghanistan but across the world, including the 9/11 tragedy, were directed from those training camps. Pakistan deliberately transferred those terrorist camps to Afghanistan as part of its plan to destroy Afghanistan: the plan was to have Afghanistan declared a terrorist state by the world community, which could lead to its disintegration. But the world community was silent, and the destruction of Afghanistan in all fields of life continued according to the everlasting agenda of Pakistan. After the August 7, 1998 embassy bombings, the United States under President Clinton moved from a policy of polite diplomacy toward Afghanistan to a policy of threats, aggression and sanctions. Fifty cruise missiles were fired at the compound of a training camp in Khost, where hundreds of terrorist leaders, including Osama bin Laden, were supposedly gathering; 21 people were killed and 53 wounded. Apparently Osama bin Laden had left the camp earlier.
Two ISI officers were also killed, men who had been training global Al Qaeda terrorists for terrorist activities around the world; the camp had been financed by the ISI and built by a Pakistani contractor. In October 1999, the US sponsored Resolution 1267 in the United Nations Security Council, which imposed sanctions on the Taliban. The Taliban immediately shut down the UN mission in Afghanistan and exited the negotiations. When the Taliban's Islamic Emirate of Afghanistan was sanctioned by the United Nations, Pakistan stood by the Taliban and continued to give it aid. The Taliban were the most effective means of advancing Pakistan's ambitions in Afghanistan, and that continues to this date. Ahmad Rashid, a Pakistani journalist, says that by 1997 the Pakistanis were providing 30 million dollars in aid annually to the Taliban, as well as free oil to run the country's war machine. By 1999, one-third of the fighters in the Taliban were either Pakistani fundamentalists or foreign volunteers who arrived by way of Pakistan. Mullah Omar gave Osama bin Laden the authority to run his terrorist empire and to plot attacks abroad from Afghanistan, and thus Al Qaeda became a state within a state. The alliance of the Taliban and Al Qaeda would set the stage for 9/11, which in turn set the stage for the US intervention in Afghanistan. The Taliban government was quickly toppled from power, and the United States and its NATO allies took on the challenge of building a new Afghanistan. The role of Pakistan has always been very destructive for Afghanistan: in the case of the Soviet invasion, Pakistan played a role against the occupation by the USSR, but in the case of the US and NATO occupation, Pakistan supported the occupation, although both the US and the USSR were occupiers. The fact is that Pakistan never loses any opportunity to destroy Afghanistan.
Even then, Pakistan was not sincere with its allies, such as the US, NATO and ISAF, and played a double game to prolong the war in Afghanistan. At last, the terrible nightmare of the Taliban regime, which lasted five years, ended for the Afghan people. Afghanistan is a weak state pulled apart by its neighbors. After the 1979 Soviet invasion and the subsequent Soviet withdrawal, various factions occupied and then lost control of the capital, Kabul, until the rise of the Taliban government in 1996. In fueling the Afghan civil war, which destroyed state institutions, killed 2 million Afghans, disabled 5 million, and forced into exile 5 million moderate Afghans who might have helped rebuild the country, Pakistan and the world community bear a large share of the responsibility for its destruction, because Pakistan, according to its demands, was given a free hand by Washington and the world community to do what it wanted in Afghanistan. It has always been the agenda of Pakistan to destroy and weaken Afghanistan to such an extent that it cannot demand the return of its territory occupied by Pakistan. The 1998 US embassy bombings, and the US "retaliation" for them, changed the status of Afghanistan in Washington, DC. Osama bin Laden, living in Afghanistan, was behind the bombings, and the US demanded, with threats, that the Taliban government hand him over. A US diplomat told the Far Eastern Economic Review: "We are determined to make life for the Taliban very, very difficult in every field, political, economic, military and in terms of their foreign relations, unless they hand over bin Laden." But Pakistan stood behind every Taliban decision. The then Director General of Pakistan's intelligence organization ISI, General Mahmood, had told Mullah Omar, the Taliban leader, not to hand over Osama bin Laden to the US, and promised full support in fighting America and its allies.
As the head of a puppet government, Mullah Omar had no option but to refuse to hand Osama over to the US. That double game of Pakistan could destroy Afghanistan completely, which is what Pakistan wanted, hoping to settle the Pashtunistan issue by making Afghanistan weak in every respect. After the tragic events of 9/11, media attention to Afghanistan dramatically increased and the US "war on terror" was launched. Thus another game started in Afghanistan when the USA, along with its allies and NATO, invaded in 2001 and toppled the Taliban government in October of that year. The people of Afghanistan breathed a sigh of relief, but then the US and NATO started bombing the Afghan people in the name of saving them. The Afghans warned them not to repeat the mistake of allying with fundamentalists, but once again the fate of civil society was less important than "getting the job done." Within two months the Taliban retreated, and Northern Alliance members began reestablishing themselves in positions of power, repeating the same mistakes. Meanwhile, on the political front, Afghanistan's government was redesigned by the US with Hamid Karzai as president. Prominent members of the Northern Alliance were repaid for their help in toppling the Taliban with high-ranking positions in the interim government, while most ordinary Afghans were excluded. After the fall of the Taliban, Northern Alliance warlords obtained more power than the central government. Women's rights activists in RAWA issued an "appeal to the United Nations and world community," asking the UN to intervene to protect them from Northern Alliance commanders: "The retreat of the terrorist Taliban from Kabul is a positive development, but the entering of the rapist and looter Northern Alliance into the city is nothing but dreadful and shocking news for the roughly 2 million residents of Kabul, whose wounds from the years 1992-96 have not yet healed.
We would like to emphatically ask the UN to send its effective peacekeeping force into the country before the Northern Alliance can repeat the unforgettable crimes they committed in those years." But from 2001 onward another game started in Afghanistan: the usual dirty game of Pakistan against Afghanistan. The USA alone had given 25 billion dollars to Pakistan to fight Al Qaeda and the Taliban, but Pakistan began a double game, using part of that money to provide training camps in Pakistan where Taliban fighters and Al Qaeda could reorganize for terrorist activities in Afghanistan. For the leadership of the Taliban, Pakistan organized the Quetta Shura in Quetta, Balochistan, Pakistan. Because of heavy casualties among the innocent people of Afghanistan, anti-American anger began boiling over. With the help of the Pakistan Army and the ISI, the Taliban had made a hell on earth for the men and women of Afghanistan during their tenure, but Pakistan wanted to continue supporting the Taliban even after their government was toppled. Those militants have been attacking Afghan, US and NATO forces, and they have also been targeting the peaceful, innocent population of Afghanistan to this date. According to a Human Rights Watch report, after the invasion of Afghanistan by NATO, the US and its allies, "up to 60 percent of deputies in the lower house of parliament were directly or indirectly connected to current and past human rights abusers." Slowly and gradually, Afghanistan dropped off the media's screen once the main event in Iraq was launched, but the dirty game of Pakistan continued. US and NATO forces kept bombing the people in order to save them; even cluster bombs were dropped, and innocent children, women and elders were killed.
Nicholas Kristof of the New York Times wrote: "One of the uncomfortable realities of the war on terrorism is that we Americans have killed many more people in Afghanistan than died in the attack on the World Trade Center." Innocent civilians were killed by the US to save the lives of others. Since 2001, hundreds of men, women and even some boys designated as "enemy combatants" for the Taliban or Al Qaeda have been arrested by the US, imprisoned, tortured, and in a few cases killed. Several hundred were sent to Guantanamo Bay in Cuba, and the exporting of prisoners to other countries known to have atrocious human-rights records is called "extraordinary rendition." One Afghan villager said: "The foreign troops gave me medicine and also a radio and corn seed. He asked if we needed anything. I said, 'We don't need anything. Don't humiliate us. Don't rob our country. Don't commit crimes. We don't need anything.'" According to a BBC report, life in Afghanistan is not good, people are dissatisfied, there is growing frustration, and 40 percent of the population is hungry. The Taliban and other militants, backed by Pakistan, were using this situation to justify their own terrorist activities in Afghanistan. As the "war on terror" occupied the headlines, the newspapers failed to scrutinize either the past and current empowerment of fundamentalists or the manipulation of Afghan "democracy" by US players for US benefit. On the eve of the Pentagon's "Operation Enduring Freedom" in October 2001, women's rights were touted as a reason to fight the Taliban, and most Americans sympathetically supported a "war to liberate women." But what was the role of the USA in helping destroy women's rights in Afghanistan through its support of fundamentalists? And what about the continued oppression of Afghan women in post-Taliban Afghanistan? As a result, Afghanistan is still a very dangerous place for the ordinary people who live there.
With the help of Pakistan, the Taliban have intensified their attacks on innocent people. As always, the Afghan people pay the price for decisions made by Pakistan, the US and its allies, and are now caught between the agendas of Islamic fundamentalism (the Taliban, Al Qaeda) and Western imperialism (the US, NATO). Immediately after the fall of the Taliban in 2001, the US and the international community had an opportunity to fill the military vacuum in the country with peacekeeping forces to help civil society recover from decades of war. However, this window of opportunity was lost, as the US prevented an expansion of the United Nations-led ISAF (International Security Assistance Force) outside Kabul and instead devoted tens of thousands of troops to hunting and killing Taliban and Al Qaeda remnants. This tactic has clearly failed to bring security to Afghanistan. Zakia, a student activist at Kabul University, said: "America knew that Afghan people are very powerful and very brave. So they even destroyed the bravery of Afghan people. They had such bad policies in Afghanistan, such bad things happened in Afghanistan, that all people are now not brave, they are just beggars... That's why they need American troops." Disarm the warlords and help bring them to justice: according to a human-rights survey, 88 percent of Afghans wanted the government to do more to reduce the power of the commanders, with 65 percent saying that disarmament was the most important step toward improved security in Afghanistan. Mariam, a middle-aged woman living in Farah province, said she thought the US policy of bombing Afghanistan from the air while supporting the warlords on the ground was no solution to the problem: "Don't kill these fundamentalists and Taliban; we only say, don't send them rifles. You should disarm them.
Countries like Pakistan that sponsor and supply arms to them must be punished, as Cambodia was in the Vietnam War." Internally in Afghanistan, disarmament is the best way to bring peace. If disarmament takes place there will be justice in Afghanistan, there will be a good constitution and law, and after that the Afghans can undertake many economic projects for their country. It is important to disarm the warlords first in order to enable justice; that will then improve the economy and finances of the country, and society will progress day by day. Nader Nadery, the spokesman for the AIHRC, said: "The Afghans want to get rid of warlords. But if they are reappointed to the same positions, then the people will see no change. If they remain in power, the system will not be able to achieve a democratic Afghanistan in the future. These perpetrators must be brought to justice and removed from office, rather than given political support to stay in power." In the absence of that scenario, Afghans see the US as more a problem than a solution. Today Afghanistan is known as the "sick man" of Asia; the statistics on health, literacy, employment and lifespan, especially for women, have changed little since the fall of the Taliban. Several years later, little attention is paid to Afghanistan globally or in the United States, and polls reveal that Americans today are less interested in Afghanistan. Due to the hostile policy of Pakistan toward Afghanistan, Afghans have been at war for the last 35 years, and Pakistan is not ready to quit this dirty game of intervention in the internal affairs of Afghanistan.
Pakistan demanded that its puppet Taliban government recognize the disputed, imaginary Durand Line as an international border between Pakistan and Afghanistan, because no government in Afghanistan has ever recognized this line as an international border; Afghans hold that the territory from the Durand Line to the Indus River is Afghan territory occupied by Pakistan. No one in Afghanistan is ready to recognize this line, and Afghans dream of taking their territory back from Pakistan.

Conclusion: The only solution now is for all Afghans, the army and civilians alike, to join hands and fight a final war against Pakistan to take back their territory from the Durand Line to the Indus River. The former king of Afghanistan, Dost Mohammad Khan, also begged the British Empire to give him back his territory, but he got nothing. Now only the Afghans themselves can take back their occupied territory from Pakistan, as the Bengalis did in 1971. At that time the West also used unspeakable words against Indira Gandhi, because they did not want Pakistan to disintegrate; but it happened, and more than half of Pakistan became Bangladesh. If Afghans are not prepared for that, then they must know that Afghanistan has been bleeding for the last 35 years and will keep bleeding until Pakistan is disintegrated and made a part of history.

Mashal Khan Takkar
Editor in Chief

Reference books: What We Won; Bleeding Afghanistan
Structure of the atmosphere
Information on the various layers of the atmosphere
Created by: Thomas
Created on: 13-12-12 09:27

The atmosphere is made up of four layers: the Troposphere, the Stratosphere, the Mesosphere and the Thermosphere.

Troposphere: The troposphere is the lowest layer of the Earth's atmosphere and is also the layer in which clouds form. Its average depth is around 8 km at the poles and 17 km in the tropics. Temperature decreases as you ascend within the troposphere, at approximately 6.5°C per km. At the top of this layer is the Tropopause, the boundary between the troposphere and stratosphere, which acts as a temperature inversion and forms a 'ceiling' for the Earth's weather system, containing it at this level. Within the troposphere, vertical convection currents disturb the atmosphere, and air masses flow horizontally from one latitude to another.

Stratosphere: The stratosphere extends to approximately 50 km above the Earth's surface. Within this layer, temperature increases with height. The stratosphere is free of clouds and dust, and here ozone absorbs and filters out ultraviolet radiation. This warming is greater over the polar regions, and the temperature differences between the tropics and the polar regions cause strong horizontal air movements at great heights. The transition boundary that separates the stratosphere from the mesosphere is called the Stratopause.

Mesosphere: The mesosphere extends from the Stratopause to about 85 km above the Earth. The gases continue to become thinner and thinner with height. As such, the warming effect of ultraviolet radiation also becomes less, leading to a decrease in temperature with height. On average, temperature decreases from about -15°C to as low as -120°C at the Mesopause.
The gases in the mesosphere are still thick enough to slow down meteorites hurtling into the atmosphere, where they burn up, leaving fiery trails in the night sky. The transition zone between the mesosphere and the thermosphere is called the Mesopause.

Thermosphere: The thermosphere extends from the Mesopause to 690 km above the Earth. The gases of the thermosphere are thinner than in the mesosphere. This means incoming high-energy ultraviolet and X-ray radiation from the sun, absorbed by the molecules in this layer, causes a large temperature increase. Because of this absorption, temperature increases with height and can reach as high as 2,000°C near the top of this layer; however, despite the high temperature, this layer of the atmosphere would still feel very cold to our skin because of the extremely thin air.
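The tropospheric lapse rate described above lends itself to a simple calculation. This is an illustrative sketch only; the 15°C surface temperature is an assumed example value, not part of the notes:

```python
def troposphere_temperature(surface_temp_c, altitude_km, lapse_rate=6.5):
    """Approximate air temperature within the troposphere.

    Applies the average environmental lapse rate from the notes:
    temperature falls by about 6.5°C for each km of ascent. Only
    meaningful below the Tropopause (~8 km at the poles, ~17 km
    in the tropics).
    """
    return surface_temp_c - lapse_rate * altitude_km

# Starting from an assumed 15°C at the surface:
print(troposphere_temperature(15, 5))   # -17.5 (°C at 5 km)
print(troposphere_temperature(15, 10))  # -50.0 (°C at 10 km)
```

This matches the behaviour in the notes: rising 10 km within the tropics takes you from a mild surface temperature to well below freezing.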
Through a photograph you can capture a sense of place, or a moment in time.

Technical information: Use natural light to give your pictures a sense of place, and please turn off the flash when taking pictures. When shooting portraits, the head should usually fill most of the frame. When taking photographs indoors without a flash, it sometimes helps to rest the camera on something solid, like a tabletop, before taking the picture.

You may send us a photograph of anything; here are some ideas to get you started. Please send us photographs that tell us how you see and feel about the world. Send us portraits of your family, friends, or strangers. If there's a beautiful garden or tree in your neighborhood, take a picture of that and send it to us. Capture action, like sports, or traffic, or a rushing wave. Send us pictures of cities, the lights at night, the stores where you shop. Please send us photographs of nature: a lovely flower, clouds scuttling across the sky, snowdrifts, a stream. Use your imagination. You can also set up scenes from stories using Playmobil, Lego, or stuffed animals, or use your friends as actors to create a scene from a book, from your imagination, or from a movie.
History of the Dallas Cowboys Essay example
1539 Words 7 Pages

Clint Murchison, Jr. and Bedford Wynne were awarded a National Football League (NFL) expansion franchise, located in Dallas, TX, on January 28, 1960. At the annual league meeting, they purchased the team for $600,000 (Bohls 1). They were given the status of a "swing team," meaning that they would play every other team in the league in their first season of play, and they were placed in the Western Division standings. Murchison's and Wynne's next move was to hire their front-office personnel: Tex Schramm (general manager), Gil Brandt (director of player personnel), and Tom Landry as head coach. In the beginning the team was called the Dallas Steers; after a couple of weeks the name was changed to the Dallas Rangers.

In 1966, a tradition was born as the first Thanksgiving Day game was played; the Cowboys playing on Thanksgiving is now a ritual (Dallas Cowboys 3). That first game was played and won against the Cleveland Browns (26-14). The win put Dallas in the driver's seat for playoff positioning for the first time in franchise history. Dallas went on to finish the year with a record of 10-3-1, the Eastern Conference championship, and its first post-season action. They would eventually lose the Playoff Bowl on January 9, 1967 to the Baltimore Colts (Fleming 1). In 1967, the Cowboys put together back-to-back winning seasons and earned the Capitol Division title (Fleming 1). That season the Cowboys won their first playoff game but eventually lost to the Green Bay Packers, who won the NFL title game 21-17 in what is now classically known as the "Ice Bowl" (Prinalgin 1). The next couple of seasons ended in disappointment, as the Cowboys made the playoffs but were embarrassed in 1968 and 1969. There was a shining light in 1969 when quarterback Roger Staubach returned from his military assignment; he had been drafted in 1964.
The quarterback initially had to split playing time with the incumbent, Craig Morton (Fleming 2). In 1970, the Dallas Cowboys started the season sluggishly, with a record of 5-4 at the midway point. The Cowboys won the final five games, won the NFC East title and…
How do you overcome results-oriented thinking from a client?

2017-03-20 23:29:38

I am working with a client on a special project that I am going to obfuscate in this question. Basically, I'm trying to overcome some short-term, results-oriented thinking from my client.

Let's say you have a model to forecast the performance of a racehorse. Your model tells your client to sell racehorse X because the probability of it performing well is low (<10%). The horse is sold, and said racehorse goes on to win 3 races. Your client says, "See, we should never have let go of that racehorse! The model is wrong!"

As data scientists we understand that anomalies happen and that this horse could just as easily have lost those races; we were simply on the wrong side of the probability this time. But how do you overcome those objections with the client? How do you turn short-term thinking into a long-term outlook on predictive modeling?
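One concrete way to frame the argument for a client is a quick simulation. The sketch below is hypothetical: the 8% win rate and the number of horses are made-up illustration values, not taken from any real model:

```python
import random

def flagged_horse_outcomes(n_horses=10_000, win_prob=0.08, seed=42):
    """Simulate horses a model flagged as 'sell' (win probability < 10%).

    Each flagged horse still wins with probability win_prob, so some
    'sell' calls will look wrong in hindsight even when the model is
    perfectly calibrated. Returns the fraction of flagged horses that
    went on to win anyway.
    """
    rng = random.Random(seed)
    wins = sum(rng.random() < win_prob for _ in range(n_horses))
    return wins / n_horses

fraction = flagged_horse_outcomes()
print(f"Fraction of 'sell' horses that won anyway: {fraction:.3f}")
# With a true 8% win rate, roughly 1 in 12 correct 'sell' calls will
# look like mistakes in hindsight; judging the policy on one horse is
# judging the outcome, not the decision.
```

Showing the client the full distribution of outcomes (rather than the one horse they remember) is often more persuasive than restating the probability.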
From the 1980s until 2000, Peru lived through an armed conflict involving two radical leftist movements, Sendero Luminoso and the Movimiento Revolucionario Túpac Amaru (MRTA), in confrontation with the government and with large parts of the population, unleashing one of the most violent periods in modern Peruvian history. These movements were eventually defeated, disarmed and imprisoned, but at a high cost in human lives, and the process also included violence by the Peruvian state against the militant groups. Today, almost 10 years after the end of the violence, the Lugar de la Memoria del Perú, a museum commemorating the memory of the victims of those years of violence in the name of national reconciliation, has not yet been finalized. Memora was born to fill this gap for new generations who do not know this violent history. The site, still under construction, captures information about key events from the early 1980s until the 1990s. Users have several options for exploring the history, including maps, timelines and narratives. The Modo Historia (History Mode) is an excellent example of the use of timelines for historical narratives. Un Día en la Memoria (A Day in Memory) is a collaboration with artist Mauricio Delgado C. to remember events through posters commemorating specific dates.

Link: http://memora.pe/
Selected by: Alex Gil
Text by: Alex Gil