Artillery: Israel Replaces 155mm Guns With Smart Rockets February 13, 2012:  Israel has decided to replace its four-decade-old M109 155mm self-propelled armored artillery vehicles with guided rockets. This comes a year after deciding that the M109s could not be refurbished and upgraded any further and would have to be replaced. At first, other artillery systems were examined. But two years earlier Israel had already accepted that the U.S. use of GPS guidance in rockets, while more expensive, was more effective than the cheaper (but less accurate) Israeli-developed rocket guidance system and even cheaper unguided artillery shells. So now the M109s are being replaced with guided rockets. This radical shift in artillery weapons has been coming since the 2006 war with Hezbollah, when the Israelis found that they did little damage to Hezbollah bunkers, even though over 120,000 unguided 155mm shells were fired at them (mostly by M109 guns). Meanwhile, they noted that the U.S. 227mm MLRS rockets with GPS guidance were excellent at taking out similar targets in Iraq and Afghanistan. So Israel equipped its 160mm Accular rockets with GPS. These 110 kg (242 pound) rockets have a range of 40 kilometers and enable one bunker to be destroyed with one rocket. Israel always recognized the superiority of GPS in some situations. For example, Israel developed LORA (Long Range Artillery Rocket), which is similar to the U.S. ATACMS. Each LORA missile weighs 1.23 tons and carries a half-ton warhead. With a range of 300 kilometers, GPS guidance is used to land the warhead within 10 meters (33 feet) of the aim point. These missiles are expensive. The similar U.S. ATACMS, which is fired from a MLRS container that normally carries six of the standard MLRS rockets, costs a million dollars each. It's often a lot cheaper if you can use smart bombs (which cost less than $50,000). 
But if you don't have aircraft up there, or control of the air is contested, you can get a LORA missile on a target within ten minutes of the order being given. Israel expects to replace a lot of artillery shells, air-delivered missiles, and bombs with its GPS guided rockets and take out more targets with far fewer rockets and artillery shells.
The coordinate plane We use coordinates to describe where something is. In geometry, coordinates say where points are on a grid we call the "coordinate plane". Coordinate plane We first explored the coordinate plane in the 5th grade. But that was dealing only with positive coordinates. Now, however, we know all about negative numbers, so why not have negative coordinates as well! Common Core Standards: 6.NS.C.6, 6.NS.C.6b, 6.NS.C.6c, 6.NS.C.8
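As a quick aside for readers who like to check things with code (this snippet is an illustration added here, not part of the original lesson, and the function name is made up), the signs of the two coordinates tell you which quadrant a point lands in:

```python
def quadrant(x, y):
    """Return the quadrant (1-4) of a point, or note that it lies on an axis."""
    if x == 0 or y == 0:
        return "on an axis"
    if x > 0 and y > 0:
        return 1
    if x < 0 and y > 0:
        return 2
    if x < 0 and y < 0:
        return 3
    return 4  # the remaining case: x > 0 and y < 0

# Negative coordinates let us describe points left of or below the origin.
print(quadrant(3, 2))    # both positive: quadrant 1
print(quadrant(-3, 2))   # negative x puts the point left of the y-axis
print(quadrant(-3, -2))  # both negative: below and to the left
```

Notice that a single negative sign flips the point across one axis, and two negative signs flip it across both.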
Gender: F Meaning of Minerva: "of the mind, intellect" Origin of Minerva: Latin Minerva Pott is a character in Dickens's novel . In Harry Potter, Minerva McGonagall is a witch and Deputy Headmistress of Hogwarts, and in the Artemis Fowl books, Minerva Paradizo is equally intellectual. There's also a series of kids' mystery books centering on a Minerva Clark. Minerva was a Top 500 name through the early 1920s, dropping off the list in 1973. It's well liked on Nameberry, though--currently at Number 297. Nickname Minnie fits right in with the other currently stylish vintage nickname names. Curiously, though, actress Minnie Driver was not born Minerva--but Amelia. Famous People Named Minerva Minerva Urecal, American actress Minerva Teichert, American painter Minerva Josephine Chapman, American painter Minerva Cuevas, Mexican conceptual artist Minerva Parker Nichols, American architect Minerva Pious, American actress Minerva Mendez, American actress Pop Culture References for the name Minerva Minerva, Roman goddess of wisdom Minerva "Min" Green, main character in "Why We Broke Up" by Daniel Handler Minerva McGonagall, character in the Harry Potter series Minerva Clark, main character in the book series by Karen Karbo Minerva Louise, chicken from the children's book series by Janet Stoeke Minerva, ugly stepsister in Rodgers and Hammerstein's "Cinderella" Minerva Mink, character in animated series "Animaniacs" Minerva "Mini" McGuinness, character on TV's "Skins" Owl of Minerva, symbol used by the Illuminati Merva, Myna, Minnie, Min, Minivera Minerva's International Variations Minerve, Minette (French) Minetta (Spanish)
The Playroom: Where Things Almost Happen The Playroom is one of those films that you hope has that dramatic edge to carry toward a wider release, but it believes too much in itself and disappointingly fizzles into the eye-glazing dream state that sometimes accompanies a non-ballyhooed indie February release. The uncomplicated story hovers around the four Cantwell children coping with yet another night entertaining themselves in the playroom while their parents Martin (John Hawkes) and Donna (Molly Parker) have an “adult party” downstairs with their neighbors. Over the course of an hour and a half nothing happens until the end, when something almost happens but then it doesn't.  I suppose the plot was subtle, quiet, elusive, something, something, something artsy words. I was rolling my eyes two minutes in and I couldn't stop until I was thanking the final credits.  I really checked out twenty minutes in, when the inciting incident was supposed to happen, and then didn't. Rather, a contrived family dinner scene occurred that played out like the first night of rehearsal for a senior thesis play, with each actor waiting to deliver their line and laugh-tracking at each poorly timed, grossly unfunny joke meant to break up the non-existent tension. Other than some blatant foreshadowing, where each child speaks about “where they would go on an adventure” (setting up the stories that each child tells the others to entertain themselves while their parents frolic downstairs), nothing of consequence happened in the “pivotal” scene that was supposed to set the mood for act two. A strong story happened before the events shown in this film, and a strong story probably happened sometime after. The filmmakers decided to show us a story that almost happened, as if we were watching a bunch of actors milling around a soundstage waiting for another scene to be set up. 
Just as the scene is well lit and sound is rolling, the film cuts away and leaves us guessing, “What could have happened?” A majority of the story flipped back and forth between the oldest child Maggie's (Olivia Harris) perspective upstairs, as a parent figure for her siblings, and downstairs, as a therapist for her parents. Each time she went upstairs a chunk of intended dialogue occurred, stories between the siblings, and each time she went downstairs the story did not progress to anything substantial. Instead, we were once again treated to a scene in which Donna blatantly flirts with their neighbor Clark (Jonathan Brooks) and Martin slowly gets less and less oblivious to it. It's blasphemy to criticize the plot point choices of a writer more successful than myself, but perhaps another trained screenwriter could have taken the events of the entire film and condensed them into an act one, leaving the rest of the film to show the aftermath in dramatic rising and falling action. Individually the performances of the actors in the film were above respectable, but their chemistry with each other was sub-par at best. The semi-palpable dissension between Hawkes and Parker as they dealt with the lingering aura of Donna's obvious yet somehow “surprising” affair carried the film. The relationships between the cookie-cutter stock characters of the Cantwell children were paint-by-numbers, although the acting effort was definitely there. Perhaps the direction could have been better, but frankly there wasn't much dramatic action for the young actors to work with. I have a strong suspicion the film may have been drawn from some autobiographical source, as The Playroom fiddled with the machinery of a large family dynamic, and was coincidentally produced by one. Of the Dyer siblings, Julia directed, Stephen produced, and Gretchen wrote. 
Sadly, screenwriter Gretchen Dyer passed away in 2009 before she could really take flight; she probably could have had a promising career as a novelist. Her screenplay dealt a lot with inner tension and the caustic family dynamic of a disconnected family growing up in the 1970s. If I were reading a novel or watching a stage play, I might have enjoyed the poignant yet understated family drama. However, as a film it had very little impact due to its lack of payoff and slow nature. The inner writer slash editor slash director in me kept shouting “Faster!” “Cut that!” “Cut that!” “Faster!” “Step on her line!” “Somebody do something… anything!”    The Playroom is in very few theaters this weekend for a reason. Several indie film aficionados will strongly disagree with me, and perhaps I watched the film under the wrong circumstances. Maybe the lighting was bad. Or I was hungry. I'm searching for more excuses, but as the credits rolled I couldn't help thinking how much I needed to pack more dramatic action into my own screenplays if I ever hope to produce a film as frustrating as this one.
Homefront: The Revolution | Release Date: May 17, 2016 | Platforms: Xbox One, PS4, PC | Developer: Crytek Studios UK | Publisher: Deep Silver Homefront: The Revolution, in development by Crytek UK (the TimeSplitters developers, not the Ryse or Crysis developers), is taking the series in a decidedly different gameplay direction. In The Revolution, Crytek UK promises an open-world romp through Philadelphia, where the Korean People's Army has been oppressing the citizens for four years, and you, as an average-Joe member of the American resistance, decide to fight back. The game was announced at the beginning of June 2014, leading up to E3 2014.  Homefront: The Revolution is a free-roam first person shooter where you must lead the Resistance movement in guerrilla warfare against a superior military force. A sprawling city responds to your actions - you and your Resistance Cell can inspire a rebellion on the streets and turn Occupation into Revolution, as oppressed civilians take up the fight. In Co-Op, you and three other players can form your own Resistance Cell and become renowned as Heroes of the Revolution. The city's once-proud citizens live in a police state, forced to collaborate just to survive, their dreams of freedom long since extinguished. But in the badlands of the Red Zone, in the bombed out streets and abandoned subways, a Resistance is forming: a guerrilla force determined to fight for their freedoms despite overwhelming odds and ignite the second American Revolution.
What Is an Herbivore? Herbivores eat only plants, but can be very picky. Credit: stock.xchng Herbivorous animals are vegetarians. They eat only plants. Herbivores can be very picky eaters. Scientists group these creatures by the parts of plants the herbivore eats. Frugivores eat only fruit. Folivorous birds feed on leaves. Nectivorous bats suck on nectar. Granivorous insects munch on seeds, whereas polynivorous ones prefer pollen. When the weather changes, however, herbivores often have to alter their diets to feed on whatever plant food is in season. The teeth of herbivores have adapted to chew the tough fibers of plants. Their big molars grind up seeds and twigs. Some herbivores, such as cows and moose, have a special stomach chamber called the rumen. The rumen contains microorganisms that break down the cellulose in grass so that the animal can collect as much energy as possible from the food.
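The grouping described above is essentially a lookup from diet term to the plant part eaten. As an illustrative sketch (the mapping simply restates the article's own terms; the function name is invented for this example):

```python
# Diet terms from the article mapped to the plant part each specialist eats.
herbivore_diets = {
    "frugivore": "fruit",
    "folivore": "leaves",
    "nectivore": "nectar",
    "granivore": "seeds",
    "polynivore": "pollen",
}

def food_for(term):
    """Look up what a given herbivore specialist eats; case-insensitive."""
    return herbivore_diets.get(term.lower(), "unknown")

print(food_for("Frugivore"))  # fruit
print(food_for("granivore"))  # seeds
```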
Academic journal article Magistra Ancrene Wisse: A Modern Lay Person's Guide to a Medieval Religious Text Article excerpt This paper is, in part, an account of a personal journey; I approached the study of Ancrene Wisse from a training in modern literature and philosophy via feminist critical study. How was a late twentieth century feminist agnostic to deal with a thirteenth century text written initially for the guidance of three sisters who were leading a religious life in solitude? How could I give a modern lay person's reading of a medieval religious text? In order to formulate an approach to a coherent reading, I had first to clarify the terms that would be used. The two major problems encountered, those of definition and anachronism, were, I realized, interlinked. Terms I was dealing with, such as "spirituality" and "lay," had to be understood in the historical context in which they (or their Middle English/Latin equivalents) were used, while other terms, most notably "misogyny," were anachronistic and as such had no historical context. While it may not always be possible to follow the exhortation of John Bossy that "a historian writing about an earlier period should not use a key word, such as religio, except in the sense it had in the period about which he is writing,"(1) it is often useful to look at the way key terms were used in the period in question. Jacques Le Goff points out that "Every idea is embodied in words, and every word reflects some reality. The history of words is history itself."(2) An investigation of terms used can provide an insight into the text being read. What is meant by "text" is, of course, also problematic, especially when, as with Ancrene Wisse, the original version is no longer extant and the versions that do exist exhibit variations. It is a modern prejudice that the "original" version, or that which most closely conveys the author's intentions, is the "best" in some quasi-moral sense. 
Bella Millett, herself in the final stages of preparing an edition of Ancrene Wisse, is not attempting to produce a "reconstructed definitive authorial version," but rather to come to terms with mouvance, that is, "textual instability or fluidity." She points out that: Editors have been preoccupied with the loss of "une authenticité perdue," the hypothetical perfection of the completed work as it left the author's hands, rather than coming to terms with the "authorité generalisée" of medieval vernacular works, which might be revised and rewritten indefinitely, both by their original authors and by others.(3) The variability and complexity of Ancrene Wisse needs to be kept in mind in any reading of it. Readers also need to be clear about the sort of category in which they are placing it. Categories tend to be supplied by the departmentalized structure of academic studies, a structure which can be at best unhelpful and at worst damaging. There has recently been a move towards cross-disciplinary study, but in the case of reading a medieval work such as Ancrene Wisse, the desire to compartmentalize, or put things in boxes and label them so they can be more easily handled, remains. What sort of thing is Ancrene Wisse? Into what sort of box can it be put? Is it to be studied as literature or as history? Is it misogynistic? Is it mystical? Ancrene Wisse has traditionally been studied by English students as an example of early Middle English prose, as literature that is, but now many historians turn to it for evidence of anchoritic lifestyles in the Middle Ages. In his 1997 study of the current state of Ancrene Wisse group studies, Roger Dahood points out that the study of Ancrene Wisse and its associated works is on the rise among historians and literary critics with particular interests in the roles of women in late twelfth and early thirteenth century English society.(4) But what would it mean to study Ancrene Wisse as literature? 
When this author was first studying literature at university in the late 1970s, literature departments were still under the influence of New Criticism. …
Belgian Nuclear Society Dear BNS member, Dear Visitor, At the end of last year, France hosted and presided over the 21st session of the Conference of the Parties to the United Nations Framework Convention on Climate Change (COP21/CMP11). At this crucial conference, a new international agreement, pledged by more than 180 countries, was achieved, with the aim of keeping the increase in global temperatures well below 2°C. Climate change thus remains a “hot” societal topic, and energy politics is a key element in this context. In order to keep up with the pledge of COP21, the scientific community must support every technology that limits the carbon footprint and allows energy efficiency for a sustainable energy future available to everyone. It is therefore important for a scientific society such as the BNS to play a role in the debate around energy, as nuclear energy remains a key player all over the world and can, in our opinion, provide safe, clean and reliable energy. The BNS is delighted to host one of the most recognized experts in the field of climate change: Prof. Em. André Berger will enlighten us on this topic during the Evening Lecture in April. We sincerely hope to meet you on the occasion of this Evening Lecture! Roger Schène, Chairman of the Belgian Nuclear Society The Fukushima accident and its multiple consequences created a significant and renewed interest among the population in better understanding nuclear sciences and applications, their associated risks and their potential future developments. Our mission at the BNS is to give you access to a set of information as complete and as diversified as possible about the various aspects of peaceful applications of nuclear energy. 
We aim to provide detailed, relevant scientific information for nuclear specialists from Belgian industries, universities and research centers, but also for the public, including schools, teachers and researchers who have their own questions about nuclear applications and nuclear sciences in general. As you know, nuclear sciences are very broad and already include many applications, as well as many ongoing developments that hold great promise for many medical domains. Nuclear sciences obviously include, but are not limited to, the design, construction and operation of nuclear plants. The Belgian Nuclear Society has an important role to play today: from expressing thoughts to writing papers and organizing lectures and conferences. Our monthly Evening Lectures help create a dialogue between experts and the public. They offer direct contact with some of Belgium's best nuclear experts and a chance to raise the questions you might have. You will find more information on the upcoming Evening Lectures in the Activities section of our website. Thank you for your visit, and do not hesitate to contact us should you have any questions related to nuclear sciences.
•Plan, execute and evaluate events •Manage and utilize resources effectively •Strengthen teamwork and employ everyone’s unique capacities •Continuously educate members on a variety of skills Welcome to Sigma Alpha Delta's Alumni Database. Explore the lives of our illustrious alumni. Find out who they are, where they came from, where they're going, and how Sigma Alpha Delta played a role in their personal and professional development. To get started, type a name in the search box below. You can also select a year, letter, or period to browse all biographies for that period or letter. Biographies are sorted by last name.
Pell, ‘a celebrity who knows nothing about RI needs’ My take on the news LET’S STOP VOTING FOR A FAMILY NAME: Clay Pell, the 31-year-old grandson of Rhode Island’s late U.S. Senator Claiborne Pell, has indicated that he may seek the Democratic gubernatorial nomination in 2014. Pell is probably a very nice young man. Further, he may have many admirable qualities that would serve him well in political office. However, his primary asset is his name; and names should not trump merit! Pell is a celebrity only because of his name, his marriage to Olympic skating star Michelle Kwan, and because his political name got him an Obama appointment to a third-tier position in the U.S. Education Department. He is also a Coast Guard Reserve officer. Pell is one of thousands of very good Coast Guard Reserve officers, so that’s no qualifying experience to become governor. More important, young Pell is no Gina Raimondo. He’s no Allan Fung. He’s no Angel Taveras. And he’s no Kenneth Block. He’s only a famous name, a celebrity who knows nothing about Rhode Island’s needs. He’s not someone who deserves consideration to be our next governor, so let’s stop talking about it. It’s high time we stopped voting for candidates simply because of their family names. We’ve had enough Kennedys, Bushes, and other political dynasties. In a state founded on principles of individual merit, selection of our leaders should be made purely on just that - merit. TAVERAS AND DAVEY LOPES SWIMMING POOL: Providence Mayor Angel Taveras has hit a snag among those voters most likely to support him in his likely gubernatorial bid in 2014. It’s the minority community that has been served for decades by the Davey Lopes swimming pool in south central Providence. His decision to close the pool and divert the money to other recreational programs for children has elicited widespread objections from the minority community. 
Even though offers have poured in to help finance the pool’s continued operations, including an offer from the Republican Party of Rhode Island, Taveras steadfastly refuses to reverse his decision and has opened bids from companies vying to fill the pool with concrete. With a likely primary election pitting him against General Treasurer Gina Raimondo, who has more than $2 million in her campaign chest, Taveras will need the complete support of the minority community in Providence and elsewhere in the state. Though Raimondo is too professional to use the pool closing as a campaign issue, political action committees that support her will not hesitate to excoriate Taveras for his stubbornness on the issue. Additionally, his opponents will emphasize statistics on the abnormally high number of black children who drown, primarily because they never learned to swim, having grown up with no pool available as youngsters. Perhaps it’s time for Taveras to admit that he is wrong and show that he can be conciliatory, an attribute that can be very valuable to a successful governor. NOW FOR THE REST OF THE STORY: First, the Rhode Island Ethics Commission ruled, against all common sense, that the state director of administration, Richard Licht, one of Governor Chafee’s closest confidants and a key advisor, is not in “a senior policy-making, discretionary or confidential position” and, thus, is eligible to skirt the Code of Ethics’ revolving door requirement and be immediately appointed a superior court judge if selected. As discussed last week, that inane ruling smacks of politics and favoritism and reduces the Code of Ethics to mere rubbish. “Now for the rest of the story,” as Paul Harvey used to say. It has now been revealed that a law passed in 1974 that was clearly meant to benefit only one person, deceased former governor Frank Licht - Richard Licht’s uncle - will also apply to Richard if he retires from the superior court bench. 
The law, which has become known as the “Frank Licht” rule, says that anyone who served as a legislator, a state officer and a judge is eligible to receive in retirement 75 percent of his active pay. But wait; there’s more! How does the law affect Richard Licht? Without being appointed to a judgeship, he is eligible for a $49,122 annual pension. Retiring from the judicial bench, however, boosts his retirement pay to $128,691 per year.  What a story it is! Fortunately, practically all senior lawmakers and potential gubernatorial candidates have indicated their objection to the law and have promised to work to repeal it. WHEN ENOUGH IS ENOUGH: When voters in the 18,000 population Central Coventry Fire District rejected the district’s proposed $6.6 million budget last February, they were saying “enough is enough.” They voted down a subsequent proposed budget of $6.3 million the very next month. Finally, Monday night – after months of court arguments, a union contract rejection, and talks about liquidating the fire district - they voted to accept a proposed $5.3 million budget that is much smaller than what is needed to fund the district if it is not to be liquidated. The voters were, in essence, saying once again that enough is enough; either live within a budget that is affordable to taxpayers or liquidate. It looks like liquidation is on the horizon. Finally, at least in Coventry, taxpayers are showing enough concern and courage to reject tax and spend politicians and their union backers. It’s about time! Now, if only the rest of Rhode Island’s taxpayers will be as courageous as those in Coventry and vote such politicians out of office in 2014. EXETER RECALL ELECTION: It is unfortunate that voters have to resort to forcing a recall election in order to protect constitutional rights. The U.S. Constitution’s Second Amendment is clear and the Supreme Court has agreed. Americans have the right to own and bear arms. 
Certainly, government can place reasonable restrictions on this right - such as requiring background checks and waiting periods. Government cannot, however, arbitrarily restrict gun ownership and the right to carry firearms once a citizen has overcome the reasonable hurdles. Yet, that is exactly what happens when the attorney general’s office is involved in issuing concealed carry permits. State law rightfully requires that local police chiefs or other local issuing officials “shall” issue such permits so long as a background check is negative. The same law, however, says that the attorney general’s office “may” issue a permit - even when a background check is negative. This constitutes an arbitrary exercise of unconstitutional power by the attorney general’s office, one that is frequently abused. Our General Assembly needs to strengthen the law and make sure all permits issued by local police chiefs and other local officials are entered into a statewide database so that police can immediately determine whether a permit is valid, and the law needs to require a waiting period with a new national criminal background check before a permit holder can purchase another gun. Additionally, the new law needs to state that the attorney general’s office, just like local officials, “shall” issue permits to those who have passed the background check. Were this the case already, Exeter residents would not have had to resort to a recall election to protect their constitutional rights. OBAMA ADMINISTRATION ADMITS FAILURE: The Obama administration’s Health and Human Services Department finally admitted failure Sunday when it posted a blog apologizing to the American people for its ineptitude in failing to create a functional website for Americans to sign up for health insurance under the Affordable Care Act (Obamacare).  
After three weeks of claiming the website was experiencing “glitches” but was performing effectively overall, the administration has finally admitted that after spending millions of dollars and devoting countless man-years of effort, the government just couldn’t create a functional website. The administration is now calling in private experts to help it get the program on track. The question every American should be asking is this: If the government isn’t even capable of registering Americans for health insurance, how in the world will it be able to administer such a colossal, expensive program as Obamacare? QUOTE OF THE WEEK: I was in Arlington, Virginia last weekend for the Army Ten Miler road race that starts at the Pentagon, crosses the Potomac, goes by DC monuments and government buildings, then crosses the Potomac again and ends back at the Pentagon. There were many memorable signs being held by onlookers along the running route. The best: “Keep going! You’re working a lot harder than Congress!”
In it was discovered that adding blocks to the testing profile causes errors, even though they are removed in the setUp(). If blocks are added and then removed again, it should not affect the following tests. The failing test is testBlockRehash. It fails on the first line: Drupal\Component\Plugin\Exception\PluginException: The plugin (test_html_id) did not specify an instance class. in Drupal\Component\Plugin\Factory\DefaultFactory::getPluginClass() (line 62 of /var/www/drupal8/core/lib/Drupal/Component/Plugin/Factory/DefaultFactory.php). Attachment: blocks-in-testing.patch (2.58 KB), by RunePhilosof. tim.plunkett: I'm not sure this is a bug, since those lines are being deleted. RunePhilosof: I know that those lines are deleted now, so I should actually renew this patch and add them again. This bug was discovered because of those seemingly unnecessary lines.
I got strace working. It is only opening the default.d directory, the local.d directory (both empty) and my default.ini and local.ini files. I'm ignoring Futon now since it doesn't agree with the actual behavior. It is easy to tell what db directory is being used, so I'm just playing with that one setting. I tried deleting the default config line and it made no difference. I looked at the process with ps and it is a very long, complicated erlang line that starts beam. So I don't think I'll try that trick. It may be a bug, but I'm not confident I could put together a decent report. The "how to repeat" section would be pretty big. Also I guess it could be a problem with build-couchdb. I've got an old install of build-couchdb that works, so I'll use that now. I guess the next thing to try would be to build from source instead of using build-couchdb, but I've never had any luck doing that before.
Why Brent crude prices advanced, benefiting international oil producers Brent crude is regarded as the international oil benchmark Brent crude is viewed as the benchmark crude for international oil prices. So movements in Brent oil prices are a major driver in the valuation of international oil producers. Higher oil prices also incentivize producers to spend more money on drilling, which results in increased revenues for oilfield service companies (that is, companies that provide services such as drilling, fracking, and well servicing). Consequently, Brent crude prices are an important indicator to watch for investors who own international energy stocks. Brent crude finished up last week Last week, Brent crude prices finished up, closing at $108.81 per barrel compared to $107.72 per barrel the Friday before, as U.S. inventories had a much larger than expected draw (see Must-know: Oil prices rise to new highs on supportive inventory report). Since mid-February, when Brent touched nearly $119 per barrel, prices fell dramatically to below $98 per barrel in mid-April. Since then, Brent had been roughly range-bound in the $100-to-$105-per-barrel area. In recent weeks, oil prices moved higher due to unrest in the Middle East and draws in U.S. inventory levels. Divergence from U.S. benchmark crude Note that Brent crude doesn’t properly reflect the price that producers within the United States receive. For domestic producers, WTI (West Texas Intermediate) is a more appropriate benchmark (read about last week’s movement in WTI prices here), mainly due to a recent surge in domestic crude production, which has struggled to find sufficient transportation and outlets to bring WTI prices in line with Brent prices. For more on the price difference between the two benchmarks, please see Spread between WTI and Brent vanishes, lower relative U.S. oil prices. 
Note, though, that recently the spread has narrowed significantly, and if it continues to narrow, the two benchmarks may soon trade in line again. Also, Brent represents a certain grade of crude, and differences in the qualities of oil can affect the price that producers receive. Nevertheless, most market participants view Brent as the international oil benchmark, and price movements in Brent affect international energy stocks such as Exxon Mobil (XOM), Chevron Corp. (CVX), ConocoPhillips (COP), and Anadarko Petroleum Corp. (APC). Higher oil prices can drive higher energy stock valuations. As we've seen, higher crude prices generally have a positive effect on stocks in the energy sector. The graph below shows Brent crude oil price movements compared to XLE and XOM on a percentage change basis from January 2007 onward. You can see that crude oil, the XLE ETF, and XOM have largely moved in the same direction over the past several years. The price increase this past week was a short-term positive for the sector. As demonstrated in the graph above, crude oil prices are a major driver in the valuation of many energy investments. Oil prices affect the revenues of oil producers, and consequently affect the amount of money oil producers are incentivized to spend on oilfield services. So the upward movement in prices this past week was a short-term positive for the sector. Investors with energy holdings in names with international exposure such as XOM, CVX, COP, and APC as well as the XLE (Energy Select Sector SPDR) ETF may find it prudent to track the movements of benchmarks such as Brent crude.
__label__1
0.517451
Public Release: New research ensures car LCDs work in extreme cold, heat. Novel liquid crystal formulations usable from minus 40 degrees Fahrenheit to 212 degrees Fahrenheit. University of Central Florida. One of UCF's most prolific inventors has solved a stubborn problem: how to keep the electronic displays in your car working, whether you're driving in the frigid depths of winter or under the broiling desert sun. LCD screens are everywhere -- our smartphones, televisions, laptops and more. Increasingly, they're popping up in automobiles, where it's now common to find liquid crystal displays showing speed, distance, fuel consumption and other information, as well as GPS mapping, rearview cameras and audio systems. But current technology has an Achilles heel: the displays grow blurry and sluggish in extreme temperatures. "Liquid crystals exist only in a certain temperature range. In order to work in extreme environments, we need to widen that temperature range," said researcher Shin-Tson Wu of the University of Central Florida. That's what Wu and his team have done in his lab in UCF's College of Optics & Photonics. As reported recently in the scholarly journal Optical Materials Express, Wu and his collaborators formulated several new liquid crystal mixtures that don't have the temperature limitations of those now in use. The liquid crystals should maintain their speed and viscosity in temperatures as high as 212 degrees Fahrenheit and as low as minus 40 degrees Fahrenheit. Wu, who holds UCF's highest faculty honor as a Pegasus Professor, is no stranger to new discoveries with practical uses in the real world. He previously played a key role in developing LCDs for smartphones and other devices that are readable in sunlight. Through his work with advanced LCDs, adaptive optics, laser-beam steering, biophotonics and new materials, Wu has registered about 84 patents. In 2014, he was one of the first inductees to the Florida Inventors Hall of Fame.
Wu worked with a team of doctoral students from his research group -- Fenglin Peng, Yuge "Esther" Huang and Fangwang "Grace" Gou -- as well as collaborators from Xi'an Modern Chemistry Research Institute in Xi'an, China, and DIC Corp. in Japan. "Our team is always trying to find new recipes for materials," Huang said.
__label__1
0.999074
Short description. The QUEST 4000m remotely operated vehicle (ROV) is a full-sized, so-called "work-class" deepwater ROV, originally designed to serve industrial needs for offshore production and intervention tasks. At Marum, QUEST is continuously adapted to the needs of deep-sea research, utilizing its high electrical power and large payload capacity to provide a versatile platform for state-of-the-art deepwater science at water depths down to 4000 m. Today, it has proven its value during over 30 expeditions since 2003. By December 2015, QUEST had completed over 360 scientific dives between 280 and 4014 m depth. QUEST was originally developed as a commercial robotic vehicle by Schilling Robotics, Davis, USA, primarily for heavy-duty offshore industrial applications in the deep ocean. In contrast to the commercial model, the Bremen QUEST has been adapted at Marum for a number of scientific applications. Since its initial deployment during expedition "M58-3" in June 2003, QUEST has been routinely operated worldwide aboard research vessels between 2 and 5 times per year. Expeditions include work for several national German DFG- and BMBF-funded programs, internal MARUM research projects, the EU-supported programmes ACES, HERMES and ESONET, the US NOAA's Ring of Fire expedition programme, and others. Please follow the links below to gain a more detailed insight into QUEST's capabilities, requirements, related media and work examples. Detailed system information (detailed technical specification, dimensions, cameras and sensors) more ... User Guide (scientific capabilities, payload specifications, data products) more ... Operational Requirements (operational modes, personnel requirements, mobilization, logistics) more ... (exhibitions and TV programs, websites, media gallery) more ... (past and future dates and expeditions, dive numbers) more ... (Manipulation, Simulation, Courses) more ... Dr. Volker Ratmeyer +49 421 218 - 65604 +49 421 218 - 9865604
__label__1
0.850102
Botanical Garden (Orto Botanico). The Botanical Gardens of Florence were founded on December 1st, 1545, when Grand Duke Cosimo dei Medici purchased the land from the Dominican sisters. The orchard, known by the name "Giardino dei Semplici" because it was used to cultivate and raise medicinal plants, is the third oldest botanical garden after those of Padua and Pisa. The original layout was designed by Niccolò, called "il Tribolo", who had already planned several other grand ducal gardens, like that of the Medici villa in Castello. Initially the gardens were directed by the botanist Luca Ghini, who two years earlier had overseen the Botanical Gardens of Pisa by order of the Grand Duke. The garden was improved and embellished with 18th-century collections thanks to the commitment of Cosimo III dei Medici, who assigned its direction to the Florentine Botanical Society, under the direction of the famous botanist Pier Antonio Micheli. Its direction was transferred in 1783 to the "Accademia dei Georgofili", and it was referred to as the "Agricultural experimental garden", then renamed "Giardino dei Semplici" in 1847 and finally "Botanical Garden of the Upper Education Institute" in 1880. The Gardens currently take up an area of 2.39 hectares, divided by smaller and larger avenues. The structure also has its own greenhouses and hot houses for the cultivation of special plants. The vegetable patrimony comprises over 5,000 specimens, including several very old trees, such as a Taxus baccata planted by Micheli himself around 1720, a very large cork oak planted in 1805 and never stripped, several examples of conifers like Araucaria, Torreya and Sequoia, and a beautiful example of Metasequoia glyptostroboides, a species originally known only as a fossil and rediscovered in China in 1941. The most important collections are those comprising cycads, Tillandsia, orchids and ferns.
Extremely interesting, because of its size and number of specimens, is the collection of azaleas, which always draws the attention of a large number of visitors during the flowering period. The sections dedicated to medicinal plants, cactuses and carnivorous plants are also very interesting from the didactic point of view. Photo captions: the large greenhouse; a temporary exhibition; a statue of Esculapio in the garden; Vandopsis lissochiloides; Pachystachys lutea; Aristolochia macrophylla; Pachystachys coccinea; Citrus aurantium bergamia (bergamot); Aristolochia elegans; Pavonia multiflora; inside the greenhouse; a very old cork oak; pears? No, lemons; a world of rare plants; Drosera binata; climbing roses; carnivorous plants; Dionaea muscipula.
__label__1
0.997674
COLUMBIA - A joint House-Senate ethics committee began initial discussion Tuesday on a sweeping ethics reform bill as lawmakers try to beat the clock to get a bill passed before the end of the legislative session this week. The two versions setting up new rules for lawmakers' conduct vary widely, and both sides said they would talk to their colleagues to get a sense of what each body is willing to give. The House version establishes a 12-member independent commission that would be able to investigate members of each branch for ethics infractions. That body would investigate ethics-related allegations in all three branches of government, but disciplinary decisions would be left up to the current bodies that decide those matters. The bill places restrictions on appointing campaign donors and family members to that committee. The House bill reconstitutes the State Ethics Commission, which oversees executive branch officials, including the governor. As it stands, all nine members of the commission are appointed by the governor, but the House bill would spread appointees among other statewide offices. Sen. Wes Hayes, R-Rock Hill, who chairs the joint ethics panel empowered to come to a compromise between the two bodies, said that the independent commission was likely a non-starter with much of the Senate. The Senate already rejected the concept when it passed its version of ethics reform earlier this year. He said he was focused on a potential compromise including what he calls the "big three" issues: assuring lawmakers disclose their sources of income; requiring groups and Super PACs that spend money on campaigns in South Carolina to disclose their donors; and banning so-called Leadership PACs, lawmaker fundraising groups that raise funds and then spend dollars without the oversight and scrutiny of campaign funds. "We'll have a bill and hopefully have the votes to pass it," Hayes said. Rep. 
Bruce Bannister, R-Greenville, who is on the joint committee, said he would poll members of the Republican caucus Tuesday to see what was possible. "If the independent investigation committee is a non-starter (with the Senate) do we want to go forward?" Bannister said. "I'd like to see us do something." Critics have already knocked the bill for its failure to ban or limit the use of campaign funds for office uses, a source of much controversy recently. The House bill also allows some sources of income for lawmakers and family members to remain out of public view, something advocates say could provide future loopholes.
__label__1
0.912485
Police officers search parts of the San Diego river for homeless people who have taken up residence near the waterway. Officer John Horvath wakes a sleeping transient who had an outstanding arrest warrant on his record. — From the sidewalk of Friars Road in Mission Valley, the San Diego River can be seen slowly snaking through tall grasses. It’s a historic, 52-mile slice of nature, home to furry denizens and explored by hikers and wildlife enthusiasts. But others make the river’s shores their home. Drug addicts. Thieves. The mentally ill. A persistent homeless population has laid its claim on the environmentally sensitive zone for decades, polluting the area with trash and human waste. San Diego police officers and a homeless outreach team visit the area at least once a month, but the people who call the river home often don’t want help and create problems in surrounding communities, a lieutenant said. “This is a huge issue for the community and they’re screaming about it,” said police Sgt. Jack Knish. Officers raided one of the riverside encampments, then known as Edge City, as early as 1987, and have policed the territory ever since. Cleanup efforts are just as frequent. Trucks of trash are regularly hauled away. The water itself, visibly polluted, is harder to clean. The river’s shanty towns can at times be sophisticated, complete with stoves and appliances that run on stolen electricity. Zones have even been given neighborhood-like monikers by homeless residents and police alike — Mt. Trashmore, The Pit, The Dungeon and Gilligan’s Island. While it’s not a crime to be homeless, many who take up long-term residence are involved in criminal activity, police Capt. David Rohowitz said. One island resembles a bicycle chop shop. Gears, chains and other parts are organized in piles to be hocked later. Police field calls about the homeless victimizing pedestrians, breaking into cars and committing robberies. Arson is another persistent issue. 
Even more often, police are sent to the riverbanks to address violence committed by one homeless person against another. "Many are on probation or parole with psychological issues or drug or alcohol addictions," Rohowitz said. "They typically aren't balanced people down on their luck. These are people on the margins of society." Controlling criminal activity poses challenges. Transients are venturing deeper into the brush, making it more difficult for patrols to contact them. Issuing citations that stick is even harder. When officers run into someone living in the area, they check for outstanding arrest warrants. Police tell the riverbed resident that no one is allowed to live there and offer services from the department's Homeless Outreach Team. If patrols come upon that person again, they dole out a citation for illegal lodging. If, after a citation, homeless people still stick around, they can be arrested. Multiple arrests could lead to a stay-away warrant, barring that person from the area and resulting in a longer jail stay. Even after all these deterrents, too frequently, they head right back to the river, Knish said. "The person who lives in the riverbed knows that nothing is going to happen to them, so they say, 'You can arrest me all day long, I don't care,'" he said. Without more long-term consequences, the river population will likely stay steady, Knish said. Sgt. Teresa Clark, who manages all of San Diego's homeless outreach teams, said the river's residents tend to be a more hard-core group of transients. "There are more veterans, more parolees," she said. "They are rougher and are better at roughing it." The sergeant said it's a difficult group to work with and one that rarely wants help, but that doesn't stop the department's homeless outreach teams from offering. 
Working with the county's health and human services department and the psychiatric emergency response team, officers are able to help homeless people fill out affordable housing forms, get identification and connect with other social services.
__label__1
0.939646
12 November 2012 Google TV, accessed via a set-top box or integrated into the latest HDTVs from Samsung, Sony and LG, is designed to combine the best of the internet and Android plus the Google Play app store, with the best of TV. Viewers can surf the web, find programming based on themes, genres and likes, and download and install apps to improve the viewing experience. However, due to international licensing and rights laws, two of Google Play's most attractive features, downloading and streaming video and music content, have previously been unavailable to users outside the US. From November 13 that is set to change with the UK, France and Germany becoming the first outside America to be able to access everything. However, unlike in the US, they will not be able to download the content to other Android devices such as tablets or smartphones, though it is hoped this limitation will also soon be addressed. Amazon faced similar issues when it launched the Kindle Fire HD. Despite being able to draw on a huge ecosystem of multimedia content, the company hadn't negotiated cross-border licensing with the rights holders, which is why the original Kindle Fire was released in the US only. The company has since agreed deals with all of the major record labels and film studios, which means it has been able to launch the second generation of its 7-inch tablet internationally.
__label__1
0.855524
Have we just seen the turning point in equity markets? A slew of bearish data from the US includes renewed predictions of falling property prices, and the UK should ensure credit is available to non-financial institutions too, if it wants to avoid the fate of mid-1990s Japan. Oil trades may be denominated in a basket of currencies, rather than the US dollar. Brazil is having a good week, and there are growing calls for IMF reform to go further than that agreed at the G20 Pittsburgh meeting. "Well run" emerging economies will be encouraged to hold less in foreign reserves, with the IMF acting as a giant insurer, under a new proposal. Unemployment is set to break through 10 per cent in the US, and stay there for quite some time. China's push for materials and energy has led it to become South Africa's biggest trading partner. In spite of this, there are questions over China's continued recovery and further questions on whether we are seeing the end of the rally in equities. As Russia invites foreign investment, China encourages domestic investment in commercial property. US housing prices are marginally improved but on thin volume, and unemployment data looks grim. As money pours out of money market funds, we ask whether equity levels are sustainable and if the return of property-backed debt is a cause for celebration. Food security is a rising international concern, and Bill Clinton asks if we should use the unemployed to increase energy efficiency. Jobs data looks healthy in Brazil and the Square Mile, but less rosy for manufacturing workers and journalists, although wage data may be good for those employed, whatever the sector. Commodities are ever more popular, and gold could surge if China bans exports.
__label__1
0.947213
What is strabismus? Strabismus is a condition in which your eyes are not properly aligned with each other, resulting in double vision or the suppression of the image from the affected eye. For a variety of reasons, one or both of your eyes may turn in, out, up or down. What causes strabismus? Coordination of your eyes and their ability to work together as a team develops in your first six months to four years of life. Failure of your eye muscles to work together properly can lead to strabismus. Strabismus can be hereditary, but may also be acquired secondary to an eye injury or disease. Who is affected by strabismus? Children under age six are most affected by strabismus, but it usually first appears between birth and 21 months. It is estimated that five per cent of all children have some type or degree of strabismus. Although rare, strabismus can sometimes begin in adulthood. Sudden onset of strabismus may occur as a result of a stroke, tumor or other vascular disease. Will a child outgrow strabismus? A child will not outgrow strabismus without treatment. In fact, the condition may simply become worse without treatment, leading to an amblyopic (lazy) eye. What are the effects of strabismus? Children with strabismus may initially have double vision. This occurs because both eyes are not focusing on the same object. In an attempt to avoid double vision, the brain will eventually disregard the image from one eye. This is referred to as suppression. In time, the ignored eye will become unable to function normally and will become largely unused. This may result in the development of lazy eye (amblyopia). How is strabismus diagnosed? Parents may be the first to notice a slight wandering of one or both of a child's eyes. A comprehensive eye examination by a doctor of optometry is recommended by six months of age or sooner if an eye appears to be misaligned. How is strabismus treated?
Treatment for strabismus can include eyeglasses (single vision or bifocal), prisms, vision therapy and in some cases, surgery. Strabismus can be corrected with excellent results if detected and treated early, which is why intervention by age six months is suggested if eyes are not aligned.
__label__1
0.654077
The Muromachi Period (1338-1573), also known as the Ashikaga Period, began when Ashikaga Takauji became shogun in 1338 and was characterized by chaos, violence and civil war. The Southern and Northern Courts were reunified in 1392. The period was called Muromachi for the district in Kyoto in which its headquarters were located after 1378. What distinguished the Ashikaga Shogunate from that of Kamakura was that, whereas Kamakura had existed in equilibrium with the Kyoto court, Ashikaga took over the remnants of the imperial government. Nevertheless, the Ashikaga Shogunate was not as strong as the Kamakura had been and was greatly preoccupied by the civil war. Not until the rule of Ashikaga Yoshimitsu (as third shogun, 1368-94, and chancellor, 1394-1408) did a semblance of order emerge. [Source: Library of Congress] There was almost constant warfare. Central authority had dissolved, and about 20 clans fought for supremacy during a 100-year period called the "Age of the Country at War." Ashikaga Takauji, the first shogun of the Muromachi period, was regarded as a rebel against the Imperial system. Zen monks acted as advisors to the shogunate and became involved in politics and political affairs. This period of Japanese history also saw the emergence of the influence of wealthy merchants, who were able to create close relationships with daimyo at the expense of the samurai. The Namboku Period (1334-1392) was a relatively brief period that began with the restoration of Emperor Godaigo in 1334 after his army defeated the Kamakura army on its second try. Emperor Godaigo favored the priesthood and aristocracy at the expense of the warrior class, which rose in revolt under the leadership of Takauji Ashikaga. Ashikaga defeated Godaigo at Kyoto. He then installed a new emperor and named himself shogun. Godaigo set up a rival court in Yoshino in 1336. The conflict between the Northern Court of Ashikaga and the Southern Court of Godaigo lasted for more than 60 years.
One noteworthy figure from the period is Yoshimitsu, a leader who became shogun when he was 10, subdued rebellious feudal lords, helped unify southern and northern Japan, and built the Golden Pavilion in Kyoto. Yoshimitsu allowed the constables, who had had limited powers during the Kamakura period, to become strong regional rulers, later called daimyo (from dai, meaning great, and myoden, meaning named lands). In time, a balance of power evolved between the shogun and the daimyo; the three most prominent daimyo families rotated as deputies to the shogun at Kyoto. Yoshimitsu was finally successful in reunifying the Northern Court and the Southern Court in 1392, but, despite his promise of greater balance between the imperial lines, the Northern Court maintained control over the throne thereafter. The line of shoguns gradually weakened after Yoshimitsu and increasingly lost power to the daimyo and other regional strongmen. The shogun's decisions about imperial succession became meaningless, and the daimyo backed their own candidates. In time, the Ashikaga family had its own succession problems, resulting finally in the Onin War (1467-77), which left Kyoto devastated and effectively ended the national authority of the Shogunate. The power vacuum that ensued launched a century of anarchy. 
[Source: Library of Congress] Economic and Cultural Developments in the Muromachi Period. Contact with Ming Dynasty (1368-1644) China was renewed during the Muromachi period after the Chinese sought support in suppressing Japanese pirates, or wako, who controlled the seas and pillaged coastal areas of China. Wanting to improve relations with China and to rid Japan of the wako threat, Yoshimitsu accepted a relationship with the Chinese that was to last for half a century. Japanese wood, sulfur, copper ore, swords, and folding fans were traded for Chinese silk, porcelain, books, and coins, in what the Chinese considered tribute but the Japanese saw as profitable trade. 
[Source: Library of Congress] During the time of the Ashikaga Shogunate, a new national culture, called Muromachi culture, emerged from the Shogunate headquarters in Kyoto to reach all levels of society. Zen Buddhism played a large role in spreading not only religious but also artistic influences, especially those derived from painting of the Chinese Song (960-1279), Yuan, and Ming dynasties. The proximity of the imperial court and the Shogunate resulted in a commingling of imperial family members, courtiers, daimyo, samurai, and Zen priests. Art of all kinds--architecture, literature, No drama, comedy, poetry, the tea ceremony, landscape gardening, and flower arranging--all flourished during Muromachi times. [Ibid] There also was renewed interest in Shinto, which had quietly coexisted with Buddhism during the centuries of the latter's predominance. In fact, Shinto, which lacked its own scriptures and had few prayers, had widely adopted Shingon Buddhist rituals as a result of syncretic practices begun in the Nara period. Between the eighth and fourteenth centuries, Shinto was nearly totally absorbed by Buddhism and became known as Ryobu Shinto (Dual Shinto). The Mongol invasions in the late thirteenth century, however, had evoked a national consciousness of the role of the kamikaze in defeating the enemy. Less than fifty years later (1339-43), Kitabatake Chikafusa (1293-1354), the chief commander of the Southern Court forces, wrote the Jinnō shōtōki (Chronicle of the Direct Descent of the Divine Sovereigns). This chronicle emphasized the importance of maintaining the divine descent of the imperial line from Amaterasu to the current emperor, a condition that gave Japan a special national polity (kokutai). Besides reinforcing the concept of the emperor as a deity, the Jinnō shōtōki provided a Shinto view of history, which stressed the divine nature of all Japanese and the country's spiritual supremacy over China and India. 
As a result, a change gradually occurred in the balance between the dual Buddhist-Shinto religious practice. Between the fourteenth and seventeenth centuries, Shinto reemerged as the primary belief system, developed its own philosophy and scripture (based on Confucian and Buddhist canons), and became a powerful nationalistic force. [Ibid] Culture in the Muromachi Period. Under the Ashikaga shogunate, samurai warrior culture and Zen Buddhism reached their peak. Daimyos and samurai grew more powerful and promoted a martial ideology. Samurai became involved in the arts and, under the influence of Zen Buddhism, samurai artists created great works that emphasized restraint and simplicity. Landscape painting, classical noh drama, flower arranging, the tea ceremony and gardening all blossomed. Partition painting and folding screen painting were developed during the Ashikaga Period (1338-1573) as a way for feudal lords to decorate their castles. This style of art featured bold India-ink lines and rich colors. The Ashikaga Period also saw the development and popularization of hanging pictures (kakemono) and sliding panels (fusuma). These often featured images on a gilt background. The true tea ceremony was devised by Murata Juko (died 1490), an advisor to the Shogun Ashikaga. Juko believed one of the greatest pleasures in life was to live like a hermit in harmony with nature, and he created the tea ceremony to evoke this pleasure. The art of flower arranging developed during the Ashikaga Period along with the tea ceremony, although its origins can be traced to ritual flower offerings in Buddhist temples, which began in the 6th century. Shogun Ashikaga Yoshimasa developed a sophisticated form of flower arrangement. His palaces and small tea houses contained a small alcove where a flower arrangement or work of art was placed. During this period a simple form of flower arrangement was devised for this alcove (the tokonoma) that all classes of people could enjoy. 
Warfare during the period was also an inspiration for artists. Paul Theroux wrote in The Daily Beast: The Last Stand of the Kusunoki Clan, a battle fought at Shijo Nawate in 1348, is one of the enduring images in Japanese iconography, occurring in many woodblock prints (by, among others, Utagawa Kuniyoshi in the 19th century and Ogata Gekko in the early 20th), the doomed warriors defying an immense shower of arrows. These samurai who were defeated---their wounded leader committed suicide rather than be captured---are inspirational to the Japanese, representing courage and defiance, and the samurai spirit. [Source: Paul Theroux, The Daily Beast, March 20, 2011] Warfare in the Muromachi Period. Civil wars and feudal battles occurred off and on during the unstable and chaotic 15th and 16th centuries. In the 1500s the situation got so out of hand that bandits overthrew established leaders, and Japan almost descended into Somalia-like anarchy. During the White Sparrow Revolt in 1571, young (sparrow) monks were forced to fall to their deaths over a waterfall in the Unzen area of Kyushu. Battles often involved tens of thousands of samurai, supported by farmers enlisted as foot soldiers. These armies employed mass attacks with long spears. Victories were often determined by castle sieges. Early Japanese castles were usually built on flat land in the middle of the town they protected. Later, multi-storied pagoda-like castles called donjons were built on top of raised stone platforms. Many important battles were fought in the mountains, difficult terrain suited for foot soldiers, not on open plains where horses and cavalries could be used to their best advantage. Fierce hand-to-hand battles with armor-clad Mongols showed the limitations of bows and arrows and elevated the sword and the lance as the preferred killing weapons. Speed and surprise were important. Often the first group to attack the other's encampment won. Warfare changed when guns were introduced. 
"Cowardly" firearms reduced the necessity of being the strongest man. Battles became bloodier and more decisive. Not long after guns were banned, warfare itself ended. Onin War and the Period of Civil Wars (1467-1560). The Onin Rebellion of 1467 escalated into the 11-year Onin civil war, which was regarded as a "brush with the void." The war essentially destroyed the country. Afterwards, Japan entered the Period of Civil Wars, in which the shoguns were weak or non-existent, daimyo established fiefs as separate political entities (rather than vassal states within a shogunate), and castles were built to protect them. The Onin War led to serious political fragmentation and obliteration of domains: a great struggle for land and power ensued among bushi chieftains until the mid-sixteenth century. Peasants rose against their landlords and samurai against their overlords as central control virtually ceased. The imperial house was left impoverished, and the Shogunate was controlled by contending chieftains in Kyoto. The provincial domains that emerged after the Onin War were smaller and easier to control. Many new small daimyo arose from among the samurai who had overthrown their great overlords. Border defenses were improved, and well-fortified castle towns were built to protect the newly opened domains, for which land surveys were made, roads built, and mines opened. New house laws provided practical means of administration, stressing duties and rules of behavior. Emphasis was put on success in war, estate management, and finance. Threatening alliances were guarded against through strict marriage rules. Aristocratic society was overwhelmingly military in character. The rest of society was controlled in a system of vassalage. The shoen were obliterated, and court nobles and absentee landlords were dispossessed. The new daimyo directly controlled the land, keeping the peasantry in permanent serfdom in exchange for protection. 
[Source: Library of Congress]

Most wars of the period were short and localized, although they occurred throughout Japan. By 1500 the entire country was engulfed in civil wars. Rather than disrupting the local economies, however, the frequent movement of armies stimulated the growth of transportation and communications, which in turn provided additional revenues from customs and tolls. To avoid such fees, commerce shifted to the central region, which no daimyo had been able to control, and to the Inland Sea. Economic developments and the desire to protect trade achievements brought about the establishment of merchant and artisan guilds. [Ibid]

Image Sources: 1) Samurai websites, MIT visualizing history

Text Sources: New York Times, Washington Post, Los Angeles Times, Daily Yomiuri, Times of London, Japan National Tourist Organization (JNTO), National Geographic, The New Yorker, Time, Newsweek, Reuters, AP, Lonely Planet Guides, Compton's Encyclopedia and various books and other publications.

© 2009 Jeffrey Hays. Last updated March 2012.
Martin Scorsese

Martin Charles Scorsese (/skɔːrˈsɛsi/; Italian: [skorˈseːze]; born November 17, 1942) is an American director, producer, screenwriter, actor, and film historian, whose career spans more than 53 years. Scorsese's body of work addresses such themes as Sicilian-American identity, Roman Catholic concepts of guilt and redemption, machismo, modern crime, and gang conflict. Many of his films are also notable for their depiction of violence and liberal use of profanity. Part of the New Hollywood wave of filmmaking, he is widely regarded as one of the most significant and influential filmmakers in cinema history. In 1990, he founded The Film Foundation, a nonprofit organization dedicated to film preservation, and in 2007 he founded the World Cinema Foundation. He is a recipient of the AFI Life Achievement Award for his contributions to the cinema, and has won an Academy Award, a Palme d'Or, the Cannes Film Festival Best Director Award, a Silver Lion, a Grammy Award, Emmys, Golden Globes, BAFTAs, and DGA Awards. He has directed landmark films such as the crime film Mean Streets (1973), the vigilante thriller Taxi Driver (1976), the biographical sports drama Raging Bull (1980), the black comedy The King of Comedy (1983), and the crime films Goodfellas (1990) and Casino (1995), all of which he collaborated on with actor and close friend Robert De Niro. Scorsese has also been noted for his collaborations with actor Leonardo DiCaprio, having directed him in five films, beginning with Gangs of New York (2002) and most recently The Wolf of Wall Street (2013). Scorsese's other films include the concert film The Last Waltz (1978), the black comedy After Hours (1985), the epic drama The Last Temptation of Christ (1988), the psychological thrillers Cape Fear (1991) and Shutter Island (2010), the biographical drama The Aviator (2004) and the historical adventure drama Hugo (2011).
His work in television includes the pilot episodes of the HBO series Boardwalk Empire and Vinyl, the latter of which he also co-created. He won the Academy Award for Best Director for the crime drama The Departed (2006). With eight Best Director nominations, he is the most nominated living director, and is tied with Billy Wilder for the second most nominations overall.
Isolation and Biophysical Studies of Natural Eumelanins: Applications of Imaging Technologies and Ultrafast Spectroscopy

* Address reprint requests to Prof. John D. Simon, Department of Chemistry, Duke University, Durham, NC 27708-0346, USA.

The major pigments found in the skin, hair, and eyes of humans and other animals are melanins. Despite significant research efforts, the current understanding of the molecular structure of melanins, the assembly of the pigment within its organelle, and the structural consequences of the association of melanins with protein and metal cations is limited. Likewise, a detailed understanding of the photochemical and photophysical properties of melanins has remained elusive. Many types of melanins have been studied to date, including natural and synthetic model pigments. Such studies are often contradictory, and to some extent the diversity of systems studied may have detracted from the development of a basic understanding of the structure and function of the natural pigment. Advances in the understanding of the structure and function of melanins require careful characterization of the pigments examined, so as to ensure that the data obtained are relevant to the properties of the pigment in vivo. To address this issue, herein the influence of isolation procedures on the resulting structure of the pigment is examined. This is followed by sections describing the application of new technologies to the study of melanins. Advanced imaging technologies such as scanning probe microscopies are providing new insights into the morphology of the pigment assembly. Recent photochemical studies on photoreduction of cytochrome c by different mass fractions of sonicated natural melanins reveal that the photogeneration of reactive oxygen species (ROS) depends upon aggregation of melanin.
Specifically, aggregation mitigates ROS photoproduction by UV-excitation, suggesting the integrity of melanosomes in tissue may play an important role in the balance between the photoprotective and photodamaging behaviors attributed to melanins. Ultrafast laser spectroscopy studies of melanins are providing insights into the time scales and mechanisms by which melanin dissipates absorbed light energy.

Abbreviations:
AFM – atomic force microscopy
DHICA – 5,6-dihydroxyindole carboxylic acid
MED – minimal erythemal dose
PDCA – pyrrole 3,5-dicarboxylic acid
PTCA – pyrrole 2,3,5-tricarboxylic acid
ROS – reactive oxygen species
RPE – retinal pigmented epithelium
SEM – scanning electron microscopy
STM – scanning tunneling microscopy
TEM – transmission electron microscopy
UHR – ultra-high resolution

Melanin refers to a range of biologic pigments found in a variety of locations including the hair, skin, eyes, brain, and inner ear. Melanins are commonly divided into three types: the brown-black eumelanins, the yellow-reddish pheomelanins, and the dark brown neuromelanins. These three melanins are distinguished by differences in their molecular precursors (1–3). Eumelanins are composed of indolic units derived from the oxidation of tyrosine. Pheomelanins are composed of benzothiazines derived from the oxidation of cysteinyldopa units. Neuromelanins are derived from the neurotransmitter dopamine and have properties of both pheomelanins and eumelanins, containing both indole and benzothiazine units (4–6). In this review, we have limited the discussion to the eumelanins.

A central tenet in biochemistry is the relationship between structure and function. Given that the last few decades have witnessed significant advances in our ability to determine both the structure and function(s) of many classes of biologic molecules (e.g. proteins and nucleic acids), it is somewhat surprising that both the chemical structures and biologic role(s) of melanin are still subject to debate.
To appreciate why this is the case, it is instructive to compare melanins with proteins. Features of these two classes of macromolecules are compared in Table 1 and elaborated upon below.

Table 1. Comparison of fundamental properties of eumelanins with those of proteins

Molecular building blocks. Proteins: twenty standard amino acids. Eumelanins: DHI and DHICA are precursors to the molecules present.
Primary structure. Proteins: linear connectivity via peptide bonds; some are cross-linked by disulfide bonds. Eumelanins: undetermined; probably involves both linear and cross-linked connectivity.
Three-dimensional structures. Proteins: α-helix, β-sheet, and specific folded structures based on the primary sequence. Eumelanins: undefined higher-order structure.
Biogenesis mechanism. Proteins: well defined. Eumelanins: many enzymes for the early steps of melanogenesis are known, but the polymerization mechanism is unknown.
Synthetic preparation. Proteins: recombinant and total synthetic protocols exist to make naturally occurring proteins. Eumelanins: protocols exist for making synthetic melanin, but the products are structurally and chemically different from the natural material.
Isolation and purification. Proteins: established protocols for isolation and purification. Eumelanins: no general method exists; methods exist for specific tissue sources, and many procedures modify the chemical and physical properties of the pigment.

1. The building blocks of proteins are the 20 standard α-amino acids. In the case of eumelanin, the basic molecular building blocks are derived from 5,6-dihydroxyindole (DHI) and 5,6-dihydroxyindole carboxylic acid (DHICA). So unlike proteins, only the precursors to the molecular building blocks of melanins have been elucidated.

2. In proteins, the amino acids are connected to one another linearly through formation of peptide bond linkages. In some proteins, inter- and intra-chain disulfide bonds cross-link the polypeptide chains. In melanins, the details of the connectivity between molecular building blocks are not defined.
Both linear and cross-linking connectivity have been proposed based on analysis of the early products of synthetic melanin (7, 8), X-ray scattering (9, 10), and scanning tunneling microscopy (STM) imaging (11).

3. Standard and automated methods have been developed to sequence proteins, but no method exists to determine the molecular linkage(s) of melanins. Chemical procedures can degrade proteins into their constituent amino acids, while no such chemical protocols exist to disassemble melanins into their basic molecular units. However, some chemical degradation procedures offer insights into the relative proportions of DHI and DHICA contents in the melanin pigment (12, 13). While useful and revealing, these procedures offer no information about the structure or connectivity of the molecule(s) present in the original pigment.

4. In proteins, polypeptide chains (commonly called the primary structure) adopt secondary structures (α-helix and β-sheet) and generate folded structures (tertiary structure). The three-dimensional structures of many proteins have been determined from analysis of X-ray diffraction patterns generated from a single crystal of the protein. By contrast, in melanins, the assembly of oligomeric (or polymeric) molecules into the micrometer-scale three-dimensional structures observed in vivo is not understood. A hierarchical aggregation of melanin building blocks into the three-dimensional structure has been proposed (14, 15). In this hypothesis, the fundamental units are planar, highly cross-linked molecules comprising 6–8 DHI and/or DHICA units. These molecules are proposed to form layered stacks, 10–12 Å in thickness and 10–12 Å in diameter (9–11, 16). Further lateral aggregation and stacking then forms bulk melanin. X-ray scattering, STM imaging, and atomic force microscopy (AFM) imaging provide some evidence supporting this proposed aggregation structural model. Mass spectrometry studies also support the existence of small oligomers (17–19).
Nuclear magnetic resonance (NMR) studies counting the number of ‘visible’ H-atoms (aromatic) in melanin also support the proposed cross-linking between constituent DHI and/or DHICA molecules (20).

5. Standardized methods are available to synthesize proteins, both in vivo (gene recombinant techniques) and in vitro (chemical solid-state synthesis). The generated protein has the same structure and functions as the natural protein. The synthetic methods for melanins generally involve oxidation (catalyzed by enzymes or auto-oxidation) of a specific molecule (e.g. tyrosine, l-Dopa and DHI) or a small set of accepted precursor molecules. The color and solubility of the synthetic melanin mimic those of the natural melanin; however, in terms of chemical composition and physical morphology, synthetic melanins are largely not representative of natural melanins (21, 22).

6. Standardized procedures for isolation and purification of proteins from cells and tissues have been developed. Proteins are easily soluble in aqueous solution or can be solubilized using surfactants. However, melanin is essentially insoluble in any solvent. The lack of structural knowledge about melanin at the molecular level is due partially to its insolubility and partially to the lack of appropriate techniques for its study.

Compared with the enormous progress in proteomics, limited progress has been made toward understanding the structure and function of melanin. Unfortunately, many of the powerful technologies used to study proteins are not amenable to the study of melanins. It is important to be aware of the above-described differences and to develop methods suitable for the study of melanin. A first step is to standardize the isolation and purification of melanin from its native environment. As we will discuss later in this review, it is necessary to work only with well-characterized samples to reconcile the contradictory results in the current literature.
These discrepancies, pertaining to the physical properties and biologic function of melanins, could be the result of the many different preparative procedures used to isolate melanins, which do not share common chemical or structural characteristics. Recent progress in the understanding of the genetic control of melanogenesis (23, 24) and the proteomics of the melanosome represent great strides forward in melanin research (25, 26). Enzymes specifically located in melanosomes catalyze many steps of the melanogenesis process. Many other proteins, non-specific to melanosomes, are also found in the organelle; their function(s) within it remain unclear but cannot be ignored (26).

In this review article, we primarily focus on recent work toward understanding the structure and photochemistry of eumelanin. This subject matter is organized as follows. First, procedures used to isolate natural eumelanins are examined. Collectively, these studies show that the isolation procedure can modify the chemical structure of eumelanins, and so caution must be exercised in relating studies on isolated eumelanins to those in vivo. Second, scanning electron microscopy (SEM) and AFM studies on the surface morphology of eumelanins from human hair and the ink sac of Sepia officinalis are summarized. From a comparison of structural features of these two eumelanins, we conclude that the natural melanin granules from Sepia, human hair melanosomes, and bovine eye melanosomes are all aggregates of 10–30 nm substructures, consistent with the ultrastructures revealed by earlier studies of the corresponding premelanosomes. Third, the effect of pigment aggregation on the aerobic photoreactivity of these eumelanins is reviewed. Fourth, time-resolved spectroscopic studies examining the initial dynamics of the oligomeric molecules present in eumelanin from S. officinalis are discussed.
These data reveal the time-scales on which eumelanin dissipates absorbed UV-radiation, and provide an explanation for the low quantum efficiency for oxygen activation observed for melanins in general. Finally, opportunities made available by recent technical advances will be discussed as related to furthering our understanding of the structure(s) and function of melanins.

Isolation methods and comparison of the pigments they produce

Melanins are synthesized in melanosomes, a specific organelle of pigment-producing cells. Melanosomes are generated in melanocytes located at the basal layer of skin, inside hair bulbs, and in some tissues of the eye. Melanosomes can also be found in the postmitotic cells of the retinal pigment epithelium (RPE) in eyes (27–29). Melanosomes can be delivered to other cells, as evidenced by the transfer of dermal melanosomes from melanocytes to neighboring keratinocytes. In some species, melanosomes are released into an extracellular compartment; e.g. the pigment-producing epithelial cells lining the ink sac of cuttlefish release the melanosomes into the lumen of the ink sac (30). Melanin in vivo is always associated with proteins, which were present in the premelanosomes. Some of the proteinaceous material present is likely to be the remnants of the enzymes that catalyzed various steps of melanogenesis (31). In the 1960s, several research groups used transmission electron microscopy (TEM) to examine the ultrastructure of premelanosomes (32–36). These studies revealed filament and sheet-like structures inside the premelanosomes at early stages of melanogenesis. This material was shown not to be lipid membrane and was proposed to serve to localize and control melanin deposition within the organelle. Evidence has suggested this material is protein (35), but its exact structure and composition is not yet known. Studying melanin requires separation and purification of the pigment from its biologic environment.
The challenge is to develop procedures that do not modify the pigment during isolation, so that the pigment obtained is reflective of its native form. Melanins found in different tissues require different isolation procedures. The strengths and weaknesses of various approaches are described below.

Sepia officinalis

We first consider melanin from the ink sacs of cuttlefish such as S. officinalis. Melanogenesis in the ink gland of S. officinalis was recently reviewed (30). Sepia eumelanin is considered to be pure eumelanin and is used as a standard for natural eumelanins (37, 38). Sepia eumelanin is isolated through iterative washings of the ink with water and centrifugation. The pellet obtained after extensive washings contains largely spherical granules ∼150 nm in diameter (39). This shape and size is consistent with TEM images of granule production within the Sepia melanosome (40). SEM and AFM images of isolated granules are shown in Figs 1 and 2, respectively. Chemical analysis reveals these granules have a protein content of 6–8% by mass (39, 41, 42). In general, no attempt was made to separate melanin from these internal proteins, because such a separation would destroy the granules.

Figure 1. In the isolation of Sepia eumelanin from its ink sac, different drying processes result in the formation of different aggregated structures. Under high magnification, all aggregates are composed of spherical granules ∼150 nm in diameter, which is the natural morphology. (A) Spray drying, prepared by Mel-Co (Whittier, CA) and Sigma (St. Louis, MO). Air-drying (B), freeze-drying (C), and CO2-supercritical drying (D) prepared from fresh ink sacs. The scale bars are 20 μm on the low magnification images (left) and 2 μm on the higher magnification images (right). Some of the images are reprinted with permission from Liu and Simon (39).

Figure 2. AFM images of Sepia granules deposited on mica. (A) Height image, (B) phase image. Scan size is 383 × 383 nm.
These images reveal the 10–30 nm substructures that comprise the granule. This isolation procedure serves to illustrate the first caution in melanin research. Much of what we know about the chemical and physical properties of melanins has been determined on synthetic pigments. Such ‘melanins’ may not contain any protein and certainly do not contain all the proteins present in melanosomes. In contrast, natural pigments contain both melanin (the organic component derived from the initial oxidation of tyrosine) and protein. As described above, the protein component is important in defining the assembly of melanin, and it is reasonable to assume that, through this mechanism, the protein also affects the function of the assembled pigment. Furthermore, in the study of natural pigments reported in the literature, the term ‘melanin’ is used loosely and can represent materials with various protein contents. Caution is not limited to the association of proteins with melanin. A second caution concerns the variety of metal cations present in the natural pigment and absent in synthetic pigments unless added to the reaction mixture. Even if added to the synthetic systems, the coordination of the metal ions in vivo is likely different from that in vitro. The concentration of these metals also varies among natural melanins (43–45), although it is unclear how the isolation procedure affects these results. It is also possible the metals play a significant functional role in the pigment, which would not be manifested in studies of synthetic systems. A third caution highlighted in the analysis of isolated Sepia granules is that the drying method employed yields different aggregated structures (Fig. 1). The method chosen affects the physical properties of the pigment, e.g. surface area to mass ratio and porosity of the material (39). The drying process does not affect the chemical composition.
However, as will be discussed in the following section, the chemical reactivity of the pigment is dependent on aggregation (46). Therefore, care should be taken when comparing the photobiologic data obtained from samples using different preparation procedures.

Sucrose Gradient Purification

In some cases, pigmented cells can be lysed and intact melanosomes can be isolated from other cellular constituents, e.g. RPE and melanoma cells. Ultracentrifugation on a discontinuous sucrose gradient (47) yields high purity samples in which the integrity of the melanosomes is preserved. This is exemplified by the AFM images of bovine RPE melanosomes (Fig. 3) isolated in this manner.

Figure 3. AFM images of bovine eye melanosomes on mica. The melanosomes were isolated from bovine RPE cell lysate using a discontinuous sucrose gradient. The melanosomes were washed with distilled water prior to imaging. (A) Scan size is 3.08 × 3.08 μm. (B) Higher resolution scan of a region shown in (A); scan size 800 × 800 nm. Left panel is the height image, and the right panel is the phase image. Similar to Sepia, these images reveal that the melanosome is an aggregated assembly of ∼20 nm substructures.

Melanosomes in Complex Environments

Generally, melanosomes are constrained in a biologically complex environment, e.g. in hair or the brain, and their isolation is often a significant challenge. The separation of melanin from such tissues has largely been achieved by using harsh protocols. These protocols often involve repeated treatments with concentrated acid and base, breaking down the surrounding tissue to separate the melanin from proteins and other biologic materials. In many such studies melanin was assumed to be resistant to degradation by the chemicals used in the isolation. As hair is the only readily available non-invasive human sample, pigments isolated from hair have been widely used as a model for natural human melanin. In hair, the melanin is tightly held within a keratin matrix.
Most studies of hair melanins degrade this protein matrix by exposing the hair to harsh chemical environments such as strong alkali (48), hydrazine/ethanol mixtures (49), or thioglycolic acid/phenol mixtures (50). The acid hydrolytic methods induce the protein to form pigment-like artifacts interfering with the isolation of natural melanin pigments, whereas alkaline hydrolytic methods transform the chemical composition of the melanin pigment and associated protein (51). In particular, it has been reported that exposure to acid results in extensive decarboxylation of eumelanins (22). Electron microscopy imaging demonstrated that acid/base treatments modify and disrupt the ultrastructure of melanosomes (51, 52). This necessitated an approach for degrading the keratin matrix that would leave the hair melanosome intact and chemically unaltered. A major advance was made in 1956 when Birbeck et al. used urea, papain, and sodium metabisulfite to isolate melanosomes from human hair (53). They found the hair melanin granules isolated by enzymatic digestion were more homogeneous than both acid- and base-extracted melanin. In 1973, Bratosin reported a method for isolation of melanosomes from black mouse hair (54). Keratin was removed from the hair through a two-step process consisting of alkaline hydrolysis and subsequent enzymatic (trypsin) digestion. It was observed that the isolated melanosomes retain their intact morphologic structure. In 1981, Arnaud and Bore used proteinase PSF 2019 to isolate human hair melanin (51). Incubation in either DMF or concentrated LiBr was required to accelerate the subsequent enzymatic digestion. The enzymatic digestion went on for 8 d at pH 9. Afterward, the hair was treated in three acid hydrolysis steps to remove external proteins associated with melanin.
All three of these enzymatic methods were used in conjunction with either alkaline hydrolysis or acid hydrolysis, the drawbacks of which drew attention in the mid-1980s (22, 52). In the 1990s, new procedures for enzymatic isolation were developed, mostly in response to the desire to study neuromelanin. In the isolation of neuromelanin from the substantia nigra of human brain, proteinase K (55–57), collagenase (6), and pronase E (58) were used. In 2000, Prota and co-workers introduced another enzymatic procedure for the isolation of melanins from human hair and iris (19). Using proteinase K, papain, and protease type XIV consecutively, together with the detergent Triton X-100, this method is truly proteolytic and uses no harsh chemical reagents. These workers found that the chemical and spectral characterization of the pigments from human hair and iris revealed marked structural differences, which challenged the common belief that all melanins are similar independent of their origin. They further suggested this could be due to the diversity of functions and sites of biosynthesis and storage. This conclusion is based on the assumption that the enzymatic isolation procedure produces a pigment representative of melanin in its native environment. In a recent study (43), we examined eumelanin from a sample of black, Indonesian hair using three different published procedures: two acid/base extractions (59, 60) and an enzymatic extraction (19). The morphology and spectroscopic properties of the isolated pigments differ significantly. The acid/base procedures both yield an amorphous material, whereas enzymatic extraction yields ellipsoidal melanosomes (Fig. 4). Amino acid analysis shows there is still a significant amount of protein associated with the isolated pigments, accounting for 52, 40, and 14% of the total mass for the two acid/base extractions and the enzymatic extraction, respectively. None of the amino acid compositions correlates with keratin or tyrosinase.
Metal elemental analysis by inductively coupled plasma mass spectrometry shows that the acid/base extractions remove the majority of many metal ions bound to the pigment. Chemical degradation analysis by KMnO4 and H2O2/OH− indicates significant differences between the pigments obtained by the acid/base extractions and the enzymatic extraction. After correction for the protein mass in the two pigments, the 2–3-fold lower yields of both pyrrole 2,3,5-tricarboxylic acid (PTCA) and pyrrole 3,5-dicarboxylic acid (PDCA) indicate that 45–65% of acid/base-extracted melanin has been chemically modified, consistent with the result of the Soluene solubilization assay. While the optical absorption spectra of the bulk pigments are similar, the spectra of the molecular weight <1000 amu fractions differ significantly. The data clearly indicate that pigment obtained from human hair by acid/base extraction is not effectively separated from amino acids, and that the conditions lead to the destruction of the melanosome and alter the molecular structure of melanin. The acid/base-extracted hair melanin is not representative of the natural material and is a poor model system for studying the physical and biologic properties of melanins. The enzymatic extraction preserves the integrity of the melanosome and removes most of the external proteins, and therefore should be the preferred choice for isolation of melanin from hair samples. At present, the procedure reported by Prota and co-workers (19) is the best approach for isolating melanosomes from hair and iris samples.

Figure 4. SEM images of hair eumelanin obtained using different extraction methods. The detailed procedures can be found in (43). (A) and (B) show the results from two different acid/base procedures. (C) shows the results from an enzymatic method. Only the enzymatic method preserves the natural morphology of the melanosome, and is clearly the method of choice for isolating hair eumelanin for physical and photobiological studies.
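The protein-mass correction used in the degradation analysis above can be illustrated with a short numerical sketch. Only the protein mass fractions (52% and 14%) come from the text; the raw degradation-marker yields below are hypothetical placeholders introduced purely for illustration.

```python
# Illustrative sketch of the protein-mass correction for degradation yields.
# Only the protein mass fractions (0.52 and 0.14) come from the review;
# the raw PTCA yields are hypothetical placeholder values.

def yield_per_melanin_mass(raw_yield, protein_mass_fraction):
    """Normalize a degradation-marker yield (measured per mg of isolated
    pigment) to melanin mass alone, by removing the co-isolated protein
    from the sample mass."""
    return raw_yield / (1.0 - protein_mass_fraction)

# Hypothetical raw PTCA yields (arbitrary units per mg of pigment):
enzymatic = yield_per_melanin_mass(3.0, protein_mass_fraction=0.14)
acid_base = yield_per_melanin_mass(0.9, protein_mass_fraction=0.52)

# A 2-3-fold lower corrected yield implies that a corresponding fraction
# of the acid/base-extracted melanin units was chemically modified:
fraction_modified = 1.0 - acid_base / enzymatic
print(f"corrected yield ratio: {acid_base / enzymatic:.2f}")
print(f"implied fraction modified: {fraction_modified:.0%}")
```

With these placeholder numbers the corrected-yield ratio is about 0.54, implying that roughly 46% of the acid/base-extracted melanin was modified, which falls within the 45–65% range reported for the real measurements.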
All images are reprinted with permission from Liu et al. (43).

Morphology of melanin assemblies as determined by modern imaging techniques

A variety of mass spectrometry and imaging studies on synthetic eumelanins argue that the pigment is an aggregated system of small oligomeric (∼4–6 monomers) molecules. Supporting mass spectrometry data have been reported for synthetic (61–63) and natural melanins, such as those isolated from Sepia (64), human hair (65) and iris (19). Solubilization of natural pigment by detergent solution suggested that the dominant forces holding the melanin granules together are not covalent in nature, but rather arise from hydrophobic interactions (54). However, to date the relative contributions of van der Waals forces and hydrogen bonding are not clear. AFM is an imaging technique that provides three-dimensional topographical information. The AFM tip is located at the end of an oscillating cantilever. In tapping mode, this tip is scanned in a two-dimensional raster pattern over the sample, and the height is adjusted to maintain the amplitude of the oscillation. In general, the height and lateral resolution of the image are 0.1 and ≤10 nm (depending on the dimensions of the scanning tip), respectively. Phase images can be collected simultaneously with the height image. Phase data measure the shift of the cantilever oscillation phase from that of the driving force. This shift results from attractive and repulsive interactions between the AFM tip and the sample. The phase signal can be related to the stiffness of the sample and is useful for revealing domains that may otherwise be hidden, overlooked, or difficult to see in the height image. In a recent AFM study on mass-selected fractions of Sepia eumelanin, many of the observations reported were consistent with the conclusion that the pigment is derived from small oligomeric species (15).
First, in experiments utilizing the AFM tip to cut across granules, images revealed that the tip could be used to cut ∼30 nm deep and ∼30 nm wide troughs through a granule. Accompanying the cut, a ∼20-nm ridge is generated, revealing the material displaced by the act of cutting (Fig. 5). These data indicate that the tip displaces constituents that are small compared with its dimensions. Secondly, to gain more information on the dimensions of these molecular components, images were collected on a dried 1000 < MW < 3000 sample. For that sample, significant regions of the mica are covered with a filament structure (Fig. 6). The filaments are 3–6 nm in height, 15–50 nm in width, and often several microns in length. They are easily cut along the 15–50 nm axis using the AFM tip. The data support the hypothesis that these are stacked structures, not extended covalently bound structures. This result is consistent with the mass spectrometry and X-ray scattering studies suggesting the building blocks of the pigments are small oligomeric molecules. These studies also clearly reveal the granules are aggregated structures. When we consider the 3000 < MW < 10000 fraction, height images of this sample dried on mica reveal fractal-like growth patterns (Fig. 7). The fractal aggregation is likely a result of drying a hydrophobic material at low concentration on a hydrophilic surface. The growth involves the re-aggregation of small structures, but we do not observe any evidence for the reassembly of the material into granules. This suggests the granules cannot self-assemble from the constituent oligomers under the conditions used in vitro. Figure 5. (A) Tapping mode AFM height images of a Sepia melanin aggregate before (left) and after (right) the AFM tip was used to cut across the aggregate along the direction indicated by the white arrow. The scale bar is 210 nm. (B) Cross-sections along the solid line (perpendicular to the cut) shown in (A) before and after cutting the aggregate. 
The images are reprinted with permission from Clancy and Simon (15). These images show that the act of cutting displaces molecular constituents that are small compared with the dimensions of the AFM tip. Figure 6. Tapping mode AFM phase image of eumelanin filaments formed from the 1000 < MW < 3000 Sepia eumelanin fraction upon drying on mica. The scale bar is 250 nm. The filaments are 3–6 nm in height, 15–50 nm in width, and often several microns in length. This further supports the conclusion that natural eumelanin structures are not large cross-linked polymers but aggregates of molecular oligomers, whose structure(s) are yet to be determined. The image is reprinted with permission from Clancy and Simon (15). Figure 7. AFM image of the 3000 < MW < 10000 mass fraction of Sigma Sepia melanin deposited on mica. (A) Height image, scan size is 19.6 × 19.6 μm. (B) Height image, scan area is 7.7 × 7.7 μm. (C) Height image, scan size is 2.65 × 2.65 μm. (D) Phase image, scan size is 310 × 310 nm. A series of images at different scan scales is shown to illustrate the fractal pattern, which likely results from drying a hydrophobic material (eumelanin) at low concentration on a hydrophilic surface (mica). The deposit reveals reaggregation of small structures, but the reassembly of this material into its natural morphology (granules) has not been achieved in vitro. The images are reprinted with permission from Liu and Simon (39). While the complete details of the assembly of the granules in vivo remain unknown, additional insight into the substructure can be obtained from high-resolution SEM and AFM studies (39). Fig. 8A shows an ultra-high resolution (UHR)–SEM image of granules deposited on mica. Two important features are manifested in this image. First, the surface of the granules appears to exhibit some substructure, with constituents ∼10–20 nm in lateral dimension. Secondly, in regions where granules are in contact, the shape of a granule changes from spherical to hexagonal.
This not only represents optimal two-dimensional packing, but also clearly indicates that the granules have a degree of plasticity and can be somewhat deformed without disintegration. The preparation of the sample imaged in Fig. 8A involves coating the material with an Au/Pd mist. This could give rise to the deposition of Au/Pd colloidal particles on the surface, which could mistakenly be interpreted as substructure. To address this possibility, UHR–SEM images of uncoated melanin, deposited on a conductive substrate such as a silicon wafer, were examined. The quality of the images is dramatically reduced, but one can still discern the roughness of the surfaces (Fig. 8B). Pigment granules without any coating can also be imaged by tapping mode AFM: the variation of the height of the cantilever across the scanning area (height image) gives true three-dimensional topology information, and the variation of the phase of the cantilever vibration across the scanning area (phase image) can indicate whether boundaries exist between regions of differing hardness. Thus, the substructure revealed by the SEM images of coated samples can be verified by examining AFM images of uncoated granules, as shown in Fig. 2A,B. The images clearly show similar substructure for the granules. These results suggest that the granules are aggregates of ∼10 nm constituents. UHR–SEM images of Sepia samples subjected to five cycles of sonication and centrifugation also support the aggregation model (Fig. 9). Thus, sonication in a water bath introduces a mechanical perturbation strong enough to remove the ∼10 nm constituents from the surface of the granules or to cause a partial decomposition of the granules. Figure 8. (A) UHR–SEM images of granules from the Sepia ink sac deposited on mica. The sample has been coated with colloidal Au/Pd. (B) UHR–SEM image of uncoated granules deposited on a silicon wafer.
Comparison of the images of the coated and uncoated samples shows the surface substructure is inherent to the melanin and not the result of the protocol used to collect the images. The images are reprinted with permission from Liu and Simon (39). Figure 9. UHR–SEM images of the supernatant after the fifth round of sonication and centrifugation, air-dried on mica. The images clearly show that sonication can result in deformation of the granules. In addition, a thin layer of material coats some of the granules; it is not melanin, but a remnant of other cellular debris present in the ink sac. This material can be removed from the melanin by more extensive washing. The images are reprinted with permission from Liu and Simon (39). It is interesting to compare the AFM images of melanin from the Sepia ink sac (Fig. 2) with those of the human hair melanosome isolated enzymatically (Fig. 10). Although the two natural melanin granules have different shapes, spherical vs. ellipsoidal, and different dimensions, roughly 150 nm vs. 450 nm × 1 μm, the surface features look the same. The surfaces of both granules are clearly not smooth but carry substructures ranging in size from a few nanometers to 30 nm. The similarity between human hair melanosomes and bovine eye melanosomes is even more striking (Fig. 3). Although they too differ in size (bovine eye melanosomes are ∼900 nm × 2 μm), the shape is similar and the surface features are essentially indistinguishable. Amino acid analysis of the natural melanins indicates there is a significant amount of protein associated with melanin (8% in Sepia and 14% in hair by mass). On a mass basis, if we approximate the eumelanin oligomer(s) as 1000 amu, then a typical melanosome represents an assembly of more than a million such building blocks. The well-defined structure revealed by the imaging experiments strongly suggests that the assembly of the fully pigmented melanosome is a tightly controlled biologic process.
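The "more than a million building blocks" figure follows from simple geometry. A minimal sketch of that arithmetic, assuming a spherical granule and a nominal pigment density of 1.3 g/cm³ (the density is an illustrative assumption; the text specifies only the granule size and the ~1000 amu oligomer mass):

```python
import math

AMU_G = 1.66054e-24  # atomic mass unit in grams


def oligomers_per_granule(diameter_nm, density_g_cm3=1.3, oligomer_amu=1000):
    """Rough count of ~1000 amu oligomers in a spherical granule.

    The density is an assumed, illustrative value; granule diameter
    and oligomer mass scale are taken from the text.
    """
    r_cm = diameter_nm * 1e-7 / 2
    volume_cm3 = (4 / 3) * math.pi * r_cm ** 3
    mass_g = volume_cm3 * density_g_cm3
    return mass_g / (oligomer_amu * AMU_G)


# A ~150 nm Sepia granule works out to on the order of a million oligomers
n = oligomers_per_granule(150)
```

For a 150 nm granule this gives roughly 1.4 million oligomers under the stated assumptions, consistent with the estimate in the text.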
Thus, not only are the primary and early oxidation steps of melanogenesis under enzymatic control, but the assembly of the deposited pigment within the melanosome must also be controlled, possibly by the distribution and localization of the enzymes in the internal membrane structure of the premelanosome (66, 67). Figure 10. AFM images of a single human hair melanosome. The scan size is 1 × 1 μm. The left panel is the height image, and the right panel is the phase image. Similar to the Sepia and bovine eye melanosomes (Figs 2 and 3), these images reveal that the hair melanosome is an aggregated assembly of ∼20 nm substructures. Aggregation affects photogeneration of reactive oxygen species (ROS) Photochemical excitation of melanin results in the activation of oxygen, the most important primary photochemical pathway being the formation of the superoxide anion, O2•−. In an effort to quantify the activation of molecular oxygen by eumelanin, Sarna and co-workers determined the action spectrum for the photoconsumption of oxygen by synthetic and natural eumelanins (68). While eumelanins exhibit absorption throughout the visible and ultraviolet regions, oxygen photoconsumption occurs only for wavelengths shorter than 400 nm. In their original report, Sarna and co-workers suggested that the chromophore responsible for oxygen photoconsumption differed from that which dominates the absorption spectrum. One explanation would be that eumelanin is composed of a number of molecular entities and one particular constituent has an absorption spectrum matching the action spectrum. Given the above discussion of the structural morphology of the pigment, however, it is reasonable to propose that different sized aggregates have different spectroscopic and photoreactive properties. In this case, a specific arrangement of the oligomeric building blocks (individual or aggregated) would be responsible for the photoconsumption of oxygen.
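Operationally, each point on such an action spectrum is a quantum efficiency: events per photon absorbed at one excitation wavelength. A minimal sketch of the bookkeeping, using Beer-Lambert absorption and purely hypothetical numbers (none of the values below come from the studies cited):

```python
def photons_absorbed(incident_photons, absorbance):
    """Beer-Lambert law: the absorbed fraction of the beam is 1 - 10**(-A)."""
    return incident_photons * (1 - 10 ** (-absorbance))


def quantum_efficiency(molecules_consumed, incident_photons, absorbance):
    """O2 molecules consumed per photon absorbed at one wavelength."""
    return molecules_consumed / photons_absorbed(incident_photons, absorbance)


# Hypothetical single-wavelength data point: 7e14 O2 molecules consumed
# for 1e18 incident photons at sample absorbance A = 0.5
qe = quantum_efficiency(7e14, 1e18, 0.5)
```

With these illustrative numbers the efficiency falls in the 10−4–10−3 range quoted later in the text for oxygen activation by melanin.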
Different mass-selected fractions of eumelanin solutions were obtained by ultrafiltration, the optical properties of which show that the absorption of the MW < 1000 fraction of eumelanin from both Sepia (69) and black human hair (70) matches the action spectrum for photoconsumption of oxygen (Fig. 11). Samples of larger masses (e.g. 1000 < MW < 3000, MW > 10 000) exhibit increased absorption at longer wavelengths, and whether these fractions are aggregates of the oligomers present in the MW < 1000 solution or longer oligomers remains to be determined. Photoacoustic calorimetry revealed that the MW < 1000 sample is the only mass fraction showing measurable energy storage following UV-A excitation (71). These data suggest that unaggregated oligomers underlie the phototoxic effects of melanin and that aggregation mitigates such processes. Figure 11. The optical spectra of the MW < 1000 fractions of (- - -) hair eumelanin and (…) Sepia eumelanin with the 270-nm absorption feature removed are compared with (-) the action spectrum for free-radical photogeneration by eumelanin. Quantitative agreement is observed. These data suggest the unaggregated oligomers make the dominant contribution to the photoconsumption of oxygen by eumelanins. The similarity observed between Sepia and human hair suggests that they contain similar if not identical molecular species. The action spectrum is reproduced with permission from Sarna and Sealy (68). It is important to ask how and if aggregation affects the mechanism and yield of ROS photoproduction by eumelanin. To address this issue, the reduction of cytochrome c (cyt c) by eumelanin excited by UV-B (302 nm) was examined. The following conclusions could be drawn from that study (46). First, while the initial reduction rate of cyt c does not change with aggregation, the optical density of the solution does. As a result, the apparent quantum yield for O2•− formation decreases with aggregation.
Secondly, studies with biochemical quenchers reveal a quantum efficiency of hydrogen peroxide (H2O2) production by the MW < 1000 fraction of 5.7 × 10−3. The formation of H2O2 is attributed to reaction between O2•− and semihydroquinone groups on the oligomeric molecules. The quantum yield becomes immeasurable upon aggregation of the oligomers, and the mechanism of this quenching remains to be established. In any event, aggregation clearly reduces the production of this highly toxic oxidant. The aggregation-dependent generation of ROS by eumelanin presents a framework for understanding the contrasting photoprotective and phototoxic roles exhibited by eumelanin. The equivalence between the action spectrum and the absorption spectrum, along with the aggregation-dependent quantum efficiencies for O2•− and H2O2, implicates the oligomers as the phototoxic component. Thus, any change in melanin that disrupts the aggregated structure could result in increased oxidative stress. The fact that oligomers generate more H2O2 than the aggregated pigment may have significant biologic ramifications. H2O2 can react with a variety of cellular components, causing, for example, lipid peroxidation of membranes and hydroxylation of proteins and DNA. Such processes may be important in skin keratinocytes, where melanosomes are partially degraded (72), or in retinal pigment epithelium cells, where the structural features of melanosomes are found to change with age (73, 74). Probing non-radiative relaxation of eumelanins by time-resolved optical spectroscopy The low emission quantum yield (10−5–10−3, depending on the aggregation) of photoexcited eumelanin suggests the pigment efficiently releases the absorbed light energy through non-radiative means (75). Furthermore, the low quantum yield for oxygen activation (10−4–10−3, depending on the aggregation) suggests that the dominant process is likely rapid relaxation to the ground electronic state of the pigment (46).
Such a conclusion is supported by photoacoustic studies of melanins (71), but the time resolution of this technique reveals only that non-radiative relaxation occurs on the sub-nanosecond time scale. The initial photophysical and photochemical events following UV excitation of melanins can be quantified by using time-resolved optical spectroscopy. Both time-resolved absorption and fluorescence experiments have been performed on melanins. Time-resolved absorption studies suggest rapid repopulation of the ground state following UV-A and UV-B excitation (T. Ye and J.D. Simon, unpublished data). The recovery dynamics are complex, and three time constants are revealed (Table 2). Despite the complexity of the decay dynamics, it is clear that melanin efficiently and rapidly converts absorbed photon energy into heat. The rapidity of this process mitigates the potential for adverse photochemistry and supports the hypothesis that melanin plays, in part, a role in photoprotection.

Table 2. Time constants and amplitudes (in parentheses) from time-resolved experiments on eumelanin.
Transient absorption*: 0.56 ± 0.8 ps, 3.2 ± 0.5 ps, 31 ± 5 ps
Emission decay: 58 ± 7 ps (0.54), 0.51 ± 0.07 ns (0.22), 2.9 ± 0.5 ns (0.16), 7 ± 1 ns (0.08)
*The amplitudes of these decay constants vary with probe wavelength.

The decay-associated spectrum for each time constant could provide information on the spectral regions associated with a particular decay component. In this manner, it may be possible to decompose the transient spectrum into contributions from different molecular entities, each of which exhibits a particular decay time constant. Unfortunately, in the case of eumelanin, such an analysis does not cleanly assign a specific absorption feature in the transient spectrum to a given time constant. The resolution afforded by the global data analysis suggests that all spectral features present in the transient spectrum contribute to the multi-exponential decay.
It is likely that these spectra are derived from a small set of structurally similar chromophores within the eumelanin pigment. This would explain the non-exponentiality of the decays, the similar vibronic structure observed in the transient spectrum, and the differences in the relative intensities of the vibronic bands. Given that the connectivity of the monomer units in the structure of eumelanin is not known, it is currently not possible to speculate on the molecular details that give rise to these spectral differences. It is interesting to compare these dynamics with those obtained from fluorescence lifetime measurements. The emission dynamics of melanin are non-exponential and require a sum of exponentials to generate functional forms that provide fits to experimental data. In a previous report on Sepia eumelanin (76), four time constants were needed to describe the emission collected at 520 nm (near the maximum of the spectrum) following excitation at 355 nm (see Table 2). It is interesting to note that the fastest lifetime component of the emission is in rough agreement with the slowest time constant revealed by the ultrafast absorption measurements, and thus it is possible that both experimental techniques ‘sense’ the decay of the same molecular species, but the time resolution of the emission experiments is not sufficient to observe the faster decay component. One must exercise caution in interpreting the transient spectroscopic results of melanins in terms of the photobiologic properties of the pigments. While interesting dynamics are revealed, it remains to be established whether either transient absorption or emission experiments probe the molecules responsible for the photoaerobic reactivity of melanins. To address such issues, it will be important to measure the transient optical properties and the action spectra for photoaerobic processes (oxygen photoconsumption, superoxide formation) on the same set of samples. 
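The emission decay in Table 2 is a four-component sum of exponentials. A minimal sketch (Python) evaluating that fitted functional form with the amplitudes and lifetimes quoted in the table:

```python
import math

# Emission-decay components from Table 2: (amplitude, lifetime in ns)
COMPONENTS = [(0.54, 0.058), (0.22, 0.51), (0.16, 2.9), (0.08, 7.0)]


def emission_decay(t_ns):
    """Fitted intensity I(t) = sum_i a_i * exp(-t / tau_i) at time t (ns)."""
    return sum(a * math.exp(-t_ns / tau) for a, tau in COMPONENTS)
```

Because the quoted amplitudes sum to 1.00, the model is normalized to unit intensity at t = 0, and the intensity falls monotonically thereafter.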
Comparisons across the published literature are potentially problematic because of the range of methods used to isolate and prepare melanins and the effects these procedures have on the integrity and molecular structure of the pigment. Future directions This is an exciting time in melanin research. The last decade has witnessed the development of a diverse set of powerful analytical tools, many of which can be applied to characterize natural pigments. Advances in biochemical techniques are now enabling the isolation of fully assembled melanosomes, so one can start asking detailed biophysical questions concerning the nature of the assembly. We are optimistic that significant advances will be forthcoming over the next several years. The following topics are presented in the hope of exciting and inspiring those working in the field or interested in contributing to biophysical studies of melanins. Molecular Structure(s) of the Oligomers While the early steps of melanogenesis are understood (77–80), the molecular structures of the resulting oligomers remain elusive. Mass spectral studies have provided some insight into the molecular weights of the constituent oligomers, and oligomers of DHI and/or DHICA with a linear connectivity have been proposed (19, 62–64). Recent NMR studies of melanins suggest cross-linked connectivity between monomers (20). Highly cross-linked planar oligomeric structures have been suggested by X-ray scattering (10, 81) and STM experiments (11, 82). At this point, it is unclear whether melanogenesis produces a heterogeneous set of oligomers at appreciable relative concentrations or whether a single molecular species dominates. Recent advances in high-performance liquid chromatography (HPLC)–MS/MS techniques suggest that this analytical tool could play a powerful role in elucidating the structures of the oligomers.
While standard MS techniques provide the molecular weight, MS/MS and higher-order fragmentation can provide information on the detailed structure. In addition, the recent commercialization of HPLC–NMR may also prove to be a valuable technique for structural determination. The molecular structure(s) of melanin is one area where theoretical insights currently outpace experimental achievements. Galvao and Caldas were the first to carry out theoretical calculations on model polymers for eumelanins in the late 1980s (83, 84). In the last two years, several groups have used modern computational chemistry tools to examine the structure and spectroscopy of monomers (85), dimers (86, 87), and oligomers of eumelanin (88, 89). While these papers are of interest in their own right, there is no evidence that the molecular structures explored are actually present in melanin. Experiments are surely needed in this area. With knowledge of the naturally occurring structures, the computational tools could be used to provide a deep understanding of the spectroscopy and association of these molecules in the aggregated pigment. This article has focused on eumelanin. In nature there is a second pigment, pheomelanin, which is yellow-reddish in color. The biogenesis of pheomelanin differs from that of eumelanin in that cysteine is incorporated into the structure and the building blocks are derived from cysteinyl-dopa. Epidemiological data indicate that individuals with fair skin are more susceptible to skin cancers than their darker-skinned counterparts (3). This observation is commonly associated with the hypothesis that pheomelanin exhibits a greater phototoxicity than eumelanin. In support of this hypothesis, Prota and co-workers explored whether there was a relationship between hair melanin composition and minimal erythemal dose (MED) in a group of red-haired individuals (90).
They found a correlation between the eumelanin/pheomelanin ratio and the MED values, suggesting that UV sensitivity is associated with high pheomelanin and low eumelanin levels and that the eumelanin/pheomelanin ratio may be a chemical parameter for predicting individuals at high risk for skin cancer and melanoma. Cellular studies support this general concept; UV-A-induced DNA single-strand breaks in human melanocytes differing only in the amount of pigment produced showed photosensitization by intrinsic chromophores, most likely pheomelanin and/or melanin intermediates (91). These results indicate the need to understand the molecular composition, the structure, and the photobiology of pheomelanin. To date, most of our limited knowledge of this pigment comes from the study of synthetic samples. Careful studies of pheomelanin from natural systems are needed, as are quantitative comparisons between such a sample and a related natural eumelanin. The Role of Metals Melanin is able to sequester heavy-metal ions, such as iron, copper, zinc, and lead, in principle protecting the surrounding tissue from their cytotoxicity. However, it has also been proposed that when the binding sites of a melanosome become saturated, the integrity of the melanosome could be compromised, and heavy-metal ions and/or melanin oligomers could be released from the melanin and trigger acute damage to the cell (92). Along these lines, a link between Fe(III) binding to neuromelanin and the death of pigmented neurons and the pathogenesis of Parkinson's disease has been proposed (93, 94). To what extent melanin can bind metals like Fe(III), and how this binding affects the structure and function of the pigment, represent a set of interesting questions that have not received careful attention. There is clearly an opportunity to carry out significant work in this area, especially if the work can be carried out with well-characterized samples of neuromelanin.
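The saturation argument can be made concrete with the simplest possible binding model. A sketch assuming independent, identical sites (a Langmuir isotherm; this model and any numbers used with it are assumptions for illustration, since the text gives no binding model or constants for metal uptake by melanin):

```python
def fraction_occupied(free_metal, kd):
    """Single-site Langmuir binding: theta = [M] / (Kd + [M]).

    Kd and [M] must share the same (arbitrary) concentration units.
    Illustrative assumption only; melanin's actual metal-binding
    behavior is likely heterogeneous and multi-site.
    """
    return free_metal / (kd + free_metal)
```

At [M] = Kd half the sites are filled; at [M] much greater than Kd the pigment approaches saturation, the regime in which release of metal ions or oligomers is hypothesized.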
Molecular Aspects of the Aging Melanosome Sarna and co-workers examined the photochemically induced uptake of oxygen by human RPE melanosomes from different age cohorts (95). These data clearly show that the activation of oxygen by melanosomes increases with age (Fig. 12). The emission intensity of RPE melanosomes also increases with age (96). It is interesting to note that Schraermeyer and co-workers recently demonstrated that melanin fluorescence from synthetic melanin and from melanin isolated from bovine melanosomes increases after oxidation (97). One could therefore hypothesize that the age-dependent aerobic photoreactivity of RPE melanosomes reflects concomitant changes in the molecular structure of the melanin (e.g. oxidative damage) and/or increased concentrations of bound redox-active metal cations. Because of the potential link between changes in the photobiology of retinal melanosomes and cell atrophy of the RPE layer, a molecular understanding of the origin of these effects would be important in the prevention of diseases of this tissue. Figure 12. Broadband, blue-light-induced oxygen uptake in suspensions of melanosomes isolated from donors of the following ages: <40 yr (□), 42–60 yr (▵), 61–80 yr (○), >80 yr (◊). Oxygen uptake in the dark was negligible in all samples studied. The concentration of pigment granules was adjusted to 4 × 10⁹ granules/ml. Oxygen uptake was measured with ESR oximetry. The data clearly show that the aerobic photoreactivity of melanosomes increases with age. The images are reprinted with permission from Rozanowska et al. (95). Acknowledgements– This work was partially supported by the National Institute of General Medical Sciences. We thank Unilever Research US for continued support of this work. We also thank the following co-workers for their contributions to the work described herein: Dr Susan E. Forest, Dr Chris M.R. Clancy, Dr J. Brian Nofsinger, Dr Tong Ye, Valerie R. Kempf, Prof. Shosuke Ito, Prof.
Kazumasa Wakamatsu, Dr Mark Rudnicki, Emily E. Weinert, Leslie Eibest, Prof. Yuri Il'ichev, and Prof. T. Sarna.
AUSTIN, Texas—An Austin, Texas, technology company says 20 of its employees were aboard the Malaysia Airlines plane that went missing over the South China Sea. Jacey Zuniga, a spokeswoman for Freescale Semiconductor, says 12 Malaysian and eight Chinese employees are "confirmed passengers." She says no Freescale employees who are American citizens were on the flight. "At present, we are solely focused on our employees and their families," Gregg Lowe, president and CEO of Freescale, says in a statement. "Our thoughts and prayers are with those affected by this tragic event." The company, the statement reads, has assembled a team of counselors for those impacted by the tragedy. Flight MH370, a Boeing 777, was last seen on radar at 1:30 a.m. (1730 GMT Friday) above the waters of the South China Sea. Freescale Semiconductor is a technology company focused on what it calls "embedded processing solutions." It works with clients in a variety of markets, including automotive and consumer electronics, to address technology issues using microprocessors and sensors.
Placement of Fingers in Icons What is the significance of the placement of the fingers of the right hand I notice in icons of Christ and certain saints? The fingers are arranged to form the following letters—IC XC—which are the first and last letters of “Jesus” and “Christ” in Greek.
Birth of Two Chimeric Genes in the Hominidae Lineage Science  16 Feb 2001: Vol. 291, Issue 5507, pp. 1293-1297 DOI: 10.1126/science.1057284 How genes with novel functions originate remains a fundamental question. PMCHL1 and PMCHL2, two chimeric genes derived from the melanin-concentrating hormone (MCH) gene, offer an opportunity to examine this issue in the human lineage. Detailed structural, expression, and phylogenetic analysis showed that the PMCHL1 gene was created about 25 million years ago (Ma) by a complex mechanism of exon shuffling through retrotransposition of an antisense MCH messenger RNA coupled to de novo creation of splice sites. PMCHL2 arose 5 to 10 Ma by a duplication event involving a large chromosomal region encompassing the PMCHL1 locus. The RNA expression patterns of these chimeric genes suggest that they have been subjected to strong regulatory constraints during primate evolution. Processes of exon shuffling, retrotransposition, and gene duplication have been suggested to lead to the creation of novel genes with specific expression characteristics and to the fixation of advantageous novelties by acquisition of functional constraints (1, 2). However, because of the rapid sequence divergence characteristic of novel genes, studying the origin of a gene in detail requires the discovery of a young gene, in particular one that has retained important features of its early stages (3, 4). Because of their recent history, two human chimeric genes, PMCHL1 and PMCHL2, open an unprecedented way to analyze the molecular mechanisms of gene remodeling and selection of functions that have operated during the late stages of primate evolution. The PMCHL genes were named pro-MCH–like 1 and 2 genes (PMCHL1 and PMCHL2) on the basis of partial identity to the MCH gene (5).
The human MCH gene maps to chromosome 12q23 and encodes a neuropeptide precursor, whereas PMCHL1 and PMCHL2 are located on human chromosomes 5p14 and 5q13, respectively, and correspond to 5′-end truncated versions of the MCH gene (6). In previous studies, we revealed that the PMCHL genes arose recently during primate evolution by a first event of truncation/transposition from the ancestral chromosome 12 to the ancestral chromosome 5p about 25 to 30 Ma, i.e., before divergence of the Cercopithecoidea. This was followed by a second duplication event, which operated in the Hominidae lineage about 5 to 10 Ma and which distributed the two genes on each side of the chromosome 5 centromere (7). Both unspliced sense and antisense transcripts from the PMCHL1 gene, but not the PMCHL2 gene, have been observed in different areas of the developing human brain (8, 9). A puzzling issue concerns the relation between their recent emergence and their putative function or, more precisely, whether the PMCHL genes are previously uncharacterized functional genes or inactive pseudogenes. This made it crucial to further study the structure, expression, and early molecular evolution of the PMCHL genes. The focus on the molecular mechanisms responsible for the emergence of MCH-derived sequences on human chromosome 5 first came from parallel studies on the regulation of MCH gene expression undertaken in our laboratory. Recently, in human and rodents, we identified two classes of antisense RNAs complementary to the MCH gene (10): (i) spliced-variant mRNAs complementary in their 3′ end to the MCH gene, encoding novel DNA/RNA binding proteins, and (ii) short noncoding unspliced RNAs that overlap only the coding part of the MCH gene (MCH exons II and III) and initiate at cap sites CS3-5 (Fig. 1A). This transcriptional unit was named AROM, for antisense-RNA-overlapping-MCH gene (10).
Concurrently, our analysis of the structure of the PMCHL genes revealed the presence of a stretch of A residues at the end of the MCH-derived portion that exactly coincides with one of the polyadenylation [poly(A)] sites found within the AROM gene, polyA(b) (Fig. 1A). This led to the conclusion that a MCH-derived sequence was likely inserted in the ancestral chromosome 5p by an event of retrotransposition of an AROM messenger RNA, incidentally strongly expressed in testis (10), as depicted in Fig. 1B. Figure 1 (A) Extent of the homology between the MCH/AROM locus on 12q24 and the PMCHL loci on 5p14/5q13. The MCH/AROM and PMCHL exon structures given here are based on Borsu et al. (10) and Viale et al. (9), respectively. MCH- and PMCHL-derived exons are marked with roman numerals, and AROM exons are in arabic numerals. Dotted lines define the limits of the 12q24 sequence that was retrotransposed onto chromosome 5 during primate evolution. The position of the region of homology and the exon-intron nomenclature are as previously described (9). Inverted black triangles correspond to AROM polyadenylation sites [poly A (a, b, or c)]. Arrows (CS1 and CS2) and the thick black line (CS3-5) represent the AROM cap sites (CS) (10). Percent homology between the MCH/AROM and PMCHL loci is also indicated. AAAA illustrates the poly(A) tail found at the end of the retrotransposed sequence: (A)11 on 5p14 and (A)14 on 5q13. GenBank accession numbers are as follows: PMCHL1, AF238382; PMCHL2, AF238383; MCH, M57703; and AROM, AF303035. (B) Proposed model for the emergence of the MCH-derived sequence on chromosome 5p. (a) An AROM mRNA initiating in the CS3-5 region and ending at the poly A (b) polyadenylation site was retrotransposed onto the equivalent of chromosome 5p at the time of Catarrhini divergence, 25 to 30 Ma.
(b) After this first event, or concurrent with it, an Alu sequence was inserted in intron A, and a fragment corresponding to the 3′ end of the retrotransposed mRNA (part of exon II-intron A-Alu) was broken off and transposed to the downstream insertion site. This led to the PMCHL gene versions observed in Cercopithecoidea and Hominoidea. By combining "in silico" (through computer modeling) screening [BLAST searches of GenBank against many databases at the Web site of the National Center for Biotechnology Information of the National Institutes of Health (11)] and direct sequencing of bacterial artificial chromosome (BAC) clones specific to the chromosomal regions 5p14 and 5q13 (12), the genomic structure of the PMCHL genes was further compared. According to the Web survey, several expressed sequence tags (ESTs) were found in two categories: (i) 3′ cDNA clones IMAGE ah92f11.s1 and qf54b04.x1, which are parts of PMCHL1 spliced sense transcripts, and (ii) 3′ cDNA clones IMAGE qf66aO4.x1, al54h4.s1, and al47h07.s1, corresponding to parts of PMCHL2 unspliced antisense transcripts and indicating that the regulation of the expression of the PMCHL genes was far more complex than previously thought (9). Structural analysis of these genes was refined by using rapid amplification of cDNA ends and polymerase chain reaction (RACE-PCR) and reverse transcriptase–PCR (RT-PCR) (13) in conjunction with the genomic analysis. As shown in Fig. 2A, we revealed PMCHL1/PMCHL2 gene expression in human testis and established the precise 5′ and 3′ ends of the sense and antisense PMCHL1 unspliced RNA products previously described in different areas of the human brain (9). We also found in human fetal brain and in human adult testis several classes of alternatively spliced mRNAs (Fig. 2B). This suggested that at both loci, MCH-derived retrotransposed sequences recruited a group of downstream exons and introns into their transcription units, thereby creating previously unknown genes with a chimeric structure.
The existence of such an impressive variety of PMCHL1 and PMCHL2 transcripts results from the use of four polyadenylation sites (A1-A4) and tissue-specific modulation of alternative splicing (Fig. 2B). Several cap sites were also found on the basis of RACE-PCR experiments. PMCHL2 cap sites were mainly located from 500 base pairs (bp) to more than 2 kb upstream of the insertion site, whereas PMCHL1 cap sites were found 500 bp upstream as well as 50 to 100 bp downstream of the insertion site. However, because of the complex population of mRNAs in all the tissues analyzed, it was not possible to assign a precise cap site to each class of mRNAs. Even though we cannot exclude artifactual pausing of the reverse transcriptase during synthesis of the cDNA products, this suggests that alternative splicing coupled with different transcription start points is probably a mechanism that allows the cell to generate a "wide repertoire" of PMCHL gene transcripts. Figure 2 Schematic representation of the PMCHL transcripts and potentially functional ORFs. (A) Sense and antisense unspliced mRNA products. (B) Alternatively spliced transcripts. Dotted lines delineate the region recruited from chromosome 12. The genomic organization of the transcription units is indicated in each case. The exons are boxed in gray and numbered in arabic numerals. Exon x′ illustrates alternative 3′ splice donor sites. 4′t and 4′b are tissue-specific 3′ splice donor sites; T is for testis and FB is for fetal brain. White stripes at the 5′ end of the RNAs indicate that a unique precise cap site was not assigned to these populations of mRNAs. The gene and tissue specificities of expression are indicated for each class of RNA: 5p and 5q stand for the PMCHL1 and PMCHL2 transcription units, respectively. Polyadenylation sites are represented by small dark bars (A1-A4, As1, As2). Canonical polyadenylation signals AATAAA were found a few bases upstream of the sites of poly(A) addition (A1, As1, and As2).
Putative polyadenylation signals ATTAAA were also found 29 and 17 bases 5′ of the A2 and A3 sites of poly(A) addition, respectively, and a GATAAA signal was found 40 bases 5′ of the A4 site. Although nonconventional, ATTAAA and GATAAA have previously been noted to serve as polyadenylation signal sequences (5, 22, 28). Black lines indicate the extent of the potentially functional ORFs. Upper black lines are ORFs specific to the PMCHL1 transcripts (5p locus), and lower black lines are ORFs specific to the PMCHL2 transcripts (5q locus). The translation of DNA sequences into protein sequences was conducted on the Web site of the NCBI of the NIH. The longest open reading frames (ORFs) initiated from an ATG codon in a reasonable translation initiation context (14) were deduced from the mRNA sequences obtained by RACE-PCR and RT-PCR. Two major classes of ORFs (≥33 amino acids) were found regardless of the alternative splicing pattern (bracketed in Fig. 2): (i) ORFs encoded by exon 1 and intron A (unspliced RNAs) and by exon 1/exon 2/exon 2′ (spliced RNAs) exhibit a strong similarity with pro-MCH, and (ii) ORFs encoded by exons 4 to 5a and 5b display no sequence similarity with known proteins. No long ORF could be found for the antisense RNAs. We previously demonstrated, in an in vitro translation assay and in transfected Cos cells, that sense unspliced PMCHL1 transcripts may produce a nuclear localization signal (NLS)–containing protein deduced from the ORF 1 sequence (Fig. 2A) (9). Direct proof of the translational ability of the spliced mRNA products described here is still lacking. However, the facts that both PMCHL1 and PMCHL2 are specifically and differentially regulated in testis and that only PMCHL1 is expressed in human fetal as well as newborn and adult brains (9) (Fig. 2) are consistent with the conclusion that these newly originated genetic elements are transcriptionally active and tightly regulated genes.
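The ORF scan described above (longest open reading frame beginning at an ATG, kept if it encodes at least 33 amino acids) is a standard computation. A minimal sketch, where only the ≥33-codon threshold comes from the paper; the sequence and function name are illustrative, and the Kozak-context check the authors applied is omitted for brevity:

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def longest_orfs(mrna, min_aa=33):
    """Find ORFs starting at ATG in all three reading frames of an mRNA.

    Returns (start, end, length_in_codons) tuples for ORFs of at least
    min_aa codons (Met included), longest first. An ORF ends at the
    first in-frame stop codon or at the end of the sequence.
    """
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(mrna):
            if mrna[i:i + 3] == "ATG":
                # extend until the first in-frame stop codon
                j = i
                while j + 3 <= len(mrna) and mrna[j:j + 3] not in STOP_CODONS:
                    j += 3
                n_codons = (j - i) // 3
                if n_codons >= min_aa:
                    orfs.append((i, j, n_codons))
                i = j  # resume the scan past this ORF
            else:
                i += 3
    return sorted(orfs, key=lambda orf: -orf[2])

# Example: a 6-codon ORF in frame 0 of a toy sequence.
example = "ATG" + "GCT" * 5 + "TAA"
print(longest_orfs(example, min_aa=3))  # [(0, 18, 6)]
```

A real analysis would run this on each RACE-PCR/RT-PCR-derived mRNA sequence and then filter by translation-initiation context, as the authors describe.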
To determine whether the divergent expression patterns of PMCHL1 and PMCHL2 could be explained by a different genomic environment in the flanking regions, we expanded our comparative analysis of the genomic structure of the PMCHL genes. The nucleotide sequence of the PMCHL1 and PMCHL2 genomic regions over 17 kb revealed similar genomic environments, with strong sequence identity (98%) between the 5p14 and 5q13 loci. To further delineate the extent of the region duplicated on both arms of chromosome 5, we performed fluorescent in situ hybridization (FISH) analysis on human metaphase chromosomes with several BAC clones bearing the PMCHL1 locus and extending more than 100 kb both 5′ and 3′ of this gene (namely 2303C18, 344I3, 283L20, and 811M22) (Fig. 3A). All of these clones displayed the same hybridization patterns, with strong cross-hybridization on both arms of human chromosome 5 at bands 5p14 and 5q13 (Fig. 3B). This showed that the duplication event that took place 5 to 10 Ma involved a large region of ancestral 5p14 encompassing several hundred kilobases. However, further studies are required to delineate the particular environment of cis-regulatory elements driving the striking tissue-specific expression of both PMCHL genes. Figure 3 (A) Genomic structural organization of the PMCHL genes. 15.4 kb of genomic sequence from both PMCHL loci was obtained by direct sequencing (both forward and reverse strands) of the 5p14-specific 283L20 and the 5q13-specific 484D2 BAC clones, bearing the PMCHL1 and PMCHL2 loci, respectively (12). The dashed line represents the 1.6-kb unsequenced part of intron C. Arrows indicate BAC clone ends (not drawn to scale), and the lines represent the extent of the clones. Their localization and orientation were determined by in silico screening. BAC clones in red were used for in situ hybridization analysis on metaphase chromosomes. All the clones described in this study come from the CIT-HSP BAC library.
Blue boxes correspond to interspersed repeated sequences (same orientation as the PMCHL genes, light blue; opposite orientation, dark blue). a, LINE/L2; b, SINE/MIR-LINE/L2; c, SINE/Alu; d, LTR/THE-1B; e, SINE/Alu; f, MER91A; g, SINE/MIR; h, LINE/L1MA8; i, SINE/MIR-LINE/L1M1; j, LTR/ERVL-LINE/L1MA9; k, SINE/Alu; and l, LTR/MLT1E2. GenBank accession numbers are as follows: PMCHL1, AY08405 and PMCHL2, AY08406 (29). (B) FISH on human chromosomes with the chromosome 5p–specific BAC clones 283L20 (left) and 811M22 (right). (C) FISH of the same mouse metaphase with the chromosome 5p–specific BAC clone 811M22 (left) and a whole-chromosome painting (WCP) probe for mouse chromosome MMU15 (right). FISH was performed as previously described (15) on metaphase chromosomes from human peripheral blood lymphocytes and from the mouse SV22-CD cell line. Fluorescent images were captured using a high-resolution cooled charge-coupled device (CCD) camera C4880 (Hamamatsu). Image acquisition, processing, and analysis were performed using the Vysis software package (Quips SmartCapture FISH). As we suggested above, the source of the 5′ exons was identified as a retrotransposed sequence originating from the MCH/AROM locus. However, the origin of the 3′ exons remained unclear. We examined several hypotheses concerning the origin of these non–MCH-derived PMCHL exons: (i) these exons might be part, or a duplicate, of a pre-existing unrelated gene, supporting the concept of exon shuffling, or, alternatively, (ii) these exons might originate from a unique genomic sequence that fortuitously evolved a standard intron-exon structure and regulatory sequences for PMCHL. To study the early molecular evolution of the PMCHL transcription units, we first performed a FISH analysis (15) on mouse metaphase chromosomes with BAC clones surrounding the area of insertion of the MCH-derived sequences (namely 2303C18, 344I3, and 811M22) (Fig. 3A).
Only the 811M22 BAC clone, bearing the 3′ PMCHL exons but not the 5′-transposed portion of the gene, displayed a clear unique hybridization signal. This signal was found in the pericentromeric region of mouse chromosome 15 (Fig. 3C). After comparing this result with the mapping data found in the "Mendelian Inheritance of Man gene map" and "mouse to human homology region map" databases (16), we propose that the transposed MCH sequence was inserted in a region close to the site of the evolutionary rearrangement that disrupted the conserved synteny relationship with the mouse (Mus musculus) genome from MMU13 to MMU15. Furthermore, probes bearing the 3′ exons did not reveal any cross-hybridization signal in mouse or other primates [this study, (7)], and these exonic sequences did not display similarity to any sequence in the GenBank database except the IMAGE cDNA clones cited previously. This ruled out the hypothesis that the 3′ exons might be a duplicate of a pre-existing unrelated gene. However, this does not exclude the possibility that the retrotransposed sequence was inserted into a pre-existing gene on 5p. To test this alternative, the phylogeny of the PMCHL intronic and exonic sequences was analyzed. We attempted to amplify the corresponding region from DNA samples of nine species of primates and of mouse by using the set of primers used to amplify intronic and exonic sequences of human genomic DNA (17). PCR products of the same size as those obtained from human DNA were obtained from seven primate species [Pan troglodytes (PTR), Pan paniscus (PAN), Pongo pygmaeus (PPY), Hylobates lar (HLA), Cercopithecus hamlyni (CHA), Papio papio (PAP), Cebus capucinus (CCA)]. All of the amplified products obtained from anthropoids were sequenced and compared with the human DNA sequence. The comparative phylogenetic analysis of the PMCHL intron-exon boundaries (Fig.
4) revealed that consensus sequences at the 5′ donor splice site and the 3′ acceptor splice site of the PMCHL1 intron A (intron Bv, Fig. 1A) are conserved in all the primates, suggesting the existence of a functional constraint. Similarly, strong sequence conservation was noted at the intron B and C boundaries. In contrast, a splice donor site in intron D was created in Cercopithecoidea (CHA) as a result of a C to T substitution at nucleotide +2. Alternative splice acceptor sites for exon 5a and exon 5b were also created by nucleotide substitution, GA to AG in Hylobatidae (HLA) and G to A at nucleotide +1 in Cercopithecoidea (PAP and CHA), respectively. Furthermore, the poly(A) signals PS2 and PS3 corresponding to the poly(A) addition sites A2 and A3 (Fig. 2B) were also found to be sites of mutations. Interestingly, a C nucleotide was found at nucleotide +3 of PS2 in CHA and PAP but not in CCA, suggesting that this mutation arose specifically in the Cercopithecidae (Fig. 4). Furthermore, HLA possesses the same ATTAAA sequences as those found in human, whereas CCA, PAP, and CHA have GA and TC at nucleotides +3 and +4 in PS3 (Fig. 4). Therefore, these results are consistent with the hypothesis that the 3′ part of the PMCHL transcription unit evolved from noncoding DNA in a common ancestor of Hominoids through the creation of standard intron-exon boundaries and poly(A) signals that have been conserved in humans. Figure 4 Phylogenetic analysis of the intron-exon boundaries and poly(A) signals of the PMCHL gene. Exonic nucleotide sequences are in uppercase letters, and intronic nucleotide sequences are in lowercase. The most extended consensus sequences at the 5′ splice donor site and 3′ splice acceptor site are indicated. The nearly invariant dinucleotides GT/AG at the extreme 5′ (donor) and 3′ (acceptor) ends of the introns are in bold characters. Dashes indicate identity to the human sequence. Sequence differences at the consensus sites are in gray.
Sequences are arranged according to evolutionary lineage. Intron C does not possess a canonical functional 5′ donor end; it has a TT instead of a GT dinucleotide. GenBank accession numbers are as follows: PAN sequences, AY008414, AY008423, and AY008426; PTR sequences, AY008416, AY008418, AY008424, AY008429, and AY008433; PPY sequences, AY008415, AY008419, AY008422, and AY008425; HLA sequences, AY008417, AY008420, AY008421, AY008427, and AY008432; CHA sequences, AY008428 and AY008430; CCA sequence, AY008431. In CHA and PAP, which do not carry functional splice sites, we succeeded in amplifying only a small part of the AROM/MCH retrotransposed sequence from genomic DNA. In addition, a strong divergence of the PMCHL1 sequence was noted in these species, reflecting weak selective constraint (18). The similar exon structure of the PMCHL genes found in HSA, PAN, PTR, PPY, and HLA, together with the sequence divergence of the retrotransposed AROM/MCH sequences in the Cercopithecoidea, indicates that only a relatively short time elapsed between the first insertion event, the subsequent mutation events leading to the recruitment of intronic and exonic components into a functional transcription unit, and speciation. As expected for emerging functions, the underlying genes were likely to undergo fast divergence until they gained stronger physiological constraints. This strongly suggests that the PMCHL gene was conserved in Hominidae owing to the acquisition of some constraints, probably an emerging role in primates. Our results reveal the molecular, genetic, and evolutionary mechanisms that participated in the origin of the two chimeric functional genes PMCHL1 and PMCHL2 in the Hominidae lineage (Fig. 5). Taken together, our data on the tissue-specific expression and the conserved features of the PMCHL genes suggest that their mRNA or protein has been "exapted" into a functional role [i.e., co-opted into a variant or newly characterized function (19)] in the primate lineage.
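The splice-site gains and losses described above come down to checking the near-invariant GT/AG dinucleotides at the intron ends. A minimal sketch; the example substitution mirrors the reported C to T change at donor position +2, but the sequences themselves are invented:

```python
def has_canonical_splice_sites(intron):
    """True if an intron begins with the GT donor dinucleotide and ends
    with the AG acceptor dinucleotide (the near-invariant consensus)."""
    return intron.startswith("GT") and intron.endswith("AG")

# Hypothetical intron: C at donor position +2, so no canonical GT donor.
ancestral = "GCAAGT" + "T" * 20 + "CAG"
# A C->T substitution at +2 creates the GT donor, as reported for the
# Cercopithecoid intron D donor site.
derived = "GTAAGT" + "T" * 20 + "CAG"

print(has_canonical_splice_sites(ancestral),
      has_canonical_splice_sites(derived))  # False True
```

A full analysis would score the wider donor/acceptor consensus rather than just the terminal dinucleotides, which is what the alignment in Fig. 4 displays.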
The identification of the many processes in genome evolution has shown that de novo generation of building blocks—single genes or gene segments coding for protein domains—seems to be rare. Instead, genome novelty has mainly been built by modification, duplication, and functional changes of the available blocks through processes of gene duplication, exon shuffling, or retrotransposition of genes (3, 20–24). In the context of human genome evolution, the previously unknown mechanism of transcript fusion of the adjacent Kua and Uev genes was recently proposed to create a chimeric Kua-Uev mRNA and the cognate fused protein (25, 26). However, in the case we describe, the recruited portion fused to the AROM/MCH-derived sequences was shown to have originated from a unique noncoding sequence. Moreover, the complex structure and evolutionary history of PMCHL encompass several phenomena pointing to an important role for introns in the origin of newly characterized genes, as the exon theory of genes has suggested (27): (i) emergence of the 5′ exons by duplication of a 5′-end truncated part of the MCH gene via retrotransposition of an antisense MCH mRNA; (ii) creation of 3′ exons from a unique noncoding genomic sequence that fortuitously evolved a standard intron-exon structure and polyadenylation signal sequences; (iii) alternative transcriptional initiation and splicing processes, further complicated by the presence of antisense RNAs; and (iv) a nested gene encoding unspliced mRNA products. In the context of genome research, the existence of such gene structures poses a particular dilemma for the prediction of exons from genome sequence data. In fact, the complex gene structure of the PMCHL loci, as described here, was not predicted by genome sequence and exon prediction programs (GRAIL, Fex, Hexon, MZEF, Genemark, Genefinder, Fgene, Polyah).
Figure 5 Proposed model for the emergence of the chimeric PMCHL1 and PMCHL2 genes during primate evolution. A MCH-derived sequence originated on chromosome 5p by a complex event of retrotransposition (detailed in Fig. 1B) at the time of Catarrhini divergence, 25 to 30 Ma. Intron-exon boundaries and poly(A) signals were created by subsequent mutation processes before the divergence of Hylobatidae, 15 to 20 Ma, leading to the chimeric gene structure observed in the Hominoidea. A last event of duplication, involving a large region of ancestral 5p14 encompassing several hundred kilobases, led to the distribution of PMCHL1 and PMCHL2 on either side of the chromosome 5 centromere. This occurred in the Hominidae lineage, about 5 to 10 Ma. Exons based on mRNAs characterized in human are boxed in gray or white and marked with arabic numerals. The brackets indicate consensus alternative splice acceptor sites for exons 4 and 5b. Polyadenylation sites are represented by small dark bars. Arabic numerals in gray indicate the location of the unique noncoding sequences that gave rise to exons. Dashed lines indicate that the MCH-derived sequence is absent in Platyrrhini. * To whom correspondence should be addressed. E-mail: nahonjl{at}
Coccinellidae (/ˌkɒksɪˈnɛlɪdaɪ/) is a widespread family of small beetles ranging from 0.8 to 18 mm (0.0315 to 0.708 inches) in size. The family comprises the subfamilies Chilocorinae Mulsant, 1846; Coccidulinae Mulsant, 1846; Coccinellinae Latreille, 1807; Epilachninae Mulsant, 1846; Hyperaspidinae Duverger, 1989; Microweiseinae Leng, 1920; Scymninae Mulsant, 1846; and Sticholotidinae Weise, 1901. Coccinellids are commonly yellow, orange, or red with small black spots on their wing covers, and have black legs, heads, and antennae. However, such colour patterns vary greatly. For example, a minority of species, such as Vibidia duodecimguttata, a twelve-spotted species, have whitish spots on a brown background. Coccinellids are found worldwide, with over 6,000 species described. Coccinellids are known as ladybugs in North America and ladybirds in other areas. Entomologists widely prefer the names ladybird beetles or lady beetles, as these insects are not classified as true bugs. The majority of coccinellid species are generally considered useful insects, because many species prey on herbivorous homopterans such as aphids or scale insects, which are agricultural pests. Many coccinellids lay their eggs directly in aphid and scale insect colonies in order to ensure their larvae have an immediate food source. However, some species do have unwelcome effects; among these, the most prominent are of the subfamily Epilachninae, which are herbivorous themselves.
Usually, epilachnines are only mild agricultural pests, eating the leaves of grain, potatoes, beans, and various other crops, but their numbers can increase explosively in years when their natural enemies, such as parasitoid wasps that attack their eggs, are few. In such situations, they can do major crop damage. They occur in practically all the major crop-producing regions of temperate and tropical countries. The name coccinellids is derived from the Latin word coccineus, meaning "scarlet". The name "ladybird" originated in Britain, where the insects became known as "Our Lady's bird" or the Lady beetle. Mary (Our Lady) was often depicted wearing a red cloak in early paintings, and the spots of the seven-spot ladybird (the most common in Europe) were said to symbolise her seven joys and seven sorrows. In the United States, the name was adapted to "ladybug". Common names in other European languages have the same association; for example, the German name Marienkäfer translates to "Mary beetle". Most coccinellids have oval, dome-shaped bodies with six short legs. Depending on the species, they can have spots, stripes, or no markings at all. Seven-spotted coccinellids are red or orange with three spots on each side and one in the middle; they have a black head with white patches on each side. As well as the usual yellow and deep red colourings, many coccinellid species are mostly, or entirely, black, dark grey, or brown, and may be difficult for non-entomologists to recognise as coccinellids at all. Conversely, non-entomologists might easily mistake many other small beetles for coccinellids. For example, the tortoise beetles, like the ladybird beetles, look similar because they are shaped so that they can cling to a flat surface so closely that ants and many other enemies cannot grip them. Non-entomologists are prone to misidentify a wide variety of beetle species in other families as "ladybirds", i.e. coccinellids.
Beetles are particularly prone to such misidentification if they are spotted in red, orange, or yellow and black. Examples include the much larger scarabaeid grapevine beetles and spotted species of the Chrysomelidae, Melyridae, and others. Conversely, laymen may fail to identify unmarked species of Coccinellidae as "ladybirds". Other beetles that have a defensive hemispherical shape like that of the Coccinellidae (for example, the Cassidinae) are also often taken for ladybirds. A common myth, totally unfounded, is that the number of spots on the insect's back indicates its age. In fact, the underlying pattern and colouration are determined by the species and genetics of the beetle, and develop as the insect matures. In some species the appearance is fixed by the time the insect emerges from its pupa, though in most it may take some days for the colour of the adult beetle to mature and stabilise. Generally, the mature colour tends to be fuller and darker than the colour of the callow. Coccinellids are best known as predators of Sternorrhyncha such as aphids and scale insects, but the range of prey species that various Coccinellidae may attack is much wider. A genus of small black ladybirds, Stethorus, presents one example of predation on non-Sternorrhyncha; they specialise in mites as prey, notably Tetranychus spider mites. Stethorus species accordingly are important in certain examples of biological control [Hodek, I.; Honek, A.; van Emden, H. F. Ecology and Behaviour of the Ladybird Beetles. Wiley-Blackwell, 2012. ISBN 978-1405184229]. Various larger species of Coccinellidae attack caterpillars and other beetle larvae. Several genera feed on various insects or their eggs; for example, Coleomegilla species are significant predators of the eggs and larvae of moths such as species of Spodoptera and the Plutellidae.
Larvae and eggs of ladybirds, either their own or those of other species, can also be important food resources when alternative prey are scarce. As a family, the Coccinellidae used to be regarded as purely carnivorous, but they are now known to be far more omnivorous than previously thought, both as a family and in individual species; examination of the gut contents of apparently specialist predators commonly yields residues of pollen and other plant materials. Besides the prey they favour, most predatory coccinellids include other items in their diets, including honeydew, pollen, plant sap, nectar, and various fungi. The significance of such non-prey items in their diets is still under investigation and discussion [Almeida, Lúcia M.; Corrêa, Geovan H.; Giorgi, José A.; Grossi, Paschoal C. New record of predatory ladybird beetle (Coleoptera, Coccinellidae) feeding on extrafloral nectaries. Revista Brasileira de Entomologia 55(3): 447–450, September 2011]. Apart from the generalist aphid and scale predators and incidental substances of botanical origin, many Coccinellidae do favour or even specialise in certain prey types. This makes some of them particularly valuable as agents in biological control programmes. Determination of specialisation need not be a trivial matter, though; for example, the larva of the Vedalia ladybird Rodolia cardinalis is a specialist predator on a few species of Monophlebidae, in particular Icerya purchasi, the most notorious of the cottony cushion scale species. However, the adult R. cardinalis can subsist for some months on a wider range of insects plus some nectar. Certain species of coccinellids are thought to lay extra infertile eggs with the fertile eggs, apparently to provide a backup food source for the larvae when they hatch. The ratio of infertile to fertile eggs increases with scarcity of food at the time of egg laying. Such a strategy amounts to the production of trophic eggs.
Some species in the subfamily Epilachninae are herbivores, and can be very destructive agricultural pests (e.g., the Mexican bean beetle). Again, in the subfamily Coccinellinae, members of the tribe Halyziini and the genus Tythaspis are mycophagous. While predatory species are often used as biological control agents, introduced species of coccinellids are not necessarily benign. Species such as Harmonia axyridis or Coccinella septempunctata in North America outcompete and displace native coccinellids and become pests themselves. The main predators of coccinellids are usually birds, but they are also the prey of frogs, wasps, spiders, and dragonflies. The bright colours of many coccinellids discourage some potential predators from making a meal of them. This phenomenon, called aposematism, works because predators learn by experience to associate certain prey phenotypes with a bad taste. A further defence, known as "reflex bleeding", exists in which an alkaloid toxin is exuded through the joints of the exoskeleton, triggered by mechanical stimulation (such as by predator attack) in both larval and adult beetles, deterring feeding. Coccinellids in temperate regions enter diapause during the winter, so they often are among the first insects to appear in the spring. Some species (e.g., Hippodamia convergens) gather into groups and move to higher elevations, such as a mountain, to enter diapause. Predatory coccinellids are usually found on plants that harbour their prey. They lay their eggs near their prey, to increase the likelihood that the larvae will find the prey easily. In Harmonia axyridis, eggs hatch in three to four days from clutches numbering from a few to several dozen. Depending on resource availability, the larvae pass through four instars over 10–14 days, after which pupation occurs.
After a teneral period of several days, the adults become reproductively active and are able to reproduce again, although they may become reproductively quiescent if eclosing late in the season. Total life span is one to two years on average. In the United States, coccinellids usually begin to appear indoors in the autumn, when they leave their summer feeding sites in fields, forests, and yards and search out places to spend the winter. Typically, when temperatures warm to the mid-60s °F (around 18 °C) in the late afternoon, following a period of cooler weather, they will swarm onto or into buildings illuminated by the sun. Swarms of coccinellids fly to buildings from September through November, depending on location and weather conditions. Homes or other buildings near fields or woods are particularly prone to infestation. After an abnormally long period of hot, dry weather in the summer of 1976 in the UK, a marked increase in the aphid population was followed by a "plague" of ladybirds, with many reports of people being bitten as the supply of aphids dwindled. The presence of coccinellids in grape harvests can cause ladybird taint in wines produced from the grapes. Harmonia axyridis (the harlequin ladybird) is an example of how an animal might be partly welcome and partly harmful. It was introduced into North America from Asia in 1916 to control aphids, but is now the most common species, outcompeting many of the native species. It has since spread to much of western Europe, reaching the UK in 2004. It has become something of a domestic and agricultural pest in some regions, and gives cause for ecological concern. It has similarly turned up in parts of Africa, where it has proved variously unwelcome, perhaps most prominently in vine-related crops. The atlas Ladybirds (Coccinellidae) of Britain and Ireland, published in 2011, showed a decline of more than 20% in native species due to environmental changes and competition from foreign invaders.
The distribution maps, compiled over a 20-year period with help from thousands of volunteers, showed a decline in the numbers of the common 10-spot and 14-spot ladybirds and a number of other species, including the 11-spot, 22-spot, cream-spot, water, and hieroglyphic ladybirds, Coccidula rufa, Rhyzobius litura, and Nephus redtenbacheri. Conversely, increases were seen in the numbers of harlequin, orange, pine, and 24-spot ladybirds, as well as Rhyzobius chrysomeloides. The kidney spot ladybird was recorded in Scotland for the first time in recent years, and the 13-spot was found to have recolonised Cornwall, Devon, and the New Forest. The most commonly recorded species was the 7-spot, closely followed by the Asian harlequin — an invader that arrived from continental Europe in 2003 after being introduced to control pests. An "explosion" in the number of orange ladybirds, which feed on mildew, is thought to have been due to the warmer, damper conditions that now prevail in parts of England. Coccinellids are, and have been for very many years, insects of interest and favour for children. The insects had many regional names (now mostly disused) in English, such as variations on Bishop-Barnaby (Norfolk and Suffolk dialect) – Barnabee, Burnabee, the Bishop-that-burneth, and bishy bishy barnabee. The etymology is unclear, but it may be from St. Barnabas' feast in June, when the insect appears, or a corruption of "Bishop-that-burneth", from the fiery elytra of the beetles. The ladybird was immortalised in the popular children's nursery rhyme Ladybird, Ladybird: "Ladybird, ladybird, fly away home / Your house is on fire and your children are gone / All except one, and that's Little Anne / For she has crept under the warming pan."
This poem has its counterpart in German as "Marienwürmchen", collected in Des Knaben Wunderhorn and set to music by Robert Schumann as Op. 79, No. 13. There is also a Polish nursery rhyme, "Little Ladybirds' Anthem", of which a part ("fly to the sky, little ladybird, bring me a piece of bread") became a saying: "Mała Biedroneczka siedem kropek miała, / Na zielonej łące wesoło fruwała. / Złapał ją pajączek w swoją pajęczynę - / uratuję Cię Biedronko, a ty mi coś przynieś. / Biedroneczko leć do nieba, przynieś mi kawałek chleba." ("Little ladybird had seven dots, / She was flying over a green meadow. / A little spider caught her in its spiderweb - / I will save you, little ladybird, and you bring me something. / Fly to the sky, little ladybird, bring me a piece of bread.") Many cultures consider coccinellids lucky and have nursery rhymes or local names for the insects that reflect this. For instance, the Turkish name for the insect is uğur böceği, literally meaning "good luck bug". In many countries, including Russia, Turkey, and Italy, the sight of a coccinellid is either a call to make a wish or a sign that a wish will soon be granted. In Christian areas, coccinellids are often associated with the Virgin Mary, and the names that the insect bears in the various languages of Europe correspond to this. Though historically many European languages referenced Freyja, the fertility goddess of Norse mythology, in the names, the Virgin Mary has now largely supplanted her, so that, for example, freyjuhœna (Old Norse) and Frouehenge have been changed into marihøne (Norwegian) and Marienkäfer (German), which corresponds to Our Lady's bird. Sometimes, the insect is referred to as belonging directly to God (Irish bóín Dé, Polish boża krówka, both meaning "God's [little] cow"). In Dutch it is called lieveheersbeestje, meaning "little animal of our Good Lord". In both Hebrew and Yiddish, it is called "Moshe Rabbenu's (i.e.
Moses's) little cow" or "little horse", apparently an adaptation from Slavic languages. Occasionally, it is called "little Messiah". (Born to Kvetch, Michael Wex, St. Martin's Press, New York, 2005, ISBN 0-312-30741-1.) The bold colours and simple shapes have led to use as a logo for a wide range of organisations and companies, including:

* Ladybird Books (owned by Pearson PLC)
* Ladybird range of children's clothing sold by the former Woolworth's chain store in the UK
* Polish supermarket chain Biedronka
* Atmel AVR Studio software logo
* Software development firm Axosoft
* Symbol of the Swedish People's Party of Finland
* Symbol of the Pestalozzi International Village charity
* Symbol of the Croatian Lottery
* The ladybird street tile, a symbol against senseless violence in the Netherlands, often placed on the sites of deadly crimes

In addition, it has been chosen as:

* US state insect of Delaware, Massachusetts, New Hampshire, New York, Ohio, and Tennessee, though only New York has selected a species native to the United States (Coccinella novemnotata); the other states have all adopted an invasive European species (Coccinella septempunctata)
* An "official national mascot" for Alpha Sigma Alpha, a national sorority in the United States
* An "official national mascot" for Swing Phi Swing Social Fellowship Inc.®, a national non-profit sisterhood in the United States
* The mascot of Candanchú, a ski resort situated near the town of Canfranc in the High Aragon of the western Pyrenees (Province of Huesca, Spain)
Krammer 1997
Category: Asymmetrical biraphid
TYPE SPECIES: Encyonopsis cesatii Krammer
Image Credit: Loren Bahls
CLASS: Bacillariophyceae
ORDER: Cymbellales
FAMILY: Cymbellaceae

1. Valves naviculoid, slight apical asymmetry
2. Areolae round or transapically elongate
3. Striae radiate or parallel at the apices
4. Proximal raphe ends deflected dorsally
5. Distal raphe fissures deflected ventrally

Valves are naviculoid and slightly dorsiventral, symmetric to the transapical axis and asymmetric to the apical axis. The raphe is centrally located. Areolae in the striae are round or oblong and oriented with their long axes parallel to the transapical axis. Transapical striae are radiate near the valve center and radiate or parallel near the apices. Proximal raphe ends are deflected toward the dorsal margin. Distal raphe fissures are deflected ventrally. Stigmata and apical pore fields are absent.

Encyonopsis is distinguished from Kurtkrammeria by the orientation of the striae at the valve apices, by the shape and orientation of the areolae, and by other features observed in SEM. In Encyonopsis species, areolae are round or elongate in a transapical direction. In Kurtkrammeria, areolae are slit-like or crescent shaped with the long axis oriented apically. In Encyonopsis, the terminal striae are radiate or parallel; in Kurtkrammeria, terminal striae are convergent. Stigmata and apical pore fields may be present in Kurtkrammeria, but they are absent in Encyonopsis.

Cite This Page: Bahls, L. (2015). Encyonopsis. In Diatoms of the United States. Retrieved August 26, 2016, from http://westerndiatoms.colorado.edu/taxa/genus/Encyonopsis
Contributor: Loren Bahls - December 2015
Reviewer: Sarah Spaulding - December 2015

November 8th, 2015 - Change in circumscription: The concept of the genus Encyonopsis was narrowed as of this date, and the genus Kurtkrammeria was recognized and established as a new genus.
Formerly, Encyonopsis was circumscribed more broadly.
What Is Glossitis? Types, Causes and Symptoms

Glossitis refers to inflammation of the tongue. The condition causes the tongue to swell in size, change in color, and develop a smooth appearance on the surface. The tongue is the small, muscular organ in the mouth that helps you chew and swallow food. It also helps with your speech. Glossitis can cause the small bumps on the surface of the tongue called the papillae to disappear. Your papillae play a role in how you eat. They contain thousands of tiny sensors called taste buds. Severe tongue inflammations that result in swelling and redness can cause pain and change the way you eat or speak.

The Types of Glossitis

There are several different types of glossitis:

Acute Glossitis
Acute glossitis is an inflammation of the tongue that appears suddenly, and it often has severe symptoms. This type of glossitis typically develops during an allergic reaction.

Chronic Glossitis
Chronic glossitis is an inflammation of the tongue that continues to recur. This type may begin as a symptom of another health condition.

Idiopathic Glossitis
Idiopathic glossitis, also known as Hunter's glossitis, affects the muscles of the tongue. In this condition, a significant amount of papillae can be lost. The cause of idiopathic glossitis is unknown.

Atrophic Glossitis
Atrophic glossitis occurs when a large number of papillae are lost, resulting in changes to the tongue's color and texture. This type of glossitis typically turns the tongue dark red.

What Causes Glossitis?

A number of factors can cause inflammation of the tongue, including:

Allergic Reactions
Allergic reactions to medications, food, and other potential irritants may aggravate the papillae and the muscle tissues of the tongue. Potential irritants include toothpaste and certain types of medications that treat high blood pressure.
Certain diseases that affect your immune system may attack the tongue's muscles and papillae. Herpes simplex, a virus that causes cold sores and blisters around the mouth, may contribute to swelling and pain in the tongue.

Low Iron Levels
An inadequate amount of iron in the blood can trigger glossitis. Iron regulates cell growth by helping your body make red blood cells. Red blood cells carry oxygen to your organs, tissues, and muscles. Low levels of iron in the blood may result in low levels of myoglobin. Myoglobin is a protein in muscle cells that's important for muscle health, including the tongue's muscle tissue.

Dry Mouth
Dry mouth is caused by a lack of saliva, which may be due to a salivary gland disorder or overall dehydration. You need saliva to keep your tongue moist.

Mouth Trauma
Trauma caused by injuries to the mouth can affect the condition of your tongue. Inflammation may occur as a result of cuts and burns on the tongue or from dental appliances placed on your teeth, such as braces.

Who Is at Risk for Glossitis?

You may be at risk for tongue inflammation if you:
• have a mouth injury
• eat spicy foods
• wear braces or dentures that irritate your tongue
• have herpes
• have low iron levels
• have dry mouth
• have food allergies
• have an immune system disorder

What Are the Symptoms of Glossitis?

Your symptoms may vary depending on the cause of the inflammation. In general, however, you can experience the following symptoms:
• pain or tenderness in the tongue
• swelling of the tongue
• change in the color of your tongue
• an inability to speak, eat, or swallow
• loss of papillae on the surface of your tongue

How Is Glossitis Diagnosed?

You may see your dentist or doctor for an assessment of your condition. They'll examine your mouth to check for abnormal bumps and blisters on your tongue, gums, and soft tissues of your mouth. Samples of your saliva and blood may also be taken and sent to a laboratory for further examination.
How Is Glossitis Treated?

Treatment for glossitis typically includes a combination of medications and home remedies. Antibiotics and other medications that get rid of infections may be prescribed if bacteria are present in your mouth or body. Your doctor may also prescribe corticosteroids, such as prednisone, to reduce the redness and soreness.

Home Care
Brushing and flossing your teeth several times a day may improve the health of your tongue, gums, and teeth. This can help relieve the symptoms associated with glossitis and prevent the condition from happening again.

What Can Be Expected in the Long Term?

In most cases, glossitis goes away with medication. Treatment may be more successful if you avoid foods that cause inflammation of the tongue. Practicing proper oral hygiene may also help reduce or prevent further problems. Speak with your doctor if your symptoms don't improve with treatment or continue to occur. Call 911 or go to the hospital right away if your tongue becomes severely swollen and begins to block your airway. This may be a sign of a more serious condition.
What Is 'Strong Form Efficiency'?

Strong form efficiency is the strongest version of market efficiency and states that all information in a market, whether public or private, is accounted for in a stock's price. Practitioners of strong form efficiency believe that not even insider information can give an investor an advantage. This degree of market efficiency implies that profits exceeding normal returns cannot be made, regardless of the amount of research or information investors have access to.

BREAKING DOWN 'Strong Form Efficiency'

Strong form efficiency is a component of the efficient market hypothesis and is consistent with the random walk theory: because all information is already reflected in prices, successive price changes are random and cannot be predicted from past events. It is the most demanding of the three forms of market efficiency; weak form efficiency, the least demanding, holds only that past price movements are already reflected in current prices and therefore cannot be used to predict future movements. The idea behind strong form efficiency was pioneered by Princeton economics professor Burton G. Malkiel in his 1973 book "A Random Walk Down Wall Street." The book championed two forms of the random walk theory. The form that corresponds to strong form efficiency states that it is impossible to consistently outperform the market because all information, both public and proprietary, is reflected in current market prices, and it is therefore impossible to earn long-term abnormal returns.

Example of Strong Form Efficiency

Most examples of strong form efficiency include some sort of insider information. This is because strong form efficiency is the only part of the efficient market hypothesis that takes into account proprietary information. The theory states that, contrary to popular belief, harboring some sort of inside information won't help an investor earn high returns in the market.
Let's say, for example, that the CTO of a public technology company believes that his company will begin to lose customers and revenues. After the internal rollout of a new product feature to beta testers, the CTO's fears are confirmed and he knows that the official rollout will be a flop. This would be considered insider information. The CTO decides to take up a short position in his own company, effectively betting against the stock price: if the stock price declines, he is poised to profit. However, much to the CTO's chagrin, when the product feature is released to the public, the stock price is unaffected and does not decline, even though customers are not happy. The market would be considered strong form efficient because even the insider information about the product flop was already priced into the stock. The CTO would lose money in this situation.
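The random-walk view that underlies this result can be illustrated with a quick simulation (a sketch of mine, not from the article; the drift and volatility figures are made up): if successive returns are independent, yesterday's return says nothing about today's, which shows up as a lag-1 autocorrelation near zero.

```python
import numpy as np

# Hypothetical daily returns under a random walk: independent draws, so
# past returns carry no exploitable information about future ones.
rng = np.random.default_rng(0)
returns = rng.normal(loc=0.0005, scale=0.01, size=10_000)  # drift/vol are assumed

# Lag-1 autocorrelation: how strongly today's return tracks yesterday's.
lag1 = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print(f"lag-1 autocorrelation: {lag1:.4f}")  # near zero for independent returns
```

On this view, any trading rule built only on the history of prices is trying to exploit a correlation that is not there.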
Published 10 Feb 2014 | Updated 10 Mar 2014 | Expired 14 Mar 2014
1 position | 0 applicants
This listing is inactive: it has expired and is no longer open for applications.

The Ideal Candidate

As a Piping Designer you will have fairly broad and challenging tasks. You are responsible for producing all manner of detail and plant drawings. You are able to independently create isometrics from P&IDs and take care of Material Take-Off (MTO). You are able to handle a large workload and to achieve a high rate of production (the "bulk work" of ISO production). You are, of course, also able to check and evaluate information from other disciplines within the organization, as well as from clients or suppliers. You will supervise junior designers and drafters if necessary, and you are able to take on-site measurements and sketches.

Job Description

Description of Tasks
- Conduct necessary fieldwork, including walk-downs of current systems, locating tie-ins and measuring;
- Prepare piping design sketches as required by drafting personnel;
- Check piping drawings developed by drafting personnel;
- Ensure performance of staff members is in compliance with established project standards;
- Complete isometric drawings and piping layout as needed;
- Complete pipe component and run detailed drawings as necessary;
- Review project parameters related to piping requirements;
- Develop preliminary Bills of Materials with respect to piping design;
- Provide technical administration of outsourced work as required;
- Interpret and administer various pipe materials and options;
- Provide assistance to sales and contracts for proposals.
Job Requirements
- You need at least a degree in Mechanical Engineering at the Secondary Vocational level (in the Netherlands: an MBO-level degree), supplemented by a minimum of 5 to 10 years of relevant work experience as a Designer within a professional engineering environment;
- You have knowledge of the international codes, standards and regulations, and of discipline-related software;
- You have extensive experience within the oil, gas, (petro)chemical and/or tank terminal industries, mostly with onshore projects;
- Extensive experience with AutoCAD 2D is required;
- Experience with Bentley AutoPLANT 3D is preferred;
- Personal characteristics you stand for are: quality, customer orientation, flexibility, discipline, teamwork and stress resistance;
- You are able to communicate fluently in English in writing, reading and speaking;
- You hold a valid "Basic Elements of Safety" (VCA Basic) certificate or "Elements of Safety for Operational Supervisors" (VCA VOL) certificate, or are willing to obtain one on short notice after taking an exam in English.

Employer Location - Netherlands
Species Composition

Species composition refers to the contribution of each plant species to the vegetation. Botanical composition is another term used to describe species composition. Species composition is generally expressed as a percent, so that all species components add up to 100%. Species composition can be expressed on either an individual species basis, or by species groups that are defined according to the objectives of the inventory or monitoring program (e.g., Aristida spp., perennial forage grasses, etc.). Species composition is a commonly determined attribute in rangeland inventory and monitoring. It is regarded as an important indicator of ecological and management processes at a site.

Ecological indicators - species composition provides the essential description of the character of the vegetation at a site. Certain images are readily understood when major species are mentioned, e.g., pinon (Pinus sp.) - juniper (Juniperus sp.) woodland, and other common species are also presumed to be present as one becomes familiar with the vegetation. These distinctions form the basis of rangeland mapping and the delineation of range site boundaries. The relative contribution of a species also signifies its dominance in the vegetation and its ability to capture resources. Slightly different inferences of competitive ability are suggested if species composition is expressed on the basis of cover, density, or biomass measurements.

Management indicators - most objectives in rangeland management are directly concerned with the assessment or manipulation of species composition. For example, carrying capacity is influenced by the relative abundance of desirable forage species at a site. Wildlife habitat is also influenced by the relative contribution of various species that provide sources of shelter and food.
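The percent-of-total arithmetic described above can be sketched as follows (the species names and biomass figures are hypothetical; as noted, composition computed from cover or density rather than biomass can suggest slightly different inferences):

```python
# Hypothetical biomass measurements (kg/ha) from one sample site.
biomass = {
    "Bouteloua gracilis": 120.0,
    "Aristida purpurea": 45.0,
    "Juniperus monosperma": 35.0,
}

# Species composition: each species' share of the total, summing to 100%.
total = sum(biomass.values())
composition = {species: 100.0 * amount / total for species, amount in biomass.items()}

for species, pct in composition.items():
    print(f"{species}: {pct:.1f}%")
# Bouteloua gracilis: 60.0%, Aristida purpurea: 22.5%, Juniperus monosperma: 17.5%
```

The same calculation applies unchanged when species are grouped (e.g., all perennial forage grasses pooled into one entry) to match monitoring objectives.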
Species composition is used to determine range condition and range trend, which are valuable tools to judge the impact of previous management and guide future decisions.

Special Considerations for Species Composition Sampling

Here is a critical issue to be considered when designing sampling protocols to determine species composition:

Methods to Determine Species Composition

Species composition is generally determined by sampling methods based on assessing the contribution of individual species or species groups in sample units. This section describes methods commonly used to determine species composition.
Buzz: Back from the ER, a wounded Ariad spurs a fresh round of takeover rumors

Market rumors about a possible biotech buyout are usually triggered by research and marketing success. In Ariad's case, mere survival sufficed. The Mail in the U.K.--never known as the most reliable source of market information--reports that a pair of sizable local players, GlaxoSmithKline ($GSK) and Shire ($SHPG), have been fishing for a possible deal. But it's Eli Lilly ($LLY) that appears to be the lead potential bidder, according to the market rumors, which regardless of their source pushed up the stock price ($ARIA) by 10% this morning. The Mail somehow manages to overlook Ariad's near-death experience in recent months, with questions about safety issues surrounding its drug Iclusig (ponatinib) boiling up into a crisis and forcing the withdrawal of the leukemia drug in the U.S. The stock price slumped, and the company restructured, laying off a big chunk of its workforce. And then the FDA relented, allowing a revised label that restricted the patient population to a genetically defined group who hadn't responded to anything else available. Ariad shares jumped a bit on the new label, though they remain far, far below the pre-October crisis highs. Its market cap sits at about $1.25 billion. That price makes the company easily obtainable, but if the rumors are even remotely true, why would Lilly--or Shire or GSK--buy it? Lilly in particular has experienced a series of challenges on the oncology side of its R&D efforts. It has two high-profile cancer drugs in the late-stage pipeline, ramucirumab and necitumumab. Ramucirumab looks promising for stomach cancer, but flunked out for the more lucrative breast cancer indication. And while necitumumab scored a win for lung cancer, researchers have also been plagued by concerns over blood clots, the same issue that threatens to scuttle Iclusig.
Would another troubled cancer product offer significant help for a pharma giant headed into a trough year after losing control of its big franchises for Zyprexa and Cymbalta? Eli Lilly badly needs new products to flesh out a dwindling portfolio. But it hasn't done many deals, insisting that the internal pipeline could provide all the new products it needs for a turnaround. Lilly's last significant buyout was for Amyvid, an imaging agent without a market after Medicare refused to cover its use. Any deal for Ariad would have to be very advantageous if Lilly wants to avoid a fresh round of scoffing. - here's the story from The Mail
Second Language Studies | Topics
Second Language Studies S600 | 26854 | Dekydtspotter, L
Topic: Problems of Learning.

This course investigates a basic problem of knowledge: To what extent do second language learners acquire knowledge of structures that are not licensed by their native grammar? Generally, we will focus on the gap between the information provided by the input and the state of knowledge that the learner develops (both cases where knowledge seems to far exceed the information provided by input and cases where acquisition seems plodding). We will investigate the following notions: the poverty of the stimulus problem, the underdetermination problem, the projection problem, and related paradoxes. We will consider a typology of poverty of the stimulus argumentation in L2 acquisition and the centrality of a learnability discussion in L2 work. We will examine types of explanations that various schools of thought have provided or suggested about the relationship between input and the learner's internal systems. Students will be developing their own research around a central learning problem under the guidance of the instructor.
Polarized America: The Dance of Ideology and Unequal Riches (June 2006, MIT Press)
Nolan McCarty, Keith T. Poole, and Howard Rosenthal

Political polarization, income inequality, and immigration have all increased dramatically in the United States over the past three decades. The increases have followed an equally dramatic decline in these three social indicators over the first seven decades of the twentieth century. The pattern in the social indicators has been matched by a pattern in public policies with regard to taxation of high incomes and estates and with regard to minimum wage policy. We seek to identify the forces that have led to this turnabout in American society, with a primary focus on political polarization. Our primary evidence of political polarization comes from analysis of the voting patterns of members of the U.S. House of Representatives and Senate. Based on estimates of legislator ideal points (Poole and Rosenthal 1997 and McCarty, Poole, and Rosenthal 1997), we find that the average positions of Democratic and Republican legislators have diverged markedly since the mid-1970s. This increased polarization took place following a fifty-year blurring of partisan divisions. This turning point occurs at almost exactly the same time that income inequality begins to grow after a long decline and the full effects of immigration policy liberalization are beginning to be felt. Some direct causes of polarization can be ruled out rather quickly. The consequences of one person, one vote decisions and redistricting can be ruled out since the Senate, as well as the House of Representatives, has polarized. The shift to a Republican South can be ruled out since the North has also polarized.
Primary elections can be ruled out since polarization actually decreased once primaries became widespread. In addition to our focus on the polarization of elected office-holders, we look at patterns of polarization among economic elites. By examining campaign contributions, we find very high levels of polarized giving. While some billionaires clearly spread their contributions to both parties to buy access, increasing numbers concentrate their largess on the ideological extremes. This polarized campaign giving, coupled with the emergence of the soft money loophole, has arguably contributed to the ideological extremism of political parties and elected officials. Finally, we also examine polarization among the electorate. While it is fairly clear that the views of most citizens have not become more extreme, those with strong partisan identifications have (DiMaggio et al., Fiorina). Consistent with other findings (King, Jacobson), we find that partisans are more likely to apply ideological labels to themselves and a declining number of them call themselves moderate. Strong party identifiers are the most likely to define politics in ideological terms, while the differences in the ideological self-placements of Republicans and Democrats have grown dramatically since the 1980s. Given Bartels's finding that partisanship has become a better predictor of vote choice, this polarization of partisans has contributed to much more ideological voting behavior. We also find that the polarization of the electorate has increasingly taken place along economic or class lines. Unlike the patterns of the 1950s and 1960s, upper income citizens are more likely to identify with and vote for Republicans than are lower income voters. However, we find that class polarization is most likely a result of the ideological shift of the Republican Party towards a more economically libertarian position. This shift to the right was aided by a number of social, political, and economic factors.
First, as American society has become wealthier on average, a larger segment of society prefers to self-insure rather than depend on government social programs. Such voters have become more attracted to the Republicans and their agenda for an Ownership Society. Second, due to patterns of immigration and incarceration, members of lower income groups are less likely to be part of the electorate. This has the effect of moving the median income voter closer to the mean income citizen, reducing the demand for redistribution (Romer, Meltzer and Richard). Third, middle-income voters in the so-called Red states increasingly sympathize with Republican positions on social, cultural, and religious issues (e.g. Franks). The Republican advantage on these issues has mitigated any loss of votes that might have been associated with their shift on economic issues. Finally, the emergence of a class-based, two-party system in the United States has benefited the Republicans and mirrored the patterns of economic polarization found in other regions. Finally, we examine the policy consequences of the fall and rise of political polarization. The separation of powers makes it difficult to generate coalitions large enough to produce policy change even when opinion shifts. We exploit this observation to get some leverage in disentangling the effects of political, economic, and social policies. For much of the period when polarization fell, immigration policy was restrictive and unchanged while income and estate taxes, defined in nominal terms, became more onerous. For the period since the onset of renewed polarization, we find strong evidence that gridlock has resulted in a less activist federal government. The passage of new laws has been curtailed due to the increasing difficulty of generating the requisite bipartisan coalitions. 
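The Meltzer-Richard logic invoked above can be made concrete with a toy example (the income figures are entirely hypothetical, mine rather than the authors'): in a right-skewed distribution the mean exceeds the median, and removing the lowest earners from the electorate moves the median voter's income toward the mean, shrinking the gap that drives demand for redistribution.

```python
import statistics

# Hypothetical annual incomes (in $1,000s), skewed right as real distributions are.
incomes = [12, 18, 25, 32, 40, 55, 80, 120, 400]

mean_all = statistics.mean(incomes)      # mean income of all citizens
median_all = statistics.median(incomes)  # median income of all citizens

# If the lowest earners sit outside the electorate (e.g., non-citizens or the
# incarcerated), the median *voter* is richer than the median citizen:
electorate = incomes[2:]                 # drop the two lowest incomes
median_voters = statistics.median(electorate)

print(mean_all, median_all, median_voters)  # mean ≈ 86.9; median rises from 40 to 55
```

The mean-minus-median-voter gap falls from about 47 to about 32 here, which in the Meltzer-Richard framework corresponds to lower equilibrium demand for redistribution.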
The effects on social and tax policy have been especially dramatic as real minimum wages have fallen, welfare devolved to the states, and tax rates have diminished. We also show how polarized politics has affected administrative and judicial politics. (Of related interest! Note the Patterns. Added, July 2011) For further information about McCarty, Poole, and Rosenthal's work on Polarization see: The Next Big Issue: Inequality in America, by Godfrey Hodgson Interview of Nolan McCarty by The American Prospect The Decline and Rise of Party Polarization in Congress During the 20th Century Growing Apart: The Mathematical Evidence for Congress' Growing Polarization, by Jordan Ellenberg (Note that the links to voteview.uh.edu should be pooleandrosenthal.com. The pages are still on the website.) Party Polarization: 1879 - 2015
John Buridan on Self-Reference: Chapter Eight of Buridan's Sophismata, with a translation, an introduction and a philosophical commentary by G.E. Hughes (New York: Cambridge University Press, 1982), xi + 233 pp. Alfred J. Freddoso University of Notre Dame This is an excellent book, a paradigm of its genre. In Chapter Eight of the Sophismata Buridan proposes his highly sophisticated solutions to a wide variety of alethic (Liar-type), epistemic and pragmatic paradoxes involving self-reference. Hughes' almost equally impressive contribution consists of no less than (i) a clear and penetrating introduction (pp. 1-37), (ii) a new (albeit non-critical) edition of the Latin text along with a facing English translation that is unfailingly accurate and smooth (pp. 38-129), (iii) notes to the Latin text (pp. 131-139), and (iv) a painstaking and philosophically illuminating commentary (pp. 141-227). The work as a whole has been fashioned with great care and obviously represents a labor of love as well as of scholarship on Hughes' part. (For confirmation of this last point, consult the final note on p. 139.) The epistemic paradoxes are probably the deepest, while the pragmatic paradoxes are undoubtedly the most entertaining. Here, however, I will confine my brief remarks to Buridan's distinctive resolution of the alethic paradoxes. Since the issues raised by these paradoxes are, in Hughes' words, "profound, complicated and ramified," I can do little more in a short review than scratch the surface. Accordingly, I will be content to describe Buridan's strategy in rough outline and then to focus on a limited, though significant, problem which both Buridan and Hughes have, I think, neglected to face up to. 
(Unless the context indicates otherwise, I will be using the term 'proposition' in the way that Buridan uses the Latin 'propositio', viz., to refer to contingently existing sentence-tokens.1) Almost all the gifted 20th century philosophers who have thought deeply about problematic propositions (sophisms) such as

(A) (A) is false
(B) (C) is false
(C) (B) is true

/78/ have held that they are neither true nor false. That is to say, these philosophers have accepted the prima facie compelling claim that such sophisms are true or false only if they are both true and false. Buridan demurs. (A), (B), (C) and their ilk are, he contends, one and all false and not true.2 To buttress this contention, he must refute the seemingly powerful arguments for the claim that if the sophisms in question are false, then they are true as well. I will look at two such arguments, each centering about (A). Consider, first, the following chain of reasoning purporting to take us from (A)'s falsity to its truth:

(1) (A) is false. (assumption)
(2) If (A) is false, then there is something for which (A)'s subject-term ('(A)') and its predicate-term ('false') both supposit (or, on a Fregean account: then the thing denoted by (A)'s subject-term satisfies the concept expressed by its predicate-term).3 (premise)
(3) So there is something for which (A)'s subject-term and predicate-term both supposit. (1,2)
(4) But a (singular) proposition is true if and only if there is something for which its subject-term and predicate-term both supposit. (premise)
(5) So (A) is true. (3,4)

Buridan retorts that, contrary to appearances as well as to the common opinion of philosophers, (4) is false. The reason is that its right-hand side (or the Fregean equivalent thereof) embodies only a necessary and not a sufficient condition for the truth of a singular proposition. This claim, needless to say, cries out for elaboration and defense.
Let 'N is P' be a schema representing singular propositions, with 'N' and 'P' serving to represent the subject- and predicate-terms, respectively. To oversimplify just a bit, on Buridan's showing a complete account of the semantic truth conditions of singular propositions will look like this:

(T) 'N is P' is true if and only if (i) there is something for which both 'N' and 'P' supposit in the proposition 'N is P'; and (ii) '"N is P" is true' is true.

('"N is P" is true' constitutes what Hughes calls the 'implied proposition'.4) Ordinarily, conditions (i) and (ii) are both satisfied if either is. For instance, given that the propositions 'Socrates is sitting' and '"Socrates is sitting" is true' both exist, the former is true if and only if the latter is also true. That is why it is so easy for us to slip into thinking that condition (i) is sufficient by itself. According to Buridan, however, it is precisely in the case of certain self-referential propositions (e.g., (A)) /79/ that condition (i) is satisfied without condition (ii) also being satisfied. For even though there is something, viz. (A) itself, for which '(A)' and 'false' both supposit, the implied proposition, viz. '"(A) is false" is true', is nonetheless false--as is shown, Buridan asserts, by the standard arguments for (A)'s falsity. (Hughes presents two such arguments on p. 24.) Now this appeal to the standard arguments for (A)'s falsity has all the appearances of being question-begging. After all, the champion of the argument expressed by (1)-(5) will reject the standard arguments for (A)'s falsity precisely because he has what he takes to be a sound argument showing that (A) is false only if it is also true. That is, the argument captured by (1)-(5) is meant to be at least an indirect response to the standard arguments for (A)'s falsity. So Buridan's invocation of the latter to undermine the former seems clearly to be dialectically improper.
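Buridan's two-condition account can be put to a crude mechanical test. The sketch below is my own toy rendering, not Buridan's or Hughes' formalism: for the liar sentence (A), condition (i) holds just in case (A) is false (that is what '(A)' and 'false' co-suppositing for amounts to), while the implied proposition of condition (ii), '"(A) is false" is true', asserts (A)'s own truth. Checking which classical truth-value assignments are self-consistent then reproduces Buridan's verdict.

```python
# Toy model of Buridan's two-condition truth account applied to the
# liar sentence (A): "(A) is false".  My own illustration, not the
# book's formalism.  v is the candidate truth-value of (A).

def classical_consistent(v):
    # Premise (4): (A) is true iff its terms co-supposit,
    # i.e. iff (A) is false.  No value of v satisfies this.
    return v == (not v)

def buridan_consistent(v):
    # Schema (T): (A) is true iff (i) the terms co-supposit
    # (i.e. (A) is false) AND (ii) the implied proposition
    # '"(A) is false" is true' is true (i.e. (A) is true).
    cond_i = not v
    cond_ii = v
    return v == (cond_i and cond_ii)

# The classical condition yields the paradox: no consistent value.
assert not any(classical_consistent(v) for v in (True, False))
# On Buridan's account, 'false' is the unique consistent value.
assert [v for v in (True, False) if buridan_consistent(v)] == [False]
```

Truth would require conditions (i) and (ii) at once, i.e. (A)'s falsity and its truth together, so 'false' is the only stable assignment.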
Hughes, however, is evidently not sensitive to this particular criticism of Buridan. (See pp. 24-25 for the relevant discussion.) The very same criticism applies to Buridan's treatment of the second argument from (A)'s falsity to its truth. This argument is found in the Eleventh Sophism (see p. 89) and is almost identical to the argument which Hughes considers on pp. 25-27. To understand this argument we must have a decent grasp of what Hughes calls the 'principle of truth-entailment'. I will try to state it as clearly and accurately as I can. Let S be a schematic letter which takes propositions as substitutions, and let [S] represent a proper name of the proposition substituted for S. Then, according to Buridan, the following is true for any such substitution:

(PTE) Necessarily, if S and [the proposition] [S] exists, then [the proposition] [S] is true.

So, for instance, it follows from (PTE) that if Socrates is sitting and the proposition 'Socrates is sitting' exists, then the proposition 'Socrates is sitting' is true. (Since on Buridan's view propositions are contingently existing sentence-tokens, it is entirely possible that the first conjunct of the antecedent of a substitution instance of (PTE) should be true without the second conjunct being true. Suppose, for instance, that Socrates were sitting but that no propositions existed. In that case Socrates would be sitting even though the proposition 'Socrates is sitting' would not exist and hence would not be true. Indeed, some propositions are such that it is impossible for them to satisfy the antecedent of (PTE). Consider the proposition 'There are no negative propositions'. It is impossible that there should be no negative propositions and yet that the proposition 'There are no negative propositions' should exist. In such a case, the relevant substitution instance of (PTE) is true by virtue of having an impossible antecedent.) (PTE), I think we can agree, has an aura of truth about it.
But now consider the following argument:

(6) (A) is false. (assumption)
(7) If (A) is false, then (A) exists. (premise) /80/
(8) So (A) exists. (6,7)
(9) But necessarily, if (A) is false and (A) exists, then (A) is true. (PTE)
(10) So (A) is true. (6,8,9)

Buridan responds here by rejecting the claim that (10) follows from (6), (8) and (9), on the grounds that far from being a true and innocuous non-self-referential statement about the sophism (as is (6)), the first conjunct of the antecedent of (9) in fact just is the sophism--and the sophism, of course, is false and not true. Hughes gives a more complicated, disjunctive response to the argument discussed on pp. 25-27, a response calculated to work regardless of whether or not the first conjunct of the antecedent of (9) is taken to be the sophism. But, as before, both Buridan and Hughes presuppose (i) that the sophism is false and (ii) that this claim about the sophism can justifiably be used to undermine the attempt to derive the sophism's truth from its falsity. And, again as before, neither Buridan nor Hughes seems to exhibit any qualms about the dialectical propriety of this tactic. How might Buridan and his followers respond here? Perhaps they could plausibly claim that they are not obliged to provide a non-question-begging answer to the two arguments laid out above. More concretely, they might insist that because their proposal for handling the sophisms in question stands alone in preserving the principle of bivalence, it automatically wins out over its competitors as long as it can be shown to be merely consistent. So, they might continue, the objection pressed above rests on a mistaken picture of the dialectical milieu in which the debate between Buridan and Hughes, on the one hand, and their opponents, on the other hand, is taking place. I am willing to concede that this line of response shows some promise, though it obviously raises further questions that must be addressed forthrightly.
For instance, those who, unlike Buridan, distinguish sharply between sentence-tokens and propositions might well hesitate to attribute such overriding significance to the preservation of sentential (as opposed to propositional) bivalence. Indeed, one who reflects upon the matter carefully might find it intuitively more evident that neither (B) nor (C) above expresses a proposition than that bivalence holds for all syntactically well-formed sentence-tokens. It would be interesting to have Hughes' thoughts on this matter. In any case, it is fitting that Buridan, properly packaged and interpreted, should be able to contribute, in propria persona as it were, to the lively contemporary discussion of self-reference. Hughes tells us that his main concern has been "to make Buridan's ideas accessible to present day philosophical readers for the sake of their inherent importance." In this he has succeeded admirably. 1. To be exact, Buridan takes a proposition to be a meaningful sentence-token that is spoken or written or (in the case of the mental language) thought with assertive intent. 2. Philosophers who hold that the sophisms in question are neither true nor false /81/ fall into two broad classes. Included in the first class are those who deny that the sophisms are (or express) propositions and hence deny that they have truth-values at all. The second class comprises those who affirm that the sophisms are propositional and thus that they have truth-values, but who deny that they have classical truth-values. The latter group of philosophers (at least) must, of course, still contend with the so-called 'strengthened liar', e.g., (D) (D) is either false or neither true nor false. (D), it seems, is true if false, false if true, and true if neither true nor false. On Buridan's theory, by contrast, (D) is simply false, and so it poses no new problems not already posed by (A), (B) and (C). 3. I will not bother to restate (3), (4) or (T) below in Fregean terms. 
But it is important to see that Buridan's resolution of the alethic paradoxes in no way depends upon his preference for a two-name account of predication over a function-argument account. 4. Condition (ii) could also be stated as follows: There is something for which both '"N is P"' and 'true' supposit in the proposition '"N is P" is true'.
Hepatology. 2013 June; 57(6): 2164–2170. Published online 2013 May 6. doi: 10.1002/hep.26218 PMCID: PMC3763475 Chronic Hepatitis C Virus (HCV) Disease Burden and Cost in the United States Hepatitis C virus (HCV) infection is a leading cause of cirrhosis, hepatocellular carcinoma, and liver transplantation. A better understanding of HCV disease progression and the associated cost can help the medical community manage HCV and develop treatment strategies in light of the emergence of several potent anti-HCV therapies. A system dynamic model with 36 cohorts was used to provide maximum flexibility and improved forecasting. An incidence of 16,020 (95% confidence interval, 13,510-19,510) new infections was estimated for 2010. HCV viremic prevalence peaked in 1994 at 3.3 (2.8-4.0) million, but it is expected to decline by two-thirds by 2030. The prevalence of more advanced liver disease, however, is expected to increase, as is the total cost associated with chronic HCV infection. Today, the total cost is estimated at $6.5 ($4.3-$8.4) billion, and it will peak in 2024 at $9.1 ($6.4-$13.3) billion. The lifetime cost of an individual infected with HCV in 2011 was estimated at $64,490. However, this cost is significantly higher among individuals with a longer life expectancy. This analysis demonstrates that US HCV prevalence is in decline due to a lower incidence of infections. However, the prevalence of advanced liver disease will continue to increase, as will the corresponding healthcare costs. Lifetime healthcare costs for an HCV-infected person are significantly higher than for noninfected persons. In addition, it is possible to substantially reduce HCV infection through active management. 
According to estimates from the National Health and Nutrition Examination Survey (NHANES), 1.6% of the US population was infected with the hepatitis C virus (HCV) in 1999-2002.1 In a recent study, over 15,000 deaths were attributed to chronic HCV infection in 2007,2 already exceeding earlier estimates.3 HCV infection is associated with chronic, progressive liver disease. Chronic hepatitis C is a leading cause of cirrhosis and hepatocellular carcinoma (HCC),4,5 which are major indications for liver transplantation.6 A better understanding of HCV disease progression and the associated baseline cost, which excludes the cost of antiviral treatment, can help the medical community manage HCV and develop treatment strategies in light of the emergence of several potent anti-HCV therapies. Historically, researchers have studied HCV disease progression and cost using Markov models.3,7-14 In these models, a homogenous cohort of HCV-infected individuals is introduced, and the model is used to track their progression and cost over time. A recent study15 varied the age at infection, gender, and disease duration over time using six cohorts to estimate future disease burden. However, a previous analysis16 found that the predictability of the HCV epidemiology model is very sensitive to the number of age and gender cohorts used, due to the large difference in new infections' incidence and mortality across cohorts. Thus, we set out to create a disease progression and cost model that was more refined than those used in previous studies. The present study represents an improvement over previous work. A total of 36 cohorts was used: 17 5-year age cohorts plus one cohort for ages 85 and older, for each gender. A system dynamic model was developed to provide maximum flexibility in changing inputs (incidence rate, age at infection, background mortality, transplantation rate, treatment rate, and cost) over time. 
Finally, more recent healthcare cost data17 were used to estimate the HCV cost burden as compared to previous studies that relied on older data.18 The goal of this study is to describe the future disease and cost burden of HCV infection in the United States using a systems approach, assuming there is no incremental increase in treatment as the result of the new therapies. Materials and Methods A system dynamic modeling framework was used to construct the model in Microsoft Excel (Redmond, WA) to quantify the HCV-infected population, the disease progression, and the associated cost from 1950-2030. Uncertainty and sensitivity analyses were completed using Crystal Ball, an Excel add-in by Oracle. Beta-PERT distributions were used to model uncertainty associated with all inputs. Sensitivity analysis was used to identify the uncertainties that had the largest impact on the peak cost in 2025. Monte-Carlo simulation was used to determine the 95% confidence interval (CI) for cost and prevalence. When historical data were available, nonlinear polynomial extrapolation of historical data was used for future assumptions in 2012-2030. The Excel optimization add-in, Solver, was used to calibrate the model against reported National Health and Nutrition Examination Survey (NHANES) prevalence data.1 Populations in a given health state (incident HCV, cured, F1, F2, etc.) were handled as stocks, while annual transitions from one health state to another were treated as flows with an associated rate/probability (see Supporting Appendix A, Fig. 1). Historical data reporting the number and indications for liver transplantations from 1988 to 2010 were used to estimate the number of transplantations attributable to chronic HCV infection.6 Trended transplantation rates from 1988-2011 were used for 1971-1987 and 2011-2030. The populations were tracked by age cohorts and gender. 
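The uncertainty machinery described above (Beta-PERT input distributions, Monte-Carlo CIs) can be approximated without Crystal Ball. The sketch below is my own minimal stand-in: the incidence-constant range is taken from the paper's reported values, while the per-case cost range and the one-line "model" are made-up placeholders rather than the authors' full disease-progression model.

```python
# Minimal Monte-Carlo uncertainty sketch in the spirit of the paper's
# Crystal Ball setup (not the authors' implementation).
import random

def pert_sample(low, mode, high, rng=random):
    # Standard Beta-PERT construction: map (min, mode, max) to a
    # scaled Beta distribution with lambda = 4.
    a = 1 + 4 * (mode - low) / (high - low)
    b = 1 + 4 * (high - mode) / (high - low)
    return low + (high - low) * rng.betavariate(a, b)

def simulate_peak_cost(rng):
    # Toy output: a stand-in for one run of the full model.
    incidence_const = pert_sample(20_070, 23_790, 28_990, rng)  # from the paper
    cost_per_case = pert_sample(300, 380, 560, rng)  # placeholder, $/case-year
    return incidence_const * cost_per_case

rng = random.Random(0)
draws = sorted(simulate_peak_cost(rng) for _ in range(10_000))
ci_low, ci_high = draws[249], draws[9749]  # empirical 95% CI
assert 0 < ci_low < ci_high
```

Each Monte-Carlo draw re-samples every uncertain input from its PERT distribution, runs the model, and the 2.5th/97.5th percentiles of the output distribution give the reported 95% CI.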
Five-year age cohorts were used through age 84, and those aged 85 and older were treated as one cohort. Each year, one-fifth of the population in each age group, except for 85 and older, was moved to the next age cohort to simulate aging, after accounting for mortality. The model started in 1950 to track the prevalent population from the time of infection and forecasted the sequelae populations to 2030. The impact of individuals infected with HCV prior to 1950 was expected to be small and within the margin of error of our analysis. Prevalence of chronic HCV in any given year was calculated by the sum of viremic incidence of new infections (incidence) minus mortality and cured cases, up to that year. Annual background mortality rates by age and gender19 were adjusted for incremental increase in mortality due to injection drug use (IDU) and transfusion.16 These rates were applied to all populations. For individuals with decompensated cirrhosis (diuretic sensitive and refractory ascites, variceal hemorrhage, and hepatic encephalopathy), HCC, and those who required a liver transplantation, a separate mortality rate was also applied for liver-related deaths,8,10,20 as shown in Supporting Appendix A, Tables 1, 2. The number of cured patients in 2002-2007 was estimated using published data for the number of treated patients21 and an average sustained viral response (SVR) of 34%, as shown in Supporting Appendix B, Table 1. The number of cured patients prior to 2002 was ignored. The number of patients cured in 2008-2030 was extrapolated using 2002-2007 data. The objective of this analysis was to estimate the HCV disease progression and the associated cost in the US when there was no incremental increase in treatment as the result of the new therapies. The launch of direct-acting antivirals in 2011, the increased number of treated patients, and the higher SVR of new therapies were not incorporated in this model. 
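The cohort bookkeeping just described (stocks of 5-year age bands, one-fifth of each band promoted per simulated year after mortality, an absorbing 85+ band) can be sketched as follows. Population sizes and mortality rates here are illustrative only, not the model's inputs.

```python
# Stock-and-flow aging step in the spirit of the paper's model: apply
# mortality, then promote one-fifth of each 5-year cohort to the next.
def age_one_year(cohorts, mortality):
    """cohorts: population stocks by 5-year age band (last = 85+).
    mortality: per-cohort annual death rate."""
    survivors = [n * (1 - m) for n, m in zip(cohorts, mortality)]
    out, aged_in = [], 0.0
    for i, n in enumerate(survivors):
        if i < len(survivors) - 1:
            aged_out = n / 5            # one-fifth moves up each year
            out.append(n - aged_out + aged_in)
            aged_in = aged_out
        else:
            out.append(n + aged_in)     # 85+ is an absorbing cohort
    return out

pop = [1000.0, 800.0, 50.0]             # illustrative three-band example
new_pop = age_one_year(pop, [0.01, 0.02, 0.15])
# Aging only shuffles survivors between bands; totals match survivors.
assert abs(sum(new_pop) - (1000 * 0.99 + 800 * 0.98 + 50 * 0.85)) < 1e-9
```

In the full model each such stock is further split by gender and by disease stage, with separate liver-related mortality applied to the advanced-disease stocks.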
The impact and cost of new therapies were specifically excluded in order to establish a baseline for future comparisons. This, however, will lead to higher projections of advanced liver diseases and poor outcomes as compared to the real world. With known annual mortality and cured population, annual incidence was calculated using a constant multiplied by the relative incidence. Relative incidence was calculated from the literature data15 by dividing each year's incidence by the 1950 incidence to result in a relative incidence of 1 in 1950, as shown in Supporting Appendix C, Table 1. Incidence in the US peaked in 1989 when it was 11.5 times higher than incidence in 1950. Solver was used to find the constant that resulted in a prevalence of 3.2 (95% CI, 2.7-3.9) million in 2000.1 The annual incidence was distributed among different age and gender cohorts using distributions reported by the Centers for Disease Control and Prevention (CDC)2225 from 1992-2007. Incidence distribution from 2007 was used for 2008-2030 based on the assumption that the future risk factors will remain the same. In 1967-1991, the incidence distribution by age and gender was changed every 5 years and the rates within each 5-year period (e.g., 1967-1971) were extrapolated linearly by age cohort and gender. The distribution was kept constant prior to 1966 based on the assumption that the risk factors remained the same. Solver was used to calculate the annual age and gender distributions, which minimized the difference between the forecasted prevalence age and gender distribution in 2000 and those reported by NHANES.1 Since the objective of this study was to determine healthcare costs associated with HCV infection, incremental costs derived from a matched cohort study were used. 
The cost by sequelae data came from previously published work by McAdam-Marx et al.17 The healthcare costs among chronic HCV individuals in F0-F3 stages were adjusted for the proportion not under care (see Supporting Appendix D). The 1950-2010 costs were inflation-adjusted using the Medical Care Services component of the Consumer Price Index.26 The 2011 annual medical inflation rate of 3.06% (2.88%-5.33%) was used to estimate future costs in 2012-2030. The lifetime cost of an HCV-infected individual by age and gender was calculated by introducing 1,000 viremic incident cases in 2011 and using the model to track the progression of these cases and the annual cost over time. The annual healthcare costs for all sequelae and all years were summed and divided by 1,000 to calculate the individual cost. The average cost was calculated by distributing 1,000 new viremic incident cases using 2010 incidence age and gender distribution.27 The annual background and liver-related mortality are shown in Supporting Appendix E. Background mortality is forecasted to peak at 39,935 in 2022 as the HCV population ages, while liver-related deaths peak at 29,695 in 2019 as the number of deaths from decompensated cirrhosis reach their maximum. Relative incidence and estimated incidence are shown in Supporting Appendix C, Table 1. The constant multiplier for incidence was estimated at 23,790 (20,070-28,990), resulting in a prevalence of 3.2 (2.7-3.9) million in the year 2000.1 Incidence values represent acute cases, and 82% (55%-85%)28 of these cases progressed to chronic HCV with a METAVIR score of F0, as shown in Supporting Appendix A, Table 1. Incidence for all sequelae is shown in Fig. 1. Fig. 1 HCV sequelae incidence: US 1950-2030. Peak viremic prevalence of chronic HCV infection was reached in 1994 with 3.3 (2.8-4.0) million infected individuals (Fig. 2). While the overall prevalence has been declining since, the prevalence of more advanced liver diseases has been increasing. 
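The lifetime-cost procedure described above (seed 1,000 incident cases, track them through the model, sum annual costs, divide by 1,000, then optionally apply medical inflation) can be sketched with placeholder inputs. Only the 3.06% medical inflation rate is taken from the text; the annual cost, survival rate, and horizon are assumptions chosen so the flat total lands in the same ballpark as the paper's $64,490.

```python
# Back-of-envelope lifetime-cost calculation (my own simplification:
# a single average annual cost instead of stage-specific costs).
def lifetime_cost(annual_cost, survival=0.98, years=50, inflation=0.0):
    alive, total = 1000.0, 0.0          # seed 1,000 incident cases
    for t in range(years):
        total += alive * annual_cost * (1 + inflation) ** t
        alive *= survival               # attrition of the cohort
    return total / 1000.0               # per-person lifetime cost

flat = lifetime_cost(2_000)                         # in 2011 dollars
inflated = lifetime_cost(2_000, inflation=0.0306)   # with medical inflation
assert inflated > flat
```

The same mechanism explains the paper's jump from $64,490 to $205,760: compounding 3.06% inflation over the decades a young cohort survives roughly triples the nominal total.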
The prevalent population with compensated cirrhosis is projected to peak in 2015 at 626,500 cases, while the population with decompensated cirrhosis will peak in 2019 with 107,400 cases. The number of individuals with HCC, caused by HCV infection, will increase to 23,800 cases in 2018 before starting to decline. Fig. 2 HCV sequelae and total prevalence (millions): US 1950-2030. In 2011, the total healthcare cost associated with HCV infection was $6.5 ($4.3-$8.2) billion. Total cost is expected to peak in 2024 at $9.1 billion ($6.4-$13.3 billion), as shown in Fig. 4. The majority of peak cost will be attributable to more advanced liver diseases—decompensated cirrhosis (46%), compensated cirrhosis (20%), and HCC (16%). The maximum cost associated with mild to moderate fibrosis (F0-F3) occurred in 2007 at nearly $780 million. The cost associated with compensated cirrhosis is expected to peak in 2022 at $1.9 billion, while the peak cost for decompensated cirrhosis and HCC is predicted to occur in 2025, with annual costs in excess of $4.2 billion and $1.4 billion, respectively (Fig. 3). Fig. 3 Projected HCV sequelae cost: US 1950-2030. Fig. 4 Total prevalence and healthcare costs with 95% CIs. The lifetime cost of an individual infected in 2011 was estimated at $64,490 ($46,780-$73,190) in 2011 dollars. When medical inflation was applied, the lifetime cost increased to $205,760 ($154,890-$486,890). The lifetime cost estimate varies widely by age and gender due to life expectancy. As shown in Table 1, costs for HCV infections among younger individuals and females will be higher than among the elderly and males. Table 1 Lifetime Cost by Age, HCV Infection, and Gender (in 2011 Dollars) The predictive value of a model can be confirmed by comparing its forecasts with real-world observations. 
The model was calibrated using HCV prevalence by age and gender in the year 2000, as reported by NHANES.1 The incidence was back-calculated and the model was used to fit reported prevalence in 2000. Total prevalence in other years, prevalence and incidence by sequelae, and mortality were calculated. A 2010 incidence of 16,020 (13,510-19,510) was forecasted versus the reported incidence of 17,000.29 The wide CI for incidence was driven by the large uncertainty in reported prevalence.1 According to the study by Davis et al.,15 HCV incidence peaked in 1989 when it was 11.5 times higher than the incidence in 1950. This corresponded to a peak incidence of 274,000 in a single year. A 2010 prevalence of 2.5 (2.1-3.2) million cases was estimated, matching the most recent NHANES data that showed 2.5 million cases in 2009-2010.30 In comparison, Davis et al.15 reported an HCV prevalence of about 3.3 million in the same period. Our analysis predicted that HCV prevalence in the US peaked in 1994 at 3.3 million viremic cases. The overall prevalence is declining, and the 2030 prevalence is expected to be one-third of the peak prevalence. Incidence has dropped significantly since its peak in 1989 due to the implementation of HCV antibody screening of the blood supply in 1992, with full implementation of universal donation screening for viral RNA through nucleic acid testing (NAT) in 1999,31,32 and to a decline in IDU.33 However, disease burden continues to grow. The dichotomy of HCV is that, while the overall number of infections is projected to decline, the number of individuals experiencing advanced liver disease, liver-related deaths, and healthcare costs are all expected to increase. This was a key insight provided by this analysis. A recent study by the CDC2 reported an increased recorded mortality rate in the US HCV-infected population in 1999-2007. Consistent with this study, we forecast that mortality will continue to increase and peak in 2020 (Supporting Appendix E). 
After 2020, the decline in the number of HCV infections will outweigh the increase in background mortality, and both liver-related and total deaths will decrease. Mortality is projected to peak at ~69,440 deaths, with 29,650 deaths attributable to liver disease, including over 9,000 attributed to HCC in 2020. As shown in Fig. 1, the incidence of more advanced liver diseases will continue to increase, with incidence of decompensated cirrhosis and HCC peaking in 2016-2017. However, not all infected individuals progress to the next stage, and the peak incidence is lower at each consecutive sequela. The total prevalent population of each sequela is shown in Fig. 2. Over 50% of the HCV prevalent population resides in the F0-F3 stage of the disease at any point in time. However, by 2030 compensated cirrhosis cases will account for 37% of all prevalent cases. The HCV compensated cirrhosis population is projected to peak in 2015, while the decompensated cirrhosis population will peak in 2019. A smaller portion of the HCV-infected population will go on to have HCC, but the size of this population does not grow substantially beyond 24,000 due to the very high mortality rate in this population. A key observation was that peak healthcare costs lag peak prevalence by almost three decades. This is due to the time required for infected cases to progress to more advanced forms of liver disease, which are more expensive to treat. Sensitivity analysis identified the key drivers of variance in peak healthcare cost. The incidence uncertainty (20,070-28,990), calculated from the uncertainty in NHANES 2000 prevalence, accounted for 52% of the variance in peak cost. Higher incidence led to more prevalent cases and higher cost. Uncertainty in the annual cost of diuretic sensitive ascites ($2,525-$29,860)17,18 accounted for 15% of the total variance. Finally, uncertainty in persistence (32%-80%)34,35 accounted for 13% of the variance. 
Higher persistence resulted in higher SVR and a greater number of cured patients, which in turn resulted in lower healthcare costs. This highlights the importance of SVR on future costs. In this study, the treatment cost was specifically excluded, and yet the SVR of historically treated cases still turned out to be important. The treated population had to be included in the disease progression portion of the model since it affected the size of prevalent populations. In 2002-2011, we estimated that 322,700 individuals were cured. If persistence in the real world were the same as observed in clinical trials (80%),35 the average SVR would be 46%, resulting in 430,000 cured cases in 2002-2011. This would result in a decrease of $1 billion in peak healthcare costs. Patients experiencing decompensated cirrhosis accounted for the majority of future costs. In 2011, decompensated cirrhosis accounted for 40% of total costs, and by 2030 it will account for 47%. This was followed by compensated cirrhosis (22% of 2011 and 19% of 2030 total cost) and HCC (15% of 2011 and 16% of 2030 total cost). The prevalence of decompensated cirrhosis was 20% of that of compensated cirrhosis, but the annual cost was 12 times higher.17 The average lifetime cost of a patient was estimated at $64,490 as compared to a recent study that reported an average cost of $19,660 per patient in 2002-2010 alone.17 The analysis of cost by age at infection demonstrates a link between life expectancy and healthcare cost. Individuals infected in the 1950s were expected to have lower lifetime costs due to lower life expectancy (and lower medical costs), while newly HCV-infected individuals are expected to cost the healthcare systems more due to the longer life expectancy. This highlights the continued importance of prevention as a means of managing future healthcare expenditure. The effects of new therapies were excluded from our model. 
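The persistence arithmetic above can be checked on the back of an envelope: the cured count scales with the assumed SVR, holding the treated pool fixed. This is my own sanity check, not the authors' calculation.

```python
# Rough consistency check of the cured-patient figures in the text.
cured_observed = 322_700        # cured in 2002-2011 at an average SVR of 34%
svr_observed = 0.34
treated = cured_observed / svr_observed       # implied treated pool (~949,000)

svr_if_persistent = 0.46        # SVR quoted for 80% (trial-level) persistence
cured_if_persistent = treated * svr_if_persistent
# ~437,000, consistent with the ~430,000 cured cases cited in the text.
assert 400_000 < cured_if_persistent < 450_000
```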
However, if the number of treated patients is doubled and kept constant at 126,000 per year in 2012-2030 and the average SVR is increased to 70%, the 2030 prevalent population is projected to be fewer than 100,000 cases. This illustrates that it is possible to substantially reduce HCV infection in the US through active management. There were a number of limitations in this study that impact the accuracy of our base projections. There is strong evidence that progression transition rates change with age and gender. A single transition rate was used for all ages and genders. This led to a higher incidence/prevalence in early years and among females, as well as higher liver-related mortality among the younger age groups. However, the CIs in our study did capture uncertainty in the above assumptions. The model does not explicitly account for alcohol consumption and metabolic syndrome. Frequent heavy intake of alcohol significantly increases fibrosis progression,36,37 and accelerated disease progression has been associated with metabolic syndrome.38,39 The model implicitly takes these factors into account, as the transition probabilities and sequelae cost incorporate some level of alcohol consumption and metabolic syndrome. If an increasing proportion of the prevalent population experiences heavy alcohol intake or metabolic syndrome, progression to advanced liver disease, and the associated costs, will likely increase. The model does not take into account the persistent risk of fibrosis progression and liver cancer in virologically cured patients. Observational studies have demonstrated that most patients who achieve SVR experience stabilization or regression of fibrosis. 
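The active-management scenario quoted earlier (126,000 treated per year at 70% SVR, held constant through 2030) can be roughed out with a one-line prevalence recursion. The 2.5M starting prevalence is the model's ~2010 estimate; the 2% net annual outflow (mortality minus new infections) is my own crude assumption, and the toy necessarily understates the decline relative to the full model.

```python
# Crude projection of the active-management scenario; not the paper's model.
prev = 2_500_000.0              # ~2010 prevalence estimate from the text
for year in range(2012, 2031):
    cured = 126_000 * 0.70      # treated/year and SVR quoted in the text
    prev = max(prev * 0.98 - cured, 0.0)   # 2% net outflow is an assumption
# Even this crude version cuts prevalence by nearly 90% by 2030; the
# full model, with richer mortality and staging, projects under 100,000.
assert prev < 1_000_000
```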
After SVR, episodes of cirrhosis decompensation are extremely rare, and instances of HCC are likely to be small in number and not greatly impact overall disease burden or costs.40 A limitation of prevalence measures used in this analysis is that high prevalence populations may be undersampled through the NHANES.41 In particular, undersampling of veterans, prisoners, and the homeless would result in underestimation of the current prevalence, future disease, and cost burden. In addition, while IDU has declined from a peak in the 1970s, there is some evidence of a recent increase in IDU among middle-aged adults, potentially leading to a higher incidence of HCV.33 In all cases, the sequelae prevalence and the healthcare costs will be higher than the estimated base value. A further limitation is that the model does not consider recent recommendations42 to implement birth cohort screening for HCV. Such screening could reduce the future incidence of advanced liver disease and associated costs, when infected individuals identified through screening receive appropriate treatment and achieve SVR.43 Treatment of HCV prior to 2002 was also ignored. The first pegylated interferon was launched in August of 2001, and the number of patients treated with pegylated interferons was small in that year. Prior to that launch, patients were treated with nonpegylated interferon. The number of individuals cured prior to 2001 was small, and their exclusion did not have a material impact on the outcome of the model. The rate of SVR used in the model was derived from studies of treatment-naïve patients; however, average SVR is lower in treatment-experienced patients. Because the majority of treated patients are naïve, it is unlikely that the use of a single rate for SVR substantially impacted estimates of treated and cured patients beyond our CIs. 
A final limitation is that the future cost of liver transplants is based on the assumption that transplantation will remain at the same rate as today. All other sequelae costs were determined as the result of the disease progression. The number of liver transplants, however, is determined by the clinical guidelines and availability of donors. Thus, the future costs associated with liver transplants could be higher if transplantation rates increase. In conclusion, our analysis demonstrated that overall HCV prevalence in the US is in decline due to lower incidence. However, the prevalence of advanced liver disease will continue to increase, as will the corresponding healthcare costs. Lifetime healthcare costs for an HCV-infected person are significantly higher than for noninfected persons, and the expected cost is higher among populations with a higher life expectancy. Finally, it is possible to substantially reduce HCV infection in the US through active management. We thank Steven Wiersma of the World Health Organization (WHO) and Charles Gore of the World Hepatitis Alliance, who challenged us to develop a robust cost burden model for HCV. The authors also thank Scott Holmberg of the Centers for Disease Control and Prevention (CDC). His explanations of how to interpret the data published by CDC and feedback on our forecasts were critical in calibrating this model. In addition, we thank Greg Armstrong of CDC for proposing the methodology used to estimate incidence when prevalence, mortality, and cured populations are known. He developed this methodology and shared it with us as a way of estimating incidence. We thank Regina Klein of the Center for Disease Analysis (CDA) for the background research and Kim Murphy of CDA for developing the custom Excel codes to run the model. Finally, we thank Carrie McAdam-Marx of the University of Utah for explaining the methodology used by her group to calculate the incremental cost of HCV sequelae. 
Abbreviations: CDC, Centers for Disease Control and Prevention; CI, confidence interval; HCC, hepatocellular carcinoma; HCV, hepatitis C virus; IDU, injection drug use; NAT, nucleic acid testing; NHANES, National Health and Nutrition Examination Survey; SVR, sustained viral response.

Supporting Information

References

2. Ly KN, Xing J, Klevens RM, Jiles RB, Ward JW, Holmberg SD. The increasing burden of mortality from viral hepatitis in the United States between 1999 and 2007. Ann Intern Med. 2012;156:271–278.
3. Deuffic-Burban S, Poynard T, Sulkowski MS, Wong JB. Estimating the future health burden of chronic hepatitis C and human immunodeficiency virus infections in the United States. J Viral Hepat. 2007;14:107–115.
4. El-Serag HB, Mason AC. Rising incidence of hepatocellular carcinoma in the United States. N Engl J Med. 1999;340:745–750.
5. Poynard T, Yuen MF, Ratziu V, Lai CL. Viral hepatitis C. Lancet. 2003;362:2095–2100.
6. Waiting list candidates [computer file]. Organ Procurement and Transplantation Network (OPTN). U.S. Dept. of Health and Human Services, Public Health Service, Bureau of Health Resources Development, Division of Organ Transplantation; 2011.
7. Siebert U, Sroczynski G. Effectiveness and cost-effectiveness of initial combination therapy with interferon/peginterferon plus ribavirin in patients with chronic hepatitis C in Germany: a health technology assessment commissioned by the German Federal Ministry of Health and Social Security. Int J Technol Assess Health Care. 2005;21:55–65.
8. Bennett WG, Inoue Y, Beck JR, Wong JB, Pauker SG, Davis GL. Estimates of the cost-effectiveness of a single course of interferon-alpha 2b in patients with histologically mild chronic hepatitis C. Ann Intern Med. 1997;127:855–865.
9. Sennfalt K, Reichard O, Hultkrantz R, Wong JB, Jonsson D. Cost-effectiveness of interferon alfa-2b with and without ribavirin as therapy for chronic hepatitis C in Sweden. Scand J Gastroenterol. 2001;36:870–876.
10. Bernfort L, Sennfalt K, Reichard O. Cost-effectiveness of peginterferon alfa-2b in combination with ribavirin as initial treatment for chronic hepatitis C in Sweden. Scand J Infect Dis. 2006;38:497–505.
11. Wong JB, McQuillan GM, McHutchison JG, Poynard T. Estimating future hepatitis C morbidity, mortality, and costs in the United States. Am J Public Health. 2000;90:1562–1569.
12. Salomon JA, Weinstein MC, Hammitt JK, Goldie SJ. Cost-effectiveness of treatment for chronic hepatitis C infection in an evolving patient population. JAMA. 2003;290:228–237.
13. Kim WR, Poterucha JJ, Hermans JE, Therneau TM, Dickson ER, Evans RW, et al. Cost-effectiveness of 6 and 12 months of interferon-alpha therapy for chronic hepatitis C. Ann Intern Med. 1997;127:866–874.
14. Gerkens S, Nechelput M, Annemans L, Peraux B, Beguin C, Horsmans Y. A health economic model to assess the cost-effectiveness of pegylated interferon alpha-2a and ribavirin in patients with moderate chronic hepatitis C and persistently normal alanine aminotransferase levels. Acta Gastroenterol Belg. 2007;70:177–187.
16. Kershenobich D, Razavi HA, Cooper CL, Alberti A, Dusheiko GM, Pol S, et al. Applying a system approach to forecast the total hepatitis C virus-infected population size: model validation using US data. Liver Int. 2011;31(Suppl 2):4–17.
17. McAdam-Marx C, McGarry LJ, Hane CA, Biskupiak J, Deniz B, Brixner DI. All-cause and incremental per patient per year cost associated with chronic hepatitis C virus and associated liver complications in the United States: a managed care perspective. J Manag Care Pharm. 2011;17:531–546.
18. El Khoury AC, Wallace C, Klimack WK, Razavi H. Economic burden of hepatitis C-associated diseases in the United States. J Viral Hepat. 2012;19:153–160.
19. Human mortality database. Berkeley: University of California; 2012.
20. Younossi ZM, Singer ME, McHutchison JG, Shermock KM. Cost effectiveness of interferon alpha2b combined with ribavirin for the treatment of chronic hepatitis C. Hepatology. 1999;30:1318–1324.
21. Volk ML, Tocco R, Saini S, Lok AS. Public health impact of antiviral therapy for hepatitis C in the United States. Hepatology. 2009;50:1750–1755.
22. Centers for Disease Control and Prevention. Hepatitis surveillance. Report 61; 2006 Sep 1.
23. Wasley A, Miller JT, Finelli L. Surveillance for acute viral hepatitis-United States, 2005. MMWR Surveill Summ. 2007;56:1–24.
24. Wasley A, Grytdal S, Gallagher K. Surveillance for acute viral hepatitis-United States, 2006. MMWR Surveill Summ. 2008;57:1–24.
25. Daniels D, Grytdal S, Wasley A. Surveillance for acute viral hepatitis-United States, 2007. MMWR Surveill Summ. 2009;58:1–27.
26. Consumer Price Index. Washington, DC: United States Department of Labor, Bureau of Labor Statistics; 2010 May 11.
27. Viral hepatitis surveillance, United States 2010. Atlanta, GA: Centers for Disease Control and Prevention, National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention; 2010.
29. Disease burden from viral hepatitis A, B, and C in the United States. Atlanta, GA: Centers for Disease Control and Prevention; 2010 Nov 15.
30. National Health and Nutrition Examination Survey Data, 2009-2010. Hyattsville, MD: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics; 2012.
31. Dodd RY, Notari EP, Stramer SL. Current prevalence and incidence of infectious disease markers and estimated window-period risk in the American Red Cross blood donor population. Transfusion. 2002;42:975–979.
32. Zou S, Dorsey KA, Notari EP, Foster GA, Krysztof DE, Musavi F, et al. Prevalence, incidence, and residual risk of human immunodeficiency virus and hepatitis C virus infections among United States blood donors since the introduction of nucleic acid testing. Transfusion. 2010;50:1495–1504.
33. Armstrong GL. Injection drug users in the United States, 1979-2002: an aging population. Arch Intern Med. 2007;167:166–173.
34. Backus LI, Boothroyd DB, Phillips BR, Mole LA. Predictors of response of US veterans to treatment for the hepatitis C virus. Hepatology. 2007;46:37–47.
36. Bellentani S, Pozzato G, Saccoccio G, Crovatto M, Croce LS, Mazzoran L, et al. Clinical course and risk factors of hepatitis C virus related liver disease in the general population: report from the Dionysos study. Gut. 1999;44:874–880.
39. Hourigan LF, Macdonald GA, Purdie D, Whitehall VH, Shorthouse C, Clouston A, et al. Fibrosis in chronic hepatitis C correlates significantly with body mass index and steatosis. Hepatology. 1999;29:1215–1219.
40. Alberti A. Impact of a sustained virological response on the long-term outcome of hepatitis C. Liver Int. 2011;31(Suppl 1):18–22.
41. Chak E, Talal A, Sherman KE, Schiff E, Saab S. Hepatitis C virus infection in the United States: an estimate of true prevalence. Liver Int. 2011;31:1090–1101.
42. Recommendations for the identification of chronic hepatitis C virus infection among persons born during 1945-1965. MMWR Recomm Rep. 2012;61:1–32.
43. McGarry LJ, Pawar VS, Panchmatia HR, Rubin JL, Davis GL, Younossi ZM, et al. Economic model of a birth cohort screening program for hepatitis C virus. Hepatology. 2012;55:1344–1355.
45. Villano SA, Vlahov D, Nelson KE, Cohn S, Thomas DL. Persistence of viremia and the importance of long-term follow-up after acute hepatitis C infection. Hepatology. 1999;29:908–914.
46. Thein HH, Yi Q, Dore GJ, Krahn MD. Estimation of stage-specific fibrosis progression rates in chronic hepatitis C virus infection: a meta-analysis and meta-regression. Hepatology. 2008;48:418–431.
47. Ries LAG, Young GL, Keel GE, Eisner MP, Lin YD, Horner M-J. SEER survival monograph: cancer survival among adults: U.S. SEER program, 1988-2001, patient and tumor characteristics. NIH Pub. No. 07-6215. Bethesda, MD: National Cancer Institute, SEER Program; 2007.
We are all familiar with cross examination at trial—it seems to be the highlight of every legal TV show or televised trial. What you may not be aware of is that the Constitution's Sixth Amendment actually guarantees defendants the right to cross examine—that is, the right to question their accusers (or, in criminal cases, the state and its witnesses).

Defendant Has Probation Revoked

A recent Maryland appellate court case discusses the parameters of that right when it comes to a probation revocation hearing. The case involves a defendant who pleaded guilty to a robbery charge. His sentence included probation. During his probation, he tested positive for marijuana, thus violating his probation, and as a result he was sentenced to serve prison time for the robbery.

In 2013, Maryland repealed the death penalty. That may have been because trials are imperfect, and thus the possibility of taking an innocent person's life is too great a risk. That left life without parole as Maryland's highest penalty. But with the change came a new problem—unlike with the death penalty, there are few standards that a judge or jury must abide by to sentence someone to life. This potentially leaves what is known as the second most severe penalty in the American justice system to chance, risking inconsistent standards and uncertainty in sentencing.

Bill Seeks to Create Standards

Sometimes it is interesting to see how issues that look like they have nothing to do with criminal justice end up affecting criminal justice. Such was the case this week, when the United States Supreme Court made a ruling based upon whether Puerto Rico is a sovereign state or not.

Can Puerto Rico Prosecute?

We know that Puerto Rico is not a U.S. state. In fact, it has its own constitution and its own government, a power given to it by the United States in 1950. Still, the territory is tied to the U.S. federal government, and its powers are provided to it by the U.S. Essentially, Congress makes the rules.
So can Puerto Rico prosecute someone who has already been prosecuted by a federal court without running afoul of the double jeopardy clause?

The Constitution guarantees us a jury of our peers in criminal matters. It is one of the most fundamental constitutional rights that we have. But our judicial system is also intended to be racially blind, and selecting jurors based on race or nationality is prohibited. Often these two seemingly contradictory principles collide, as they did recently in a case that went all the way to the United States Supreme Court.

African-American Jurors Stricken From Jury

The case involved an African-American man who was convicted in Georgia of murdering an elderly white woman. The jury was made up entirely of white jurors; the four African-American potential jurors were eliminated from the jury pool before the trial. Twenty years after the conviction, the man's attorneys argued to the Supreme Court that the selection of that jury was racially based.

Maryland is again looking to be one of the more progressive states in the nation when it comes to criminal justice reform, as legislators have agreed to pass a sweeping crime reform bill. The bill is unique in that it strengthens criminal penalties in some areas but shifts the focus of criminal penalties in others.

The New Law

The new law, called the Justice Reinvestment Act, makes a number of changes to Maryland's criminal justice laws. It allows those who have finished 25% of their sentences to be administratively released. The provision would only apply to nonviolent drug offenses, or to thefts where the amount was less than $1,500. In some cases, if public safety is threatened, a parole commission can opt not to provide the early release.

Sometimes, the United States Supreme Court rules by declining to rule. When a case is appealed to the Supreme Court, the Court may opt not to accept it. When that happens, the practical effect is that the law created by the lower court remains the law.
This was the case recently when the Supreme Court declined to get involved in a case involving the Eighth Amendment's restriction on cruel and unusual punishment.

Severe Penalty for Drug Possession

Many people know, or at least are aware of, the doctrine of causation. It means that someone's negligence needs to cause the injury that was sustained by the victim. Causation—sometimes called proximate cause—is a crucial element in almost every personal injury case. A lot of people may not be aware that causation can play a part in criminal cases as well, and that someone who breaks the law and causes an injury can be criminally charged even if they did not directly injure anyone.

The constitutional issues surrounding cell phones and due process continue to evolve. Stingrays are devices that track cell phones even when they are not in use. The devices emulate cell phone towers, prompting phones to connect to them and thus revealing the phones' locations. Law enforcement has used these devices, which can track someone's location through their cell phone even when the phone isn't actually being used. Recently, there has been a new development in this issue: a court has determined that before law enforcement can use a Stingray device, they must in fact obtain a warrant.

Warrants Were Not Being Used

We have previously kept you up to date on Maryland's slow but steady legal changes, which represent a different approach to fighting the drug war. Maryland's attempt to overhaul its drug laws is continuing, as the legislature is now reviewing proposed radical changes that would make the state somewhat of a pioneer on the drug front.

Proposed Changes Take a Different Approach

The first proposed law is one that would provide for on-demand treatment at emergency rooms for people admitted with drug-related problems.
While this may seem counterintuitive, it stems from a study showing that for every dollar spent on treatment, $12 is saved on healthcare and criminal justice costs. The law would require hospitals to have a drug counselor available at all times, and require them to have pre-set arrangements to transfer patients to rehabilitation centers.
Electric Weather

The following excerpts come from a report that appeared in the Institute of Electrical and Electronics Engineers (IEEE) magazine, Spectrum, for April. The report demonstrates that when science has lost its way, engineers must use their intuition to make progress.

"Electric Rainmaking Technology Gets Mexico's Blessing. But for now, doubters prevail north of the border. From at least the early 1940s to the end of the 20th century, it always rained more in the state of Jalisco, in central Mexico, than in its neighbor Aguascalientes. But in 2000, on a patch of parched pasture in Aguascalientes, workers from Mexico City-based Electrificación Local de la Atmósfera Terrestre SA (ELAT) erected a peculiar field of interconnected metal poles and wires somewhat resembling the skeleton of a carnival tent."

Comment (Wal Thornhill): This is the common phenomenon of cognitive dissonance in science. The Russians are performing a weather experiment which should fail according to accepted theory.

Discover Magazine: Dark Matter

Your hands are, roughly speaking, 360 million years old. We know a fair amount about the transition from fins to hands thanks to the moderately mad obsession of paleontologists, who venture to inhospitable places around the Arctic where the best fossils from that period of our evolution are buried. A team of Spanish scientists has provided us with a glimpse of that story. Both fins and hands get their start in embryos.

Matrix Mechanics

Matrix mechanics is a formulation of quantum mechanics created by Werner Heisenberg, Max Born, and Pascual Jordan in 1925. It was the first conceptually autonomous and logically consistent formulation of quantum mechanics. It extended the Bohr model by describing how quantum jumps occur, interpreting the physical properties of particles as matrices that evolve in time. It is equivalent to the Schrödinger wave formulation of quantum mechanics, and is the basis of Dirac's bra-ket notation for the wave function.

Epiphany at Helgoland

In 1925 Werner Heisenberg was working in Göttingen on the problem of calculating the spectral lines of hydrogen. "It was about three o'clock at night when the final result of the calculation lay before me."

7 Man-Made Substances that Laugh in the Face of Physics

A ferrofluid is a liquid that reacts to magnetic fields in trippy ways that make you think that science is both magical and potentially evil. Tell us that didn't look like the birth of the most sinister dildo ever. What happens is that when a magnetic field is applied to the fluid, the particles of iron compound inside align to it.
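The "physical properties as matrices" idea above can be illustrated numerically. Here is a minimal sketch (not from the excerpt) using a truncated harmonic-oscillator ladder operator to show Heisenberg's canonical commutation relation [X, P] = iħ (with ħ = 1) emerging on the diagonal; in any finite N × N truncation the last diagonal entry absorbs -(N-1) so the trace stays zero, a standard truncation artifact.

```python
# Truncated harmonic-oscillator matrices illustrating [X, P] = i (hbar = 1).
# In an N x N truncation the relation holds on every diagonal entry except
# the last, which becomes -(N-1)i so the commutator remains traceless.
from math import sqrt

N = 4

def matmul(A, B):
    # Plain-Python complex matrix product, N x N.
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# Ladder (lowering) operator a: a[n][n+1] = sqrt(n+1).
a = [[0j] * N for _ in range(N)]
for n in range(N - 1):
    a[n][n + 1] = sqrt(n + 1)
adag = [[a[j][i] for j in range(N)] for i in range(N)]  # transpose (entries are real)

# Position and momentum: X = (a + a†)/sqrt(2), P = i(a† - a)/sqrt(2).
X = [[(a[i][j] + adag[i][j]) / sqrt(2) for j in range(N)] for i in range(N)]
P = [[1j * (adag[i][j] - a[i][j]) / sqrt(2) for j in range(N)] for i in range(N)]

XP, PX = matmul(X, P), matmul(P, X)
comm = [[XP[i][j] - PX[i][j] for j in range(N)] for i in range(N)]
print([comm[i][i] for i in range(N)])  # diagonal is i, i, i, -(N-1)i up to rounding
```

Algebraically, XP - PX reduces to i(aa† - a†a), which is the identity everywhere except the truncated corner; the numerical diagonal for N = 4 is i, i, i, -3i up to floating-point rounding.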
The first full moon of this month is on Thursday, August 2nd. This moon is most frequently called the Sturgeon Moon, Corn Moon, or Green Corn Moon. But there is no lack of names for this month's moon: Barley Moon, Dog Day's Moon, Mating Moon, Grain Moon, Dog's Day Moon, Woodcutter's Moon, Chokeberry Moon, Summertime Moon, Dispute Moon, Weodmonath (Vegetation Month), Harvest Moon. The Dakotah Sioux call this the Moon When All Things Ripen, and the Cherokee people call it the Fruit Moon, Ga'loni, when the foods of the trees and bushes are gathered. The various "Paint Clans" begin to gather many of the herbs used for medicines, and the "Wild Potato Clans" harvest foods growing along the streams, marshes, lakes, and ponds. North American fishing tribes called this the Sturgeon Moon because that species of fish was abundant during this month. August was originally called Sextilis by the Romans and was later renamed Augustus in honor of Augustus Caesar. I said it was the first full moon of the month because there will be a second full moon, a "blue moon," on the 31st. More about that later this month. Lunasa or Lughnassadh is a Celtic feast of the harvest and new grain for bread held on the full moon. In Old English this became Lammas, or "Loaf Mass." In some English-speaking countries in the Northern Hemisphere, the Lammas Day festival of the wheat harvest is the first harvest festival of the year. On this day it was customary to bring to church a loaf made from the new crop, which began to be harvested at Lammastide. The loaf was blessed, and in Anglo-Saxon England it was considered to be magical. A book of Anglo-Saxon charms directed that the lammas bread be broken into four bits, which were to be placed at the four corners of the barn to protect the grain. In the Anglo-Saxon Chronicle it is called "the feast of first fruits". Culendom is the eleventh month for Druids, and this moon is the Harvest Moon, a time of reaping and fruition.
The first day of Culendom is the day of the full moon. Culendom (the month) is from the Harvest Moon to the next full moon which is known as the Moon of Claiming.
Surveillance Nation

Route 9 is an old two-lane highway that cuts across Massachusetts from Boston in the east to Pittsfield in the west. Near the small city of Northampton, the highway crosses the wide Connecticut River. The Calvin Coolidge Memorial Bridge, named after the president who once served as Northampton's mayor, is a major regional traffic link. When the state began a long-delayed and still-ongoing reconstruction of the bridge in the summer of 2001, traffic jams stretched for kilometers into the bucolic New England countryside. In a project aimed at alleviating drivers' frustration, the University of Massachusetts Transportation Center, located in nearby Amherst, installed eight shoe-size digital surveillance cameras along the roads leading to the bridge. Six are mounted on utility poles and the roofs of local businesses. Made by Axis Communications in Sweden, they are connected to dial-up modems and transmit images of the roadway before them to a Web page, which commuters can check for congestion before tackling the road. According to Dan Dulaski, the system's technical manager, running the entire webcam system—power, phone, and Internet fees—costs just $600 a month. The other two cameras in the Coolidge Bridge project are a little less routine. Built by Computer Recognition Systems in Wokingham, England, with high-quality lenses and fast shutter speeds (1/10,000 second), they are designed to photograph every car and truck that passes by. Located eight kilometers apart, at the ends of the zone of maximum traffic congestion, the two cameras send vehicle images to attached computers, which use special character-recognition software to decipher vehicle license plates. The license data go to a server at the company's U.S. office in Cambridge, MA, about 130 kilometers away. As each license plate passes the second camera, the server ascertains the time difference between the two readings.
The average of the travel durations of all successfully matched vehicles defines the likely travel time for crossing the bridge at any given moment, and that information is posted on the traffic watch Web page. To local residents, the traffic data are helpful, even vital: police use the information to plan emergency routes. But as the computers calculate traffic flow, they are also making a record of all cars that cross the bridge—when they do so, their average speed, and (depending on lighting and weather conditions) how many people are in each car. Trying to avoid provoking privacy fears, Keith Fallon, a Computer Recognition Systems project engineer, says, "We're not saving any of the information we capture. Everything is deleted immediately." But the company could change its mind and start saving the data at any time. No one on the road would know.

The Coolidge Bridge is just one of thousands of locations around the planet where citizens are crossing—willingly, more often than not—into a world of networked, highly computerized surveillance. According to a January report by J.P. Freeman, a security market-research firm in Newtown, CT, 26 million surveillance cameras have already been installed worldwide, and more than 11 million of them are in the United States. In heavily monitored London, England, Hull University criminologist Clive Norris has estimated, the average person is filmed by more than 300 cameras each day. The $150 million-a-year remote digital-surveillance-camera market will grow, according to Freeman, at an annual clip of 40 to 50 percent for the next 10 years. But astonishingly, other, non-video forms of monitoring will increase even faster. In a process that mirrors the unplanned growth of the Internet itself, thousands of personal, commercial, medical, police, and government databases and monitoring systems will intersect and entwine.
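The match-and-average scheme the two bridge cameras implement can be sketched in a few lines. This is an illustrative reconstruction, not the vendor's code: the function name, plate strings, and timestamps below are invented, and each camera's log is modeled as a simple plate-to-sighting-time mapping.

```python
from datetime import datetime, timedelta

def estimate_travel_time(upstream, downstream):
    """Estimate current travel time from two cameras' plate logs.

    upstream/downstream: dicts mapping license plate -> datetime of sighting.
    Returns the mean travel duration over all plates seen at both cameras,
    or None if no plates matched.
    """
    durations = [
        downstream[plate] - upstream[plate]
        for plate in upstream
        if plate in downstream and downstream[plate] > upstream[plate]
    ]
    if not durations:
        return None
    return sum(durations, timedelta()) / len(durations)

# Hypothetical sightings at the two ends of the congestion zone.
t0 = datetime(2003, 5, 1, 8, 0)
cam_a = {"ABC123": t0, "XYZ789": t0 + timedelta(minutes=1)}
cam_b = {"ABC123": t0 + timedelta(minutes=12),
         "XYZ789": t0 + timedelta(minutes=14)}

print(estimate_travel_time(cam_a, cam_b))  # mean of the 12- and 13-minute crossings
```

Note that only matched plates contribute, so plates misread by the character-recognition step at either end simply drop out of the average rather than corrupting it.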
Ultimately, surveillance will become so ubiquitous, networked, and searchable that unmonitored public space will effectively cease to exist. This prospect—what science fiction writer David Brin calls "the transparent society"—may sound too distant to be worth thinking about. But even the far-sighted Brin underestimated how quickly technological advances—more powerful microprocessors, faster network transmissions, larger hard drives, cheaper electronics, and more sophisticated and powerful software—would make universal surveillance possible. It's not all about Big Brother or Big Business, either. Widespread electronic scrutiny is usually denounced as a creation of political tyranny or corporate greed. But the rise of omnipresent surveillance will be driven as much by ordinary citizens' understandable—even laudatory—desires for security, control, and comfort as by the imperatives of business and government. "Nanny cams," global-positioning locators, police and home security networks, traffic jam monitors, medical-device radio-frequency tags, small-business webcams: the list of monitoring devices employed by and for average Americans is already long, and it will only become longer. Extensive surveillance, in short, is coming into being because people like and want it. "Almost all of the pieces for a surveillance society are already here," says Gene Spafford, director of Purdue University's Center for Education and Research in Information Assurance and Security. "It's just a matter of assembling them." Unfortunately, he says, ubiquitous surveillance faces intractable social and technological problems that could well reduce its usefulness or even make it dangerous. As a result, each type of monitoring may be beneficial in itself, at least for the people who put it in place, but the collective result could be calamitous. To begin with, surveillance data from multiple sources are being combined into large databases.
For example, businesses track employees' car, computer, and telephone use to evaluate their job performance; similarly, the U.S. Defense Department's experimental Total Information Awareness project has announced plans to sift through information about millions of people to find data that identify criminals and terrorists. But many of these merged pools of data are less reliable than small-scale, localized monitoring efforts; big databases are harder to comb for bad entries, and their conclusions are far more difficult to verify. In addition, the inescapable nature of surveillance can itself create alarm, even among its beneficiaries. "Your little camera network may seem like a good idea to you," Spafford says. "Living with everyone else's could be a nightmare." Today a company or agency with a $10 million hardware budget can buy processing power equivalent to 2,000 workstations, two petabytes of hard drive space (two million gigabytes, or 50,000 standard 40-gigabyte hard drives like those found on today's PCs), and a two-gigabit Internet connection (more than 2,000 times the capacity of a typical home broadband connection). If current trends continue, simple arithmetic predicts that in 20 years the same purchasing power will buy the processing capability of 10 million of today's workstations, 200 exabytes (200 billion gigabytes) of storage capacity, and 200 exabits (200 billion gigabits) of bandwidth. Another way of saying this is that by 2023 large organizations will be able to devote the equivalent of a contemporary PC to monitoring every single one of the 330 million people who will then be living in the United States. One of the first applications for this combination of surveillance and computational power, says Raghu Ramakrishnan, a database researcher at the University of Wisconsin-Madison, will be continuous intensive monitoring of buildings, offices, and stores: the spaces where middle-class people spend most of their lives.
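The "simple arithmetic" behind the 20-year extrapolation can be made explicit. Taking the article's stated unit prefixes at face value (2,000 workstations to 10 million; 2 petabytes to 200 exabytes; 2 gigabits to 200 exabits), the sketch below computes the per-resource doubling period each multiplier implies; the exact figures are the article's, not independently verified.

```python
import math

# The article's 20-year extrapolation, expressed as growth multipliers:
# processing 2,000 -> 10,000,000 workstation-equivalents (5,000x),
# storage 2 PB -> 200 EB (100,000x),
# bandwidth 2 Gb/s -> 200 Eb/s (100,000,000,000x, by the stated prefixes).
YEARS = 20
multipliers = {
    "processing": 10_000_000 / 2_000,
    "storage":    200e18 / 2e15,
    "bandwidth":  200e18 / 2e9,
}

for name, m in multipliers.items():
    doubling = YEARS / math.log2(m)  # years per doubling implied by multiplier m
    print(f"{name}: {m:,.0f}x over {YEARS} years -> doubles every {doubling:.2f} years")
```

The implied doubling periods (roughly 1.6 years for processing, 1.2 for storage, and about half a year for bandwidth) show the projection assumes growth at or faster than the historical Moore's-law pace for each resource.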
Surveillance in the workplace is common now: in 2001, according to the American Management Association survey, 77.7 percent of major U.S. corporations electronically monitored their employees, and that statistic had more than doubled since 1997. But much more is on the way. Companies like Johnson Controls and Siemens, Ramakrishnan says, are already "doing simplistic kinds of 'asset tracking,' as they call it." They use radio frequency identification tags to monitor the locations of people as well as inventory. In January, Gillette began attaching such tags to 500 million of its Mach 3 Turbo razors. Special "smart shelves" at Wal-Mart stores will record the removal of razors by shoppers, thereby alerting stock clerks whenever shelves need to be refilled—and effectively transforming Gillette customers into walking radio beacons. In the future, such tags will be used by hospitals to ensure that patients and staff maintain quarantines, by law offices to keep visitors from straying into rooms containing clients' confidential papers, and in kindergartens to track toddlers. By employing multiple, overlapping types of monitoring, Ramakrishnan says, managers will be able to "keep track of people, objects, and environmental levels throughout a whole complex." Initially, these networks will be installed for "such mundane things as trying to figure out when to replace the carpets or which areas of lawn get the most traffic so you need to spread some grass seed preventively." But as computers and monitoring equipment become cheaper and more powerful, managers will use surveillance data to construct complex, multidimensional records of how spaces are used. The models will be analyzed to improve efficiency and security—and they will be sold to other businesses or governments. Over time, the thousands of individual monitoring schemes inevitably will merge together and feed their data into large commercial and state-owned networks.
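The smart-shelf logic described above is, at its core, simple event processing. Here is a minimal sketch, assuming each RFID read arrives as a (shelf, delta) event; the shelf names, threshold, and event format are invented for illustration and are not Wal-Mart's actual system.

```python
def restock_alerts(stock, events, threshold):
    """Apply RFID removal/return events to per-shelf counts and return
    the shelves whose count falls below the restock threshold."""
    alerts = []
    for shelf, delta in events:  # delta: -1 for a removal, +1 for an item put back
        stock[shelf] = stock.get(shelf, 0) + delta
        if stock[shelf] < threshold and shelf not in alerts:
            alerts.append(shelf)
    return alerts

# Hypothetical tag reads from two shelves of tagged razors.
shelf_stock = {"razors-A": 4, "razors-B": 10}
tag_events = [("razors-A", -1), ("razors-A", -1), ("razors-B", -1), ("razors-A", -1)]
print(restock_alerts(shelf_stock, tag_events, threshold=2))  # ['razors-A']
```

The privacy concern in the article follows directly from this design: each event tuple is tied to a specific tagged item, so the same log that drives restocking also records which individual item left the shelf and when.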
When surveillance databases can describe or depict what every individual is doing at a particular time, Ramakrishnan says, they will be providing humankind with the digital equivalent of an ancient dream: being "present, in effect, almost anywhere and anytime." In 1974 Francis Ford Coppola wrote and directed The Conversation, which starred Gene Hackman as Harry Caul, a socially maladroit surveillance expert. In this remarkably prescient movie, a mysterious organization hires Caul to record a quiet discussion that will take place in the middle of a crowd in San Francisco's Union Square. Caul deploys three microphones: one in a bag carried by a confederate and two directional mikes installed on buildings overlooking the area. Afterward Caul discovers that each of the three recordings is plagued by background noise and distortions, but by combining the different sources, he is able to piece together the conversation. Or, rather, he thinks he has pieced it together. Later, to his horror, Caul learns that he misinterpreted a crucial line, a discovery that leads directly to the movie's chilling denouement. The Conversation illustrates a central dilemma for tomorrow's surveillance society. Although much of the explosive growth in monitoring is being driven by consumer demand, that growth has not yet been accompanied by solutions to the classic difficulties computer systems have in integrating disparate sources of information and arriving at valid conclusions. Data quality problems that cause little inconvenience on a local scale—when Wal-Mart's smart shelves misread a razor's radio frequency identification tag—have much larger consequences when organizations assemble big databases from many sources and attempt to draw conclusions about, say, someone's capacity for criminal action. Such problems, in the long run, will play a large role in determining both the technical and social impact of surveillance.
The experimental and controversial Total Information Awareness program of the Defense Advanced Research Projects Agency exemplifies these issues. By merging records from corporate, medical, retail, educational, travel, telephone, and even veterinary sources, as well as such “biometric” data as fingerprints, iris and retina scans, DNA tests, and facial-characteristic measurements, the program is intended to create an unprecedented repository of information about both U.S. citizens and foreigners with U.S. contacts. Program director John M. Poindexter has explained that analysts will use custom data-mining techniques to sift through the mass of information, attempting to “detect, classify, and identify foreign terrorists” in order to “preempt and defeat terrorist acts”—a virtual Eye of Sauron, in critics’ view, constructed from telephone bills and shopping preference cards. In February Congress required the Pentagon to obtain its specific approval before implementing Total Information Awareness in the United States (though certain actions are allowed on foreign soil). But President George W. Bush had already announced that he was creating an apparently similar effort, the Terrorist Threat Integration Center, to be led by the Central Intelligence Agency. Regardless of the fate of these two programs, other equally sweeping attempts to pool monitoring data are proceeding apace. Among these initiatives is Regulatory DataCorp, a for-profit consortium of 19 top financial institutions worldwide. The consortium, which was formed last July, combines members’ customer data in an effort to combat “money laundering, fraud, terrorist financing, organized crime, and corruption.” By constantly poring through more than 20,000 sources of public information about potential wrongdoings—from newspaper articles and Interpol warrants to disciplinary actions by the U.S.
Securities and Exchange Commission—the consortium’s Global Regulatory Information Database will, according to its owner, help clients “know their customers.” Equally important in the long run are the databases that will be created by the nearly spontaneous aggregation of scores or hundreds of smaller databases. “What seem to be small-scale, discrete systems end up being combined into large databases,” says Marc Rotenberg, executive director of the Electronic Privacy Information Center, a nonprofit research organization in Washington, DC. He points to the recent, voluntary efforts of merchants in Washington’s affluent Georgetown district. They are integrating their in-store closed-circuit television networks and making the combined results available to city police. In Rotenberg’s view, the collection and consolidation of individual surveillance networks into big government and industry programs “is a strange mix of public and private, and it’s not something that the legal system has encountered much before.” Managing the sheer size of these aggregate surveillance databases, surprisingly, will not pose insurmountable technical difficulties. Most personal data are either very compact or easily compressible. Financial, medical, and shopping records can be represented as strings of text that are easily stored and transmitted; as a general rule, the records do not grow substantially over time. Even biometric records are no strain on computing systems. To identify people, genetic-testing firms typically need stretches of DNA that can be represented in just one kilobyte—the size of a short e-mail message. Fingerprints, iris scans, and other types of biometric data consume little more. Other forms of data can be preprocessed in much the way that the cameras on Route 9 transform multi-megabyte images of cars into short strings of text with license plate numbers and times.
(For investigators, having a video of suspects driving down a road usually is not as important as simply knowing that they were there at a given time.) To create a digital dossier for every individual in the United States—as programs like Total Information Awareness would require—only “a couple terabytes of well-defined information” would be needed, says Jeffrey Ullman, a former Stanford University database researcher. “I don’t think that’s really stressing the capacity of [even today’s] databases.” Instead, argues Rajeev Motwani, another member of Stanford’s database group, the real challenge for large surveillance databases will be the seemingly simple task of gathering valid data. Computer scientists use the term GIGO—garbage in, garbage out—to describe situations in which erroneous input creates erroneous output. Whether people are building bombs or buying bagels, governments and corporations try to predict their behavior by integrating data from sources as disparate as electronic toll-collection sensors, library records, restaurant credit-card receipts, and grocery store customer cards—to say nothing of the Internet, surely the world’s largest repository of personal information. Unfortunately, all these sources are full of errors, as are financial and medical records. Names are misspelled and digits transposed; address and e-mail records become outdated when people move and switch Internet service providers; and formatting differences among databases cause information loss and distortion when they are merged. “It is routine to find in large customer databases defective records—records with at least one major error or omission—at rates of at least 20 to 35 percent,” says Larry English of Information Impact, a database consulting company in Brentwood, TN. Unfortunately, says Motwani, “data cleaning is a major open problem in the research community.
We are still struggling to get a formal technical definition of the problem.” Even when the original data are correct, he argues, merging them can introduce errors where none had existed before. Worse, none of these worries about the garbage going into the system even begin to address the still larger problems with the garbage going out. Almost every computer-science student takes a course in algorithms. Algorithms are sets of specified, repeatable rules or procedures for accomplishing tasks such as sorting numbers; they are, so to speak, the engines that make programs run. Unfortunately, innovations in algorithms are not subject to Moore’s law, and progress in the field is notoriously sporadic. “There are certain areas in algorithms we basically can’t do better and others where creative work will have to be done,” Ullman says. Sifting through large surveillance databases for information, he says, will essentially be “a problem in research in algorithms. We need to exploit some of the stuff that’s been done in the data-mining community recently and do it much, much better.” Working with databases requires users to have two mental models. One is a model of the data. Teasing out answers to questions from the popular search engine Google, for example, is easier if users grasp the varieties and types of data on the Internet—Web pages with words and pictures, whole documents in a multiplicity of formats, downloadable software and media files—and how they are stored. In exactly the same way, extracting information from surveillance databases will depend on a user’s knowledge of the system. “It’s a chess game,” Ullman says. “An unusually smart analyst will get things that a not-so-smart one will not.” Second, and more important according to Spafford, effective use of big surveillance databases will depend on having a model of what one is looking for.
This factor is especially crucial, he says, when trying to predict the future, a goal of many commercial and government projects. For this reason, what might be called reactive searches that scan recorded data for specific patterns are generally much more likely to obtain useful answers than proactive searches that seek to get ahead of things. If, for instance, police in the Washington sniper investigation had been able to tap into a pervasive network of surveillance cameras, they could have tracked people seen near the crime scenes until they could be stopped and questioned: a reactive process. But it is unlikely that police would have been helped by proactively asking surveillance databases for the names of people in the Washington area with the requisite characteristics (family difficulties, perhaps, or military training and a recent penchant for drinking) to become snipers. In many cases, invalid answers are harmless. If Victoria’s Secret mistakenly mails 1 percent of its spring catalogs to people with no interest in lingerie, the price paid by all parties is small. But if a national terrorist-tracking system has the same 1 percent error rate, it will produce millions of false alarms, wasting huge amounts of investigators’ time and, worse, labeling many innocent U.S. citizens as suspects. “A 99 percent hit rate is great for advertising,” Spafford says, “but terrible for spotting terrorism.” Because no system can have a success rate of 100 percent, analysts can try to decrease the likelihood that surveillance databases will identify blameless people as possible terrorists. By making the criteria for flagging suspects more stringent, officials can raise the bar, and fewer ordinary citizens will be wrongly fingered. Inevitably, however, that will also mean that the “borderline” terrorists—those who don’t match all the search criteria but still have lethal intentions—might be overlooked as well. For both types of error, the potential consequences are alarming.
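The base-rate argument above can be checked with a back-of-envelope calculation. The population figure and the number of true terrorists below are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope check of the false-alarm argument. The population
# figure (rough U.S. population circa 2003) and the number of true
# terrorists are illustrative assumptions.
population = 280_000_000
false_positive_rate = 0.01      # the "1 percent error rate" in the text
true_terrorists = 1_000         # assumed, for illustration

false_alarms = (population - true_terrorists) * false_positive_rate
# Even if every real terrorist is flagged, almost every flag is wrong:
precision = true_terrorists / (true_terrorists + false_alarms)
print(round(false_alarms), f"{precision:.2%}")
```

At a 1 percent error rate, millions of innocent people are flagged, and only a tiny fraction of flags point at an actual threat, which is exactly Spafford's point about hit rates that are fine for advertising but ruinous for counterterrorism.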
Yet none of these concerns will stop the growth of surveillance, says Ben Shneiderman, a computer scientist at the University of Maryland. Its potential benefits are simply too large. An example is what Shneiderman, in his recent book Leonardo’s Laptop: Human Needs and the New Computing Technologies, calls the World Wide Med: a global, unified database that makes every patient’s complete medical history instantly available to doctors through the Internet, replacing today’s scattered sheaves of paper records. “The idea,” he says, “is that if you’re brought to an ER anywhere in the world, your medical records pop up in 30 seconds.” Similar programs are already coming into existence. Backed by the Centers for Disease Control and Prevention, a team based at Harvard Medical School is planning to monitor the records of 20 million walk-in hospital patients throughout the United States for clusters of symptoms associated with bioterror agents. Given the huge number of lost or confused medical records, the benefits of such plans are clear. But because doctors would be continually adding information to medical histories, the system would be monitoring patients’ most intimate personal data. The network, therefore, threatens to violate patient confidentiality on a global scale. In Shneiderman’s view, such trade-offs are inherent to surveillance. The collective by-product of thousands of unexceptionable, even praiseworthy efforts to gather data could be something nobody wants: the demise of privacy. “These networks are growing much faster than people realize,” he says. “We need to pay attention to what we’re doing right now.” In The Conversation, surveillance expert Harry Caul is forced to confront the trade-offs of his profession directly. The conversation in Union Square provides information that he uses to try to stop a murder. Unfortunately, his faulty interpretation of its meaning prevents him from averting tragedy.
Worse still, we see in scene after scene that even the expert snoop is unable to avoid being monitored and recorded. At the movie’s intense, almost wordless climax, Caul rips his home apart in a futile effort to find the electronic bugs that are hounding him. The Conversation foreshadowed a view now taken by many experts: surveillance cannot be stopped. There is no possibility of “opting out.” The question instead is how to use technology, policy, and shared societal values to guide the spread of surveillance—by the government, by corporations, and perhaps most of all by our own unwitting and enthusiastic participation—while limiting its downside.
Up to three dozen journalists and activists remain outside the building of the Donetsk court of the Rostov region of the Russian Federation as they are denied access, a Censor.NET correspondent reports from the scene. The anti-riot policemen, who refused to identify themselves, told the Censor.NET reporter that no more people would be let into the building. "The courtrooms are overcrowded," a masked man said. The picture shows one of the two courtrooms provided for the media; there are plenty of empty seats there. One is free to leave the building, but the policemen warn that they may not let one in again.
Abstracts by this author at Goldschmidt2013 Abstracts submitted to previous conferences (2015) Keynote: Why do Intermediate Magmas Stall? Jagoutz O & Caricchi L (2011) Keynote: Magma Emplacement Durations and Rates and the Dynamics of Magmatism and Volcanism Annen C, Blundy J, Caricchi L, Menand T, de Saint-Blanquat M, Schöpa A & Sparks S (2011) The Origin of Carbonate Globules in Silicate Melts: Solids or Liquids? McMahon S, Bailey K, Walter M & Caricchi L
Serotech Laboratories Ltd. - DNA Testing and Analysis. Who's the dad? Don't just guess, know. About DNA Testing: What is DNA and what does Paternity Testing look at? DNA, the genetic blueprint, is composed of a string of 3 billion individual units called nucleotides, of which there are four kinds. The exact order of these units, called the sequence, is unique to all individuals except identical twins. DNA is subdivided into 23 paired structures called chromosomes. One of each pair of the child's chromosomes is derived from the mother and one from the father. DNA is the same from cell to cell in any given person and is passed along from parent to child according to the principles of Mendelian genetics, whereby one of each pair of chromosomes passes from parent to child in an independent fashion. In the laboratory, sixteen different paired regions of DNA are individually examined from the mother, child, and presumed father. These regions have been carefully chosen because they show great variability in size (number of units) from person to person. Thus the sequence inherited by a child from its mother usually differs from that from its father in each of these variable regions. Testing delineates which half of the child's DNA derives from the mother; the other half is examined to see if it matches that of the putative father. How are Paternity Test results reported? Exclusion from paternity is indicated when the child's DNA does not match that of the presumed father. In this case the probability of paternity is given as 0%. If the analysis does not exclude the presumed father, it does not necessarily indicate that he is the genetic father. His genetic makeup may be identical to that of another man in the general population, who may be the genetic father. In this case, Serotech computes the statistical probability that the accused is the true genetic father of the child.
This statement of probability is based on the frequencies of the regional chromosomal sequences studied in a large, randomly selected population of a given race. The laboratory calculates how many other men in the population possess the combination of regional chromosomal sequences of the specified DNA marker regions paternally inherited by the child, and expresses the result as a percentage indicating the probability of paternity of the accused man compared to that of a group of randomly chosen men in the population. The statistical probability of paternity for a man not excluded from being the biological father of the child is typically greater than 99%.
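The calculation described above is the standard combined paternity index (CPI): each tested locus yields a paternity index comparing the tested man against a random man in the population, the per-locus indices are multiplied together, and the product is converted to a probability (conventionally assuming a neutral 50% prior). The sketch below illustrates the arithmetic; the per-locus values are invented for the example and are not Serotech data.

```python
from math import prod

# Sketch of the combined-paternity-index calculation. Each locus yields a
# paternity index PI = X/Y, where X is the chance the tested man transmits
# the child's paternal allele and Y is the chance a random man does. The
# per-locus values below are invented for illustration.
locus_pis = [4.2, 1.8, 3.5, 2.1, 5.0, 1.6, 2.9, 3.3]

cpi = prod(locus_pis)           # combined paternity index
probability = cpi / (cpi + 1)   # assumes a neutral 50% prior
print(f"Probability of paternity: {probability:.4%}")
```

With realistic per-locus indices, eight or sixteen loci readily push the combined probability past the 99% figure quoted in the text; a single mismatching locus, by contrast, drives the index toward zero and supports exclusion.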
- Antlers are extensions of the skull grown by members of the deer family. They are true bone ... from ant-, meaning before, oeil, meaning eye and -ier, a suffix indicating an action or state of being...
- So why do deer even have antlers? : Jake's Bones (May 8, 2014) ... Finding out why deer have antlers is more difficult than it seems, and you have to look at how each species of deer uses antlers in different ...
- Why Do Deer Shed Their Antlers? - Grand View Outdoors: Editor's note: To clarify this article, we have added a paragraph at the end to explain the biological cause of shedding antlers. Deer (and other ungulates, like ...
- Difference Between Antlers and Horns - Yellowstone National Park: Antlers, on members of the deer family, are grown as an extension of the animal's skull. They are true bone and are a single structure. They are generally found ...
- Deer antlers are made of true bone that is fed by the covering of velvet. Deer need both protein and minerals to grow their antlers. ... The other purpose is to use them as a grappling tool to engage in a test of status and breeding ...
- White Souse: Why do giraffes have horns? (May 30, 2007) ... Thus, the "horns" of the giraffe may present a strong example of a structure which no longer serves a purpose, because the antlers these horns ...
- Antlers - MountainNature.com: While antler size has no bearing on the age of the individual, it IS a great indicator of the health of the animal. Antlers are renewed each year, meaning the stags ...
- G9486 Antler Development in White-tailed Deer - MU Extension: Several theories attempt to explain the evolutionary purpose of antlers among some male members of the deer family. Four of ...
- (Feb 1, 2013) Experts weigh in on deer antler velvet, the substance at the center of a new sports controversy involving Ravens linebacker Ray Lewis.
- Moose Antlers • How They Grow and What They Tell You: A Moose Antler's development is relatively similar among Bull Moose of similar age ... Antlers do not serve a useful purpose until the fall and during the mating ...
- Mississippi State University Deer Lab - Antlers: Although antlers would appear to be used for combat against predators, it's unlikely deer antlers evolved for this purpose. If antlers are a defense against ...
- About Deer Antlers - W. Matt Knox, Deer Project Coordinator: Antlers are found on all members of the deer family (Cervidae) in North America including deer, elk, caribou, and moose.
- ADW: Horns and Antlers
The Hollywood toy story Films based on children's toys are proving popular in Tinseltown, but are the commercial gains super Earlier this week Warner Brothers announced that it is set to bring a film adaptation of Lego to the big screen in 2014. The Lego film will, excuse the pun, build on Hollywood's proven track record of making films based on children's toys. Michael Bay's Transformers trilogy has grossed a staggering $2.6 billion since the first film was released in 2007, while toy manufacturer Hasbro and the Spyglass Entertainment studio will be hoping that next year's follow-up to the 2009 G.I. Joe film can match the $300 million box office takings of the first. Though it would do both the film-makers and marketeers a disservice to assume that making money from films based on toys is child's play, Hollywood is certainly enthusiastically tapping a fruitful resource. Next year will see the release of perhaps the strangest of these toy adaptations to date with Battleship. Liam Neeson may have "acquired a certain set of skills" throughout his acting career, but it is questionable quite how many of them he will need to draw on when he stars alongside Rihanna in Peter Berg's interpretation of a game many of us associate with long car journeys. With a budget of $250 million, headline writers may already be perfecting their variations on a box office sinking pun, but Hollywood's major studios seem to think they are onto a winner with the boardgames on the big screen formula. So much so, in fact, that a strategic partnership between Hasbro and Universal has purportedly put film versions of Risk, Candy Land and Monopoly in the pipeline. Indeed the latter has even managed to get Ridley Scott on board as director. Dorothy Parker is thought to have once said "the only 'ism' Hollywood believes in is plagiarism".
It is certainly true that Hollywood has a voracious appetite for adapting certain genres to cinema, and it is also true that over time the source of Hollywood's inspirations regularly changes. Books (Lord of the Rings, The Godfather), plays (Driving Miss Daisy, Romeo and Juliet), TV programmes (Star Trek, Naked Gun), comics (Batman, Superman, Spider-Man), video games (Tomb Raider, Resident Evil), even theme park rides (Pirates of the Caribbean) have all at one time or another been the stimulus du jour, and now, it seems, it's children's toys and boardgames. But isn't this latest development slightly different? Isn't Hollywood now fishing for ideas in such shallow waters, not because of their artistic merit, but because of their potential for commercial gains? Professor Thomas Leitch, Director of Film Studies at the University of Delaware and author of Film Adaptation and its Discontents, believes this was always the case. "I'd question the assumption that Hollywood used to be abrim with creative energy but has lately run dry, since it seems to me that Hollywood has always quite deliberately chosen to be in the business of manufacturing reliably reproducible mass entertainment, an enterprise in which originality is neither sought nor welcomed except insofar as original concepts can be readily replicated." If Hollywood's methods haven't changed, what of its purpose? The overt messages in films like G.I. Joe or Transformers seem more mass marketing than mass entertainment. What was once an ancillary function, even a necessary evil to fund a project - the merchandising - now seems to be the sole intention of some films. This Leitch concedes to be true in some cases, but notes that it is not as recent a phenomenon as we might suspect. "I think the pivotal figure here is Walt Disney and the crucial period the mid-1950s, when Disney was launching both his television program and Disneyland, the first of his theme parks.
Each of these endeavours was clearly designed to promote the others, and to showcase both Disney's forthcoming projects and his impressive back list as well." In 1995, another American professor, Janet Wasko, wrote: "It is not inconceivable that in the future...manufacturers and joint promoters will demand more knowledge of the film and may even try to influence the production in order to maximise the benefits accruing to them." Writing at a time when films sold commemorative toys and weren't based on them, Wasko's comments seem almost innocent now. Although avarice has probably always trumped art in mainstream cinema, it has never done so in a more apparent way than now, leaving the marketing tail well and truly wagging the Hollywood dog.
The Wood Wide Web: the world of trees underneath the surface Mycorrhizal networks, better known as the Wood Wide Web, have allowed scientists to understand the social networks formed by trees underground. In 1854, Henry David Thoreau published Walden, an extensive rumination on his two years, two months and two days spent in a cabin in the woodlands near Walden Pond. It was situated on a plot of land owned by his friend, mentor and noted transcendentalist Ralph Waldo Emerson. Thoreau’s escape from the city was a self-imposed experiment - one which sought to find peace and harmony through a minimalistic, simple way of living amongst nature. Voicing his reasons for embarking on the rural getaway, Thoreau said, “I went to the woods because I wished to live deliberately, to front only the essential facts of life.” Walden cemented Thoreau’s reputation as a key figure in naturalism; his reflections have since been studied, his practices meticulously replicated.
But in the knowledge that Thoreau’s excursion into the woods was a means to better understand how to integrate into society, curious minds are left to wonder what essays and aphorisms Thoreau would have produced had he known what the botanists of today know of nature’s very own societal networks. As scientists have now discovered, what lies beneath the ground Thoreau walked upon, and indeed beneath the ground anyone walks upon when near trees, is perhaps nature's most storied example of a collaborative society: something which is now known as the mycorrhizal network or the “Wood Wide Web”. Coined by the journal Nature, the term Wood Wide Web has come to describe the complex mass of interactions between trees and their microbial counterparts underneath the soil. Spend enough time among trees and you may get a sense that they have been around for centuries, standing tall and sturdy, self-sufficient and independent. But anchoring trees and forestry everywhere, and thereby enjoining them into an almost singular superorganism, is a very intimate relationship between their roots and microbes called mycorrhizal fungi. Understanding the relationship between the roots of trees and mycorrhizal fungi has completely shifted the way we think about the world underneath them. Once thought to be harmful, mycorrhizal fungi are now known to have a bond of mutualism with the roots – a symbiotic connection from which both parties benefit. Despite the discovery being a recent one, the link between the two goes as far back as 450 million years. A pinch of soil can hold up to seven miles worth of coiled, tubular, thread-like fungi. The fungi release tubes called hyphae which infiltrate the soil and roots in a non-invasive way, creating a tie between tree and fungus at a cellular level. It is this bond which is called mycorrhiza.
As a result, plants 20 metres away from each other can be connected in the same way as plants 200 metres away; a hyphal network forms which brings the organisms into connection. At the heart of the mutualistic relationship is an exchange; the fungi have minerals which the tree needs, and the trees have carbon (which is essentially food) which the fungi need. The trees receive nitrogen for things such as lignin – a component which keeps the trees upright – and various other minerals such as phosphorus, magnesium, calcium, copper and more. In return, fungi get the sugars they need from the trees’ ongoing photosynthesis to energise their activities and build their bodies. The connection runs so deep that 20-80% of a tree’s sugar can be transferred to the fungi, while the transfer of nitrogen to trees is such that without the swap, trees would be toy-sized. It’s a bond that has resulted in some remarkable phenomena. Suzanne Simard, an ecologist at the University of British Columbia, has researched these back and forth exchanges and has found that rather than competing against one another as often assumed, there is a sort of teamwork between the trees facilitated by the mycorrhizal fungi. In one particular example, Simard looked at a Douglas fir tree planted next to a birch tree. Upon taking the birch tree out, there was a completely unexpected result: the fir tree – instead of prospering from the reduced competition for sunlight – began to decay and die. The trees were connected underground via the mycorrhizal system, transferring carbon, nitrogen and water to one another, communicating underground, talking to each other. As Simard says in her TED talk, “it might remind you of a sort of intelligence.” It has been documented that trees share food not just with trees of the same species, but with trees of all kinds of species, forming a social network which some have come to describe as a socialist system.
Growth rates are positively affected while seedlings face greater chances of survival. There is in fact a group of plants – the mycoheterotrophic plants of which there are around 400 species – which wouldn’t survive without the mycorrhizal network. These plants are unable to photosynthesise and are therefore heavily dependent on other plants for carbon and minerals. Over the years, Thoreau has had his fair share of critics who deemed his trip to the woods nothing more than an exercise in self-indulgence and narcissism. Perhaps if Thoreau had the chance to head back to Walden Pond with the knowledge of the Wood Wide Web at hand, he would fully understand that no one man is an island, as no one tree is a forest.
Many organizations use Secure Shell for privileged access to systems and data across the enterprise, yet few organizations have ever examined their deployments of Secure Shell for data breach risk and compliance exposures. Secure Shell HealthCheck is a security assessment service that delivers a detailed analysis of how Secure Shell is deployed and used in your network.
Adolescent subthreshold-depression and anxiety: psychopathology, functional impairment and increased suicide risk • Conflicts of interest statement: No conflicts declared. Background: Subthreshold-depression and anxiety have been associated with significant impairments in adults. This study investigates the characteristics of adolescent subthreshold-depression and anxiety with a focus on suicidality, using both categorical and dimensional diagnostic models. Methods: Data were drawn from the Saving and Empowering Young Lives in Europe (SEYLE) study, comprising 12,395 adolescents from 11 countries. Self-report instruments, including the Beck Depression Inventory-II (BDI-II), the Zung Self-Rating Anxiety Scale (SAS), the Strengths and Difficulties Questionnaire (SDQ) and the Paykel Suicide Scale (PSS), were administered to students. Based on the BDI-II, adolescents were divided into three groups: nondepressed, subthreshold-depressed and depressed; based on the SAS, they were divided into nonanxiety, subthreshold-anxiety and anxiety groups. Analyses of covariance were conducted on SDQ scores to explore the psychopathology of the defined groups. Logistic regression analyses were conducted to explore the relationships between functional impairments, suicidality and subthreshold and full syndromes. Results: Thirty-two percent of the adolescents were subthreshold-anxious and 5.8% anxious, 29.2% subthreshold-depressed and 10.5% depressed, with high comorbidity. Mean SDQ scores of the subthreshold-depressed/anxious groups were significantly higher than those of the nondepressed/nonanxious groups and significantly lower than those of the depressed/anxious groups. Both subthreshold and threshold anxiety and depression were related to functional impairment and suicidality. Conclusions: Subthreshold-depression and subthreshold-anxiety are associated with an increased burden of disease and suicide risk.
These results highlight the importance of early identification of adolescent subthreshold-depression and anxiety to minimize suicide. Incorporating these subthreshold disorders into a diagnosis could provide a bridge between categorical and dimensional diagnostic models.
1. Modification of Akt by SUMO conjugation regulates alternative splicing and cell cycle. Cell Cycle 2013;12(19):3165-3174. Akt/PKB is a key signaling molecule in higher eukaryotes and a crucial protein kinase in human health and disease. Phosphorylation, acetylation, and ubiquitylation have been reported as important regulatory post-translational modifications of this kinase. We describe here that Akt is modified by SUMO conjugation, and show that lysine residues 276 and 301 are the major SUMO attachment sites within this protein. We found that phosphorylation and SUMOylation of Akt appear as independent events. However, decreasing Akt SUMOylation levels severely affects the role of this kinase as a regulator of fibronectin and Bcl-x alternative splicing. Moreover, we observed that the Akt mutant (Akt E17K) found in several human tumors displays increased levels of SUMOylation and also an enhanced capacity to regulate fibronectin splicing patterns. This splicing regulatory activity is completely abolished by decreasing Akt E17K SUMO conjugation levels. Additionally, we found that SUMOylation controls Akt regulatory function at G₁/S transition during cell cycle progression. These findings reveal SUMO conjugation as a novel level of regulation for Akt activity, opening new areas of exploration related to the molecular mechanisms involved in the diverse cellular functions of this kinase. PMCID: PMC3865012 PMID: 24013425 signal transduction; post-translational modification; SUMO; Akt/PKB; alternative splicing; cell cycle 2. Messages Do Diffuse Faster than Messengers: Reconciling Disparate Estimates of the Morphogen Bicoid Diffusion Coefficient. PLoS Computational Biology 2014;10(6):e1003629. The gradient of Bicoid (Bcd) is key for the establishment of the anterior-posterior axis in Drosophila embryos.
The gradient properties are compatible with the SDD model in which Bcd is synthesized at the anterior pole and then diffuses into the embryo and is degraded with a characteristic time. Within this model, the Bcd diffusion coefficient is critical to set the timescale of gradient formation. This coefficient has been measured using two optical techniques, Fluorescence Recovery After Photobleaching (FRAP) and Fluorescence Correlation Spectroscopy (FCS), obtaining estimates in which the FCS value is an order of magnitude larger than the FRAP one. This discrepancy raises the following questions: which estimate is "correct"; what is the reason for the disparity; and can the SDD model explain Bcd gradient formation within the experimentally observed times? In this paper, we use a simple biophysical model in which Bcd diffuses and interacts with binding sites to show that both the FRAP and the FCS estimates may be correct and compatible with the observed timescale of gradient formation. The discrepancy arises from the fact that FCS and FRAP report on different effective (concentration dependent) diffusion coefficients, one of which describes the spreading rate of the individual Bcd molecules (the messengers) and the other one that of their concentration (the message). The latter is the one that is more relevant for the gradient establishment and is compatible with its formation within the experimentally observed times. Author Summary Understanding the mechanisms by which equivalent cells develop into different body parts is a fundamental question in biology. One well-studied example is the patterning along the anterior-posterior axis of Drosophila melanogaster embryos for which the spatial gradient of the protein Bicoid is determinant. The localized production of Bicoid is implicated in its inhomogeneous distribution. Diffusion then determines the time and spatial scales of the gradient as it is formed.
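The synthesis-diffusion-degradation (SDD) picture described above has a simple closed form at steady state: the gradient decays as C(x) = C0·exp(−x/λ), with length scale λ = √(D·τ). A minimal sketch follows; the diffusion coefficients and degradation time are illustrative assumptions for this note, not values taken from the paper:

```python
import math

def gradient_length(D_um2_per_s: float, tau_s: float) -> float:
    """Characteristic decay length (µm) of the steady-state SDD gradient.

    At steady state the SDD model gives C(x) = C0 * exp(-x / lam),
    with lam = sqrt(D * tau): D is the morphogen diffusion coefficient
    and tau its degradation time.
    """
    return math.sqrt(D_um2_per_s * tau_s)

def concentration(x_um: float, C0: float, lam_um: float) -> float:
    """Steady-state concentration a distance x (µm) from the anterior source."""
    return C0 * math.exp(-x_um / lam_um)

# Illustrative (assumed) numbers only: a "slow" FRAP-like estimate, a
# tenfold-larger "fast" FCS-like estimate, and a 50-minute degradation time.
tau = 50 * 60                          # degradation time, s
lam_slow = gradient_length(0.3, tau)   # D = 0.3 µm²/s -> lam = 30 µm
lam_fast = gradient_length(3.0, tau)   # D = 3.0 µm²/s -> lam ≈ 95 µm

# An order-of-magnitude disagreement in D shifts lam only by sqrt(10) ≈ 3.2x,
# since lam grows as the square root of D.
print(f"lambda (slow D): {lam_slow:.0f} µm")
print(f"lambda (fast D): {lam_fast:.0f} µm")
print(f"ratio: {lam_fast / lam_slow:.2f}")
```

The square-root dependence is why the order-of-magnitude FRAP/FCS disagreement matters less for the gradient's spatial extent than one might expect, while still mattering greatly for the time needed to establish it.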
Estimates of Bicoid diffusion coefficients made with the optical techniques FRAP and FCS resulted in largely different values, one of which was too slow to account for the observed time of gradient formation. In this paper, we present a model in which Bicoid diffuses and interacts with binding sites so that its transport is described by a "single molecule" and a "collective" diffusion coefficient. The latter can be arbitrarily larger than the former coefficient and sets the rate for bulk processes such as the formation of the gradient. In this way we obtain a self-consistent picture in which the FRAP and FCS estimates are accurate and where the gradient can be established within the experimentally observed times. PMCID: PMC4046929  PMID: 24901638 3.  Pheromone-Induced Morphogenesis Improves Osmoadaptation Capacity by Activating the HOG MAPK Pathway  Science signaling  2013;6(272):ra26. Environmental and internal conditions expose cells to a multiplicity of stimuli whose consequences are difficult to predict. Here, we investigate the response to mating pheromone of yeast cells adapted to high osmolarity. Events downstream of pheromone binding involve two mitogen-activated protein kinase (MAPK) cascades: the pheromone response (PR) and the cell-wall integrity response (CWI). Although these MAPK pathways share components with each other and with a third MAPK pathway, the high osmolarity response (HOG), they are normally only activated by distinct stimuli, a phenomenon called insulation. We found that in cells adapted to high osmolarity, PR activated the HOG pathway in a pheromone- and osmolarity-dependent manner. Activation of HOG by the PR was not due to loss of insulation, but rather a response to a reduction in internal osmolarity, which resulted from an increase in glycerol release caused by the PR. By analyzing single-cell time courses, we found that stimulation of HOG occurred in discrete bursts that coincided with the “shmooing” morphogenetic process.
Activation required the polarisome, the cell wall integrity MAPK Slt2, and the aquaglyceroporin Fps1. HOG activation resulted in high glycerol turnover that improved adaptability to rapid changes in osmolarity. Our work shows how a differentiation signal can recruit a second, unrelated sensory pathway to enable yeast to respond to multiple stimuli. PMCID: PMC3701258  PMID: 23612707 4.  Using Cell-ID 1.4 with R for Microscope-Based Cytometry  This unit describes a method for quantifying various cellular features (e.g., volume, total and subcellular fluorescence localization) from sets of microscope images of individual cells. It includes procedures for tracking cells over time. One purposefully defocused transmission image (sometimes referred to as bright-field or BF) is acquired to segment the image and locate each cell. Fluorescent images (one for each of the color channels to be analyzed) are then acquired by conventional wide-field epifluorescence or confocal microscopy. This method uses the image processing capabilities of Cell-ID (Gordon et al., 2007, as updated here) and data analysis by the statistical programming framework R (R-Development-Team, 2008), which we have supplemented with a package of routines for analyzing Cell-ID output. Both Cell-ID and the analysis package are open-source. PMCID: PMC3485637  PMID: 23026908 image processing; fluorescence microscopy; Cell-ID; R 5.  Modulation of the Akt Pathway Reveals a Novel Link with PERK/eIF2α, which Is Relevant during Hypoxia  PLoS ONE  2013;8(7):e69668. The unfolded protein response (UPR) and the Akt signaling pathway share several regulatory functions and have the capacity to determine cell outcome under specific conditions. However, both pathways have largely been studied independently. Here, we asked whether the Akt pathway regulates the UPR. To this end, we used a series of chemical compounds that modulate the PI3K/Akt pathway and monitored the activity of the three UPR branches: PERK, IRE1 and ATF6.
The antiproliferative and antiviral drug Akt-IV strongly and persistently activated all three branches of the UPR. We present evidence that activation of PERK/eIF2α requires Akt and that PERK is a direct Akt target. Chemical activation of this novel Akt/PERK pathway by Akt-IV leads to cell death, which was largely dependent on the presence of PERK and IRE1. Finally, we show that hypoxia-induced activation of eIF2α requires Akt, providing a physiologically relevant condition for the interaction between Akt and the PERK branch of the UPR. These data suggest the UPR and the Akt pathway signal to one another as a means of controlling cell fate. PMCID: PMC3726764  PMID: 23922774 6.  Modelling reveals novel roles of two parallel signalling pathways and homeostatic feedbacks in yeast  Ensemble modelling is used to study the yeast high osmolarity glycerol (HOG) pathway, a prototype for eukaryotic mitogen-activated kinase signalling systems. The best fit model provides new insights into the function of this system, some of which are then experimentally validated. The main mechanism for osmo-adaptation is a fast and transient non-transcriptional Hog1-mediated activation of glycerol production. The transcriptional response rather serves to maintain an increased steady-state glycerol production with low steady-state Hog1 activity after adaptation. A fast negative feedback of activated Hog1 on the upstream signalling branches serves to stabilise the adaptation response by preventing oscillatory behaviour. Two parallel redundant signalling branches elicit a more robust and swifter adaptation than a single branch alone, at least for low osmotic shock. This notion could be corroborated by dedicated measurements of single-cell volume recovery for the wild-type and single-branch mutants. The high osmolarity glycerol (HOG) pathway in yeast serves as a prototype signalling system for eukaryotes.
We used an unprecedented amount of data to parameterise 192 models capturing different hypotheses about molecular mechanisms underlying osmo-adaptation and selected a best approximating model. This model implied novel mechanisms regulating osmo-adaptation in yeast. The model suggested that (i) the main mechanism for osmo-adaptation is a fast and transient non-transcriptional Hog1-mediated activation of glycerol production, (ii) the transcriptional response serves to maintain an increased steady-state glycerol production with low steady-state Hog1 activity, and (iii) fast negative feedbacks of activated Hog1 on upstream signalling branches serve to stabilise the adaptation response. The best approximating model also indicated that homoeostatic adaptive systems with two parallel redundant signalling branches show a more robust and faster response than single-branch systems. We corroborated this notion to a large extent by dedicated measurements of volume recovery in single cells. Our study also demonstrates that systematically testing a model ensemble against data has the potential to achieve a better and unbiased understanding of molecular mechanisms. PMCID: PMC3531907  PMID: 23149687 adaptation; ensemble modeling; Hopf bifurcation; model discrimination; osmotic stress 7.  Phosphoproteomic Analysis Reveals Interconnected System-Wide Responses to Perturbations of Kinases and Phosphatases in Yeast  Science signaling  2010;3(153):rs4. PMCID: PMC3072779  PMID: 21177495 8.  The Alpha Project, a model system for systems biology research  IET systems biology  2008;2(5):222-233. One goal of systems biology is to understand how genome-encoded parts interact to produce quantitative phenotypes. The Alpha Project is a medium-scale, interdisciplinary systems biology effort that aims to achieve this goal by understanding fundamental quantitative behaviors of a prototypic signal transduction pathway, the yeast pheromone response system from Saccharomyces cerevisiae.
The Alpha Project distinguishes itself from many other systems biology projects by studying a tightly-bounded and well-characterized system that is easily modified by genetic means, and by focusing on deep understanding of a discrete number of important and accessible quantitative behaviors. During the project, we have developed tools to measure the appropriate data and develop models at appropriate levels of detail for studying a number of these quantitative behaviors. We also have developed transportable experimental tools and conceptual frameworks for understanding other signaling systems. In particular, we have begun to interpret system behaviors and their underlying molecular mechanisms through the lens of information transmission, a principal function of signaling systems. The Alpha Project demonstrates that interdisciplinary studies that identify key quantitative behaviors and measure important quantities, in the context of well-articulated abstractions of system function and appropriate analytical frameworks, can lead to deeper biological understanding. Our experience may provide a productive template for systems biology investigations of other cellular systems. PMCID: PMC2806158  PMID: 19045818 9.  Cell-ID Software for Microscope-Based Cytometry  This unit describes a method to quantify, from sets of microscope images, various cellular parameters from individual cells, and includes procedures to track cells over time. For example, the user can measure cell volume, total and subcellular localization (nuclear, plasma membrane) of fluorescence for multiple fluorescence channels. This method uses the image processing capabilities of Cell-ID (Gordon et al., 2007) and data analysis by the statistical programming framework R, both open source software packages. The first step for successful cytometry entails acquiring at least one set of images for each field of cells.
Each set is composed of one purposefully defocused transmission image (sometimes referred to as brightfield, or BF) that will be used to locate each cell, and one fluorescence image for each of the color channels to be analyzed. Images may be conventional wide-field epifluorescence or confocal microscopy images. Cell-ID processes the images and outputs a tab-delimited file with information extracted from each cell, for each time point and each fluorescence channel. Finally, the user analyzes the data using R (R-Development-Team, 2008), which we have supplemented with a package tailored to analyze Cell-ID output. PMCID: PMC2784696  PMID: 18972382 image processing; fluorescence microscopy; Cell-ID; R 10.  Fus3 generates negative feedback that improves information transmission in yeast pheromone response  Nature  2008;456(7223):755-761. Haploid Saccharomyces cerevisiae yeast cells use a prototypic cell signaling system to transmit information about the extracellular concentration of mating pheromone secreted by potential mating partners. The ability of cells to respond distinguishably to different pheromone concentrations depends on how much information about pheromone concentration the system can transmit. Here we show that the MAPK Fus3 mediates fast-acting negative feedback that adjusts the dose-response of the downstream system response to match that of receptor-ligand binding. This “dose-response alignment”, defined by a linear relationship between receptor occupancy and downstream response, can improve the fidelity of information transmission by making downstream responses corresponding to different receptor occupancies more distinguishable and reducing amplification of stochastic noise during signal transmission. We also show that one target of the feedback is a novel signal-promoting function of the RGS protein Sst2.
Our work suggests that negative feedback is a general mechanism used in signaling systems to align dose-responses and thereby increase the fidelity of information transmission. PMCID: PMC2716709  PMID: 19079053
Blog # 88 (poesie) A LITTLE NIGHT MUSIC*

The velvet cloak of Night descends,
But slower now in solstice times.
Day describes a respectful bow,
Since Dark’s far older than the Light.
Night tucks Day into its bed,
With softness and with loving care.
As if responsive to a cue,
The insect choirs take up their song,
The bullfrogs croak their hoarse refrain.
Warm vapors from the day before
Exude from all the plants and trees.
The Moon, far weaker than the sun,
Salutes the rocks with its pale rays.
Critters mostly sleep when dark,
Yet some set out in search for food.
Predators begin a nocturnal stalk;
Tall grasses dance the rhythmic breeze.
As vital is the Day to most,
I often do prefer the night.
It’s then I cry out to the skies,
Please, do now turn on the dark!

-p. (attributable to Leonard N. Shapiro, August, 2016)
*(title attributable to W.A. Mozart, 18th Century)

The 1980’s hit musical “Cabaret” not only was excellent entertainment but, in addition, possessed great significance as a telling statement, a sermon on an important flaw in the human character. In the presentation, the cabaret patrons, evidently symbolizing the Berlin population of the 1930’s, mesmerized by the exotic entertainment hosted by a demonic master of ceremonies (brilliantly played by Joel Grey), were able to irresponsibly turn a blind eye and a deaf ear to the horrific Nazi atrocities (particularly against the Jews) then concurrently taking place in the streets. The flaw exemplified was the unfortunate human tendency to avoid dealing with unpleasant occurrences by looking the other way. To come at once to the main point, it is frustrating and painful to observe that over the great many decades we have all been patrons in attendance at a real-life cabaret, averting our eyes and attention from the cruel, atavistic and barbaric atrocity that is the sport of boxing.
America was rightfully outraged and disgusted at news that a certain NFL player was engaged in the business of the public staging of brutal dogfights. He was found out, fined and suspended from play (at least for a period of time to allow our dog-loving public to cool off). Cockfighting, a traditional Latino event, as well as all other bloody animal contests, is illegal; a federal statute makes it no less than a felony, punishable by up to five years in prison plus a fine of $250,000. It is well known that there are several agencies and foundations established to prevent cruelty to animals, such as the ASPCA, PETA and WWF (see Blog #37). Movies that use animal actors uniformly display a notice, together with the film credits, that no animal was mistreated in the production of the film, as an assurance to the moviegoer. The controversial subject of school sport injuries, especially concussions, is gaining ever-increasing attention. Concussions, we are advised, are incurred either by a significant blow to the head or by the cumulative repetition of lesser blows. Customized helmets are being developed in an attempt to lessen the occurrence of concussions in football and other contact sports. In some jurisdictions, the wearing of a protective helmet is mandatory for motorcycle and bicycle riders. It should be noted that there is a great deal of ongoing and heated debate concerning the issue of eliminating football and other contact sports altogether from school, notably high school. In fairness, it must be noted that football is not a “blood sport.” The theme of the sport is to score goals and not to cause injury. However, the all-too-frequent collateral injuries during football play provide many a cogent rationalization for the banning of the sport. In this context, what does the legally approved activity of boxing (and even worse, cage fighting) reveal about the nature of our civilized society?
In ancient Rome, considered a brutish society, unfortunate gladiators (often slaves or prisoners) engaged in deadly combat for their lives, for the primitive amusement of the bloodthirsty spectators. This barbaric travesty went so far as to feature, historians reveal, deadly combat between dwarfs and women for public entertainment. Unlike football, where injuries are unintended and collateral to the play, the express, sole theme of boxing is the causation of disabling injury to the opponent. The more lethal the punch, the more points are awarded by referees, avid and enthusiastic experts in discerning high-scoring serious injury. Ultimate success is attained by rendering the opponent unconscious, an act of victory usually met with howls of approval from a highly stimulated audience. We are not professionally competent to diagnose the mental and physical health of professional boxers over the course of their career, but it is safe to expect that it is not salutary. For some time, and presumably for the foreseeable future, we all seem to be patrons of our American cabaret and look the other way. Perhaps the contrasting interest in school sport injuries may be explained by the fact that the young players are identifiable, often our own children. What quality of compassion, sanity and wisdom do we evince by expressly and properly outlawing cruelty to roosters and dogs while simultaneously providing legal status and societal permission to this immoral travesty regarding humans?

Willkommen zum Kabaret

For illustrative purposes, we will, initially, conjure up the following bit of fiction. A 29-year-old white mother is escorting her five-year-old daughter to school; they are walking at a slow pace, hand in hand.
The mother, a Barnard College graduate (in American Studies), is a well-educated, forward-thinking person and an ardent supporter of civil rights, donating annually to the NAACP and the Urban League, and an outspoken opponent of racial prejudice. They notice another young mother, who, it happens, is black, similarly accompanying her young child to school. The white mother’s little daughter feels a very subtle squeeze of her hand, an act, virtually unconscious, by which her mother would be completely mortified if she were made aware of it; the explanation for this dynamic is no less than the fundamental and universal basis for all racial and ethnic prejudice. Historians will readily identify past episodes of strife and injustice and suggest that such events are the root causes of today’s many ills, offering factual accounts which underlie their theories; we do emphatically disagree. The horrible bloodshed between Sunni and Shia Muslims, as they say, can be traced back to a 7th Century dispute as to the proper method of succession to the Prophet Mohammad, either by familial inheritance or by democratic vote. The Greek-Turkish enmity, historians say, dates back to the military defeat of Greece by the Ottoman forces under Kemal Ataturk. The Irish-English conflicts are fueled by past wars concerning religion and economics; the Protestant-Catholic troubles, including the Thirty Years’ War, dating back to Luther. Anti-Semitism, they will tell you, is rooted in the scandalous claim that the Jews were responsible for the death of Jesus. These may, in general, be rather accurate depictions of historical events but are, in reality, not at all the dynamic basis for bigotry and prejudice in our modern times. To our point, as we have previously stated, it is only by the random accident of birth that we acquire our respective features, culture and belief systems.
Some undoubtedly well-intentioned parent or other adult soon sows the pernicious seed of “we” and “they” (perhaps to give the child a sense of belonging) in the fertile and imaginative mind of the young person. From such unfortunate implantation, mythologies about the “other” are thereafter created, then perhaps evangelistic inclinations and, thereafter, conflict and war. This problem is fundamental and timeless and not founded upon the recalled horrors of the past (undoubtedly due to similar causes). We must consciously and effectively amend our messages to our young concerning “we” and “they” and find a way to inculcate a more appropriate consciousness of an “us” and a respect and appreciation for diversity. Others may have differing belief systems and sometimes even look a little different from us, but the young must be taught that we are all life tenants on the planet. This will take generations, much to the justified dismay and impatience of those who are the victims of discrimination, but it seems to be the only effective and enduring way. In the interim, we certainly can be good neighbors and friends to each other. The mother, unaware that she subtly squeezed her daughter’s hand, is a good person, but undoubtedly a product of an early “we”-“they” upbringing; the daughter is now, unfortunately, another. –p.

The eminent British philosopher John Locke, an empiricist, maintained that man is born with a blank slate, to be inscribed by him with knowledge acquired from experience. Those who choose to avail themselves of the sheer joy and profound experience of reading great literature thereby acquire an in-depth understanding of the phenomenon of life, its potential and its challenges, and are empowered to relate such understanding to their own experience.
One can comfortably sit at his favorite perch, armed only with a small light and a great book, and, while sedentary, travel the planet and the cosmos; he can acquire enlightenment and valuable insight into himself and others by examining other lives and life situations as created and aesthetically portrayed by the great authors. A certain valuable category of books is labeled “Classics” since they portray man’s eternal plight on the human canvas. These exceptional works should be read and re-read (“reprised”) not only as a continuing source of great pleasure, but as instructive and comforting insight and perspective into the timeless, universally eternal issues which are implicit in the human condition. The young adult, albeit in possession of the requisite intellectual and aesthetic ability to comprehend and enjoy such literary works, is nonetheless at a relatively early stage of maturity and potential development; his future course of life will instruct him, experientially and developmentally, in levels of ever-increasing sophistication of insight. This maturity will lead him, ultimately, to the fuller appreciation and comprehension of the author’s intended message. The mature reader, with a lifetime of accumulated experience, has thus acquired the in-depth capacity to identify with the life and characters portrayed and is equipped to appreciate the full extent of the author’s message and intention; his own experiences, joyous and tragic, empower him with the ability to spiritually identify and communicate with the classical author. Reprising the great classics of literature at a later stage of life is a satisfying, life-enhancing experience. A little dust never hurt anyone.

Blog # 84 OUR PLANET SPEAKS…

A possible theory might explain the difference between the plethora of writings seeking the abolition of the death penalty and the relative dearth of writings favoring its continuance.
It may be suggested that the arguments in its opposition are essentially based upon rational premises and are therefore readily amenable to literate expression, while the contrary position is not. There are but few enlightened, democratic countries (unfortunately, including the U.S.) that still feature the atavistic and barbaric practice of the death penalty. It would seem far more emblematic of theocracies and other tyrannical regimes which have need of it to intimidate and control their respective populations. Many logical and ethical arguments have been repeatedly made in opposition to the death penalty and it would seem sufficient merely to refer to them:

It is cruel, barbaric and antithetical to an enlightened society,
It has completely failed to serve as a deterrent,
It is grossly unfair in its racial application,
Following execution, mistakes in justice cannot be rectified,
Legal defense provided for the indigent defendant has been inadequate,
Its application has been unacceptably arbitrary,
It has proven by experience to be unfair to the mentally handicapped,
Its administration has often been incompetent, causing horrific suffering,
It violates Natural Law (as stated by many eminent philosophers, including John Stuart Mill),
The long period between sentence and execution is great torture, and,
It is illegal under the U.S. Constitution’s prohibition of “cruel and unusual punishments.”

These are the cogent arguments seeking the abolition of the death penalty which have been eternally made and, as stated above, there would seem to be no utility in their restatement. It is telling, however, to point to the historic and expressive nature of the words “capital punishment” themselves. The significant word “capital” is directly referable to the identical root as “decapitate,” as used in such societal niceties as the guillotine and the chopping block (capital means “head”).
The other euphemistic word is “execution,” referring to actions such as the execution of a policy, a plan, or an act. The employment of such banal euphemism is an admission of the undeniable, express and defensive recognition of the practice as a gross moral atrocity. A related topic may be the “Right to Life” organization and its over-zealous adherents, whose dedication is antithetical to its pirated name. Their sole purported goal is the protection of the fetus, in contrast to their actual goal, the denial of a mother’s right to an abortion. This cohort claims religion and ethics as its basis, but would, after the birth of the fetus, deny the poor child needed help, including food stamps, affordable health care, welfare or any other program of vital assistance. After the event of birth, their religion and morality seem to become non-existent. These so-called “right to lifers” have strategically committed premeditated murder of abortion providers, favor the free sale and distribution of guns, are supporters of the subject death penalty and seem to consistently advocate military action in preference to diplomacy in instances of international problems. This is all shamefully abhorrent to its misleading name. (See Blog #52.) We would give expression to what appears, to the sensitive observer, a subtle but articulated concern for the continuation and preservation of “life” meaningfully expressed by our Planet in countless observable ways. The dynamics of planetary evolution over the countless millennia toward a sentient being, the self-healing of trees after a violent storm, the rebirth and replacement of flora following forest fires, and the ingenious tactical dispersion of seeds by plants are but a few examples of the seemingly infinite instances of a clear-cut planetary message that life’s continuance is mandated.
This (no less than) imperial mandate of Nature is certainly not within the jurisdiction of any legislature, however august, nor capable of refutation by the ignorant and misguided advocates for “justice” (read “vengeance”). The horrific approved murder of individuals guilty of murder creates an undesirable equivalency and inconsistency from the standpoint of any professed civilized and humane society. Incidentally, what is being “killed” when a criminal’s (or any other) life is terminated? After death the body becomes completely useless and disposable; is it some species of electro-chemical power that is switched off when certain body parts are destroyed? The profound mystery of life and death ought not to be tampered with by the ignorant and unaware. Every living thing is the recipient of a generous planetary franchise: to function. Our current initiation of major programs to probe deep outer space is, in essence, a search for other “life.” We must be ever aware that (no less than) our own Planet, through its communicated phenomena of Nature, demands that the gifted franchise of life is eternally to be gratefully maintained and nurtured.

Blog # 83 (poesie) PANTHEIST SABBATH

Cool Connecticut morning
Bees buzzing the big hydrangea
Wings flapping at pendant feeders
Mental kisses to scampering chipmunks
I blink at the leafy sunshine-
Dare not miss the Sabbath devotion
Forest woodland the iconostasis
Choir sounds from green bushes
Floral gifts from Nature’s rosary
Holiness high up the treetops
Resurrection in the sprouting soil.
Could man but reject his idols
And their belligerent servitude,
All would be Connecticut morning
Bees a-buzz and wings a-flap, for e’er.

-p. (attributed to Leonard N. Shapiro, August 2016)

Spoken words are the expressed formulation of the speaker’s thoughts. Those fortunate enough to be gifted with the skill to do so precisely have a greater likelihood of meaningful conversation.
Spoken words stimulate a response on the part of the listener who, hopefully, is objective enough to apprehend the speaker’s meaning and intention. As noted previously (Blog #81), the optimum interaction is one in which all participants are consistently dedicated to the identical subject. Ethically and solicitously, the speaker should earnestly strive to maintain an awareness of, and sensitivity to, the likely effect of his words upon the other party and not thoughtlessly speak merely to discharge the personal energy of a presenting thought. Ideally, therefore, the parties would adhere to the same subject and context and maintain a conversation in the nature of a spontaneous exchange of thoughts on a common subject; this is all too often not the case. Too many conversations take place between parties who pay little or no attention to the specific subject and engage in the formal “dance” of a conversation, as occurs in the automatic exchange of a handshake. There are particular instances in which the choice of vocabulary has the potential to evoke an emotional response in another, in contrast to words exchanged in meaningless banter. In this brief note, we specifically exclude the subject of verbal exchange in the romantic context. In any case, most participants, particularly of the female gender, it seems, are well schooled in the impactful significance of such words as “love,” “commitment,” “relationship,” “intended,” and the like. Most such words are accorded an exalted status, above mere vocabulary, and enjoy the experienced status of a professionally competent diagnosis of the presenting facts. In other contexts, the use of certain words has profound impact; examples are “love,” “hate,” “fear,” “peace,” “friend,” “trouble,” and many, many more. We would, however, limit the scope of this note to two words, “probability” and “possibility.” These are distinctly different predictive words which, when used appropriately, summon markedly different reactions.
"Probability," in general, means likely to occur or expected, i.e., a frequently experienced result. It is a mildly qualified expression concerned with empirical experience, percentage outcome, deductive or inductive reason. It is because it is empirical, and therefore measurable, that there exists an entire discipline in mathematics devoted to the calculation of probable outcomes; the algebraic calculation of probability results in a statement of the percentage likelihood of occurrence of the selected event or result. "Possibility," by contrast, connotes the occasional chance of occurrence (as opposed to its expectation, as in "probability"). It is more ethereal and theoretical, a prospect which in the future may attain realization. It is not predictive in the same way as probability and therefore not readily capable of calculation. The possibility that an asteroid may collide with our planet is a matter of some scientific interest; a probability of such occurrence would justify global panic. Significantly, in the area of medical illness, the predictability, or probability, of full recovery would evoke a feeling of relief; a statement of the possibility of recovery would encourage concern. In all factually important instances of any kind, the competent observer should be scrupulously assiduous in the selection of the appropriate description of the degree of realistic expectation. We would offer some comments regarding the appropriate and sensitive use of these two expectancy words in the hope that they may be of some interest:
1. In matters of illness or temporary disability, if possible and accurate, speak in the context of probability. A demonstration of advance knowledge by the recitation of remote possibilities may cause unnecessary anxiety. That an individual might conceivably pull out a nose hair, resulting in a fatal toxicity, is somewhat possible, but not at all probable.
2. Words connoting probability are preferable with regard to the initiation of new business ventures. The mere "possibility" of success would seem remote and not encouraging.
3. The probability that she (he) loves me back is a far better prospect than such possibility.
4. In general, the possibility of failure should not inhibit the new entrepreneur, the start of a scientific inquiry or an aspiration for the love of another person.
There are, indeed, many possibilities whose contemplation is positive and hopeful: the possibility of enduring world peace, the possibility of finding a unified cure for cancer, the possibility of real and lasting brotherhood, the possibility of a clean and green planet. May we dare to hope that these possibilities evolve into probabilities and eventually into reality?
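The essay observes that probability, being empirical and measurable, supports exact calculation. As a minimal illustration of that point (the particular dice question is our own example, not the author's): the classical probability of rolling at least one six in four throws of a fair die, computed as the complement of "no six in any throw."

```python
# Probability of at least one six in four throws of a fair die.
# Each throw misses a six with probability 5/6; the throws are independent,
# so multiply, then take the complement.
p_no_six_in_one_throw = 5 / 6
p_at_least_one_six = 1 - p_no_six_in_one_throw ** 4
print(round(p_at_least_one_six, 4))  # 0.5177 -- slightly better than even odds
```

A result near 52% is exactly the kind of quantified "percentage likelihood" the essay has in mind, as opposed to the unquantifiable "possibility" that a given throw might come up six.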
We all instinctively recognise symmetry when we see it, but describing it in words is harder than you might think.
Make your own climate prediction with this simple, but powerful, model!
An insightful look at the climate models that predict our future.
Our messy desk is proof of the second law of thermodynamics...
Why the expected outcome of rolling a die is 3.5.
Some general ideas in very few words and without equations.
Why did physicists at the beginning of the 20th century feel they needed a new — and strange — theory?
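One of the teasers above asks why the expected outcome of rolling a die is 3.5. The arithmetic behind that claim is short enough to sketch here (the code is our illustration, not part of the article being advertised): for a fair die each face is equally likely, so the expectation is just the average of the faces.

```python
# Expected value of one roll of a fair six-sided die.
# With each face equally likely, the probability-weighted average
# reduces to the plain average of the faces 1..6.
faces = range(1, 7)
expected_value = sum(faces) / len(faces)  # 21 / 6
print(expected_value)  # 3.5
```

Note that 3.5 is not a result any single roll can produce; it is the long-run average over many rolls.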
10/21/2016 to 10/23/2016: Jeff Carreira. The Practice of No Problem is a simple yet profoundly effective form of meditation. With this practice, you learn to embrace life as it already is rather than struggle against it.
10/21/2016 to 10/23/2016: Dzogchen Ponlop Rinpoche. A Buddhist Approach to Finding Strength Through Right Action. Suffering is inevitable in life—we all encounter it. But how we respond to it can be the difference between more suffering and happiness.
10/21/2016 to 10/23/2016: A Journey Into Asana, Meditation & Pranayama. Yoga has long been recognized as a spiritual and physical journey to experience life directly and be at peace.
10/21/2016 to 10/23/2016: Michael Bernard Beckwith. Evolutionary Technology for Finding Your Highest Purpose. How will you use this singular treasure of life that you have been given? What is the purpose of the unique blend of skills, gifts, experiences, and perspectives that you alone possess?
10/21/2016 to 10/23/2016: Nicolas David Ngan. Creating Bliss in Your Everyday Life. Hidden inside your birth name is a blueprint for living a more blissful life. Unlock the truth of your Soul Contract and release your full potential with this ancient system of numerology.
10/21/2016 to 10/23/2016: Sil Reynolds
It doesn't have to rhyme and there's no set formula. This poetry class is about as far away from Roses are Red as you can get. For the last few weeks students at St. Frances Elementary School in Saskatoon have been sitting down to write poetry with spoken word artists. The project was to create pieces of poetry about their history, culture and Cree language. A big part of this project is performance. At first, students were too shy to share their poems. "They would get the facilitators to read it for them," explained project co-ordinator Desiree Macauley. "Since then, I've seen tremendous growth. They went from that to standing up and reading in front of the class. That's a huge accomplishment!" It goes beyond the confidence to perform. The students say poetry has also given them a chance to vent about their lives and deal with problems. "It helped me get my anger out," Madison Pahtayken said. "It helps me take it out of my mind and body and get it onto paper. Then I can either crumple that paper up and throw it away or share it." Jordan Schultz is a spoken word artist in Saskatoon. He said poetry gives young people a voice and another option. It gives them "the idea that they can use something so simple as the spoken word and create beautiful pieces, express themselves and use it as a teaching tool," Schultz explained. A few of the students will be making their big debut Wednesday night. They will be featured performers at the Oskayak High School Gala at the Broadway Theatre.
Curiosity Lands on Mars, Sends First Picture in Minutes In a much anticipated event, the most advanced Mars rover touched down on the red planet's surface on Sunday at 10:32 PM PST, one minute behind schedule and only 2.27 miles from the targeted location inside the Gale Crater. It was a landing of stunning precision, given the fact that the signal sent by the equipment takes 14 minutes to reach Earth, which means that, by the time NASA and the members of the Jet Propulsion Laboratory received confirmation of the beginning of the seven-minute landing process, Curiosity had successfully landed seven minutes before that and began sending data and first images, which reached NASA at 10:34 PM PST. In a live feed shown on NASA's page, people around the world followed the landing and it appeared to be taking place exactly as planned. "The Seven Minutes of Terror has turned into the Seven Minutes of Triumph," said NASA Associate Administrator for Science John Grunsfeld in a prepared statement. "My immense joy in the success of this mission is matched only by overwhelming pride I feel for the women and men of the mission's team." In fact, it was a much needed success for cash-strapped NASA, which is celebrating an immense success and gain in prestige. President Barack Obama previously set the goal for humans to be sent to Mars by 2030, and the much more advanced landing of Curiosity, which included several phases of descent and slowdowns of the capsule and its car-sized rover, lends confidence that this goal is realistic and achievable. At this time, NASA is evaluating Curiosity's instruments and is analyzing the landing site. There are ten instruments on board that have 15 times the mass of the instruments of the payload previously carried by the now immobile Mars rovers Spirit and Opportunity, which landed on January 4 and January 25, 2004. Curiosity is about twice as long and five times as massive as the Spirit and Opportunity Mars exploration rovers.
It weighs 1,982 lbs and is 9.8 ft in length. It can pass obstacles up to 30 inches in height and will travel at an anticipated average speed of 98 ft per hour. Its power is derived from a radioisotope thermoelectric generator (RTG), similar to the one used by the Viking 1 and Viking 2 Mars landers in 1976. The power output is 125 watts of electrical power extracted from about 2000 watts of thermal power, which will gradually decrease as the plutonium-238 decays over time. Scientists estimate that there will be about 100 watts of electrical power left in about 14 years. Curiosity can generate about 2.5 kWh per day, in comparison to only 0.6 kWh the smaller Spirit and Opportunity had available.
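The power figures quoted above (125 W electrical at landing, roughly 100 W after 14 years) can be sanity-checked against the decay of the plutonium-238 fuel, whose half-life is about 87.7 years. A minimal sketch, modeling fuel decay only: real RTGs also lose output as their thermocouples degrade, which is presumably why the article's 14-year estimate sits below what fuel decay alone predicts.

```python
PU238_HALF_LIFE_YEARS = 87.7  # half-life of plutonium-238

def remaining_fraction(years: float) -> float:
    """Fraction of the fuel's initial thermal output left after `years` of decay."""
    return 0.5 ** (years / PU238_HALF_LIFE_YEARS)

initial_electrical_watts = 125.0  # Curiosity's output at landing, per the article

# Fuel decay alone leaves about 112 W of the original 125 W after 14 years;
# thermocouple degradation would account for the further drop toward ~100 W.
after_14_years = initial_electrical_watts * remaining_fraction(14)
print(round(after_14_years, 1))
```

The gap between the ~112 W decay-only figure and the article's ~100 W estimate is a useful reminder that electrical output of an RTG declines faster than its fuel does.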
Existence and Non-existence in Sabzawari's Ontology
Asia Institute, The University of Melbourne
SOPHIA (2012), Volume 51, Issue 3, pp 395–406. DOI: 10.1007/s11841-011-0283-z. Cite this article as: Kamal, M. SOPHIA (2012) 51: 395.
Sabzawari is one of the greatest Muslim philosophers of the nineteenth century. He belongs to Sadrian Existentialism, which became a dominant philosophical tradition during the Qajar dynasty in Iran. This paper critically analyses Sabzawari's ontological discussion on the dichotomy of existence and quiddity and the relation between existence and non-existence. It argues against Sabzawari by advocating the idea that 'Existence' rather than quiddity is the ground for identity as well as for diversity, and that non-existence, like existence, is able to produce an effect.
Keywords: Existence, Non-existence, Principality of existence, Principality of essence, Quiddity, Predicables, Identity in difference, Becoming, Ontology, Intuition, Mystic experience
Mulla Hadi Sabzawari (1797–1878) is a profound Muslim philosopher who lived during the Qajar dynasty in Iran. He developed his ideas under the influence of Mulla Sadra's Existentialism. He wrote a number of books; among them is Sharh-i munzumah (Commentary on a philosophical poem), where he discusses different ontological issues and reflects on the meaning of 'Existence'. Among these ontological issues the dichotomy of existence and quiddity and the relation between existence and non-existence, and Sabzawari's analysis of them, are the subject of this article. I wish to criticise his views and argue against Sabzawari that 'Existence' rather than quiddity is the ground for identity as well as for diversity, and that non-existence, or the non-presence of existence like existence, is able to produce an effect. Understanding Sabzawari's analysis of these two ontological issues is not possible without reference to his intellectual background, in particular, his Sadrian Existentialism.
This stood in opposition to Suhrawardi’s Essentialism, which advocated the doctrine of the principality of ‘Essence’ (and dominated the philosophical school in Isfahan during the Safavid dynasty). For Suhrawardi ‘Essence’ is the reality of everything. Something exists when its essence is manifest and known. Its existence is totally dependent on its essence. Besides, Suhrawardi argues that ‘existence’ is a concept that does not correspond to anything outside thinking.1 Mulla Sadra brought about a fundamental philosophical change by thinking ‘Being’ rather than ‘Essence’ as the sole reality. In this regard, and for the place of Mulla Sadra in history of Muslim philosophy, Seyyed Hossein Nasr states that ‘he founded a new intellectual school in Islam, which means that he was able to open up a new perspective’.2 This new perspective is inaugurated with a kind of ontology, in which ‘Being’ rather than ‘Essence’ is seen as the sole reality and the ground on which all beings stand.3 He also believes that ‘Being’ manifests itself in different modes. Every mode has a unique way of existence and is a grade of ‘Being’. The self-manifestation of ‘Being’ is called ‘gradation’, which gives rise to the realm of diversity where all beings undergo constant substantial and accidental changes in order to achieve perfection.4 Mulla Sadra’s ideas had a significant impact on the development of philosophy in Persia, not immediately after his death in 1640 but later, in the Qajar period.5 During that time, a philosophical school was established in Tehran, and Mulla Hadi Sabzawari emerged as one of the profound Sadrian philosophers, and the founder of a school in Khurasan.6 Using as his basis the complex ontological and epistemological structure of Mulla Sadra’s philosophy, Sabzawari revived the principality of ‘Existence’ and incorporated mystical knowledge into rationalistic discourse. 
For Sabzawari, reason is incapable of accessing the truth of ‘Existence’, which resists definition and description. In order to avoid a negative conclusion concerning our knowledge of ‘Existence’, it is necessary to search for a cognitive tool other than reason that is capable of apprehending the truth of ‘Existence’. Mulla Sadra, before Sabzawari, found that new cognitive tool and came to the belief that the truth of ‘Existence’ could be known in the mystic experience rather than in rationalistic discourse: ‘The knowledge of the reality of existence cannot be except through the illuminative presence and an intuition of the [immediate] determined [reality]; then there will be no doubt about its inner-nature’.7 For Mulla Sadra it followed that the only way to understand the truth of ‘Existence’ is through mystic experience. The problem of this cognitive tool was that not every individual is able to possess it; it is a spiritual capacity obtained and developed by the seekers of truth on the mystic path. The illuminative presence resembles the vision of the true philosophers in Plato’s Republic who obtain knowledge of the universal forms and finally of agathon the universal form of Good. For Plato, not every individual is capable of acquiring this knowledge as it is conditioned by self-emancipation from the dogma of the cave and reaching out to the source of light. This self-emancipation is seen as a philosophical task and the prerogative of a small number of individuals known as philosophers.8 The questions that arise here are: What is this peculiar nature of ‘Existence’? Why are we unable to access it rationally? If ‘Existence’ is the sole reality then can we think of the possibility of ‘non-existence’? In opposing Suhrawardi’s doctrine of the principality of ‘Essence’, Sabzawari expresses his agreement with Mulla Sadra, stating, ‘In fact, (the philosophers) have been divided by upholding two theories. 
The first asserts that the principle of the realization of anything is “existence” while “quiddity” is merely something posited, i.e., a mental counterpart to “existence” that is united with the latter. This is the doctrine held by the most authoritative of the Peripatetics. And this is also the doctrine chosen here, as is indicated by the following verse: “Existence, in our opinion is fundamentally real”.9 Existence rather than essence is the sole reality and the ground for all ‘existents’. It constitutes the reality in every concrete existent. Every existent in itself is an individual case of ‘Existence’. It should be remembered that the relationship between ‘Existence’ and its own modes is not like the relationship between universals and particulars, simply because ‘Existence’ is not a universal concept like ‘blackness’ but the reality and an ontological ground on which everything stands. Described in this way, the ontological difference becomes essential for understanding the meaning of ‘Existence’ and determines its nature. It is possible to apprehend an existent, yet our knowledge of it will not guarantee accessibility to ‘Existence’. In rationalistic discourse, an existent is seen as a polarized entity and is analyzed into existence and quiddity, but this is not the case with ‘Existence’. ‘Existence’ cannot be conceptualized and quiddity be affixed to it. Meanwhile, it is self-evident and cannot be reduced to anything: ‘Its notion is one of the best-known things, but its deepest reality is in the extremity of hiddenness’.10 The self-evidence of ‘Existence’ is based on its nature as something simple and intuitively apprehended without mediation or representation. Yet, its reality is concealed and we are not able to grasp its truth. We know what the term ‘existence’ means but this does not assist us to understand the reality of ‘Existence’ as they are different and not the same. 
We talk about the existence of a tree, for example, while 'Existence' as such is beyond it. We find ourselves confronted with the enigma of understanding the truth of 'Existence' since its reality is transcendental and defies definition and description. The inaccessibility of the truth of 'Existence' advocated by Sabzawari is based on Mulla Sadra's interpretation of the meaning of 'Existence' in his major philosophical works, such as al-Asfar, a magnum opus that contains his entire philosophical system, and al-Masha'ir. Mulla Sadra holds the view that since 'Existence' has neither genus nor differentia it does not have a definition.11 A definition is applicable to the particular existents or to concrete instances with genus and differentia; it relies totally on the presence of these universal determinations. Whatever is defined should belong to a genus and have differences that distinguish it from other members of the same genus. Sabzawari emphasizes this by saying, 'So, they, i.e., all (so-called) definitions, can neither be a "definition" in view of the fact that "existence" is (absolutely) simple, having neither specific difference nor genus, as we shall see presently; nor can it be a "description" because a "description" is obtainable only by an accidental property, which is part of the five universals whose division itself is based on the thing-ness of "quiddity", while existence and its properties derive from an entirely different source from "quiddity"'.10 It is worth mentioning also that Mulla Sadra's and Sabzawari's view on this resembles that of Martin Heidegger, a Western thinker who maintained that 'Existence' is indefinable for the same reason.
Heidegger commences his analysis in Being and Time (1927) with a discussion of three prejudices that arose in Western philosophy from the time of Plato against any inquiry into the meaning of 'Existence', and which led to the oblivion or 'nothing-ness' of 'Existence' as a philosophical standpoint and gave rise to metaphysical thinking and nihilism in the West. The three prejudices are: (i) existence is the most universal concept; (ii) it is indefinable; (iii) it is self-evident. Heidegger is in agreement with Mulla Sadra and Sabzawari in stating that 'Existence' is neither a concept nor a genus. Its universality encompasses the universality of every genus.12 Two factors support this claim that 'Existence' is indefinable: the first is related to the nature of 'Existence' as something simple; the second is a limitation in the traditional logic for defining 'Existence'. These two factors become obstacles to any attempt to define 'Existence' because 'Existence' has no genus and is not a member of a higher class. Its simplicity and universality are unlike anything else. Describing 'Existence', like defining it, is also doomed to failure. Description is possible in the presence of universal predicables, such as genus, species, difference, property and contingent accidents, but 'Existence' has no predicable.13 One may attempt to apply these predicables in an attempt to give quiddity to 'Existence', making it compound and mixed, but 'Existence' is simple and has no quiddity. If quiddity is assigned to 'Existence', then quiddity becomes an addition to it.14 In that case, 'Existence' will suffer division, presuppose its own parts, and be unable to become a unitary ground and a priori principle for the multiplicity of things. By contrast, the concrete existents are multiple and different in one way or another. Animals, for example, are not plants, yet they share existence.
Existence is an ontological ground not only for the presence of the existents in the world, but also for their unity: ‘If we adopt the view that it is “existence” that is fundamentally real we would recognise that here, running through all the scattered “quiddities”, is one single reality (i.e., “existence”). This is comparable to the unity that we observe in things having extensions, whether they be immobile or mobile, for their multiplicity is mere potential, not actual’.15 What makes them different from one another are the universal determinations assigned to them in thinking. These universal determinations are conceptual and not real. ‘Existence’, rather than these universal determinations or quiddity, guarantees the actual presence of the existents in the world, and is one and the same in all of them. The unmixed nature of ‘Existence’, which becomes a ground for the principle of identity or sameness, is the reality of every existent. At the same time, whatever is existent and present in the world has a quiddity, and can be analysed intellectually into universal determinations. Here, according to Sabzawari, the dichotomy of existence and quiddity is a product of thinking. Sabzawari is also critical of Muslim thinkers who believe that existence is added to quiddity mentally. He developed four arguments to justify his criticism. In his first argument he stated that existence and not quiddity is properly negated. For that reason, existence is neither the same as quiddity nor a part of it.16 Sabzawari’s second argument is based on Ibn Sina’s view that the predication of quiddity by existence requires a middle term; it needs ‘what is accompanied by “because”’. The reason for this goes back to the nature of predication. The truth of some propositions, such as ‘intellect is existent’, requires a proof. 
By contrast, the predication of quiddity and its essential properties does not require any proof because it is self-evident.17 The third argument is related to the distinction between existence and quiddity in thinking through rational analysis. We rationalise the quiddity of something, for example a triangle, without regarding its external existence. In doing so we realise that 'what is not disregarded is other than what is disregarded'. This will make existence something that occurs to quiddity.17 This also indicates that existence is neither the same as quiddity nor a part of it. The fourth argument states that if we think of existence as the same as quiddity or a part of it, then quiddity will need another existent because it is impossible for an 'existent' to be constituted by 'non-existence'. As a consequence of this, 'existence' will become a part of a part and so on to infinity.18 As we see, Sabzawari has developed these arguments for the principality of 'Existence' and believes that 'Existence' is the unity, while multiplicity or diversity in the world is due to our apprehension of quiddity. This happens when the possible beings are conceptually analysed into the universal determinations. Whatever is found in the world is then the combination of existence and quiddity (zawj tarkibi). The latter is that by which each possible being is differentiated from all others. The quiddity of a horse, for example, is different from the quiddity of a table. Toshihiko Izutsu also confirms this kind of duality and states that 'This fundamental fact about the two ontological factors is what Sabzawari refers to when he says that "existence" is the principle of unity, while "quiddities" raise only the dust of multiplicity'.18 He states clearly that quiddity, unlike existence, has given rise to diversity in the world.
Existence determines the identity of all existents while quiddity becomes the source of their differences: 'If "existence" were not fundamentally real there would be no unity actualised, because all other things raise only the dust of multiplicity'.19 Here, 'all other things' with the characteristic of multiplicity are quiddities of the existents. This distinction between identity and difference is the outcome of the ontological dichotomy of existence and quiddity in the realm of 'Becoming', for the reason that, as mentioned earlier, 'Existence' does not have quiddity. The unity of existence in intuition is accompanied by diversity assigned to the existents in thinking. Here, the problem is not whether this dichotomy is real or not; the question concerns the genesis of diversity. It is important to know whether 'quiddity' becomes an ontological ground for diversity or not. The distinction between existence and quiddity is not found through intuition, for whatever is experienced in intuition is an existent and not quiddity. Quiddity is the product of the rationalistic analysis of an existent into universal determinations. These universal determinations are conceptual and have no reality of their own outside the domain of the human mind. Quiddity, in this way, remains in thinking and will never see the light of day. There is thus no distinction between existence and quiddity in the external world, and the distinction has no ontological reality. Whatever is real and apprehended is existence. When we look at the non-ontological position of quiddity as something unreal, and if this unreality becomes the foundation for diversity, then diversity becomes fictitious. How can something unreal (not existent) become an ontological ground for diversity in the world? Quiddity corresponds to nothing outside thinking. It is unthinkable by itself without being attached to an existent in mind. By contrast, an existence stands by itself and does not depend on quiddity to sustain it.
When we analyse an existent into universal determinations in order to grasp its quiddity, we encounter nothing in the external world to represent it except an existent. All universal determinations are mere concepts. Their application is also intellectual. The reality, which is present to our experience, is an existent, a mode of 'Existence' in a concrete form, not quiddity. Up to this point our analysis has not contradicted Sabzawari's, but a problem arises when the genesis of diversity is investigated and its reality is examined. In my opinion, the principality of 'Existence' is all-embracing. All distinctions and all diversity are but apparent facets of the reality of 'Existence'. Diversity is nothing more than the process of self-gradation of 'Existence' (tashkik al-wujud), in which 'Existence' manifests itself in a number of ways and gives rise to multiplicity. The difference between the multiple modes of 'Existence' is not inherited from quiddity in thinking but in the degree of intensity of 'Existence'. Here, 'Existence' may be seen as the principle of identity as well as difference. All existents are essentially different modes of 'Existence'. The differences among them are based on their own unique ways of existence: every concrete existent, in itself, is an individual case of 'Existence'. Differences become real with their existential instances in the process of becoming, for example with priority and posterity, perfection and imperfection, strength and weakness resting in their existence not in their quiddity. If we rely on quiddity for bringing forth differences, then there will be no real diversity in the world, or, rather, diversity becomes an illusion because quiddity is only conceptual and is in thinking. The modes of 'Existence' are also different from 'Existence'.
The distinction between ‘Existence’ and its modes in the philosophy of Mulla Sadra and Heidegger is the foundation for the ‘ontological difference’ and the idea of identity in difference in the world.20 Hence, the description of ‘Existence’ as the principle of identity and of ‘quiddity’ as difference is problematic, for whatever exists in and outside the human thinking is ‘Existence’ in different modes and ‘Existence’ neither is nor has a quiddity. An assertion of the distinction between two individual instances of the modes of ‘Existence’ is not dependent on something unreal, because the difference is real. Thinking of quiddity as the birthplace of diversity renders the ‘ontological difference’ between ‘Existence’ and ‘existents’ fictitious. Total unity and identity without difference will remain outside the domain of human thinking; it will no longer be possible to talk about the gradation of ‘Existence’. We also know that the modes of ‘Existence’ that have come into being are different from ‘Existence’, and the relation between them is not like the relation between genus and species because ‘Existence’ is not a genus or a universal determination, but the reality on which every existent stands. There is no existent without ‘Existence’. At the same time an existent is not ‘Existence’ and vice versa. The ontological difference becomes a ground for a new type of relationship between ‘Existence’ and existents. On one side, it indicates their sameness or identity (not in the Aristotelian sense), and on the other it indicates their difference. This makes them identical in difference. As long as identity belongs to existence and all individual instances of the modes belong to ‘Existence’, difference is also real. It is based on the degrees of the manifestation of ‘Existence’ rather than quiddity. The concept of ‘Existence’, like all other concepts, appears to be universal, but its universality transcends all other universal concepts. 
It is a unique universality, distinct from the rest of the universal concepts. For example, the universal concept of ‘horse’ is applicable to all individual horses, and at the same time it is limited to the members of the class of ‘horse’ and does not include other kinds of animals. This universality is limited and exclusionary. By contrast, the universality of the concept of ‘Existence’ is unlimited and includes whatever exists, regardless of differences. There is still another distinctive characteristic of the concept of ‘Existence’ that is not found in other universal concepts. This characteristic is revealed when ‘existence’, like a universal determination, is predicated of a subject in a proposition. It is true that, in a proposition such as ‘X exists’, the predicate does not add anything new to the meaning of the subject (X). At the same time, the negation of this predicate will bring about a fundamental change in the reality of the subject. ‘Existence’ in this proposition is not an accidental determination like other predicables. It is the inner reality of the subject and an ontological ground for its presence in the world. Nothing is more evident and clearer than the concept of ‘existence’ in this proposition. What Sabzawari intends to explain here is that the reality of ‘Existence’ and its concept are both self-evident and need neither definition nor description.10 Quiddity, on the other hand, remains intellectual and has no external reality. Its distinction from existence belongs to the domain of thinking and to a rationalistic apprehension of a concrete existent. Sabzawari went further in dealing with the reality of ‘Existence’ by making a distinction between external and mental existents.
For him, mental concepts are nothing but intellectual existents: ‘A thing besides “existence” in the external world has an “existence” by itself in the mind’.21 The difference between an external existent and an intellectual existent lies in the way each of them produces its effect.21 Real fire as an external existent is hot and burns, while the intellectual existent of fire is not capable of producing similar effects. The existence of fire is self-evident and is experienced by everyone regardless of his or her intellectual capacity; but we know that this particular existence is different from ‘Existence’. The problem arises with our knowledge of the hidden nature of this reality, because it is neither accessible to reason nor can it be grasped in a definition. The ontological difference draws a line between two realms: the realm of identity and the realm of diversity. The former is sameness and stability. It is not infected by generation and degeneration. This realm resembles the realm of ‘Being’ in Parmenides’ poem, which is described as ‘ungenerable and absolutely indestructible, unwavering, endless, ever-limb-one-whole; No was nor will: all past and future null; Since Being subsists in one ubiquitous. Now unitary and continuous’.22 It is not possible for ‘Existence’ not to be or to change into something else. In coming to be, an existent that goes through change should come into being out of either existence or non-existence. If it comes out of existence, it will not change, because it already is. Nor can it come out of non-existence, for, as Parmenides believes, non-existence produces nothing and cannot become a cause of an existence. For both Sabzawari and Parmenides, there is no non-existence to stand in relation to ‘Existence’. If we think of ‘Existence’ as the sole reality and the foundation of all existents, then non-existence should not be ascribed to it.
Existence has neither come to be nor will it cease to exist. At this ontological level, which is dominated by the principle of identity, ‘change’ does not take place; otherwise the identity and perfection of ‘Existence’ would be jeopardised. The denial of the reality of change is a logical consequence of the denial of ‘non-existence’ as the counterpart of ‘Existence’. Aristotle explained this point more clearly by saying that change takes place between contraries, or between one contrary and an intermediate, which stands for another contrary, or between contradictories.23 Following this belief, Aristotle denies change in ‘substance’ because it does not have a contrary. Here there is a discrepancy between Aristotle and Sabzawari. Individual substances, as particular instances of the modes of ‘Existence’, belong to the realm of identity in difference. These instances undergo change, a process of coming to be and ceasing to be. The question that arises here is: how could an existent go through change if there is no ‘non-existence’? Sabzawari, unlike Parmenides, believes that the realm of identity in difference, which is the realm of the multiple modes of ‘Existence’ and of ‘Becoming’, undergoes change due to non-existence. According to him, there are two kinds of non-existence that stand in relation to an existent. One of them annihilates its whole existence, and the other negates one, or more than one, aspect of its existence. The possibility of these two kinds of non-existence relies on the existence of the individual instances: ‘The absolute existence is a predicate used when a simple “whether-ness” is in question, like “man is existent”, while the determined existence is a predicate used when a composite “whether-ness” is in question, like “man is a writer”. And the negation of these two is “absolute non-existence” and “determined non-existence” respectively.
The purpose of our specifying “non-existence” by the word “concept” is to indicate that this division, in the case of “existence”, is not confined to its concept, but extends to its reality’.24 When the predicate of a proposition such as ‘X is an existent’ is negated, and the whole existence of ‘X’ is annihilated, we are dealing with the absolute non-existence of the existent known as ‘X’. The negation of one or more than one aspect of ‘X’, which is not the annihilation of ‘X’ in its entirety, leads to determined non-existence. In the proposition ‘Socrates is not a Sicilian’, we negate one determination of Socrates, namely his being a Sicilian. The non-existence of this determination does not affect the whole being of Socrates. These two kinds of non-existence can also be called unlimited and limited. Unlimited non-existence is the total annihilation of an existent, whereas limited non-existence is the negation of one or more than one of the determinations of an existent. Non-existence is not only logical and the property of negative propositions; it is also real, because existence is real. At the same time, the reality of non-existence does not match the reality of existence, because non-existence is not a ‘thing’, and a ‘thing’ is equal to an existence. Here, Sabzawari is critical of the views of some rationalist Muslim theologians, namely the Mu‘tazilas, who advocated the idea of an intermediary position between existence and non-existence for the ontological status of the divine attributes.25 This belief became part of their argument for the denial of the reality and eternity of these attributes, which were claimed by the traditionalist theologians, such as the Hanbalites. Al-Jubai (d. 915), one of the Mu‘tazila thinkers, believed that all existents were things before they came into being.
Later, Abu Hashim, son of al-Jubai, developed this idea into the doctrine of Hal (state or subsistence), saying that the divine attributes were neither existents nor non-existents, but states in an intermediary position.26 The distinction between existence and non-existence lies not only in their reality, but also in the way they function causally. An existent can become a cause for another existent. Fire, for example, is a cause for burning a piece of wood. By contrast, a non-existence is a cause neither of another non-existence nor of an existence: ‘Likewise there is no real causal relationship between “non-existences”, even between two particular “non-existences”. If anyone asserts this, i.e., this causal relationship – as, for example, his assertion: the “non-existence” of a cause is the cause for the “non-existence” of the caused – the assertion is based on approximation and is but a figurative expression. For asserting their being causes is due to their resemblance to their positive counterparts’.27 Furthermore, Sabzawari insists that the distinction between non-existences is a fiction, or exists only in the imagination.27 The denial of a causal relationship between non-existences, or between a non-existence and an existence, can be problematic and requires further attention. Sabzawari explained this with the example of the causal relationship between clouds and rain. It is usually thought that the existence of clouds in the sky is a cause for rain. This asserts a causal relationship between two existents, namely the clouds and the rain. When there are no clouds in the sky we change our affirmative proposition into a negative one, and assert that the non-existence of clouds in the sky is the cause for the non-existence of rain, or for the existence of a sunny day. Similarly, we assume that the non-existence of fire is a cause for the non-existence of heat in the room.
This, according to Sabzawari, is based on the conversion of an affirmative proposition into a negative one and has nothing to do with reality. Here, we may think of non-existence as having no ontological significance because it is not a thing and cannot become a cause for anything in the world: but how could an unreal non-existence, in its absolute and determined forms, become a vehicle for change in the realm of ‘Becoming’? How can we deny the distinction between non-existences? The non-existence of Socrates is the negation of his existence; at the same time this non-existence is real, because Socrates is no longer an existent. Besides, non-existence, like existence, can become a cause and produce its own effect. Otherwise, there would be no coming to be and ceasing to be. If non-existence is not a cause for an effect and produces nothing, then we stand with Parmenides and reject the reality of change. Sabzawari accepts Parmenides’ paradigm at the ontological level of ‘Existence’, but he believes in the reality of change in the realm of ‘Becoming’. The non-existence of clouds is, therefore, not a mere conceptual expression and a property of a negative proposition. It is an existential reality: the non-existence of clouds becomes a cause for the non-existence of rain and the existence of a sunny day. This does not mean that whenever there are clouds there will be rain, but that there is no rain without clouds. In the same manner, the non-existence of oxygen in a room will be fatal. This non-existence cannot be seen as something unreal or merely imagined, because its effect is real. After describing the two kinds of non-existence, namely absolute and determined, Sabzawari accepts no distinction between non-existences insofar as they are non-existences. According to him, one cannot determine the difference between the non-existence of Socrates and that of a chair, or between the non-existence of Socrates and the non-existence of his being Sicilian.
But can a distinction be made between two existences? As explained earlier, Sabzawari believes that existence is univocal and homogeneous; it is shared by all existents. This makes the existence of Socrates and that of a chair identical. The differences and diversity arise, as Sabzawari reckons, when their universal determinations or quiddities are analysed. There is no real distinction between them as far as existence is concerned. Their distinction lies in their quiddity, which is conceptual and corresponds to nothing outside human thinking: ‘The “quiddities” by themselves are different from each other, and multiple, and spread the dust of multiplicity throughout “existence”, for “existence” becomes multiple in a certain way through the multiplication of its subjects, just as “existence” is the very centre about which turns the sphere of unity’.19 Sabzawari also advocates the idea that non-existence came to the world with the human intellect. This idea is interesting and reminiscent of Jean-Paul Sartre’s analysis of nothingness in his major philosophical work Being and Nothingness (1943), where he describes nothingness as an aspect of consciousness, which is born with it and can also be experienced.28 In the same manner, Sabzawari states: ‘Our intellect has the power to represent the “non-existence” of itself. Thus the intellect is necessarily qualified by “existence” and “non-existence”. And also it has the power of representing the “non-existence” of others, namely other external “existents”, so that the latter must likewise be qualified by “existence” and “non-existence”’.29 The human intellect, unlike other kinds of existents such as a table, is aware of its own deficiency. This awareness refers to the non-existence of some aspects of its being. The human intellect is born with this deficiency, and endeavours constantly to accomplish itself.
Sartre also holds the view that nothingness is made to be and ‘appears within the limits of a human expectation’.30 In this way the non-existence of a table in the room is a factor experienced by the human intellect. I experience this non-existence in the world; without my presence there would be no non-existence. I also understand my own existence as lacking, or, as Sartre says, ‘I am not what I am and I am what I am not’.31 It should be remembered that the apprehension of non-existence by the human intellect does not make non-existence subjective or only conceptual. What Sabzawari claims here is also true of existence, which, like non-existence, is revealed to the human intellect. The existence as well as the non-existence of a table is revealed to my consciousness equally, but that does not mean that they cannot be without my existence in the room. This is not solipsism; the presence of the human intellect is a condition for the revelation of both existence and non-existence. It is I myself, not a non-conscious entity like the chair, that experience the existence and non-existence of the table in the room. Moreover, since the division of non-existence follows the division of existence, and non-existence is understood in relation to existence, non-existence is either prior or posterior to existence. In the case of something coming into being, non-existence precedes existence and becomes prior to it, whereas in ceasing to be, non-existence becomes posterior to existence. In both cases, in coming into existence and in ceasing to be, non-existence has an ontological significance and an impact on existence and its dynamic character. In addition, we conclude that ‘existence’, which is seen as the principle of unity, also becomes the ground for diversity in the realm of ‘Becoming’.
For Suhrawardi’s arguments against the principality of Existence see: Shahab al-Din Suhrawardi, Hikmat al-Ishraq (The Philosophy of Illumination), translated by John Walbridge and Hossein Ziai, Provo, Utah: Brigham Young University Press, 1999, pp. 44–46. Seyyed Hossein Nasr, Sadr al-Din Shirazi and his Transcendent Theosophy, Tehran: Institute for Humanities and Cultural Studies, 1997, p. 69. Mulla Sadra, al-Asfar al-Arba‘a, vol. 1, Beirut: Dar Ihya’ al-Turath al-‘Arabi, 1999, pp. 68–69. Mulla Sadra, al-Asfar, vol. 1, p. 432. Mulla Ali Jamshid Nuri (d. 1830) was a great scholar and profound Sadrian philosopher who studied at Mazandaran and Qazvin, and settled in Isfahan. He taught the philosophy of Mulla Sadra and the mystic ideas of Shaykh Ahmad Ahsa‘i, the founder of the Shaykhi Sufi order. He was also the teacher of Mulla Ismail Isfahani, Mulla Abdullah Zunuzi, Mulla Agha Qazvini, Muhammad Rida Qumshahi and Mulla Hadi Sabzawari. The Qajar ruler Fath Ali Shah founded Khan Marvi’s school and invited Mulla Ali Nuri to teach philosophy in Tehran. It seems that Mulla Ali Nuri did not accept the offer but sent Mulla Abdullah Zunuzi, one of his students, to undertake this responsibility while he himself stayed in Isfahan. Mulla Hadi Sabzawari also studied philosophy with Mulla Ali Nuri for ten years. Henry Corbin, History of Islamic Philosophy, London and New York, in association with Islamic Publications for the Institute of Islamic Studies, 1993, p. 351. See also: Seyyed Hossein Nasr, ‘The Metaphysics of Sadr al-Din Shirazi and Islamic Philosophy in Qajar Iran’, in Qajar Iran: Political, Social and Cultural Change 1800–1925, (eds.) Edmund Bosworth and Carole Hillenbrand, Edinburgh: Edinburgh University Press, 1983, pp. 190–91. Mulla Sadra, al-Masha‘ir, translated by Parviz Morewedge, New York: SSIP, 1992, section 57, p. 30.
Shahab al-Din Suhrawardi (1154–1191), another Muslim philosopher, who advocated the principality of essence and is known as the founder of Illuminationism, held a doctrine of illuminative knowledge by presence. Mulla Hadi Sabzawari, The Metaphysics of Sabzawari, translated from the Arabic by Mehdi Mohaghegh and Toshihiko Izutsu, New York: Caravan Books, 1977, p. 33. This book is commonly known as Sharh-i manzumah (Commentary on a Philosophical Poem). The commentary, entitled Ghurar al-fara’id, is divided into seven headings. Each heading deals with one aspect of Sabzawari’s philosophy. They are further divided into chapters and sections. Mulla Hadi Sabzawari, The Metaphysics of Sabzawari, p. 31. Mulla Sadra, al-Masha‘ir, pp. 7–8. Martin Heidegger, Being and Time, translated by John Macquarrie and Edward Robinson, London: Blackwell, 1992, p. 23. I have discussed the problem of the definition of ‘Existence’ in the philosophy of Mulla Sadra and Martin Heidegger in my book From Essence to Being: The Philosophy of Mulla Sadra and Martin Heidegger, London: ICAS Press, 2010, pp. 101–6. These predicables are relations between a universal term and a subject in a proposition. Aristotle mentioned four of them: genus, specific difference, property and contingent accident. Later, Porphyry (234–c. 305 AD) added species as the fifth predicable. Mulla Hadi Sabzawari, The Metaphysics of Sabzawari, p. 32. Mulla Hadi Sabzawari, The Metaphysics of Sabzawari, p. 35. Toshihiko Izutsu, The Fundamental Structure of Sabzawari’s Metaphysics, Tokyo: Keio University Press, 1971, p. 51. Mulla Hadi Sabzawari, The Metaphysics of Sabzawari, p. 43. Mulla Hadi Sabzawari, The Metaphysics of Sabzawari, p. 44. Mulla Hadi Sabzawari, The Metaphysics of Sabzawari, p. 37. Mulla Sadra, Al-Asfar al-Arba‘a, vol. 1, introduction by Shaykh Muhammad Rida al-Muzafar, Beirut: Dar Ihya’ al-Turath al-‘Arabi, 1999, p. 59.
Martin Heidegger defined the ontological difference as the differentiation between Being and beings in The Basic Problems of Phenomenology, translated by Albert Hofstadter, Indiana University Press, 1982, pp. 17, 72 and 78. Mulla Hadi Sabzawari, The Metaphysics of Sabzawari, p. 54. Martin J. Henn, Parmenides of Elea, a verse translation with interpretative essays and commentary to the text, London: Praeger, 2003, p. 26. Aristotle, ‘Metaphysics’, Book Kappa, 12: 10–20, and ‘Categories’, 3b 24–32, in The Complete Works of Aristotle. Aristotle also explained his ideas on change and movement in ‘Physics’, Book 5, 1 and 2. Mulla Hadi Sabzawari, The Metaphysics of Sabzawari, pp. 69–70. Mulla Hadi Sabzawari, The Metaphysics of Sabzawari, pp. 75–77. These rationalist Muslim theologians were the members of the Mu‘tazila school founded in Basrah and Baghdad by some pupils of Hasan al-Basri (642–728) who had seceded from him. The Mu‘tazila attempted to interpret religion in the light of human reason. They strongly advocated the Unity of God (al-tawhid) by denying the reality and eternity of the Divine attributes, and believed in Divine Justice. For that they were called the people of Justice and Unity of God (Ahl al-‘Adl wa al-Tawhid). They also advocated the doctrine of free will. Al-Shahrastani, Nihayat, Cairo, 1960, p. 132. Mulla Hadi Sabzawari, The Metaphysics of Sabzawari, p. 79. Jean-Paul Sartre, Being and Nothingness: An Essay on Phenomenological Ontology, translated by Hazel E. Barnes, London: Routledge, reprinted 1996, p. 7. Mulla Hadi Sabzawari, The Metaphysics of Sabzawari, p. 84. Jean-Paul Sartre, Being and Nothingness, p. 7. Sartre also believes that nothingness provides the ground for negation and negative propositions, and not vice versa (see p. 19). Jean-Paul Sartre, Being and Nothingness, p. 79. Copyright information © Springer Science+Business Media B.V. 2011
Search tips Search criteria Results 1-25 (1211264) Clipboard (0) Related Articles 1.  Telomeric NAP1L4 and OSBPL5 of the KCNQ1 Cluster, and the DECORIN Gene Are Not Imprinted in Human Trophoblast Stem Cells  PLoS ONE  2010;5(7):e11595. Genomic imprinting of the largest known cluster, the Kcnq1/KCNQ1 domain on mChr7/hChr11, displays significant differences between mouse and man. Of the fourteen transcripts in this cluster, imprinting of six is ubiquitous in mice and humans, however, imprinted expression of the other eight transcripts is only found in the mouse placenta. The human orthologues of the latter eight transcripts are biallelically expressed, at least from the first trimester onwards. However, as early development is less divergent between species, placental specific imprinting may be present in very early gestation in both mice and humans. Methodology/Principal Findings Human embryonic stem (hES) cells can be differentiated to embryoid bodies and then to trophoblast stem (EB-TS) cells. Using EB-TS cells as a model of post-implantation invading cytotrophoblast, we analysed allelic expression of two telomeric transcripts whose imprinting is placental specific in the mouse, as well as the ncRNA KCNQ1OT1, whose imprinted expression is ubiquitous in early human and mouse development. KCNQ1OT1 expression was monoallelic in all samples but OSBPL5 and NAP1L4 expression was biallelic in EB-TS cells, as well as undifferentiated hES cells and first trimester human fetal placenta. DCN on hChr12, another gene imprinted in the mouse placenta only, was also biallelically expressed in EB-TS cells. The germline maternal methylation imprint at the KvDMR was maintained in both undifferentiated hES cells and EB-TS cells. The question of placental specific imprinting in the human has not been answered fully. 
Using a model of human trophoblast very early in gestation we show a lack of imprinting of two telomeric genes in the KCNQ1 region and of DCN, whose imprinted expression is placental specific in mice, providing further evidence to suggest that humans do not exhibit placental specific imprinting. The maintenance of both differential methylation of the KvDMR and monoallelic expression of KCNQ1OT1 indicates that the region is appropriately regulated epigenetically in vitro. Human gestational load is less than in the mouse, resulting in reduced need for maternal resource competition, and therefore maybe also a lack of placental specific imprinting. If genomic imprinting exists to control fetal acquisition of maternal resources driven by the placenta, placenta-specific imprinting may be less important in the human than the mouse. PMCID: PMC2904374  PMID: 20644730 2.  Bisphenol A Exposure Disrupts Genomic Imprinting in the Mouse  PLoS Genetics  2013;9(4):e1003401. Author Summary BPA is a widely used compound to which humans are exposed, and recent studies have demonstrated the association between exposure and adverse developmental outcomes in both animal models and humans. Unfortunately, exact mechanisms of BPA–induced health abnormalities are unclear, and elucidation of these relevant biological pathways is critical for understanding the public health implication of exposure. Recently, increasing data have demonstrated the ability of BPA to induce changes in DNA methylation, suggesting that epigenetic mechanisms are relevant. In this work, we study effects of BPA exposure on expression and regulation of imprinted genes in the mouse. Imprinted genes are regulated by differential DNA methylation, and they play critical roles during fetal, placental, and postnatal development. 
We have found that fetal exposure to BPA at physiologically relevant doses alters expression and methylation status of imprinted genes in the mouse embryo and placenta, with the latter tissue exhibiting the more significant changes. Additionally, abnormal imprinting is associated with defective placental development. Our data demonstrate that BPA exposure may perturb fetal and postnatal health through epigenetic changes in the embryo as well as through alterations in placental development. PMCID: PMC3616904  PMID: 23593014 Genomic imprinting is an important epigenetic process involved in regulating placental and foetal growth. Imprinted genes are typically associated with differentially methylated regions (DMRs) whereby one of the two alleles is DNA methylated depending on the parent of origin. Identifying imprinted DMRs in humans is complicated by species- and tissue-specific differences in imprinting status and the presence of multiple regulatory regions associated with a particular gene, only some of which may be imprinted. In this study, we have taken advantage of the unbalanced parental genomic constitutions in triploidies to further characterize human DMRs associated with known imprinted genes and identify novel imprinted DMRs. By comparing the promoter methylation status of over 14,000 genes in human placentas from ten diandries (extra paternal haploid set) and ten digynies (extra maternal haploid set) and using 6 complete hydatidiform moles (paternal origin) and ten chromosomally normal placentas for comparison, we identified 62 genes with apparently imprinted DMRs (false discovery rate <0.1%). Of these 62 genes, 11 have been reported previously as DMRs that act as imprinting control regions, and the observed parental methylation patterns were concordant with those previously reported. 
We demonstrated that novel imprinted genes, such as FAM50B, as well as novel imprinted DMRs associated with known imprinted genes (for example, CDKN1C and RASGRF1) can be identified by using this approach. Furthermore, we have demonstrated how comparison of DNA methylation for known imprinted genes (for example, GNAS and CDKN1C) between placentas of different gestations and other somatic tissues (brain, kidney, muscle and blood) provides a detailed analysis of specific CpG sites associated with tissue-specific imprinting and gestational age-specific methylation. DNA methylation profiling of triploidies in different tissues and developmental ages can be a powerful and effective way to map and characterize imprinted regions in the genome. PMCID: PMC3154142  PMID: 21749726 Developmental biology  2011;353(2):420-431. A subset of imprinted genes in the mouse have been reported to show imprinted expression that is restricted to the placenta, a short-lived extra-embryonic organ. Notably these so-called 'placental-specific' imprinted genes are expressed from both parental alleles in embryo and adult tissues. The placenta is an embryonic-derived organ that is closely associated with maternal tissue and as a consequence, maternal contamination can be mistaken for maternal-specific imprinted expression. The complexity of the placenta, which arises from multiple embryonic lineages, poses additional problems in accurately assessing allele-specific repressive epigenetic modifications in genes that also show lineage-specific silencing in this organ. These problems require that extra evidence be obtained to support the imprinted status of genes whose imprinted expression is restricted to the placenta. We show here that the extra-embryonic visceral yolk sac (VYS), a nutritive membrane surrounding the developing embryo, shows a similar 'extra-embryonic-lineage-specific' pattern of imprinted expression. 
We present an improved enzymatic technique for separating the bilaminar VYS and show that this pattern of imprinted expression is restricted to the endoderm layer. Finally, we show that VYS 'extra-embryonic-lineage-specific' imprinted expression is regulated by DNA methylation in a similar manner as shown for genes showing multi-lineage imprinted expression in extra-embryonic, embryonic and adult tissues. These results show that the VYS is an improved model for studying the epigenetic mechanisms regulating extra-embryonic-lineage-specific imprinted expression. PMCID: PMC3081948  PMID: 21354127 genomic imprinting; placenta; yolk sac; non-coding RNA; insulator 5.  Identification of the Imprinted KLF14 Transcription Factor Undergoing Human-Specific Accelerated Evolution   PLoS Genetics  2007;3(5):e65. Imprinted genes are expressed in a parent-of-origin manner and are located in clusters throughout the genome. Aberrations in the expression of imprinted genes on human Chromosome 7 have been suggested to play a role in the etiologies of Russell-Silver Syndrome and autism. We describe the imprinting of KLF14, an intronless member of the Krüppel-like family of transcription factors located at Chromosome 7q32. We show that it has monoallelic maternal expression in all embryonic and extra-embryonic tissues studied, in both human and mouse. We examine epigenetic modifications in the KLF14 CpG island in both species and find this region to be hypomethylated. In addition, we perform chromatin immunoprecipitation and find that the murine Klf14 CpG island lacks allele-specific histone modifications. Despite the absence of these defining features, our analysis of Klf14 in offspring from DNA methyltransferase 3a conditional knockout mice reveals that the gene's expression is dependent upon a maternally methylated region. Due to the intronless nature of Klf14 and its homology to Klf16, we suggest that the gene is an ancient retrotransposed copy of Klf16. 
By sequence analysis of numerous species, we place the timing of this event after the divergence of Marsupialia, yet prior to the divergence of the Xenarthra superclade. We identify a large number of sequence variants in KLF14 and, using several measures of diversity, we determine that there is greater variability in the human lineage with a significantly increased number of nonsynonymous changes, suggesting human-specific accelerated evolution. Thus, KLF14 may be the first example of an imprinted transcript undergoing accelerated evolution in the human lineage. Author Summary Imprinted genes are expressed in a parent-of-origin manner, where one of the two inherited copies of the imprinted gene is silenced. Aberrations in the expression of these genes, which generally regulate growth, are associated with various developmental disorders, emphasizing the importance of their discovery and analysis. In this study, we identify a novel imprinted gene, named KLF14, on human Chromosome 7. It is predicted to bind DNA and regulate transcription and was shown to be expressed from the maternally inherited chromosome in all human and mouse tissues examined. Surprisingly, we did not identify molecular signatures generally associated with imprinted regions, such as DNA methylation. Additionally, the identification of numerous DNA sequence variants led to an in-depth analysis of the gene's evolution. It was determined that there is greater variability in KLF14 in the human lineage, when compared to other primates, with a significantly increased number of polymorphisms encoding changes at the protein level, suggesting human-specific accelerated evolution. As the first example of an imprinted transcript undergoing accelerated evolution in the human lineage, we propose that the accumulation of polymorphisms in KLF14 may be aided by the silencing of the inactive allele, allowing for stronger selection. PMCID: PMC1865561  PMID: 17480121

6.  DNMT1 and AIM1 Imprinting in human placenta revealed through a genome-wide screen for allele-specific DNA methylation  BMC Genomics  2013;14:685. PMCID: PMC3829101  PMID: 24094292 Genomic imprinting; Placenta; Next generation sequencing; Differentially Methylated Region (DMR); DNMT1; AIM1

7.  Characterisation of marsupial PHLDA2 reveals eutherian specific acquisition of imprinting  PMCID: PMC3170258  PMID: 21854573

8.  BMC Genetics  2010;11:25. PMCID: PMC2871261  PMID: 20403199

9.  The Importance of Imprinting in the Human Placenta  PLoS Genetics  2010;6(7):e1001015. PMCID: PMC2895656  PMID: 20617174

10.  Inter- and Intra-Individual Variation in Allele-Specific DNA Methylation and Gene Expression in Children Conceived using Assisted Reproductive Technology  PLoS Genetics  2010;6(7):e1001033. Epidemiological studies have reported a higher incidence of rare disorders involving imprinted genes among children conceived using assisted reproductive technology (ART), suggesting that ART procedures may be disruptive to imprinted gene methylation patterns. We examined intra- and inter-individual variation in DNA methylation at the differentially methylated regions (DMRs) of the IGF2/H19 and IGF2R loci in a population of children conceived in vitro or in vivo. We found substantial variation in allele-specific methylation at both loci in both groups. Aberrant methylation of the maternal IGF2/H19 DMR was more common in the in vitro group, and the overall variance was also significantly greater in the in vitro group. We estimated the number of trophoblast stem cells in each group based on approximation of the variance of the binomial distribution of IGF2/H19 methylation ratios, as well as the distribution of X chromosome inactivation scores in placenta. Both of these independent measures indicated that placentas of the in vitro group were derived from fewer stem cells than the in vivo conceived group.
Both IGF2 and H19 mRNAs were significantly lower in placenta from the in vitro group. Although average birth weight was lower in the in vitro group, we found no correlation between birth weight and IGF2 or IGF2R transcript levels or the ratio of IGF2/IGF2R transcript levels. Our results show that in vitro conception is associated with aberrant methylation patterns at the IGF2/H19 locus. However, very little of the inter- or intra-individual variation in H19 or IGF2 mRNA levels can be explained by differences in maternal DMR DNA methylation, in contrast to the expectations of current transcriptional imprinting models. Extraembryonic tissues of embryos cultured in vitro appear to be derived from fewer trophoblast stem cells. It is possible that this developmental difference has an effect on placental and fetal growth. Author Summary We have screened a population of children conceived in vitro for epigenetic alterations at two loci that carry parent-of-origin specific methylation marks. We made the observation that epigenetic variability was greater in extraembryonic tissues than embryonic tissues in both groups, as has also been demonstrated in the mouse. The greater level of intra-individual variation in extraembryonic tissues of the in vitro group appears to result from these embryos having fewer trophoblast stem cells. We also made the unexpected observation that variability in parental origin-dependent epigenetic marking was poorly correlated with gene expression. In fact, inter-individual variation in IGF2 transcript level is so high that the presumed two-fold difference in IGF2 mRNA between proper transcriptional imprinting and complete loss of imprinting would account for less than 5% of the total population variance.
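The trophoblast stem-cell estimate described in this abstract rests on a simple binomial-variance argument: if a placenta derives from N founder cells, each independently adopting a given epigenetic state with probability p, the observed per-placenta ratios have variance roughly p(1-p)/N, which can be inverted to estimate N. A minimal sketch, with an illustrative function and hypothetical numbers (not the study's data or code):

```python
# Hedged illustration (not the authors' code): per-placenta methylation ratios
# behave like binomial proportions over N founder cells, so
# Var(ratio) ~ p*(1-p)/N and N ~ p*(1-p)/Var(ratio).
import statistics

def estimate_founder_cells(ratios):
    """Rough N = p*(1-p) / Var(ratios), assuming binomial sampling."""
    p = statistics.mean(ratios)
    return p * (1 - p) / statistics.variance(ratios)

# Hypothetical methylation ratios (illustrative numbers only):
in_vivo = [0.48, 0.52, 0.50, 0.47, 0.53, 0.51, 0.49, 0.50]
in_vitro = [0.40, 0.60, 0.35, 0.55, 0.65, 0.45, 0.30, 0.58]

print(estimate_founder_cells(in_vivo))   # tight ratios -> many founder cells
print(estimate_founder_cells(in_vitro))  # wide ratios -> few founder cells
```

Larger variance in the ratios yields a smaller founder-cell estimate, which is the direction of the difference the abstract reports for the in vitro group.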
Given this level of variability in the expression of an imprinted gene, the presumed operation of “parental conflict” as the selective force acting to maintain imprinted gene expression at the IGF2/H19 locus in the human should be revisited. PMCID: PMC2908687  PMID: 20661447

11.  Assessment of genomic imprinting of SLC38A4, NNAT, NAP1L5, and H19 in cattle  BMC Genetics  2006;7:49. At present, few imprinted genes have been reported in cattle compared to human and mouse. Comparative analysis of expression and imprinting status is a powerful tool for investigating the biological significance of genomic imprinting and studying the regulation mechanisms of imprinted genes. The objective of this study was to assess the imprinting status and pattern of expression of the SLC38A4, NNAT, NAP1L5, and H19 genes in bovine tissues. A polymorphism-based approach was used to assess the imprinting status of four bovine genes in a total of 75 tissue types obtained from 12 fetuses and their dams. In contrast to mouse Slc38a4, which is imprinted in a tissue-specific manner, we found that SLC38A4 is not imprinted in cattle and is expressed in all adult tissues examined. Two single nucleotide polymorphisms (SNPs) were identified in NNAT and used to distinguish between monoallelic and biallelic expression in fetal and adult tissues. The two transcripts of NNAT showed paternal expression like their orthologues in human and mouse. However, in contrast to human and mouse, NNAT was expressed in a wide range of tissues, both fetal and adult. Expression analysis of NAP1L5 in five heterozygous fetuses showed that the gene was paternally expressed in all examined tissues, in contrast to mouse where imprinting is tissue-specific. H19 was found to be maternally expressed like its orthologues in human, sheep, and mouse. This is the first report on the imprinting status of SLC38A4 and NAP1L5, and on the expression patterns of the two NNAT transcripts, in cattle.
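The polymorphism-based approach above reduces to a simple decision: an animal heterozygous for a SNP in genomic DNA shows both alleles in cDNA when expression is biallelic, and essentially one allele when expression is monoallelic (consistent with imprinting). A minimal sketch with a hypothetical read-count threshold (an assumption for illustration, not the study's criterion):

```python
# Hedged sketch of a polymorphism-based imprinting call.  An individual that is
# heterozygous at a SNP in genomic DNA should show BOTH alleles in its cDNA if
# the gene is biallelically expressed, and essentially ONE allele if expression
# is monoallelic.  The 0.9 major-allele threshold is an illustrative assumption.
def classify_expression(ref_reads, alt_reads, threshold=0.9):
    """Call 'monoallelic' vs 'biallelic' from cDNA allele counts."""
    total = ref_reads + alt_reads
    if total == 0:
        return "no data"
    major_fraction = max(ref_reads, alt_reads) / total
    return "monoallelic" if major_fraction >= threshold else "biallelic"

# Hypothetical cDNA read counts at a heterozygous SNP:
print(classify_expression(98, 2))    # one allele dominates -> "monoallelic"
print(classify_expression(55, 45))   # both alleles present -> "biallelic"
```

Determining the silenced allele's parental origin additionally requires comparing the expressed allele against the dam's genotype, which is why the study genotyped fetuses together with their dams.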
It is of interest that the imprinting of NAP1L5, NNAT, and H19 appears to be conserved between mouse and cow, although the tissue distribution of expression differs. In contrast, the imprinting of SLC38A4 appears to be species-specific. PMCID: PMC1629023  PMID: 17064418

12.  At Least Ten Genes Define the Imprinted Dlk1-Dio3 Cluster on Mouse Chromosome 12qF1  PLoS ONE  2009;4(2):e4352. Genomic imprinting is an exception to Mendelian genetics in that imprinted genes are expressed monoallelically, dependent on parental origin. In mammals, imprinted genes are critical in numerous developmental and physiological processes. Aberrant imprinted gene expression is implicated in several diseases including Prader-Willi/Angelman syndromes and cancer. Methodology/Principal Findings To identify novel imprinted genes, transcription profiling was performed on two uniparentally derived cell lines, androgenetic and parthenogenetic primary mouse embryonic fibroblasts. A maternally expressed transcript termed Imprinted RNA near Meg3/Gtl2 (Irm) was identified and its expression studied by Northern blotting and whole-mount in situ hybridization. The imprinted region that contains Irm shows a parent-of-origin effect in three mammalian species, including at the sheep callipyge locus. In mice and humans, both maternal and paternal uniparental disomies (UPD) cause embryonic growth and musculoskeletal abnormalities, indicating that both alleles likely express essential genes. To catalog all imprinted genes in this chromosomal region, twenty-five mouse mRNAs in a 1.96 Mb span were investigated for allele-specific expression. Ten imprinted genes were elucidated. The imprinting of three paternally expressed protein coding genes (Dlk1, Peg11, and Dio3) was confirmed. Seven noncoding RNAs (Meg3/Gtl2, Anti-Peg11, Meg8, Irm/“Rian”, AK050713, AK053394, and Meg9/Mirg) are characterized by exclusive maternal expression.
Intriguingly, the majority of these noncoding RNA genes contain microRNAs and/or snoRNAs within their introns, as do their human orthologs. Of the 52 identified microRNAs that map to this region, six are predicted to negatively regulate Dlk1, suggesting an additional mechanism for interactions between allelic gene products. Since several previous studies relied heavily on in silico analysis and RT-PCR, our findings from Northern blots and cDNA cloning clarify the genomic organization of this region. Our results expand the number of maternally expressed noncoding RNAs whose loss may be responsible for the phenotypes associated with mouse pUPD12 and human pUPD14 syndromes. PMCID: PMC2632752  PMID: 19194500

13.  The Parental Non-Equivalence of Imprinting Control Regions during Mammalian Development and Evolution  PLoS Genetics  2010;6(11):e1001214. PMCID: PMC2987832  PMID: 21124941

14.  The Evolution of the DLK1-DIO3 Imprinted Domain in Mammals  PLoS Biology  2008;6(6):e135. A comprehensive, domain-wide comparative analysis of genomic imprinting between mammals that imprint and those that do not can provide valuable information about how and why imprinting evolved. The imprinting status, DNA methylation, and genomic landscape of the Dlk1-Dio3 cluster were determined in eutherian, metatherian, and prototherian mammals including tammar wallaby and platypus. Imprinting across the whole domain evolved after the divergence of eutherian from marsupial mammals and in eutherians is under strong purifying selection. The marsupial locus, at 1.6 megabases, is double the size of the eutherian locus due to the accumulation of LINE repeats. Comparative sequence analysis of the domain in seven vertebrates identified evolutionarily conserved regions common to particular sub-groups and to all vertebrates.
The emergence of Dlk1-Dio3 imprinting in eutherians has occurred on the maternally inherited chromosome and is associated with region-specific resistance to expansion by repetitive elements and the local introduction of noncoding transcripts including microRNAs and C/D small nucleolar RNAs. A recent mammal-specific retrotransposition event led to the formation of a completely new gene only in the eutherian domain, which may have driven imprinting at the cluster. Author Summary Mammals have two copies of each gene in their somatic cells, and most of these gene pairs are regulated and expressed simultaneously. A fraction of mammalian genes, however, is subject to imprinting—a chemical modification that marks a gene according to its parental origin, so that one parent's copy is expressed while the other parent's copy is silenced. How and why this process evolved is the subject of much speculation. Here we have shown that all the genes in one genomic region, Dlk1-Dio3, which are imprinted in placental mammals such as mouse and human, are not imprinted in marsupial (wallaby) or monotreme (platypus) mammals. This is in contrast to a small number of other imprinted genes that are imprinted in marsupials and other therian mammals and indicates that imprinting arose at each genomic domain at different stages of mammalian evolution. We have compared the sequence of the Dlk1-Dio3 region between seven vertebrate species and identified sequences that are differentially represented in mammals that imprint compared to those that do not. Our data indicate that once imprinted gene regulation is acquired in a domain, it becomes evolutionarily constrained to remain unchanged. A comparative analysis of genomic imprinting between mammals that imprint and those that don't has provided insights into how and why imprinting evolved. PMCID: PMC2408620  PMID: 18532878

15.  Epigenetic and transcriptional features of the novel human imprinted lncRNA GPR1AS suggest it is a functional ortholog to mouse Zdbf2linc  Epigenetics  2013;8(6):635-645. Long non-coding RNAs (lncRNAs), transcribed from the intergenic regions of animal genomes, play important roles in key biological processes. In mice, Zdbf2linc was recently identified as an lncRNA isoform of the paternally expressed imprinted Zdbf2 gene. The functional role of Zdbf2linc remains undefined, but it may control parent-of-origin-specific expression of protein-coding neighbors through epigenetic modification in cis, similar to imprinted Nespas, Kcnq1ot1 and Airn lncRNAs. Here, we identified a novel imprinted long-range non-coding RNA, termed GPR1AS, in the human GPR1-ZDBF2 intergenic region. Although GPR1AS contains no human ZDBF2 exons, this lncRNA is transcribed in the antisense orientation from the GPR1 intron to a secondary, differentially methylated region upstream of the ZDBF2 gene (ZDBF2 DMR), similar to mouse Zdbf2linc. Interestingly, GPR1AS/Zdbf2linc is exclusively expressed in human/mouse placenta with paternal-allele-specific expression and maternal-allele-specific promoter methylation (GPR1/Gpr1 DMR). The paternal-allele-specific methylation of the secondary ZDBF2 DMR was established in human placentas as well as in somatic lineages. Meanwhile, the ZDBF2 gene showed stochastic paternal-allele-specific expression, possibly methylation-independent, in placental tissues. Overall, we demonstrated that epigenetic regulation mechanisms in the imprinted GPR1-GPR1AS-ZDBF2 region were well-conserved between human and mouse genomes without the high sequence conservation of the intergenic lncRNAs. Our findings also suggest that lncRNAs with highly conserved epigenetic and transcriptional regulation across species arose by divergent evolution from a common ancestor, even if they do not have identical exon structures.
PMCID: PMC3857343  PMID: 23764515 lncRNA; genomic imprinting; DNA methylation; antisense RNA; placenta

16.  The IG-DMR and the MEG3-DMR at Human Chromosome 14q32.2: Hierarchical Interaction and Distinct Functional Properties as Imprinting Control Centers  PLoS Genetics  2010;6(6):e1000992. Human chromosome 14q32.2 harbors the germline-derived primary DLK1-MEG3 intergenic differentially methylated region (IG-DMR) and the postfertilization-derived secondary MEG3-DMR, together with multiple imprinted genes. Although previous studies in cases with microdeletions and epimutations affecting both DMRs and paternal/maternal uniparental disomy 14-like phenotypes argue for a critical regulatory function of the two DMRs for the 14q32.2 imprinted region, the precise role of the individual DMR remains to be clarified. We studied an infant with upd(14)pat body and placental phenotypes and a heterozygous microdeletion involving the IG-DMR alone (patient 1) and a neonate with upd(14)pat body, but no placental phenotype and a heterozygous microdeletion involving the MEG3-DMR alone (patient 2). The results generated from the analysis of these two patients imply that the IG-DMR and the MEG3-DMR function as imprinting control centers in the placenta and the body, respectively, with a hierarchical interaction for the methylation pattern in the body governed by the IG-DMR. To our knowledge, this is the first study demonstrating an essential long-range imprinting regulatory function for the secondary DMR. Author Summary Genomic imprinting is a process causing genes to be expressed in a parent-of-origin specific manner—some imprinted genes are expressed from maternally inherited chromosomes and others from paternally inherited chromosomes. Imprinted genes are often located in clusters regulated by regions that are differentially methylated according to their parental origin.
The human chromosome 14q32.2 imprinted region harbors the germline-derived primary DLK1-MEG3 intergenic differentially methylated region (IG-DMR) and the postfertilization-derived secondary MEG3-DMR, together with multiple imprinted genes. Perturbed dosage of these imprinted genes, for example in patients with paternal and maternal uniparental disomy 14, causes distinct phenotypes. Here, through analysis of patients with microdeletions recapitulating some or all of the uniparental disomy 14 phenotypes, we show that the IG-DMR acts as an upstream regulator for the methylation pattern of the MEG3-DMR in the body but not in the placenta. Importantly, in the body, the MEG3-DMR functions as an imprinting control center. To our knowledge, this is the first study demonstrating an essential function for the secondary DMR in the regulation of multiple imprinted genes. Thus, the results provide a significant advance in the clarification of underlying epigenetic features that can act to regulate imprinting. PMCID: PMC2887472  PMID: 20585555

17.  Retrotransposon Silencing by DNA Methylation Can Drive Mammalian Genomic Imprinting  PLoS Genetics  2007;3(4):e55. Among mammals, only eutherians and marsupials are viviparous and have genomic imprinting that leads to parent-of-origin-specific differential gene expression. We used comparative analysis to investigate the origin of genomic imprinting in mammals. PEG10 (paternally expressed 10) is a retrotransposon-derived imprinted gene that has an essential role for the formation of the placenta of the mouse. Here, we show that an orthologue of PEG10 exists in another therian mammal, the marsupial tammar wallaby (Macropus eugenii), but not in a prototherian mammal, the egg-laying platypus (Ornithorhynchus anatinus), suggesting its close relationship to the origin of placentation in therian mammals.
We have discovered a hitherto missing link of the imprinting mechanism between eutherians and marsupials because tammar PEG10 is the first example of a differentially methylated region (DMR) associated with genomic imprinting in marsupials. Surprisingly, the marsupial DMR was strictly limited to the 5′ region of PEG10, unlike the eutherian DMR, which covers the promoter regions of both PEG10 and the adjacent imprinted gene SGCE. These results not only demonstrate a common origin of the DMR-associated imprinting mechanism in therian mammals but also provide the first demonstration that DMR-associated genomic imprinting in eutherians can originate from the repression of exogenous DNA sequences and/or retrotransposons by DNA methylation. Author Summary Genomic imprinting is a gene regulatory mechanism controlling parent-of-origin-dependent expression of genes. In eutherians, imprinting is essential for fetal and placental development and defects in this mechanism are the cause of several genetic disorders. In eutherian mammals, genomic imprinting is controlled by differential methylation of the DNA. However, no such methylation-dependent mechanism had been previously identified in association with marsupial imprinting. By comparing the genome of all three extant classes of mammals (eutherians, marsupials, and monotremes), we have investigated the evolution of PEG10 (paternally expressed 10), a retrotransposon-derived imprinted gene that is essential for the formation of the placenta in the mouse. PEG10 was present in a marsupial species, the tammar wallaby, but absent from an egg-laying monotreme species, the platypus. Therefore, PEG10 was inserted into the genome at the time when the placenta and viviparity were evolving in therian mammals. This study has shown that PEG10 is not only imprinted in a marsupial, but that its imprint is regulated by differential methylation, suggesting a common origin for methylation in the therian ancestor.
These results provide direct evidence that retrotransposon insertion can drive the evolution of genomic imprinting in mammals. PMCID: PMC1851980  PMID: 17432937

18.  PLoS ONE  2013;8(4):e59564. PMCID: PMC3620161  PMID: 23593146

19.  Comparative Anatomy of Chromosomal Domains with Imprinted and Non-Imprinted Allele-Specific DNA Methylation  PLoS Genetics  2013;9(8):e1003622. Allele-specific DNA methylation (ASM) is well studied in imprinted domains, but this type of epigenetic asymmetry is actually found more commonly at non-imprinted loci, where the ASM is dictated not by parent-of-origin but instead by the local haplotype. We identified loci with strong ASM in human tissues from methylation-sensitive SNP array data. Two index regions (bisulfite PCR amplicons), one between the C3orf27 and RPN1 genes in chromosome band 3q21 and the other near the VTRNA2-1 vault RNA in band 5q31, proved to be new examples of imprinted DMRs (maternal alleles methylated) while a third, between STEAP3 and C2orf76 in chromosome band 2q14, showed non-imprinted haplotype-dependent ASM. Using long-read bisulfite sequencing (bis-seq) in 8 human tissues we found that in all 3 domains the ASM is restricted to single differentially methylated regions (DMRs), each less than 2 kb. The ASM in the C3orf27-RPN1 intergenic region was placenta-specific and associated with allele-specific expression of a long non-coding RNA. Strikingly, the discrete DMRs in all 3 regions overlap with binding sites for the insulator protein CTCF, which we found selectively bound to the unmethylated allele of the STEAP3-C2orf76 DMR. Methylation mapping in two additional genes with non-imprinted haplotype-dependent ASM, ELK3 and CYP2A7, showed that the CYP2A7 DMR also overlaps a CTCF site. Thus, two features of imprinted domains, highly localized DMRs and allele-specific insulator occupancy by CTCF, can also be found in chromosomal domains with non-imprinted ASM.
Arguing for biological importance, our analysis of published whole genome bis-seq data from hES cells revealed multiple genome-wide association study (GWAS) peaks near CTCF binding sites with ASM. Author Summary Allele-specific DNA methylation (ASM) is a central mechanism of gene regulation in humans, which can influence inter-individual differences in physical and mental traits and disease susceptibility. ASM is mediated either by parental imprinting, in which the repressed copy (allele) of the gene is determined by which type of parent (mother or father) transmitted it or, for a larger number of genes, by the local DNA sequence, independent of which parent transmitted it. Chromosomal regions with imprinted ASM have been well studied, and certain mechanistic principles, including the role of discrete differentially methylated regions (DMRs) and involvement of the insulator protein CTCF, have emerged. However, the molecular mechanisms underlying non-imprinted sequence-dependent ASM are not yet understood. Here we describe our detailed mapping of ASM across 5 gene regions, including two novel examples of imprinted ASM and three gene regions with non-imprinted, sequence-dependent ASM. Our data uncover shared molecular features – small discrete DMRs, and the binding of CTCF to these DMRs, in examples of both types of ASM. Combining ASM mapping with genetic association data suggests that sequence-dependent ASM at CTCF binding sites influences diverse human traits. PMCID: PMC3757050  PMID: 24009515

20.  Epigenetic states and expression of imprinted genes in human embryonic stem cells  World Journal of Stem Cells  2010;2(4):97-102. AIM: To investigate the epigenetic states and expression of imprinted genes in five human embryonic stem cell (hESC) lines derived in Taiwan.
METHODS: The heterozygous alleles of single nucleotide polymorphisms (SNPs) at imprinted genes were analyzed by sequencing genomic DNAs of hESC lines and the monoallelic expression of the imprinted genes was confirmed by sequencing the cDNAs. The expression profiles of 32 known imprinted genes of five hESC lines were determined using Affymetrix human genome U133 plus 2.0 DNA microarray. RESULTS: The heterozygous alleles of SNPs at seven imprinted genes, IPW, PEG10, NESP55, KCNQ1, ATP10A, TCEB3C and IGF2, were identified and the monoallelic expression of these imprinted genes except IGF2 was confirmed. The IGF2 gene was found to be imprinted in hESC line T2 but partially imprinted in line T3 and not imprinted in line T4 embryoid bodies. Ten imprinted genes, namely GRB10, PEG10, SGCE, MEST, SDHD, SNRPN, SNURF, NDN, IPW and NESP55, were found to be highly expressed in the undifferentiated hESC lines and down-regulated in differentiated derivatives. The UBE3A gene was abundantly expressed in undifferentiated hESC lines and further up-regulated in differentiated tissues. The expression levels of the other 21 imprinted genes were relatively low in undifferentiated hESC lines and five of these genes (TP73, COPG2, OSBPL5, IGF2 and ATP10A) were found to be up-regulated in differentiated tissues. CONCLUSION: The epigenetic states and expression of imprinted genes in hESC lines should be thoroughly studied after extended culture and upon differentiation in order to understand epigenetic stability in hESC lines before their clinical applications. PMCID: PMC3097928  PMID: 21607126 DNA microarray; Imprinting; Single nucleotide polymorphism; Human embryonic stem cell

21.  DNA sequence polymorphisms within the bovine guanine nucleotide-binding protein Gs subunit alpha (Gsα)-encoding (GNAS) genomic imprinting domain are associated with performance traits  BMC Genetics  2011;12:4.
Genes which are epigenetically regulated via genomic imprinting can be potential targets for artificial selection during animal breeding. Indeed, imprinted loci have been shown to underlie some important quantitative traits in domestic mammals, most notably muscle mass and fat deposition. In this candidate gene study, we have identified novel associations between six validated single nucleotide polymorphisms (SNPs) spanning a 97.6 kb region within the bovine guanine nucleotide-binding protein Gs subunit alpha gene (GNAS) domain on bovine chromosome 13 and genetic merit for a range of performance traits in 848 progeny-tested Holstein-Friesian sires. The mammalian GNAS domain consists of a number of reciprocally-imprinted, alternatively-spliced genes which can play a major role in growth, development and disease in mice and humans. Based on the current annotation of the bovine GNAS domain, four of the SNPs analysed (rs43101491, rs43101493, rs43101485 and rs43101486) were located upstream of the GNAS gene, while one SNP (rs41694646) was located in the second intron of the GNAS gene. The final SNP (rs41694656) was located in the first exon of transcripts encoding the putative bovine neuroendocrine-specific protein NESP55, resulting in an aspartic acid-to-asparagine substitution at amino acid position 192. SNP genotype-phenotype association analyses indicate that the single intronic GNAS SNP (rs41694646) is associated (P ≤ 0.05) with a range of performance traits including milk yield, milk protein yield, the content of fat and protein in milk, culled cow carcass weight and progeny carcass conformation, measures of animal body size, direct calving difficulty (i.e. difficulty in calving due to the size of the calf) and gestation length. Association (P ≤ 0.01) with direct calving difficulty (i.e. due to calf size) and maternal calving difficulty (i.e. due to the maternal pelvic width size) was also observed at the rs43101491 SNP.
Following adjustment for multiple testing, significant association (q ≤ 0.05) remained only between the rs41694646 SNP and four traits (animal stature, body depth, direct calving difficulty and milk yield). Notably, the single SNP in the bovine NESP55 gene (rs41694656) was associated (P ≤ 0.01) with somatic cell count--an often-cited indicator of resistance to mastitis and overall health status of the mammary system--and previous studies have demonstrated that the chromosomal region to which the GNAS domain maps underlies an important quantitative trait locus for this trait. This association, however, was not significant after adjustment for multiple testing. The three remaining SNPs assayed were not associated with any of the performance traits analysed in this study. Analysis of all pairwise linkage disequilibrium (r2) values suggests that most allele substitution effects for the assayed SNPs observed are independent. Finally, the polymorphic coding SNP in the putative bovine NESP55 gene was used to test the imprinting status of this gene across a range of foetal bovine tissues. Previous studies in other mammalian species have shown that DNA sequence variation within the imprinted GNAS gene cluster contributes to several physiological and metabolic disorders, including obesity in humans and mice. Similarly, the results presented here indicate an important role for the imprinted GNAS cluster in underlying complex performance traits in cattle such as animal growth, calving, fertility and health. These findings suggest that GNAS domain-associated polymorphisms may serve as important genetic markers for future livestock breeding programs and support previous studies that candidate imprinted loci may act as molecular targets for the genetic improvement of agricultural populations.
In addition, we present new evidence that the bovine NESP55 gene is epigenetically regulated as a maternally expressed imprinted gene in placental and intestinal tissues from 8- to 10-week-old bovine foetuses. PMCID: PMC3025900  PMID: 21214909

22.  PMCID: PMC3038880  PMID: 21281512

23.  Genomic Imprinting in the Arabidopsis Embryo Is Partly Regulated by PRC2  PLoS Genetics  2013;9(12):e1003862. Author Summary In most cells nuclear genes are present in two copies, with one maternal and one paternal allele. Usually, the two alleles share the same fate regarding their activity, with both copies being active or both being silent. An exception to this rule is genes that are regulated by genomic imprinting, where only one allele is expressed and the other one remains silent depending on the parent it was inherited from. The two alleles are equal in terms of their DNA sequence but carry different epigenetic marks distinguishing them. Genomic imprinting evolved independently in mammals and flowering plants. In mammals, genes regulated by genomic imprinting are expressed in a wide range of tissues including the embryo and the placenta. In plants, genomic imprinting has been primarily described for genes expressed in the endosperm, a nutritive tissue in the seed with a function similar to that of the mammalian placenta. Here, we show that some genes are also regulated by genomic imprinting in the embryo of the model plant Arabidopsis thaliana. An epigenetic silencing complex, the Polycomb Repressive Complex 2 (PRC2), partly regulates genomic imprinting in the embryo. Interestingly, embryonic imprints seem to be erased during late embryo or early seedling development. PMCID: PMC3854695  PMID: 24339783

24.  High-Resolution Analysis of Parent-of-Origin Allelic Expression in the Arabidopsis Endosperm  PLoS Genetics  2011;7(6):e1002126.
Genomic imprinting is an epigenetic phenomenon leading to parent-of-origin specific differential expression of maternally and paternally inherited alleles. In plants, genomic imprinting has mainly been observed in the endosperm, an ephemeral triploid tissue derived after fertilization of the diploid central cell with a haploid sperm cell. In an effort to identify novel imprinted genes in Arabidopsis thaliana, we generated deep sequencing RNA profiles of F1 hybrid seeds derived after reciprocal crosses of Arabidopsis Col-0 and Bur-0 accessions. Using polymorphic sites to quantify allele-specific expression levels, we could identify more than 60 genes with potential parent-of-origin specific expression. By analyzing the distribution of DNA methylation and epigenetic marks established by Polycomb group (PcG) proteins using publicly available datasets, we suggest that for maternally expressed genes (MEGs) repression of the paternally inherited alleles largely depends on DNA methylation or PcG-mediated repression, whereas repression of the maternal alleles of paternally expressed genes (PEGs) predominantly depends on PcG proteins. While maternal alleles of MEGs are also targeted by PcG proteins, such targeting does not cause complete repression. Candidate MEGs and PEGs are enriched for cis-proximal transposons, suggesting that transposons might be a driving force for the evolution of imprinted genes in Arabidopsis. In addition, we find that MEGs and PEGs evolve significantly faster than other genes in the genome. In contrast to the predominant location of mammalian imprinted genes in clusters, cluster formation was only detected for a few MEGs and PEGs, suggesting that clustering is not a major requirement for imprinted gene regulation in Arabidopsis. Author Summary Genomic imprinting violates the Mendelian rules of inheritance, which state functional equality of maternally and paternally inherited alleles.
Imprinted genes are expressed in a parent-of-origin-dependent manner, implicating an epigenetic asymmetry of maternal and paternal alleles. Genomic imprinting occurs in mammals and flowering plants. In both groups of organisms, nourishing of the progeny depends on ephemeral tissues, the placenta and the endosperm, respectively. In plants, genomic imprinting predominantly occurs in the endosperm, which is derived after fertilization of the diploid central cell with a haploid sperm cell. In this study we identify more than 60 potentially imprinted genes and show that there are different epigenetic mechanisms causing maternal and paternal-specific gene expression. We show that maternally expressed genes are regulated by DNA methylation or Polycomb group (PcG)-mediated repression, while paternally expressed genes are predominantly regulated by PcG proteins. From an evolutionary perspective, we also show that imprinted genes are associated with transposons and are more rapidly evolving than other genes in the genome. Many MEGs and PEGs encode transcriptional regulators, implicating important functional roles of imprinted genes for endosperm and seed development. PMCID: PMC3116908  PMID: 21698132

25.  Evolution of the CDKN1C-KCNQ1 imprinted domain

Genomic imprinting occurs in both marsupial and eutherian mammals. The CDKN1C and IGF2 genes are both imprinted and syntenic in the mouse and human, but in marsupials only IGF2 is imprinted. This study examines the evolution of features that, in eutherians, regulate CDKN1C imprinting. Despite the absence of imprinting, CDKN1C protein was present in the tammar wallaby placenta. Genomic analysis of the tammar region confirmed that CDKN1C is syntenic with IGF2. However, there are fewer LTR and DNA elements in the region and in intron 9 of KCNQ1. In addition, there are fewer LINEs in the tammar compared with human and mouse.
While the CpG island in intron 10 of KCNQ1 and promoter elements could not be detected, the antisense transcript KCNQ1OT1 that regulates CDKN1C imprinting in human and mouse is still expressed. CDKN1C has a conserved function, likely antagonistic to IGF2, in the mammalian placenta that preceded its acquisition of imprinting. CDKN1C resides in synteny with IGF2, demonstrating that imprinting of the two genes did not occur concurrently to balance maternal and paternal influences on the growth of the placenta. The expression of KCNQ1OT1 in the absence of CDKN1C imprinting suggests that antisense transcription at this locus preceded imprinting of this domain. These findings demonstrate the stepwise accumulation of control mechanisms within imprinted domains and show that CDKN1C imprinting cannot be due to its synteny with IGF2 or with its placental expression in mammals. PMCID: PMC2427030  PMID: 18510768
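The reciprocal-cross approach described in the Arabidopsis endosperm study above — quantifying allele-specific read counts at polymorphic sites and calling candidate MEGs and PEGs — can be sketched in code. This is a simplified illustration, not the authors' pipeline: the function names and cutoffs are assumptions. One real detail is worth encoding: the endosperm is triploid (two maternal genomes to one paternal), so the null expectation without imprinting is a maternal read fraction near 2/3, not 1/2.

```python
def maternal_fraction(mat_reads, pat_reads):
    """Fraction of allele-specific reads assigned to the maternal allele."""
    total = mat_reads + pat_reads
    return mat_reads / total if total else None

def classify(cross_a, cross_b, meg_cut=0.95, peg_cut=0.35):
    """Classify a gene from two reciprocal crosses (e.g. Col-0 x Bur-0 and
    Bur-0 x Col-0), each given as (maternal_reads, paternal_reads).

    In triploid endosperm the no-imprinting expectation is ~2/3 maternal,
    so the cutoffs bracket that value.  Thresholds here are illustrative,
    not taken from the paper.
    """
    fa = maternal_fraction(*cross_a)
    fb = maternal_fraction(*cross_b)
    if fa is None or fb is None:
        return "no data"
    if fa >= meg_cut and fb >= meg_cut:
        return "candidate MEG"   # maternally expressed in both directions
    if fa <= peg_cut and fb <= peg_cut:
        return "candidate PEG"   # paternally expressed in both directions
    return "biallelic"
```

Requiring the bias in both cross directions, as above, is what separates imprinting from accession-specific (cis-regulatory) expression differences, which would flip with the direction of the cross.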
Tai Chi & Taoism

The cosmographic 'tai-chi'.

Lao Tsu, the founder of Taoism, wrote:

Yield and overcome; Bend and be straight. -- Tao Te Ching (22)

He who stands on tiptoe is not steady. He who strides cannot maintain the pace. -- Tao Te Ching (24)

Returning is the motion of the Tao. Yielding is the way of the Tao. -- Tao Te Ching (40)

What is firmly established cannot be uprooted. What is firmly grasped cannot slip away. -- Tao Te Ching (54)

Stiff and unbending is the principle of death. Gentle and yielding is the principle of life. Thus an army without flexibility never wins a battle. A tree that is unbending is easily broken. The hard and strong will fall. The soft and weak will overcome. -- Tao Te Ching (76)

There are some interesting inspirations for the movement philosophy of Tai Chi within the writings of Chuang Tzu, for example: "The pure man of old slept without dreams and woke without anxiety. He ate without indulging in sweet tastes and breathed deep breaths. The pure man draws breaths from the depths of his heels, the multitude only from their throats." "[The sage] would not lean forward or backward to accommodate [things]. This is called tranquility on disturbance, (which means) that it is especially in the midst of disturbance that tranquility becomes perfect."

Talisman of the Jade Lady.

This approach is reflected in the entire movement philosophy of Tai Chi Chuan. There is, moreover, a long tradition of Taoist monks practicing exercises. Some of these were referred to as tai-yin or Taoist Breathing. Exactly what these were and what their origins were is obscure, but they are mentioned in Chinese chronicles as early as 122 B.C. Then in the sixth century A.D. Bodhidharma (called Ta Mo in Chinese) came to the Shao-Lin Monastery and, seeing that the monks were in poor physical condition from too much meditation and too little exercise, introduced his Eighteen Form Lohan Exercise.
This approach gave rise to the Wei Chia or 'outer-extrinsic' forms of exercise. Later, in the fifteenth century A.D., the purported founder of Tai Chi Chuan, the monk Chang San-feng, was honoured by the Emperor Ying-tsung with the title of chen-jen, or 'spiritual man who has attained the Tao and is no longer ruled by what he sees, hears or feels.' This indicates that already at this time there was a close association between the philosophy of Taoism and the practice of Tai Chi. In the Ming dynasty (14th to 17th centuries), Wang Yang-ming, a leading philosopher, preached a philosophy which was a mixture of Taoism and Ch'an Buddhism and which had certain associations with movement systems. In any event, the principles of yielding, softness, centeredness, slowness, balance, suppleness and rootedness are all elements of Taoist philosophy that Tai Chi has drawn upon in its understanding of movement, both in relation to health and also in its martial applications. One can see these influences (of softness and effortlessness) in the names of certain movements in the Tai Chi Form. Moreover, the contemplation and appreciation of nature, which are central features of Taoist thought, seem to be reflected in the genesis of many Tai Chi movements. The story comes to us that Chang San-feng watched a fight between a bird and a snake and in this event saw how the soft and yielding could overcome the hard and inflexible. Particularly significant here is the reference to the White Crane (the Manchurian Crane, Grus japonensis), whose red crest made it an important symbol for Taoist alchemists. Certain features of Taoist alchemy and talismanic symbolism have also penetrated the Tai Chi forms. As part of their contemplation of nature the Taoists observed the heavens and were keen students of astronomy and astrology. Movements of the Tai Chi Form such as Meditating Under the Protection of the Big Dipper
reflect this Taoist astrological concern. Symbolism was a potent force in Taoist thinking. Taoist magic diagrams were regarded as potent talismans having great command over spiritual forces. They invoked the harmonizing influence of yin-yang and Eternal Change; the Divine Order of Heaven, Earth and Mankind; and the workings of the Universe through the principle of the Five Elements. These were symbolized by the Five Sacred Mountains (Taishan, Hengshan [Hunan], Songshan, Huashan and Hengshan [Hopei]), central places of Taoist development and pilgrimage. Thus it is no surprise to find that the symbolism of names has, in important ways, infiltrated the forms of Tai Chi. There was a numerological component to this symbolism as well. The number '5' has a special mystical significance to Taoists (and to Chinese in general). There are the symbolic five mountains, five elements, five colours, five planets, five virtues, five emotions, five directions, etc., all of which have a mystic significance. Hence we see five Repulse Monkeys or five Cloud Hands in the Tai Chi form. There are many instances where the numbers '1', '3', '5' and '7' figure prominently in the structure of Tai Chi.
The Real Monuments Men

February 07, 2014

With the arrival of Hollywood's "Monuments Men" movie we were intrigued by this story and did a little digging. Founded in 1943, the Monuments, Fine Arts, and Archives program, under the Civil Affairs and Military Government departments of the Allied forces, was established to protect cultural property in war zones during and after World War II. Made up of about 350 service members, its mission was to safeguard historic and cultural monuments from war damage and from Allied, Russian, and Nazi looting. Under Hitler's authority, the Nazis amassed hundreds of thousands of antiquities from the occupied nations and stored them in key locations, like the Musée Jeu de Paume in Paris and the Nazi headquarters in Munich. As the Allied forces advanced on the Axis powers, Germany began storing the artworks in salt mines and caves for protection from bombing raids. The Kunstschutz was the Nazi force deemed responsible for the majority of art theft from 1933 until the end of World War II. Items stolen included gold, currency, paintings, ceramics, books, and religious treasures. Many of these items were recovered by the Allies immediately following the war, but many are still missing, such as Raphael's Portrait of a Young Man. The second Roberts Commission tasked the Monuments Men with traveling to previously Nazi-occupied territories to uncover the art caches. Toward the end of the war the Monuments Men were challenged with keeping Allied and Russian forces from plundering artworks and sending them stateside to family. The Monuments Men even resorted to marking artworks with white tape, normally used to mark unexploded land mines. The identifiable works of art were sent back to the countries from which they were taken, where the governments of each nation would assume the responsibility of returning the stolen artworks.

via 607VISUAL
What is the Difference Between Domestic Battery and Battery in San Diego?

California Penal Code § 242 defines a battery as a willful and unlawful use of force by one person against another. California Penal Code § 243(e)(1) adds certain situations in which the battery will be enhanced and will yield a harsher range of potential consequences. One of the circumstances in which a battery charge would be enhanced is if it is committed against a person with whom the defendant has what that section defines as a domestic relationship. Domestic battery covers a wider range of relationships, much more than domestic violence as it is defined under California Penal Code § 273.5. The courts take domestic relationships very seriously, as they are relationships based on trust and vulnerability. When the person being charged is accused of injuring a person with whom they share a domestic relationship, the court will consider the factors very carefully and, upon sentencing, impose a higher penalty than for battery under CPC § 242. One relationship the court will hold as needing special protection is between spouses. The statute also includes a former spouse; the person need not be the current spouse. The domestic battery statute also includes a cohabitant. A cohabitant can be anyone a person is residing with. This could be a family member, a roommate or a friend. If injury is caused to a person that lives with the person being charged, it is likely it will be a domestic battery charge. Unlike CPC § 273.5, the domestic battery statute also includes a fiancé, or someone with whom the person being charged has previously had a dating or engagement relationship. This goes beyond the relationships described in the domestic violence statute. It includes any person with whom the person being charged may have had a romantic relationship. Domestic battery also extends to the mother or father of a child shared with the person being charged. This is also the case under the domestic violence statute.
Any charge that arises from a domestic relationship must be carefully considered. There is room for false accusations, especially because many emotions are often involved. Because false accusations are common, each element must be proven beyond a reasonable doubt before a court of law will find anyone guilty of the charge. The consequences of a domestic battery are a fine of up to $2,000 and/or imprisonment for up to one year in county jail. Whether it is a higher fine or a longer time in jail will depend on the specific facts of the case and the person's prior criminal history. With such a wide range, there is room for negotiation. A knowledgeable San Diego domestic violence attorney can prepare a powerful argument that ensures the person faces the lowest possible sentence if the case is not reduced or dismissed.
Date: 11/03/2012    Duedate: 11/16/2012

DM-93    TURN-363

This Weeks Top Honors: (93-9359) [6-1-0,64]

Chartered Recognition Leader: ZOE YATES, THE LOW ROAD 4 (1610), (93-9349) [4-6-0,46]
Unchartered Recognition Leader: LEAPING LILAC, PARKS AND REC (1611), (93-9359) [6-1-0,64]

Popularity Leader: THE LOW ROAD 4 (1610), (93-9345) [7-3-0,24]
This Weeks Favorite: BASH BROS SUITCASE (1619), (93-9410) [1-1-0,6]

Team Name Point Gain:
1. COLLEGE FOOTBALL (1618)  42
2. PARKS AND REC (1611)  35
3. BASH BROS SUITCASE (1619)  23
4. DRAGON DISCIPLES (1608)  0

Chartered Team: THE LOW ROAD 4 (1610)
Unchartered Team:

The Top Teams:
1/ 1*BASH BROS SUITCAS (1619)  7  3 0  70.0    1/ 2 THE LOW ROAD 4 (1610)  8 5 0
2/ 6*COLLEGE FOOTBALL (1618)  6  4 1  60.0    2/ 3*BASH BROS SUITCAS (1619)  7 3 0
3/ 2*PARKS AND REC (1611)  17 14 3  54.8    3/ 4*COLLEGE FOOTBALL (1618)  6 4 1
4- 3*DRAGON DISCIPLES (1608)  4  4 0  50.0    4/ 1*PARKS AND REC (1611)  5 7 1
5- 4*OLYMPUS MAXIMUS (1612)  6  7 2  46.2    5- 5*OLYMPUS MAXIMUS (1612)  0 2 0
6/ 5 THE LOW ROAD 4 (1610)  19 28 0  40.4    6- 7*PASSIVE GAMERS II (1315)  0 1 0
7- 7*PASSIVE GAMERS II (1315)  0  1 0  0.0    7- 6*DRAGON DISCIPLES (1608)  0 1 0

TEAM SPOTLIGHT

What the Style? should I make my warrior? all 10 fighting styles. This article hopes to demystify some of the questions above. two. I list the styles in approximate ease to learn and succeed with. The 'big (weapons are also not equal) higher your warriors advance in that game, the more the game favors attack and defense skills. But during your warrior's career in basic, all of the skills are fairly well balanced. When I talk about tourney success, I am talking about winning it all! ABs, Ls, and STKs win more tourneys than the rest of the styles. Also realize that these hide their weaknesses. tourney styles in basic. However, because of their mostly one sided nature, they are than defense and parry are poor and they have the worst attack of any style. Moderate endurance conservation and can use all of the 'big four' well suited.
Usually considered a 2nd tier tourney style, and the large numbers of ABs in tourneys these days makes life harder for TPs. However, many managers have strategies and designs to use for TPs to minimize their vulnerabilities to the most popular style around. TPs have shown themselves competitive at virtually all levels of the game except the very top (years away for a new warrior). They cannot be bonused in parry.

the most popular pure offensive style to use for Tank Hybrids. They learn attack and are one of the weakest tourney styles at virtually all levels, but are better than average against ABs early on. Past graduation, their usefulness is limited. They cannot be bonused in decise or riposte.

the dominant style past graduation. attack. They are tied with lungers for highest overall starting skills. They burn tourney levels but are 2nd tier. They can only be bonused in attack or parry.

style. Somewhere between middle and high for endurance burn, they have access to all of the 'Big Four'. They have been successful at most tourney levels but at best can be considered a 2nd tier tourney style.

parry and decise. They have very low starting skills, including attack. They conserve endurance perhaps better than any other style and can use all the 'Big Four.' They are shunned by tourney managers because of their poor learning in basic and their typical horrible favorites in ADM. I rank them higher than the bottom three because in basic, parry and decise are very, very useful and the other three are harder to make win early on.

and riposte. With 21 deftness, they are second only to Ls in best attack/defense. They conserve endurance perhaps equal to or better than PSs and can use LO and SC. their tendency to learn so much attack and riposte (which without other skills are not a pure winning formula). aren't many good wind up strategies for them (every one seems to run optimally slightly differently). considered a poor man's version of the lunger.
Assur and His Bashers

'add' to it are: is a great place to start.

Profile of a Style

this could be said to be "what comes naturally." gained from using a weapon unsuited to the style in question.

Greataxe       ST 13  SZ 5    WT 9   DF 11
Greatsword     ST 15  SZ 9    WT 9   DF 11
Halberd        ST 17  SZ 9    WT 9   DF 9
Large Shield   ST 11  SZ 7    WT 5   DF 5
Mace           ST 13  SZ any  WT 3   DF 5
Maul           ST 15  SZ 9    WT 5   DF 7
Medium Shield  ST 9   SZ any  WT 5   DF 5
Morningstar    ST 13  SZ any  WT 7   DF 13
Quarterstaff   ST 11  SZ 9    WT 11  DF 11
Warflail       ST 11  SZ any  WT 7   DF 5
Warhammer      ST 13  SZ any  WT 5   DF 7

Assurnasirbanapal offers the following alternate requirements for some of these:

Halberd        ST 17  SZ 9    WT 9   DF 11
Large Shield   ST 11  SZ 7    WT 3   DF 5
Morningstar    ST 13  SZ any  WT 7   DF 11
Warflail       ST 11  SZ any  WT 5   DF 5

frustrating. Take my advice and DON'T DO IT. disadvantage of unsuited style. even worse than doomed.

Madoc             ST 11  SZ 11  WT 13  DF 9   Mace
Wanda the Blonda  ST 17  SZ 8   WT 9   DF 15  Warflail
Kenda Teegue      ST 19  SZ 7   WT 11  DF 11  Greatsword
Al Kore           ST 13  SZ 14  WT 11  DF 13  Quarterstaff
Broken Nose       ST 17  SZ 18  WT 11  DF 9   Quarterstaff
Hogar             ST 13  SZ 13  WT 21  DF 7   Maul
Ski Mask          ST 13  SZ 15  WT 15  DF 9   Maul
Crabby Appleton   ST 19  SZ 11  WT 13  DF 6   Warflail
Lissette          ST 14  SZ 9   WT 11  DF 15  Greataxe
Velendeis         ST 9   SZ 14  WT 13  DF 7   Maul
Hoftalj           ST 21  SZ 14  WT 11  DF 7   Mace
Cadal             ST 14  SZ 15  WT 15  DF 11  Mace
Caramella         ST 13  SZ 13  WT 14  DF 11  Quarterstaff
Lulu              ST 17  SZ 11  WT 15  DF 9   Morningstar
Franklin          ST 15  SZ 6   WT 5   DF 17  Morningstar
Bam Bam           ST 13  SZ 14  WT 11  DF 9   Medium Shield
Dune              ST 19  SZ 17  WT 4   DF 3   Halberd
Old Maid          ST 17  SZ 7   WT 15  DF 11  Halberd
Joe               ST 15  SZ 10  WT 15  DF 13  Medium Shield
Claudius          ST 19  SZ 13  WT 10  DF 7   Quarterstaff
Ekkar             ST 17  SZ 15  WT 13  DF 7   Mace
Jomon             ST 11  SZ 17  WT 11  DF 15  Greatsword
Dijanna           ST 17  SZ 12  WT 17  DF 7   Mace
Khora the Small   ST 15  SZ 9   WT 17  DF 11  Warhammer
Sweet Jorja       ST 13  SZ 13  WT 15  DF 7   Quarterstaff
Tarok             ST 7   SZ 18  WT 11  DF 9   Warflail
Kelan ten Salth   ST 14  SZ 15  WT 17  DF 11  Greatsword

shield?
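The stat-requirement table above lends itself to a simple suitability check. The sketch below encodes the first table's values and tests a warrior against them. Two assumptions worth flagging: each listed value is treated as a minimum the warrior must meet or exceed, and "any" is treated as no requirement at all — the article does not spell either rule out, and the function name is ours.

```python
# Minimum stats per weapon, copied from the table above ("any" -> None).
WEAPON_REQS = {
    "Greataxe":      {"ST": 13, "SZ": 5,    "WT": 9,  "DF": 11},
    "Greatsword":    {"ST": 15, "SZ": 9,    "WT": 9,  "DF": 11},
    "Halberd":       {"ST": 17, "SZ": 9,    "WT": 9,  "DF": 9},
    "Large Shield":  {"ST": 11, "SZ": 7,    "WT": 5,  "DF": 5},
    "Mace":          {"ST": 13, "SZ": None, "WT": 3,  "DF": 5},
    "Maul":          {"ST": 15, "SZ": 9,    "WT": 5,  "DF": 7},
    "Medium Shield": {"ST": 9,  "SZ": None, "WT": 5,  "DF": 5},
    "Morningstar":   {"ST": 13, "SZ": None, "WT": 7,  "DF": 13},
    "Quarterstaff":  {"ST": 11, "SZ": 9,    "WT": 11, "DF": 11},
    "Warflail":      {"ST": 11, "SZ": None, "WT": 7,  "DF": 5},
    "Warhammer":     {"ST": 13, "SZ": None, "WT": 5,  "DF": 7},
}

def well_suited(warrior, weapon):
    """True if the warrior meets every listed minimum for the weapon.

    `warrior` is a dict with "ST", "SZ", "WT", "DF" keys; a None
    requirement (the table's "any") is always satisfied.
    """
    return all(req is None or warrior[stat] >= req
               for stat, req in WEAPON_REQS[weapon].items())
```

For example, a warrior with ST 13, WT 5 and DF 7 meets the Warhammer line but misses the Morningstar's WT 7 and DF 13 minimums.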
Pile 'em on here. then don't. variety. No more. Favorite rhythms. Sometimes it's expedient to ignore a warrior's favorite. Bashers favor a higher Offensive Effort than Activity Level, often considerably. Kill Desire. There is no convincing evidence to support the hypothesis that warriors or unpromising beginning--

Profile of a Gladiator

Turo the Brick, Lord Protector and Pain in the Rear

him straight to the Dark Arena. According to his overview, even after those stat raises, he Is not very bright told anyone right off Stands around making himself a target, which we noticed been running him lately as follows: 9 8 7 6 7 8 10 3 2 1 1 1 1 1. He graduated with no ratings at all. Yes, really. his kin.

Khora the Small, Lady Protector, Contender for the Throne in Lirin Kiv (ADM 107)

think. Maybe she should train that up....

Both warriors move with snake-like speed around each other. The weapons lock together in a test of strength. Phetmolge is moving constantly without pause! He sweeps his large shield in a sudden unexpected assault! A spectator exclaims, "Brilliant!" Phetmolge smiles briefly. Khora parries the blow with her war hammer. Phetmolge is struck in the belly. She launches a brilliant attack with her war hammer. Phetmolge stops the blow with his large shield. He sidesteps, trying to throw his opponent off balance. He launches a brilliant attack with his war hammer! Khora is hit on the right hip! It is a tremendous blow! Khora winces, obviously feeling great pain. She mutters a desperate prayer and is stopped by the herald. until she is Inducted into Primus.

The Middle Way

Question, turn 403: Answers, turn 404: -- Generalissimo Puerco

I believe tactics grant benefits that can't always be translated into more use of appropriate tactics. -- Generalissimo Puerco

and only six types of skills! -- Leeta

Question, turn 404: you are using your favorite. -- Hanibal's Q.O.W.

Answers, turn 405: Generalissimo Puerco gives me the impression it only helps attack.
-- Adie course, criticals increase. -- Kennelworth the cycles. -- Leeta

DM 9 ZUKAL (turn 708): VENTRIIE of DARK WARDS (Dark Warden, mgr.)
DM 12 RIZTAB (turn 710): HERMIAS of DREAMERS (Sleepy, mgr.)
DM 13 DULLENS (turn 651): FLOYDRIFFIC of ARCANE ROYALTY (Destitute Noble, mgr.)
DM 15 MALCORN (turn 704): TONES OF HOME of BLIND MELON (Howlin' Wolf, mgr.)
DM 16 WILLAF (turn 706): SNAKE of SERPENTS HOLD 114 (Khisanth, mgr.)
DM 19 ZUWAYZA (turn 699): DONT PAY RENT of BASH BROS HOUSEMATE (Assurnasirbanipal)
DM 28 MORYA (turn 351): GARROT of STEEL AND SILK (Michael Eldritch, mgr.)
DM 29 LAPUR (turn 696): SOLWYN of MID EARTH (Visionist, mgr.)
DM 31 CHIMLEVTAL (turn 349): BUMBLE of BAD NEWS BEES (Stinger, mgr.)
DM 32 ARVAT (turn 692): INSIDIOUS of DEATH BY DESIGN (Bubba Ganoosh, mgr.)
DM 33 NIATOLI ISLAND (turn 689): MULCHER of TOOL SHED (Leprechaun, mgr.)
DM 35 MURSKA (turn 681): MAIRIE AP GAUR of CHILDREN OF LLYR (Jorja, mgr.)
DM 43 VEASTIAN (turn 640): KHARIJANH of THE FAMILY (Jorja, mgr.)
DM 45 STORMCROWE (turn 327): SAPPHIRA of DARQUE FORCES (Master Darque, mgr.)
DM 47 NORTH FORK (turn 321): BIG MAC of FAST FOOD (Madwand, mgr.)
DM 50 SNOWBOUND (turn 308): 4-H of LIFE IMPACTS (Coach, mgr.)
DM 56 ROCANIS (turn 571): LINKIN LOGGER of INCONSISTENT FURY (Banthius, mgr.)
DM 60 ARADI (turn 556): TICK ... TOCK of UNNAMEABLES (Howlin' Wolf, mgr.)
DM 61 JURINE (turn 538): HORSE D'OEUVRE of SHEWISH BUFFET (One Armed Bandit, mgr.)
DM 65 DAL SHANG (turn 533): LINDIS TYL SARRO of SAND DANCERS (Jorja, mgr.)
DM 73 ERINIKA (turn 256): JERMOME BETTIS of SUE'S BOYS (Crip, mgr.)
DM 74 DAYLA KIV (turn 494): KINVER TEN PALAN of SAND DANCERS (Jorja, mgr.)
DM 78 LIN TIRIAN (turn 480): TIGER JONSALMIN of SAND DANCERS (Jorja, mgr.)
ADM 103 FREE BLADES (turn 597): LORD FIREFLASH of LUROCIANS II (The Greek Guy, mgr.)

Top Teams

DM 9 ZUKAL (turn 708): DARK WARDS (Dark Warden, mgr.)
DM 12 RIZTAB (turn 710): DREAMERS (Sleepy, mgr.)
DM 13 DULLENS (turn 651): CARDOW HUNTERS (Jorja, mgr.)
DM 15 MALCORN (turn 704): GREENWARDENS (Jorja, mgr.)
DM 16 WILLAF (turn 706): GOLDEN GLADIATORS (Midas, mgr.)
DM 17 ALJAFIR (turn 698): MIDDLE WAY (Jorja, mgr.)
DM 19 ZUWAYZA (turn 699): VISIONS V2 (?, mgr.)
DM 28 MORYA (turn 351): THE WOLF PACK (Wolf, mgr.)
DM 29 LAPUR (turn 696): MID EARTH (Visionist, mgr.)
DM 31 CHIMLEVTAL (turn 349): BAD NEWS BEES (Stinger, mgr.)
DM 32 ARVAT (turn 692): DRAGON RIDERS (Daikkan, mgr.)
DM 33 NIATOLI ISLAND (turn 689): SUMMERTEETH (Jugger, mgr.)
DM 35 MURSKA (turn 681): CHILDREN OF LLYR (Jorja, mgr.)
DM 43 VEASTIAN (turn 640): THE FAMILY (Jorja, mgr.)
DM 45 STORMCROWE (turn 327): DARQUE FORCES (Master Darque, mgr.)
DM 47 NORTH FORK (turn 321): BASH BROS PARK (Assurnasirbanipal, mgr.)
DM 50 SNOWBOUND (turn 308): LIFE IMPACTS (Coach, mgr.)
DM 56 ROCANIS (turn 571): MIDDLE WAY 20 (Jorja, mgr.)
DM 60 ARADI (turn 556): RED DOG GANG (Spot, mgr.)
DM 61 JURINE (turn 538): STAR WARS (Sir Jessie Jest, mgr.)
DM 65 DAL SHANG (turn 533): SAND DANCERS (Jorja, mgr.)
DM 73 ERINIKA (turn 256): WORLDWIDE GORE (Crip, mgr.)
DM 74 DAYLA KIV (turn 494): SAND DANCERS (Jorja, mgr.)
DM 75 JADE MOUNTAIN (turn 488): SAND DANCERS (Floyd, mgr.)
DM 78 LIN TIRIAN (turn 480): SHEWISH BUFFET (One Armed Bandit, mgr.)
ADM 103 FREE BLADES (turn 597): DIRT DEVILS et al (The Dark One, mgr.)

Recent Graduates

DM 9 ZUKAL (turn 708): KATANA of BATTEL BLADES (Madwand, mgr.)
JED TARRAN DAPP of BLUE MOON (Jorja, mgr.)
DM 15 MALCORN (turn 703): GOLDEN STAIRS of GREENWARDENS (Jorja, mgr.)
DM 17 ALJAFIR (turn 697): LT. OLSEN of 82ND ILLINOIS (Otto X, mgr.)
DAWN of ZOMBIE SQUAD 2 (Khisanth, mgr.)
DM 28 MORYA (turn 351): MADELINE of DODGE BULLETS (Jorja, mgr.)
(turn 350): COLGATE SMILE of MY GENERATION (Storm Lord, mgr.)
DM 31 CHIMLEVTAL (turn 348): BUZZ-BUZZ of BAD NEWS BEES (Stinger, mgr.)
(turn 326): ANDROMEDA of DARQUE FORCES (Master Darque, mgr.)
(turn 320): QUEEN OF TARTS of MYSTIC ORC FEAST (slugbait, mgr.)
BROOMSTICK of LAND OF OZ (Oz, mgr.)
DM 50 SNOWBOUND (turn 307): 4-H of LIFE IMPACTS (Coach, mgr.)
DM 60 ARADI (turn 556): BALKO DUNN of MIDDLE WAY 25 (Jorja, mgr.)
DM 74 DAYLA KIV (turn 493): 867-5309 of NUMB-ERS (Sherlock, mgr.)

SPY REPORT

LOW ROAD 4 now holds the crown and they are betting they can keep it. It looks like the guys(?) at COLLEGE FOOTBALL had a good week, as they went 4-1-0 to put them in 3rd place. I guess steroids help! Hey everybody, watch out for HOOSIERS, who flew up 16 points in the rankings after mashing TOMMY SNAITH like a melon. Keep your eye on this guy. And falling like a basher in the top ten was TOMMY SNAITH, who dropped 15 points after a disappointing (to say the least) bout with HOOSIERS. Well, everybody's pal LEAPING LILAC moved her record to 6-1-0 by defeating KUNG-FU MASTER in the Duelmaster's Title Bout and gaining 19 recognition points. Advice to Here's a song for you: Who's afraid of the big, bad BASH BROS SUITCASE? Big bad bad BASH BROS SUITCASE. Tra la la la la! Okay, so I may not be funny, but catch COLLEGE FOOTBALL's act in the arena. Those acrobats seem to be fairly deft at running from BASH BROS SUITCASE. Well, just about everybody wants a piece of QUICK HIT HARRY, who was this week's most challenged warrior.
DUELMASTER                W  L K  POINTS  TEAM NAME
LEAPING LILAC 9359        6  1 0  64      PARKS AND REC (1611)

ADEPTS
AVERAGE JOE 9360          5  2 3  47      PARKS AND REC (1611)
ZOE YATES 9349            4  6 0  46      THE LOW ROAD 4 (1610)

CHALLENGER INITIATES
VINCE YARE 9367           4  2 0  31      THE LOW ROAD 4 (1610)
OVERFILLED 9411           2  0 0  25      BASH BROS SUITCASE (1619)
QUEENIE PAROE 9345        7  3 0  24      THE LOW ROAD 4 (1610)

INITIATES
HOOSIERS 9404             1  1 0  17      COLLEGE FOOTBALL (1618)
ACES 9405                 2  0 1  14      COLLEGE FOOTBALL (1618)
JAYHAWKS 9402             2  0 0  14      COLLEGE FOOTBALL (1618)
NEUTER NELL 9358          3  4 0  13      PARKS AND REC (1611)
OVERTHEWEIGHTLIMIT 9408   2  0 0  13      BASH BROS SUITCASE (1619)
-DOMITIUS 9362            3  1 2  12      OLYMPUS MAXIMUS (1612)
TIGERS 9403               1  1 0  10      COLLEGE FOOTBALL (1618)
RIPPED OPEN 9415          1  0 0  10      BASH BROS SUITCASE (1619)
QUICK HIT HARRY 9356      1  6 0   9      PARKS AND REC (1611)
-DUNCAN 9338              1  1 0   9      DRAGON DISCIPLES (1608)
BROKEN WHEEL 9410         1  1 0   6      BASH BROS SUITCASE (1619)
TOMMY SNAITH 9412         1  1 0   5      THE LOW ROAD 4 (1610)
HOLES 9407                1  1 0   4      BASH BROS SUITCASE (1619)
GAMECOCKS 9401            0  2 0   2      COLLEGE FOOTBALL (1618)
-HONORIA 9363             0  2 0   2      OLYMPUS MAXIMUS (1612)
-VITRUVIUS 9406           0  1 0   1      OLYMPUS MAXIMUS (1612)
-WILMA 7620               0  1 0   1      PASSIVE GAMERS III (1315)
WILLA VARAN 9414          0  1 0   1      THE LOW ROAD 4 (1610)
-METALLUS 9364            0  1 0   1      OLYMPUS MAXIMUS (1612)

THE DEAD            W L K  TEAM NAME                SLAIN BY           TURN  Revenge?
BROKEN ZIPPER 9409  0 1 0  BASH BROS SUITCASE 1619  ACES 9405          362
ASHER 9337          1 1 0  DRAGON DISCIPLES 1608    AVERAGE JOE 9360   362
ANTONIUS 9361       1 1 0  OLYMPUS MAXIMUS 1612     ODIN THORSEN 9377  359   NONE
TYR ODINSEN 9375    0 1 0  TEAM NIETZCHEAN 1614     DOMITIUS 9362      359   NOT REVENGED
SALOME RYAN 9346    1 6 0  THE LOW ROAD 4 1610      DOMITIUS 9362      360   REVENGED

PERSONAL ADS

Overfilled -- Just because you were overloaded was no reason to dump it all on me, you bum! -- Gamecocks (University Of South Carolina)
P.S. Who was that what said the basher is the aimer of the future? Grrrr.
Congrats to Leaping Lilac of Parks And Rec! A fitting and pleasantly smelling Duelmaster! -- Coach Rah Rah

Vitruvius -- Rock, chalk, Jayhawks! -- Jayhawks (Kansas University)

Overtheweightlimit -- You are over a lot of things, and I am embarrassed I lost to you. If you were in the ACC, I would chew you up! -- Tigers (Clemson University)

Holes -- Aren't you lucky? Everybody beats the Hoosiers nowadays. I guess I am glad you joined the party. -- Hoosiers (Indiana University)

Broken Zipper -- I can only say that I badly needed the win. I did not enjoy killing you, but since you were not a basher, I am sure Assur doesn't care. Ta ta. -- Aces (University Of Evansville Purple Aces)

Assur -- You despicable bum, this was our first below .500 start in Noblish and you led the surge with your 3-1 over our College Footballers. You are probably sitting there smirking about having already put Coach Rah Rah's job in jeopardy. May the bluebird of false happiness fly up your nose or other appropriate orifices. -- The Consortium Elders
P.S. We liked your spot last round. It was the most understandable and well written one you have ever done--by far.

Okay, all. Who wants to ask a question? -- Coach Rah Rah

Are we there yet?

Neuter Nell -- Are you a park or a wreck? -- Queenie Paroe

Domitius -- Though that was a random match, it counts as a bloodfeud victory. Good! Salome was an okay person, even if she was a rotten gladiator. -- Zoe Yates
P.S. A suggestion or two: If you have the necessary WL, train up that deficient stat. And drop the off-hand impediment. It isn't helping you any that I can see.

Quick Hit Harry -- Bah. If I'd remembered to bring a book and a sandwich, I could've held you off long enough for you to COLLAPSE. -- Vince Yare the Optimistic

Ah! An Embezzling Scribe. A good beginning for me. -- Tommy Snaith

Wilma -- You should consider the following two weapons: SH and SS. If those don't work, try LO even though you won't be well suited.
I don't know if you are well suited to the SS or not, but it is easier to use than the weapon you were trying (a bad choice for lungers). -- Assur

Even worse than the war flail?

Wilma -- Here is what beat you: Broken Wheel 12-9-14-11-17-10-11 B. He rolled well and will likely do well in his career here in Noblish Island. -- Assur

LAST WEEK'S FIGHTS

GAMECOCKS was bested by OVERFILLED in a 1 minute amateur's Challenge fight.
TIGERS viciously subdued HOLES in a 3 minute brutal beginner's Challenge battle.
ACES defeated QUICK HIT HARRY in a 2 minute Challenge duel.
JAYHAWKS unbelievably bested BROKEN WHEEL in a 4 minute novice's Challenge contest.
LEAPING LILAC bested KUNG-FU MASTER in an action packed 1 minute Title brawl.
VINCE YARE savagely defeated NEUTER NELL in an exciting 3 minute bloody struggle.
QUEENIE PAROE was handily defeated by AVERAGE JOE in a 1 minute gruesome uneven fight.
ZOE YATES viciously subdued THE USEROUS MERCHANT in an action packed 3 minute duel.
HOOSIERS won victory over TOMMY SNAITH in a 1 minute novice's duel.
OVERFILLED overpowered EMBEZZLING SCRIBE in a 1 minute uneven match.
WILLA VARAN was overpowered by RIPPED OPEN in a 1 minute one-sided fight.
BATTLE REPORT

MOST POPULAR         RECORD DURING THE LAST 10 TURNS
STRIKING ATTACK  4   PARRY-RIPOSTE     2 -  1 - 0   67
BASHING ATTACK   4   BASHING ATTACK   52 - 34 - 3   60
SLASHING ATTACK  3   PARRY-STRIKE      7 -  5 - 1   58
AIMED BLOW       3   SLASHING ATTACK  17 - 14 - 2   55
TOTAL PARRY      2   STRIKING ATTACK  25 - 21 - 5   54
WALL OF STEEL    2   WALL OF STEEL     6 -  7 - 0   46
LUNGING ATTACK   1   LUNGING ATTACK    6 -  9 - 0   40
PARRY-LUNGE      0   AIMED BLOW       12 - 19 - 1   39
PARRY-STRIKE     0   TOTAL PARRY       8 - 15 - 1   35

Turn 363 was great if you used:   Not so great if you used:   The fighting styles of the top teams:
LUNGING ATTACK   1 - 0            SLASHING ATTACK  1 - 2      3  BASHING ATTACK
STRIKING ATTACK  4 - 0            AIMED BLOW       0 - 3      2  STRIKING ATTACK
BASHING ATTACK   3 - 1            PARRY-LUNGE      0 - 0      2  SLASHING ATTACK
TOTAL PARRY      1 - 1            PARRY-STRIKE     0 - 0      2  TOTAL PARRY
WALL OF STEEL    1 - 1            PARRY-RIPOSTE    0 - 0      1  LUNGING ATTACK
                                                              1  WALL OF STEEL

TOP WARRIOR OF EACH STYLE
FIGHTING STYLE   WARRIOR             W L K  PNTS  TEAM NAME
LUNGING ATTACK   LEAPING LILAC 9359  6 1 0  64    PARKS AND REC (1611)
STRIKING ATTACK  AVERAGE JOE 9360    5 2 3  47    PARKS AND REC (1611)

The overall popularity leader is QUEENIE PAROE 9345. The most popular warrior this turn was BROKEN WHEEL 9410. The ten other most popular fighters were ZOE YATES 9349, The least popular fighter this week was WILLA VARAN 9414. The other ten least popular fighters were TOMMY SNAITH 9412, QUEENIE PAROE 9345, HOLES 9407, GAMECOCKS 9408, RIPPED OPEN 9415, and HOOSIERS 9404.
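The percentage column in the ten-turn record table above is simply wins over total fights, rounded to the nearest whole number. The sketch below reproduces it; the one assumption is that kills in the W-L-K record are a subset of wins and do not enter the calculation separately, which is consistent with every row in the table (e.g. 52-34-3 gives 52/86 = 60.5%, shown as 60).

```python
def win_pct(wins, losses):
    """Win percentage as shown in the record table: wins over total
    fights, rounded to the nearest whole number.  Kills are assumed to
    be a subset of wins, so they are not a separate term."""
    return round(100 * wins / (wins + losses))
```

Checking a few rows: PARRY-RIPOSTE 2-1 gives 67, BASHING ATTACK 52-34 gives 60, and TOTAL PARRY 8-15 gives 35, all matching the table.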
Article for Newbies

KEY FOR ODDS OF WINNING:
 1 - ALMOST IMPOSSIBLE     6 - BETTER THAN EVEN
 2 - HARDLY LIKELY         7 - GOOD CHANCE
 3 - NOT MUCH              8 - HIGHLY PROBABLE
 4 - SLIGHT CHANCE         9 - MOST DEFINITELY
 5 - 50/50                10 - ALMOST GUARANTEED

Reminder: Left Row vs Top Column

      BA  ST  LU  SL  AB  TP  PS  PL  PR  WS
  BA   5   2   6   6   8   9   7   6   5   6
  ST   8   5   8   9   9   6   7   5   6   4
  LU   4   4   5   6  10   5   7   7   9   6
  SL   4   1   5   5   8   6   8   7   8   9
  AB   2   2   2   4   5  10   6   6   6   7
  TP   3   4   6   4   1   5   7   8   6   5
  PS   3   3   3   4   5   4   5   4   5   4
  PL   5   4   3   3   6   2   7   5   7   6
  PR   7   5   2   2   4   5   6   3   5   4
  WS   5   8   6   2   4   6   7   5   6   5

roundtable and without whose effort little projects like this would prove quite

David Gottwald - Magic Man
The Crew - Monuntial (34)
Rocky's Heroes - North Fork (47)
Tragedy Strikes - Andor (57) (inactive)
Natural Born - Illis (59)
Slow & Easy - Aradi (60)
Omega Squadron - Dragonhead (72)

MAKING YOUR CHALLENGES GO THROUGH

2. Is your handwriting clear?
experienced warriors so they will learn.)
the last two turns, then they can't challenge. There are several things that will help:
still might get your second challenge.
increases your chance of getting your own challenge through.
that they won't be fighting this turn.
issues of the newsletter to find out who the TVs are.
get challenged a lot.
includes any warrior your team fought within the last two turns.
currently at war with anyone.
I hope this will be of some help to you. -- The Rogue She-Puppy

following procedures.
the excitement.
accidentally uncover an unexpected replacement rollup. This is a traumatic experience
go exactly as planned.
to leave a loaded musket nearby.
it is nice to choose an exceptionally violent alternate selection for those
hockey gear.
consecutive war hammer shots to the head... Isn't Duelmasters great?

DM-10 (Kolact) Beermacht
DM-22 (Solven) Steel Warriors
DM-24 (Zorpunt) No Escape

Winning With the Average Warrior

Primus TVs.
the arena! M.A., a proud member of GAPPDA

TWELVE TIPS

I missed.
try the jerk routine!)
can be hurt both striking and defending.)
real "rush!"
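For managers who like to track matchups on a computer, the style-matchup odds table above can be encoded as a simple lookup. This is an illustrative sketch only; the style abbreviations and values come straight from the table, while the function and variable names are invented:

```python
# Encodes the "odds of winning" matchup table (attacker row vs defender
# column). Scale: 1 = almost impossible ... 10 = almost guaranteed.
STYLES = ["BA", "ST", "LU", "SL", "AB", "TP", "PS", "PL", "PR", "WS"]
ODDS_ROWS = [
    [5, 2, 6, 6, 8, 9, 7, 6, 5, 6],   # BA
    [8, 5, 8, 9, 9, 6, 7, 5, 6, 4],   # ST
    [4, 4, 5, 6, 10, 5, 7, 7, 9, 6],  # LU
    [4, 1, 5, 5, 8, 6, 8, 7, 8, 9],   # SL
    [2, 2, 2, 4, 5, 10, 6, 6, 6, 7],  # AB
    [3, 4, 6, 4, 1, 5, 7, 8, 6, 5],   # TP
    [3, 3, 3, 4, 5, 4, 5, 4, 5, 4],   # PS
    [5, 4, 3, 3, 6, 2, 7, 5, 7, 6],   # PL
    [7, 5, 2, 2, 4, 5, 6, 3, 5, 4],   # PR
    [5, 8, 6, 2, 4, 6, 7, 5, 6, 5],   # WS
]
ODDS = {(a, d): ODDS_ROWS[i][j]
        for i, a in enumerate(STYLES)
        for j, d in enumerate(STYLES)}

def odds_of_winning(attacker, defender):
    """Odds (1-10) that the attacker's style beats the defender's style."""
    return ODDS[(attacker, defender)]

# Example: a lunger against an aimed blow is "almost guaranteed" (10)
print(odds_of_winning("LU", "AB"))
```

Reading the matrix row-by-row this way (left row attacks top column) makes it easy to scan a whole team's matchups at once.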
was a long, long time ago.)

THE CONSORTIUM
DM 8 Smithsonian (Curator)
DM 11 Bulldogs (Kennelworth)
DM 20 Animal Farm (Mino)
DM 46 Fandils (Fandil the Wise)
(and many, many more)
The ABC of SPVs

There are many circumstances in which an SPV (special purpose vehicle) may provide a convenient legal structure for a financing arrangement. Favourable tax treatment may be achieved by SPVs being incorporated in and managed from some offshore jurisdictions. Limited liability companies are frequently incorporated in Jersey as SPVs for purposes such as raising money by issuing debt securities, structuring security arrangements ancillary to bank financing, making investments off-balance sheet, tax-driven financing structures and asset repackaging/securitisation transactions. A key consideration in establishing SPVs used in the context of asset repackaging/securitisation programmes is often that the SPV must be and remain “off” the balance sheet of the arranging participants. A frequent structure used to give effect to this is a general charitable trust established by a fiduciary service provider with the intention that the trustees of that trust will hold the entire issued share capital in the SPV, thus separating ownership of the SPV from the arranging parties. This gives rise to the so-called “orphan” company. Whether an orphan structure is appropriate will depend on the originator's objectives and accounting treatment in its jurisdiction. Although the capacity of a Jersey company is not limited by anything contained in its constitution, the directors' authority can be restricted, and the SPV's memorandum of association will usually specifically provide for the purposes for which it has been incorporated as the sole permitted activities of the company. In addition, the SPV will enter into various contractual restrictions under the programme documentation so that the parties to the programme have some assurance that the SPV is tailored solely for the purposes of the programme for which it was created.
As the SPV is only created to provide a clearly defined function in the context of a given securitisation/repackaging arrangement, a good deal of attention has to be paid to the “ring-fencing” of the underlying assets for the benefit of those investors having an interest in the SPV and to limiting recourse against the SPV so that the obligations of the SPV itself (under the funding documentation) are limited in recourse to the assets it has available to discharge those obligations. The “investors” should be taking a risk on the assets acquired by the SPV and not on the SPV itself. This is a key issue for the prospective directors of the SPV, which will typically have a paid-up share capital of only £2. Where a separate SPV is incorporated for the purposes of issuing a single series of securities, the proceeds of which are used to acquire the underlying assets, the security offered to the investors in the securities (and any relevant swap counterparty) may take the form of a security interest over the underlying assets. This will sometimes involve the use of an independent security trustee. Where the same vehicle is being used for various note issues, “ring-fencing” becomes particularly important so that investors in the notes “backing” a particular acquisition of underlying assets have recourse only to those assets. In conjunction with these provisions defining investor recourse, one has to consider the issue of limited recourse from the viewpoint of the SPV itself. Corporate benefit flowing to SPVs in these types of structures can be difficult to find – the SPV is raising money or receiving assets and passing them through to acquire assets or rights, making little return on its role in the transactions. The programme documentation will usually provide for the SPV to make a turn so that there will be a residual amount in it to enable the directors to satisfy themselves as to the corporate benefit issue.
However, these margins are usually small and it will be important for the directors to ensure that they are satisfied that the liabilities of the SPV should never exceed the assets to be made available to the SPV under the terms of the transaction. Unless recourse against the SPV under its funding arrangements is limited to the assets it has available to meet its obligations, it will be taking a credit risk on the underlying assets without any bona fide commercial return flowing to it. This approach to the limited recourse provisions that must be included in the programme documentation complements the desire of the participants to ensure that the SPV is a so-called “bankruptcy remote” vehicle. It is generally accepted that this requires three things: that the SPV be a limited purpose entity; that the parties to the programme undertake not to commence bankruptcy proceedings against the SPV until after repayment of the relevant securities and expiration of any applicable preference period under relevant insolvency laws; and that there be a true sale of the relevant assets to the SPV (which requires analysis of applicable insolvency laws in the jurisdiction of the originator and the SPV to assess the risk of any challenge to set aside the sale on an insolvency of the originator or the SPV, whether as a transaction at an undervalue or on some other basis). Including appropriate limited recourse language helps preserve the bankruptcy-remote nature of the SPV. It will also assist the SPV's directors in considering any possibility of “wrongful trading” arguments where the cash flows may at some future time indicate an inability to source repayment of the funding obligations from the cash flows originating from the underlying assets. This will be a material consideration throughout the life of the programme.
Common in US securitisation structures is language which deems any obligations of the SPV to be extinguished to the extent that it does not have sufficient assets available in accordance with the programme documentation to discharge them. That must surely be a preferred practice in any such structure. It is clearly important that all the programme creditors join into the documentation to contractually agree that treatment. The SPV should not have any other creditors given the limited nature of its proposed functions. This underlines the importance of identifying a responsible service provider to provide the board of the SPV.
A Different 'World Divided'

Can you imagine if the world’s population were divided evenly over the Earth’s regions? Neither could we before this map did the thinking for us.

Savannah Cox
European Journal of Wildlife Research, Volume 59, Issue 4, pp 495–503
Original Paper. DOI: 10.1007/s10344-013-0697-8

V. J. Kontsiotis, D. E. Bakaloudis, A. C. Tsiompanoudis
Department of Wildlife Management and Freshwater Fisheries, School of Forestry and Natural Environment, Aristotle University of Thessaloniki
Department of Forestry and Management of Natural Environment, Technological Educational Institute of Kavala

Cite this article as: Kontsiotis, V.J., Bakaloudis, D.E. & Tsiompanoudis, A.C. Eur J Wildl Res (2013) 59: 495. doi:10.1007/s10344-013-0697-8

Keywords: Productivity · Food shortage · Population growth rate · Structural equation model · Management · Mediterranean ecosystems

The European wild rabbit (Oryctolagus cuniculus) plays a multidimensional role in Mediterranean-type ecosystems (Thompson and King 1994; Devillard et al. 2008). It acts as a disperser of seeds and a primary consumer of plants and seeds and so modifies the native vegetation (see Delibes-Mateos et al. 2008). It is also a digger of soils, thereby improving their physicochemical conditions (Delibes-Mateos et al. 2008; but see Eldridge and Simpson 2002; Eldridge et al. 2006), and is prey for many mammalian and avian predators (Serrano 2000; Ferreras et al. 2011). Finally, it is an important game species (Angulo and Villafuerte 2003; Delibes-Mateos et al. 2011). On the contrary, in several areas where it has been introduced, it remains one of the most important threats to biodiversity and the human economy (see Thompson and King 1994; Manchester and Bullock 2000), competing with ecologically similar livestock and wild species and destroying both native and cultivated vegetation. As a result, efforts are commonly made to reduce its negative impacts (Myers et al. 1994; Thompson and King 1994; Williams et al. 1995; Courchamp et al. 2003).
In the Iberian Peninsula, where it is a native species, its population has declined dramatically due to the release of myxomatosis and the subsequent spread of rabbit hemorrhagic disease (RHD; Moreno et al. 2007), causing a number of reactions in ecosystem functions and processes (see Delibes-Mateos et al. 2008; Lees and Bell 2008; but see Barrio et al. 2010b). By contrast, in the south-eastern Mediterranean basin, it has been introduced on a few Greek islands in the Aegean Sea, but there is little published information concerning its ecology and its impacts on natural ecosystems, or whether it is a pest species or beneficial for the human economy. Rabbits are the main prey for raptorial species (e.g., the Bonelli’s eagle Aquila fasciata and the common buzzard Buteo buteo) and are favored by hunters, while at the same time causing extensive damage to agricultural crops, with a devastating impact on the local economy during the last two decades (Kontsiotis 2011). Given the contradictory views of the European rabbit in its geographic range and its key role in the Mediterranean basin, its strategic management should be based on an understanding of its ecology; in particular, the limiting factors which potentially affect its population growth rate. The estimation of population growth rate (r or pgr) of a target species on certain time scales has been considered the central issue in population dynamics (see Sibly et al. 2003; Sinclair et al. 2006). Consequently, the determinants of pgr are important factors in understanding their influence on the target species population. In this sense, a wildlife manager should understand all those factors in order to manipulate, increase, maintain, or decrease the pgr depending on the status of the species: endangered, game, or pest, respectively (see Krebs 2001). Factors which influence year-to-year pgr may be demographic (fecundity and mortality), mechanistic (food, parasites, predators, etc.), and density-dependent (see Sibly et al.
2003; Korpimäki et al. 2004), and they act in an unpredictable way on the population. In seasonal environments, like those in the Mediterranean region, year-to-year population fluctuation is rather negligible (Béltran 1991; Williams et al. 2007). Although year-to-year pgr of lagomorphs has been studied in detail [see Krebs (2011) for hares; Villafuerte et al. (1997) and Palomares (2003) for wild rabbit], seasonal changes of the European wild rabbit pgr and its determinants are limited in the literature (Caley and Morley 2002; Cabrera-Rodriguez 2008). Investigations into the relationships between the European rabbit’s seasonal population growth rate (spgr hereafter) and its determinants could lead to a better understanding of its seasonal population fluctuation, as well as assisting integrated management to reduce its abundance to a reasonable level at the appropriate time (Smith and Trout 1994) or to moderate the extensive damage caused to agricultural crops by population control (Delibes-Mateos et al. 2011). Consequently, the scope of the present study was to define the demographic and mechanistic factors related to spgr of the European rabbit, with the aim of evaluating the combined effects of demographic and mechanistic parameters on the wild rabbit’s spgr by using a structural equation model. Implications for managing rabbits are then extensively discussed.

Materials and methods

Study area

The study was conducted in an agricultural area, located in the central region of Lemnos Island (39° 55′ N, 25° 12′ E), in the north Aegean Sea, Greece. It is a lowland area between 8 and 30 m above sea level, dominated by non-irrigated crops (annual cereals, >80 %), with irrigated alfalfa Medicago sativa, native vegetation in fallow fields and shrubby riparian vegetation (mixed Rubus spp. and Arundo donax) occupying the remaining study area. The vegetation in fallow fields is mainly composed of annual winter grasses, legumes, and forbs.
The climate is typical Mediterranean (Csa), with very warm and dry summers and mild winters. The mean annual temperature is 15.9 °C and the average annual precipitation is 474.4 mm, concentrated mainly between November and January. The area experiences occasional thunderstorms and heavy rain during June. Mammalian predators are absent in the area; the most important predator is the common buzzard (B. buteo). The study area is grazed by sheep between June and September, after the cereal crop harvest. Greek law allows night-shooting from vehicles from October to early March. Unfortunately, the area is also subject to illegal shooting outside this period.

Data collection and parameter determination

A direct method was employed in the current study for assessing rabbits’ abundance. Direct methods provide a tool for rapid and accurate estimates of density (Williams et al. 1995), although the detectability of animals and their behavior can undermine their suitability (Barrio et al. 2010a; Fernandez-de-Simon et al. 2011). According to Barrio et al. (2010a), however, direct methods are suitable for the estimation of relative abundance or population trends of wild rabbits in open agricultural landscapes. In addition, and in order to minimize the drawbacks of the method, we only surveyed open habitats and only during sunrise and dusk to ensure better sampling of active rabbits (Moreno et al. 2007). Data were collected during the years 2007–2009. A line-transect survey technique was used to estimate the relative density of rabbits. Three fixed line transects were established in the study area and surveyed on consecutive days. Transects were distributed among three neighboring sites which were separated by streams and water channels. This spatial arrangement prevented the movement of rabbits between sites, even though they were spaced at a minimum of 400 m apart.
The total length of transects was 8 km (3, 2.5, and 2.5 km each) and the width was 60 m (30 m on either side of the route). Because of the open nature of the habitat, it is assumed that within this strip most rabbits were counted. Surveys were conducted by the same observer at a mean walking speed of 1.5–2.0 km h−1. They were performed only during fine weather and over two daily periods to better sample active rabbits. The first transect counts started early in the morning and finished after 1 h; subsequent counts began in the late afternoon and concluded soon after sunset. Surveys were repeated on each transect at one-and-a-half-month intervals (twice a season): early April and mid May in spring, early July and mid August in summer, late September and mid November in autumn, and early January and mid February in winter. In total, 72 replications were carried out during the study period (3 transects × 8 intervals per year × 3 years). Rabbit relative density (individuals ha−1) was calculated by dividing the average number of rabbits counted by the surveyed area. The spgr was calculated using the formula r = logeλ, where λ is defined by dividing the density in time t + 1 by the density in time t (Sibly et al. 2003). The mean productivity for each specific survey period (t, t + 1) was estimated from post-mortem examination of 180 wild rabbits collected monthly by shooting. It was calculated by multiplying the average number of embryos of pregnant females by the percentage of females in the population that were pregnant. In order to calculate those reproductive parameters for the time period (t, t + 1), we used the average of monthly measurements of the included months for that period. Given that the gestation period of wild rabbits is 28 days, the pregnancy is detectable after 5–7 days (Brambell 1942), and young rabbits emerge at 21 days (Gibb 1990; Williams et al.
1995), we defined that the spgr had been influenced by the productivity with a time delay of one month. Food shortage was determined by taking into consideration two elements: the frequency of green vegetation, and the quality of food. That data set was collected for each period. The frequency of green vegetation was estimated using line-point intercepts (see Cook and Stubbendieck 1986) along five 25 m linear transects, and expressed as a percentage (%). A total of 500 points were sampled each period, and points were spaced at 0.25 m apart. The concentration of nitrogen was used to measure the quality of food. During each period, we selected 10 square plots (50 × 50 cm) which were randomly distributed throughout the study area, and we clipped the vegetation to ground level. Vegetation was dried at 60 °C for 48 h and the crude protein was estimated using the Kjeldahl method (Bremner 1965). Protein content was estimated as N × 6.25 and we used the average content of the total sampled vegetation. The shortage of food was assessed as an inversely ordinal variable with three values: 0 = highest percentage (≥70 %) of green vegetation and qualitative food (crude protein of vegetation ≥ 15), 1 = limited percentage (40–70 %) of green vegetation with medium quality of food (crude protein of vegetation between 10–15), and 2 = almost no green vegetation (<40 %) with the lowest nutritional quality (crude protein of vegetation <10). We assumed that the spgr lagged behind food shortage by one sampling period, since neither the direct effect of deaths caused by starvation nor the indirect effect through the reduction of productivity appear directly related to the initial limitation of vegetation (Boos et al. 2005; Tablado et al. 2009). Predation pressure was calculated as the number of predators (common buzzards) counted by the same observer at each visit during sampling. Furthermore, between May and July we added three chicks for each pair counted during the sampling. 
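The derived field metrics described above (relative density, spgr, productivity, crude protein, and the inverse ordinal food-shortage index) can be sketched as follows. This is a minimal illustration of the formulas in the text; the function names and example numbers are invented, not taken from the study:

```python
import math

def relative_density(mean_count, transect_length_m, strip_width_m):
    """Rabbits per hectare: average count divided by the surveyed area."""
    area_ha = transect_length_m * strip_width_m / 10_000.0  # m^2 -> ha
    return mean_count / area_ha

def seasonal_pgr(density_t, density_t1):
    """spgr: r = log_e(lambda), where lambda = D(t+1) / D(t)."""
    return math.log(density_t1 / density_t)

def productivity(mean_embryos, fraction_pregnant):
    """Mean embryos of pregnant females x proportion of females pregnant."""
    return mean_embryos * fraction_pregnant

def crude_protein(nitrogen_pct):
    """Kjeldahl nitrogen to crude protein: N x 6.25."""
    return nitrogen_pct * 6.25

def food_shortage(green_pct, crude_protein_pct):
    """Inverse ordinal food-shortage index (0 = none, 2 = severe),
    using the thresholds given in the text."""
    if green_pct >= 70 and crude_protein_pct >= 15:
        return 0
    if green_pct < 40 and crude_protein_pct < 10:
        return 2
    return 1

# Illustrative numbers: 8 km of transects with a 60 m strip survey 48 ha
d_spring = relative_density(24, 8_000, 60)   # 0.5 rabbits per ha
d_summer = relative_density(36, 8_000, 60)   # 0.75 rabbits per ha
r = seasonal_pgr(d_spring, d_summer)         # log(1.5) > 0: growth
```

A positive r indicates seasonal population growth and a negative r a decline, which is how the spring peaks and autumn–winter declines reported below translate into spgr values.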
The mean number of chicks was obtained by checking 25 nests on the whole island. In the model, for each time period (t, t + 1) we used the value of predation measured at time t, as this number of predators affects the density of rabbits in that period. The hunting pressure was estimated from the number of cartridges counted along a 2,720-m permanent route, established on a dirt road where both legal and illegal hunting was being practiced. In order to avoid duplicate counting, cartridges were removed after each sampling. In each period, when sampling had ended, a new empty cartridge was left on the route. Cartridges that had probably been missed in the previous counts, showing a higher degree of oxidation than the one which was left, were not considered at the next sampling. In the model, for each time period (t, t + 1) we used the number of cartridges counted at time t + 1, as this number of cartridges affects rabbit density in that period. The maximum and minimum temperature, total precipitation, and maximum daily precipitation between consecutive surveys (t, t + 1) were calculated during the three study years. These variables are of particular importance since they have a direct effect on the survival and reproduction of rabbits (Cooke 1977; Gibb and Fitzgerald 1998; Trout et al. 2000; Palomares 2003; Rödel et al. 2009; Rödel and Dekker 2012), and an indirect effect through food shortage. The above variables were represented in the model through a climatic index, which was obtained after performing a principal component analysis on the original climatic data (Calvete et al. 2004; Williams et al. 2007) measured between consecutive population surveys. Precipitation variables were found to be strongly related to the scores of the first principal component axis, thus affecting the climatic index most. At each survey, we collected five soil samples from each linear transect, located at distances of approximately 500 m.
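The climatic index described above (scores on the first principal component of the standardized climate variables) can be sketched with plain NumPy. This is a generic PCA sketch, not the authors' exact procedure, and the example data are invented:

```python
import numpy as np

def climatic_index(climate):
    """First principal-component scores of standardized climate variables.

    climate: (n_periods, n_variables) array, e.g. columns for max/min
    temperature, total precipitation, and maximum daily precipitation.
    """
    # Standardize each variable (z-scores)
    z = (climate - climate.mean(axis=0)) / climate.std(axis=0)
    # Eigendecomposition of the covariance of the standardized data
    cov = np.cov(z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pc1 = eigvecs[:, np.argmax(eigvals)]  # loading vector of PC1
    return z @ pc1                        # PC1 score per survey period

# Invented data: 8 survey intervals x 4 climate variables
rng = np.random.default_rng(0)
scores = climatic_index(rng.normal(size=(8, 4)))
```

Each survey interval then receives a single score, so the four correlated weather variables enter the structural model as one predictor.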
The soil samples were collected at least one week after a rainfall event and not during a dry period (Rueda et al. 2008). The soil moisture was determined from the weight lost by samples dried at 100 °C for 24 h, relative to the net weight of the soil. We applied management treatments to rabbit warrens during November 2008. One third of the rabbit warrens in our area were ripped at random using diggers and two-wheel-drive tractors with rippers. The destruction of warrens on the selected sites was performed at approximately 1 m depth, including a 4-m buffer zone around each site. Warrens were first ripped in one direction and then at right angles to this to completely destroy the warrens (see Williams et al. 1995). Hence, the habitat treatment was introduced as a binary variable in the model: 0 for periods before treatments, and 1 for periods after warrens had been ripped.

Structural equation modeling

Structural equation modeling (SEM) is a powerful statistical method to address causal (direct and indirect) relationships between population parameters and a number of explanatory variables and their interactions. It is a technique for testing and estimating causal relationships, using a combination of statistical data and qualitative causal assumptions, and it is typically used to develop a modeling strategy (Mitchell 1992). Initially, a path diagram is formed in which the illustrated relationships arise primarily from knowledge based on literature and field experience. Then, estimates of regression weights are computed and the strength of these relationships is depicted in the path diagram (Meyers et al. 2006). The model consists of a series of equations which transform the graphical form of the initial theoretical model into causal linkages among all studied variables (Marcoulides and Schumacker 1996). The hypothetical model (Fig.
2a) developed assumes that both demographic (productivity) and mechanistic (food shortage, predation, hunting pressure, and habitat manipulation) parameters have a direct effect on spgr; climate has either a direct or an indirect effect, while soil moisture has only an indirect effect. Given that neither myxomatosis nor RHD have been recorded in the study area during the last decade, we assumed that the aforementioned parameters regulate the seasonal population fluctuation of the wild rabbit. We also assumed a number of correlations among the independent variables.

Statistical analysis

We used the maximum likelihood procedure to estimate standardized path coefficients. They express the proportions of variance and the correlations among variables. We used different approaches to statistically assess the model fitness: the chi-square test, the normed fit index (NFI), the goodness-of-fit index (GFI), and the root mean square error of approximation (RMSEA; see Browne and Cudeck 1993; Marcoulides and Schumacker 1996). The process of analysis in a structural equations system focuses on minimizing the discrepancy function, and the chi-square serves as a measure of discrepancy; lower chi-square values indicate better model fit. NFI and GFI range between 0 and 1, with values >0.90 indicating a good fit (Marcoulides and Schumacker 1996; Hair et al. 1998), although values higher than 0.8 are considered acceptable (see Marcoulides and Schumacker 1996). RMSEA values less than 0.05 indicate a good model fit, values between 0.05 and 0.08 a moderate fit, while higher values indicate a poor fit (Browne and Cudeck 1993). SEM analysis was performed with the AMOS (release 7.0 for Windows) procedure of the SPSS statistical package (release 15.0 for Windows).

Population fluctuation

The general pattern of the seasonal population trend of the European rabbit is shown in Fig. 1.
European rabbits start breeding within the last two weeks of January, whereas substantial numbers of young rabbits appear during spring (from March to May) and summer (June and July). The population decreases gradually from the beginning of autumn (September) until the end of winter (February).

Fig. 1 Population trend (mean seasonal number of individuals per hectare) of European wild rabbit obtained by line transects in Lemnos Island. Vertical lines above columns show the standard error (SE) of the mean. The black arrow indicates the time that habitat treatment was applied to rabbits’ warrens

Model evaluation

Path analysis revealed a non-significant discrepancy (χ2 = 6.896, df = 9, P = 0.648), indicating a good fit of the model to the collected data. GFI and NFI suggested a good adjustment, as both indices were greater than 0.90 (0.91 and 0.92, respectively), and the zero value of the RMSEA index confirmed a good fit of the model.

Assessment of factors

The values of standardized path coefficients and the values of zero-ordered correlation coefficients are shown in Fig. 2b. The mean productivity had a significant positive effect on spgr (+0.77). On the other hand, significant negative effects on spgr were obtained for food shortage, hunting pressure and predation, but not for management treatment and climate index. Management treatment indicated a negative and climate index a positive effect, but not significantly so. Soil moisture negatively affected food shortage. The variance left unexplained by the model (R) was 0.88.

Fig. 2 a Structural Equation Model (SEM) depicting hypothetical relationships between both demographic and mechanistic factors, and seasonal population growth rate of European rabbits.
Single-headed arrows (solid lines) indicate causal relationships (standardized coefficients) between variables and double-headed arrows (dashed lines) show associations (zero-ordered coefficients) between variables. b SEM illustrating the direct and indirect effects of demographic and mechanistic factors on the seasonal population growth rate of European rabbits, using density data from line transects (mean relative density). Black arrows represent positive effects and grey arrows represent negative effects. Arrow widths are proportional to magnitude of coefficients. Values of standardized partial regression coefficients are shown with asterisks (* = P < 0.05; ** = P < 0.01) when they are significantly different from zero. Number above R denotes the unexplained variance by the model The results of SEM are generally in accordance with our assumptions, since most of the variables were determinants to the spgr of European rabbits in a typical Mediterranean ecosystem. Both demographic and mechanistic parameters, acting directly on spgr and indirectly through food shortage, explain to a great extent the seasonal population fluctuation of European rabbits. This is in line with the findings reported in the literature (see Williams et al. 1995; Krebs 2001; Caley and Morley 2002; Palomares 2003). However, the unexplained variance indicates that other factors, including density-dependent factors, such as colonization, age structure, parasites, over-winter survival etc. (Thompson and King 1994; Williams et al. 1995; Rödel et al. 2004a, b), are likely to influence the seasonal population variation. Therefore, the model reveals a global picture of the factors causally influencing seasonal population fluctuation, detecting at the same time their combined effect. However, the consideration of several factors simultaneously may partly obscure the significant effect of individual factors, due to their opposing action. 
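As a rough illustrative counterpart to the standardized path coefficients discussed above: full SEM (as fitted here in AMOS) estimates all paths simultaneously, but a single equation of the model can be approximated by ordinary least squares on z-scored variables. Everything below (names, data, coefficients) is invented for illustration:

```python
import numpy as np

def standardized_path_coefs(X, y):
    """Standardized OLS regression weights of y on the columns of X.

    With both sides z-scored, no intercept is needed and the weights
    are directly comparable, like standardized path coefficients.
    """
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    coefs, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return coefs

# Invented data: spgr driven positively by productivity and negatively
# by food shortage; predation and hunting pressure are weaker here
rng = np.random.default_rng(42)
X = rng.normal(size=(22, 4))  # productivity, food shortage, predation, hunting
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=22)
coefs = standardized_path_coefs(X, y)
```

The sign pattern of such coefficients (positive for productivity, negative for food shortage, predation, and hunting pressure) is what Fig. 2b summarizes graphically.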
For instance, the negative synergistic action of predation and hunting pressure may mitigate the positive effect of productivity on spgr. Mediterranean ecosystems are highly seasonal environments which tend to confine reproduction and mortality in certain periods during the year. In those environments, a particular fluctuation of population is exhibited, and herbivores, like the European rabbit, take advantage of short favorable seasons to reproduce (di Castri and Mooney 1973). The rapid population growth occurring during spring and early summer is followed by a population decline from early autumn through to the end of winter (Fig. 1). This cyclical fluctuation in population size is apparent and it is further characterized by a high degree of inter-annual variation. The mean productivity was the main determining factor which is conducive to the European rabbit’s spgr. The onset and the end of the breeding season is influenced by environmental and/or ecological factors (e.g., growing vegetation season, precipitation, etc.), and may therefore vary among different areas and among years within an area (Soriguer and Myers 1986; Gonçalves et al. 2002; Rödel and von Holst 2008; Tablado et al. 2009). This variation of the breeding season length may have consequences on the spgr through density fluctuation. Food shortage appears to be the most important negative factor for the spgr. In the Mediterranean region, it could be more pronounced in late summer and early autumn when herbaceous vegetation has dried up and has almost disappeared due to the dry climate (Dallman 1998). Furthermore, the quality of vegetation is degraded drastically during this period, and the available food provided by agricultural crops is minimized due to harvesting (Kontsiotis 2011). All these factors signal the end of the breeding season (Gonçalves et al. 2002; Rödel and von Holst 2008; Tablado et al. 2009) by zeroing the productivity, hence negatively influencing the spgr. 
In addition, they are responsible for the direct or indirect wild rabbit’s mortality, due to starvation or malnutrition (Moreno and Villafuerte 1995; Palomares and Delibes 1997; Villafuerte et al. 1997; Gibb and Fitzgerald 1998; Palomares 2001; Wilson et al. 2002; Williams et al. 2007), by exerting additional negative effects on the spgr, especially during this period where wild rabbit numbers are at their peak (Kontsiotis 2011). Livestock grazing did not have an impact on the available vegetation, since its effect was very low (<1 %) on the total available vegetation for wild rabbits (Kontsiotis 2011). The negative effect of hunting pressure on wild rabbits has been noted in other studies (Caruso and Siracusa 2001; Angulo and Villafuerte 2003; Williams et al. 2007), but this effect may not be of critical importance in their population dynamics (Williams et al. 1995). In our study, the significant negative effect of hunting pressure could be attributed to its high and constant occurrence throughout the year (both legal and illegal shooting), to the absence of periodically emerging viral diseases (myxomatosis and RHD), and to the low impact of other mortality factors such as human intervention, adverse weather conditions etc. Predation was a significant parameter influencing spgr of European rabbits in our study. This may be caused by the high breeding density of common buzzards on the island. Several studies have addressed the effect of predation on European rabbits (Moriarty et al. 2000; Lombardi et al. 2003), but the results are rather inconclusive. In the Iberian Peninsula where rabbits constitute the prey for more than 40 predators, the predation displayed a reciprocal negative effect on the predator–prey system (Moreno et al. 2007; Delibes-Mateos et al. 2009), as in most areas of Australia (Gibb and Fitzgerald 1998; Moriarty et al. 2000). 
Predation is also reported to be an important determinant of wild rabbit numbers at low densities, mainly after populations collapse due to diseases (e.g., RHD, myxomatosis), playing an inhibitory role in population recovery (Moreno et al. 2007). In England and Wales, Trout et al. (2000) noted that predator removal was associated with higher numbers of rabbits, although cause and effect were unclear. In contrast, Caruso and Siracusa (2001) found no predation pressure of either red foxes or common buzzards on rabbits in Italy. Furthermore, no relationship between red fox abundance and rabbit abundance was observed in northeastern Spain (Williams et al. 2007). Neither habitat treatment nor climate index had significant effects on spgr. Although habitat treatment showed the expected negative effect on spgr, the obtained weight was weak and insignificant (Fig. 2b). There are probably two reasons behind this pattern. Firstly, ripping only one third of the study area was apparently not enough to cause a significant impact. Secondly, although ripping was suggested by Williams et al. (1995) as the most efficient control measure, especially in agricultural areas (Barrio et al. 2011), it was also suggested that it needs to be combined with additional population control measures, which was not the case in the current study. Climate index, which in the current study is primarily a function of precipitation, also had an insignificant effect on spgr (Fig. 2b). This was probably due to the good soil drainage, which prevents warren flooding, and the mild climate of the region. It was further found to have a negative but insignificant effect on food shortage, which is explained by the positive effect of precipitation on biomass production. Several researchers have demonstrated the negative impact of adverse weather conditions on European rabbits in areas with prolonged snow cover (Trout et al. 
2000), extremely high temperatures (Cooke 1977), and floods (Gibb and Fitzgerald 1998; Palomares 2003; Rödel et al. 2009), as well as interactions between seasonal temperatures and precipitation (Rödel and Dekker 2012). Significant zero-order correlations between pairs of variables (e.g., between productivity and predation, see Fig. 2b) are mainly due to the temporal coincidence of their values. For example, the negative correlation between productivity and food shortage results from the opposite seasonal timing of their values: as productivity peaks in spring, food shortage simultaneously diminishes. On the other hand, productivity and predation both peak during spring. In conclusion, the integrative approach taken in this study revealed the importance of demographic and mechanistic parameters for the spgr of the European rabbit. It further highlighted that the seasonal variability in population density, expressed here as spgr, is determined by direct and indirect interactions between spgr and productivity, food shortage, predation, hunting pressure and soil moisture. Both fecundity and mortality factors regulate the European rabbit's population interannually in Mediterranean ecosystems and generally have considerable influence on its spgr. Therefore, factors determining spgr have profound implications for managing European rabbits (Sinclair et al. 2006), in particular when seasonal population size exceeds a desired level. The effect of warren ripping, as identified in the current study, indicates the need for its application on a larger scale, since its effect on population control at the limited scale applied here remained ambiguous.
Implications for management
The management of the European rabbit population is an immediate priority in many regions of the world (Delibes-Mateos et al. 2011), either for control or for conservation purposes (Lees and Bell 2008). 
Our findings may contribute to a combination of actions and regulations aiming at multi-purpose management of rabbits. For example, a reduction of productivity brought about by management measures, i.e., the exclusion of rabbits from productive habitats, a simultaneous increase in hunting intensity and/or an adjustment of the hunting period (Gonçalves et al. 2002), could reduce their populations in areas and/or periods where the species is considered a pest. On the other hand, where and when a reduction of rabbits is associated with cascade effects in the ecosystem, stimulation of productivity, e.g., by qualitative food supply, decreased hunting intensity and/or an adjustment of the hunting period (Ferreira and Alves 2009; Gonçalves et al. 2002), could enhance their populations. Even though our study area represents a typical insular Mediterranean agricultural landscape, and a well-defined intra-annual population fluctuation pattern was identified during the three-year study, extrapolation of our findings to broader scales might be risky and needs to be done with caution (Schaub et al. 2011). Our results suggest that understanding how spgr varies according to both demographic and mechanistic factors is crucial to effectively managing a seasonally fluctuating rabbit population. This research is part of a Ph.D. thesis and was partly financially supported by the Prefecture of Lesbos. Thanks are given to Professors N. Papageorgiou, C. Vlachos, and V. Papanastasis for their valuable comments during this research. We are most grateful to two anonymous reviewers for their insightful suggestions for the improvement of the manuscript. Furthermore, we would like to thank Mrs Margaret Gallacher for her linguistic assistance and Dr P. Xofis for his fruitful comments on this article. The Ministry of Rural Development and Food is also acknowledged for the permission to collect data. The study was conducted according to Greek and EU laws. 
Copyright information © Springer-Verlag Berlin Heidelberg 2013
The Department of Romance Studies offers our students a unique trans-cultural perspective on an increasingly fluid world culture and a globalized economy. Our students engage with literature, film, history, critical theory, visual culture, linguistics, and philosophy, and develop spoken and written abilities that foster a profound enjoyment and critical understanding of the diverse places where these languages are spoken: Africa, the Caribbean, Asia, Europe, and the Americas (North, Central, and South). Our majors graduate with extraordinary international and domestic opportunities. The competencies that they acquire will enable them to gain admission to outstanding graduate programs in their field of interest or to pursue careers in areas as diverse as publishing, environmental science, medicine and the arts. Undergraduate Student News
COLCOT Primary School pupils and staff are celebrating, after earning the Eco-Schools Green Flag. This prestigious award, which is recognised throughout Europe, is given out when pupils and staff show a commitment to a variety of important global and ecological issues. Colcot pupils from all year groups have conducted class projects into subjects such as: Climate Change; Conserving Energy; Recycling and Minimising Waste; Sustainable Development; Fair Trade; Developing Countries; Global Citizenship; Healthy Living; Growing Food; and Endangered Species. They have also all been working on changing their daily habits, to improve their own health and the health of the planet. The school’s eco-council, made up of class representatives aged six to 11, has ensured that pupils take key roles in decision-making in order to reduce the environmental impact of the school, and all classrooms have been recycling their cardboard, paper and plastic. One decision, which has been supported by parents, is that children only bring in healthy snacks at playtime, such as fruit or vegetables. And the staff have not escaped the scrutiny of the eco-council members – who have made sure that teachers are following the rules by turning off lights and other unused electrical equipment when classrooms are not in use, and recycling their staffroom waste. Not content to rest on their laurels, the newly elected eco-council intends to build upon its success and has the following aims: 1. To maintain the school’s Fairtrade status. A Fairtrade tea party for parents is coming soon. 2. To save water around the school. Plenty of water is falling at the moment, but how can we best conserve it for the future in our ever-changing climate? 3. Each class will adopt its own endangered species via the World Wildlife Fund.
Glossary of Terms A4 - ISO paper size 210 mm x 297 mm used for letterhead. AI - Adobe Illustrator's metafile format, which is actually a type of Encapsulated PostScript. Achromatic - Having no color or hue. Actinic Light - Light that exposes a coating or emulsion. Adhesive Binding - Applying a glue or another, usually hot-melt, substance along the backbone edges of assembled, printed sheets; the book or magazine cover is applied directly on top of the tacky adhesive. Addressability - In a line of printed digital information, the number of positions per unit length, usually per inch, at which successive pixels are placed. Alpha Channel - An eight-bit channel reserved by some image-processing applications for masking or retaining additional color information. Artifact - A visible defect in an electronic image, caused by limitations in the reproduction process (hardware or software). Aliasing patterns are an example of artifacts. Artwork, Comprehensive - Design produced primarily to give the client an approximate idea of what the printed piece will look like. Alternative terms: comprehensive; comp. Ascender - The part of a lowercase letter which rises above the main body, as in "b" or "d". Assembling - Collecting individual sheets or signatures into a complete set with pages in proper sequence and alignment. Assembling is followed by binding. Author's Proof - Prepublication copy sent to the author for approval. It is returned marked "OK" or "OK with changes." Banding - An electronic prepress term referring to visible steps in shades of a gradient. Bar Code - A binary coding system using a numerical series and bars of varying thicknesses or positions that can be read by optical character recognition (OCR) equipment. Bar codes are used in printing as tracking devices for jobs and sections of jobs in production. 
Basic Size - 25" x 38" for book papers, 20" x 26" for cover papers, 22½" x 28½" or 22½" x 35" for bristols, 25½" x 30½" for index. Basis Weight - Weight in pounds of a ream (500 sheets) of paper cut to a given standard size for that grade; example: 500 sheets of 17" x 22" 20 lb. bond paper weighs 20 pounds. In countries using ISO paper sizes, the weight, in grams, of one square meter of paper. Bleed Tab - A bleeding ink square at the edge of a page that functions as a guide for locating specific material. Body - (1) The printed text of a book, not including endpapers or covers. (2) The size of type from the top of the ascenders to the bottom of the descenders. Body Type - Text set in paragraph or block form, as distinguished from heads and display type matter. Alternative term: body matter. Boilerplate - Standard text that is stored electronically and can be rearranged and combined with fresh information to produce new documents. Book Paper - A general term for coated and uncoated paper. The basic size is 25" x 38". Breakacross - A photo or other image that extends across the gutter onto both pages of the spread. Alternative terms: crossover; reader's spread. Brick-and-mortar - Located or serving consumers in a physical facility, as distinct from providing remote, especially online, services. Buckle Folder - A bindery machine in which two rollers push the sheet between two metal plates, stopping it and causing it to buckle at the entrance to the folder. A third roller working with one of the original rollers uses the buckle to fold the paper. CGM - Computer Graphics Metafile, an American National Standards Institute/International Standards Organization metafile format for images of pretty much any kind. Calibrate - To adjust the scale on a measuring instrument, such as a densitometer, to a standard for specific conditions. 
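The Basis Weight entry above relates US basis weight (pounds per 500-sheet ream at a basic size) to the ISO measure (grams per square meter). The conversion is plain arithmetic; here is a small sketch. The function name and structure are our own illustration, not part of any real library:

```python
# Assumed helper illustrating the basis-weight definition; names are ours.
GRAMS_PER_POUND = 453.59237
SQM_PER_SQIN = 0.00064516
REAM = 500  # sheets per ream

def basis_weight_to_gsm(pounds, width_in, height_in):
    """Convert a US basis weight (lb per ream at the given basic size,
    in inches) to grams per square meter (the ISO measure)."""
    ream_area_sqm = REAM * width_in * height_in * SQM_PER_SQIN
    return pounds * GRAMS_PER_POUND / ream_area_sqm

# The glossary's own example: 20 lb bond on the 17" x 22" basic size
print(round(basis_weight_to_gsm(20, 17, 22), 1))  # ~75.2 gsm
```

This also shows why the same gsm value maps to different "pound" weights across grades: book, cover, bristol, and index papers each use a different basic size in the denominator.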
Calibration - A process by which a scanner, monitor or output device is adjusted to provide a more accurate display and reproduction of images. Callout - A portion of text, usually duplicated from accompanying text, enlarged, and set off in quotes and/or a box to draw attention to what surrounds it. Camera-Ready - Copy and all other printing elements are ready for photography. Character Generation - Constructing typographic images electronically as a series of dots, lines, or pixels on the screen of a cathode-ray tube (CRT). Character Recognition - The function of systems that automatically read or recognize typed, printed, or handwritten characters or symbols and convert them to machine language for processing and storing in electronic systems. See also: optical character recognition. Chill Rolls - On a web offset press, the section located after the drying oven where heatset inks are cooled below their setting temperature. Chopper Fold - Conveying a signature from the first parallel fold in a horizontal plane, spine forward, until it passes under a reciprocating blade that forces it down between folding rollers to complete the fold. CIE - International Commission on Illumination. A standards institute best known in the graphic arts for its work in color space definition. Coating - An unbroken, clear film applied to a substrate in layers to protect and seal it, or to make it glossy. Collate - In binding, the gathering of sheets or signatures. Color Fidelity - How well a printed piece matches the original. Color Specification System - Charts or swatches of preprinted color patches of blended inks, each with a corresponding number, used to allow designers, printers and customers to communicate color with more accuracy. Combination Folder - A bindery machine or in-line finishing component of a web press that incorporates the characteristics of knife and buckle folders. 
Composite File - A PostScript file that represents color pages containing picture elements specified in terms of RGB (red, green and blue) color space, as opposed to black-and-white "gray level" pages which represent separations. Compression - Reducing the size of a file for storage purposes or to enhance the speed of data transfer by eliminating redundancies and other unnecessary elements from the original. See also: data compression. Concept Creation - Selecting images and generating and approving ideas from thumbnails and rough layouts during the graphic design process. Content Proof - A proof that shows the customer the correct text and position of image elements but does not necessarily show accurate color reproduction. Content Provider - One who owns or is licensed to sell content. Copyfitting - Adjusting copy to the allotted space, by editing the text or changing the type size and leading. Crop - To opaque, mask, mark, cut, or trim an illustration or other reproduction to fit a designated area. Cropping - (1) Indicating what portion of the copy is to be included in the final reproduction. (2) Trimming unwanted areas of a photograph, film or print. Cyan - One of the three subtractive primary colors used in process printing. It is commonly known as "process blue." Cylinder - Part of a system of large rollers on an offset lithography press. The plate cylinder transfers an image onto the blanket cylinder, which is then offset onto a press sheet passing between the blanket and impression cylinders. Data File - Text, graphics, or pictures that are stored electronically as a unit. Decompress - To return compressed data to its original size and condition. Default - A method or value that software will use in processing information unless the computer operator specifies otherwise. For example, a scanning program has default settings for variables like brightness and contrast that apply unless the user requests something else. 
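The Compression and Decompress entries above describe a lossless round trip: redundancy is removed for storage or transfer, and the original is restored exactly. A minimal sketch using Python's standard `zlib` module:

```python
import zlib

# Redundant data compresses well; zlib is a lossless scheme, so the
# decompressed output is byte-for-byte identical to the original.
original = b"AAAA BBBB AAAA BBBB " * 100
packed = zlib.compress(original)
restored = zlib.decompress(packed)

assert restored == original           # lossless: the round trip is exact
print(len(original), len(packed))     # the redundant input shrinks a lot
```

Note the contrast with the JPEG entry later in this glossary, where compression discards some image quality and the round trip is not exact.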
Delivery - (1) The section of a printing press that receives, jogs and stacks the printed sheet. (2) The output end of bindery equipment. Density - The degree to which an object absorbs or reflects light. High-density objects absorb or stop light; low-density objects reflect or transmit light. Descender - The part of a lowercase letter which extends below the main body, as in "p". Desktop Color Separation (DCS) - A color file format that creates five PostScript files: one for each color (CMYK) and a data file about the image. Desktop Publishing - The creation of fully composed pages with all text and graphics in place on a system that includes a personal computer with a color monitor; word processing, page-makeup, illustration, and other off-the-shelf software; digitized type fonts; a laser printer; and other peripherals, such as an optical image scanner. Completely paginated films are output from an imagesetter. Desktop Publishing Stripping - Electronic assembly of all elements in final imposition for direct output as a composite negative or plate. Digital - Method of representing information in numerical (binary) code. Unlike analog signals, digital ones are either "on" or "off." See also: analog device. Digital Color Proof - A proof printed directly from computer data to paper or another substrate without creating separation films first; made with a computer output device, such as a laser or inkjet printer. Digitize - To convert an image or signal into binary form. Digitized Information - Text, photographs and illustrations converted into digital signals for input, processing and output in an electronic publishing system. Document - (1) Recorded information regardless of physical form or characteristics; often used interchangeably with record. (2) An individual record or an item of nonrecord materials or of personal papers. (3) A collection of information that is processed as a unit. 
Document Content - The substance of the material or information within the document that is intended to be communicated. Dot - The individual element of a halftone. Dots Per Inch (DPI) - A unit that describes the resolution of an output device or monitor. Drier - A substance added to ink to hasten drying. Dryer - A unit on a web press that hardens the heatset ink by evaporating the solvent ingredient in it. Dynamically-generated pages - Web pages, generated at the time they are downloaded, that often contain up-to-the-second data pulled into a template. Search engine results pages are dynamically generated. Easter egg - A small cartoon, animation, or other feature hidden by a programmer in the code of a game or application and triggered by an arcane sequence of keystrokes or mouse clicks. Electronic Data Interchange (EDI) - (1) The communication or transmission of data as electronic messages according to established rules and formats in order to transact business. (2) The computer-to-computer exchange of formatted, transactional information between autonomous organizations. (3) The exchange of routine business transactions in machine-readable format, covering many areas including ordering, pricing, quoting, backordering, shipping, receiving and planning purchases, as well as invoicing and payments. There are two competing standards: EDIFACT and ASC X12. ASC X12 and EDIFACT consider their format differences to be minor and are pursuing reconciliation. Electronic Printing, Black or Spot Color - Technology that reproduces pages in black or black plus spot (highlight) colors directly from a computer file without negatives, plates, etc., typically using electrostatic or electrophotographic processes. 
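The Dots Per Inch entry above is easiest to apply as simple arithmetic: the pixel dimensions an image needs are its physical print size multiplied by the target resolution. A small sketch (the function name is our own, purely illustrative):

```python
# Illustrative helper for the DPI entry; not from any real library.
def pixels_needed(width_in, height_in, dpi):
    """Pixel dimensions required to print at the given size and resolution."""
    return round(width_in * dpi), round(height_in * dpi)

# A 5" x 7" photo printed at a common 300 dpi target
print(pixels_needed(5, 7, 300))  # (1500, 2100)
```

The same relation run in reverse tells you the largest size an existing image can print at a given dpi, which is a common preflight check.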
Electronic Printing, Full-color - Technology that reproduces pages in process colors directly from a computer file without negatives, plates, etc., typically using electrostatic or electrophotographic processes. Encapsulated PostScript (EPS) - A file format used to transfer PostScript image information from one program to another. Encapsulation - In programming, the process of combining elements to create a new entity. For example, a procedure is a type of encapsulation because it combines a series of computer instructions. Likewise, a complex data type, such as a record or class, relies on encapsulation. Object-oriented programming languages rely heavily on encapsulation to create high-level objects. Encapsulation is closely related to abstraction and information hiding. Engraved Cylinder - An image carrier with recessed image areas that are filled with ink, which is then transferred to the substrate. Engraved, or intaglio, cylinders are often used in the gravure process. Enhanced Multi-color ("High-fidelity") Printing - Full-color printing using six, seven or more "process" colors instead of the traditional four. Estimating - The process of determining approximate cost, specifying required quality and quantity, and projecting waste. Environmentally-friendly Processes - Reduced-chemical, silver- and VOC-free processes for preparation of printed materials. Face - Edge of a bound publication opposite the spine. FAQ - Frequently asked questions. Felt Side - The smoother side of the paper. File - A collection of digital information stored together as a unit on a computer disk or other storage medium and given a unique name, which permits the user to access the information. A file may contain text, images, video, sound, or an application program. Filler - Inorganic materials like clay, titanium dioxide, calcium carbonate, and other white pigments added to the papermaking furnish to improve opacity, brightness, and the overall printing surface. 
Fold - Bending and creasing a sheet of paper as required to form a printed product. Foot - The bottom of a page or book. Format - (1) The sequential organization of data in terms of its components; also, a specific arrangement of data. (2) The shape, size, style, and general makeup of a particular record. (3) In electronic records, the arrangement of data for computer input or output, such as the number and size of data fields in a logical record or the spacing and letter size used in a document; also called layout. See also FILE LAYOUT, RECORD LAYOUT. (4) In microform records, the placement of microimages within a given microform (image arrangement) or the arrangement of images in relation to the edges of the film (image orientation). Form - Each side of a signature. Frequency-modulated Screening - See stochastic screening. FTP - File Transfer Protocol, the language computers speak to transfer files between systems over the Internet. Full-scale Black - A black printer separation that prints dots in every part of the picture, from the highlight to the shadow. Also called full-range black. Gather - To assemble folded signatures in proper sequence. GIF - The Graphic Interchange Format, a compression format for images. Pictures and graphics you see on Web pages are often in GIF format because the files are small and download quickly. Grain - The direction in which most fibers are aligned. Gutter - The inside margin of a bound page. Halftone-based Digital Proofing - Producing a proof with reliable color and halftone pattern directly from a digital file, usually by an electronic process, without producing a set of film negatives. Head - The top of a page or book. Image Area - On a lithographic printing plate, the area that has been specially treated to receive ink and repel water. 
Image Capture - The process of converting photographs or other artwork into digital data so that they can be used in computer-based layouts. Image Carrier - The device on a printing press that carries an inked image either to an intermediate rubber blanket or directly to the paper or other printing substrate. A direct-printing letterpress form, a lithographic plate, a gravure cylinder and a screen used in screen printing are examples of image carriers. Image Processing - The alteration or manipulation of images that have been scanned or captured by a digital recording device. Can be used to modify or improve the image by changing its size, color, contrast, and brightness, or to compare and analyze images for characteristics that the human eye could not perceive unaided. This ability to perceive minute variations in color, shape, and relationship has opened up many applications for image processing. Imposition - The process of placing graphics into predetermined positions on a press-size sheet of paper. Page layout is the process of defining where repeating elements such as headlines, text, and folios (page numbers) will appear on multiple pages throughout a document, while imposition can be thought of as defining where these completed pages will appear on much larger sheets of paper. Imposition, Head-to-Head - Arranging pages on a form during stripping so that the top of one page is located adjacent to the top of the opposite page. Imposition Layout - A guide that indicates how images should be assembled on the sheet to meet press, folding, and bindery requirements. Impression - One sheet passing once through the press. Impression Cylinder - The hard metal cylinder that presses the paper against the inked blanket cylinder, transferring the inked image to the substrate. The impression cylinder on most sheetfed presses uses paper grippers to hold the sheet through its rotation. 
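The Imposition entry above describes placing finished pages onto larger press sheets. A toy illustration of the idea, for the simplest case of a saddle-stitched booklet: each sheet carries four pages, paired from the outside of the page sequence inward. Real imposition software handles far more (creep, bleeds, work-and-turn layouts); this sketch, with names of our own choosing, only computes the page ordering:

```python
# Toy sketch of saddle-stitch imposition; names and structure are ours.
def saddle_stitch_pairs(n_pages):
    """Return (left, right) page pairs, alternating sheet front and back.
    n_pages must be a multiple of 4, as in a real saddle-stitched booklet."""
    assert n_pages % 4 == 0, "pad with blanks to a multiple of 4"
    pairs = []
    for i in range(n_pages // 4):
        pairs.append((n_pages - 2 * i, 1 + 2 * i))      # sheet i, front
        pairs.append((2 + 2 * i, n_pages - 1 - 2 * i))  # sheet i, back
    return pairs

print(saddle_stitch_pairs(8))  # [(8, 1), (2, 7), (6, 3), (4, 5)]
```

For an 8-page booklet, folding the two printed sheets and nesting them puts pages 1 through 8 in reading order, which is exactly what the imposition layout described above has to guarantee on press.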
Indexed Color Images - An image where each pixel value is used as an index to a palette for interpretation before it can be displayed. Such images must, therefore, contain a palette which has been initialized specifically for a given image. The pixel values are usually 8-bit and the palette 24-bit (8 red, 8 green, and 8 blue). See also: eight-bit image. Infeed - (1) The section of a sheetfed press where the sheet is transferred from the registering devices of the feedboard to the first impression cylinder. (2) The set of rollers controlling web tension ahead of the first unit on a web press. Ink - A printing ink is a dispersion of a colored solid (pigment) in a liquid, specially formulated to reproduce an image on a substrate. Inking System - The section of a lithographic press that controls the distribution of ink to the plate. Inplant - A department or division of a company that usually does printing for only that company. Intensity - The measurement of color from dull to brilliant. ISO - International Organization for Standardization. JPG - Joint Photographic Experts Group, the committee which set the standard for the JPEG graphics file format. The JPEG format is a compressed format, with some loss of quality during compression. A popular web format due to the generally small size of pictures. File extensions: .jpg, .jpeg, and .jpe. JDF - JDF is a comprehensive XML-based file format/proposed industry standard for end-to-end job ticket specifications combined with a message description standard and message interchange protocol. JDF is designed to streamline information exchange between different applications and systems. JDF is intended to enable the entire industry, including media, design, graphic arts, on-demand and e-commerce companies, to implement and work with individual workflow solutions. JDF will allow integration of heterogeneous products from diverse vendors into seamless workflow solutions. 
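The Indexed Color Images entry above describes the palette-lookup mechanism: each pixel stores a small index, and the palette maps it to a full 24-bit RGB triple at display time. A minimal sketch with made-up palette and pixel data:

```python
# Sketch of indexed color lookup; palette and pixel values are invented.
palette = [
    (255, 255, 255),  # index 0: white
    (255, 0, 0),      # index 1: red
    (0, 0, 255),      # index 2: blue
]
indexed_pixels = [0, 1, 1, 2]  # one small index per pixel

# Display step: resolve each index through the palette to a 24-bit triple
rgb_pixels = [palette[i] for i in indexed_pixels]
print(rgb_pixels)  # [(255, 255, 255), (255, 0, 0), (255, 0, 0), (0, 0, 255)]
```

This is why indexed images are compact: a pixel costs one byte instead of three, at the price of being limited to the colors the palette was initialized with.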
The basic idea upon which JDF is based: to develop an open, extensible, XML-based job ticket standard, as well as a mechanism that provides new business opportunities for all individuals and companies involved in the process of creating, managing and producing published documents in the new economy. Building on the existing technologies of CIP3's PPF and Adobe's PJTF, the Job Definition Format supplies a means for printing businesses to streamline the process of producing printed material. The most prominent features of JDF are: (1) The ability to carry a print job from genesis through completion, including a detailed description of the creative, prepress, press, postpress and delivery processes. (2) The ability to bridge the communication gap between production and Management Information Services, enabling instantaneous job and device tracking as well as detailed pre- and post-calculation of jobs in the graphic arts. (3) The ability to bridge the gap between the customer's view of the product and the manufacturing process, by defining a process-independent product view as well as a process-dependent production view of a print job. (4) The ability to define and track any user-defined workflow without constraints on the supported workflow models, including serial, parallel, overlapping and iterative processing in arbitrary combinations and over distributed locations. (5) The ability to do all of the above (1-4) under nearly any precondition. Job Specifications - A detailed description of the requirements of a print job. K - Abbreviation for black in four-color process printing. Knife - In folding machines, the three or four blades at different levels and at right angles to each other that force the paper between the folding rollers. The sheet of paper is pushed from one knife folding mechanism to the other until the desired number of folds have been made. Lap Register - Register where ink colors overlap slightly. 
M - The abbreviation for magenta in the four-color process; also the abbreviation for "one thousand". MNG - (pronounced "ming") The proposed Multiple-image Network Graphics format, a multi-image extension of the existing PNG format. Magenta - One of the three subtractive primary colors of process printing. It is commonly called "process red." Master - To etch pits (tracks) into the glass master (which acts like a negative) from which a CD-ROM "stamper" is made. Mechanical Binding - Clasping individual sheets together with plastic, small wire, or metal rings. Two examples are three-ring binding and spiral binding. Misregister - Printed images that are incorrectly positioned, either in reference to each other or to the sheet's edges. Mottle - Spotty or speckled printing. Mount - To fasten the plate or blanket to an offset press. Moveable Type - The individual metal or wooden type characters that are taken from the typecase, arranged to form words and sentences, and then returned to the case for reuse later. Non-image Area - The portion of a lithographic printing plate that is treated to accept water and repel ink when the plate is on the press. Only the ink-receptive areas will print an image. Non-impact Printing - A printing device that creates letters or images on a substrate without striking it. Large, high-speed and ordinary office photocopiers as well as laser and ink-jet printers are some examples. Oblong - A booklet or catalog bound along the shorter dimension. Oxidation - Combining oxygen with the drying oil in a printing ink to promote a slow chemical reaction that produces a dry ink film. Page - One side of a leaf in a publication. Page Layout Software - Computer programs used to assemble type and images into page form. PDF - Portable Document Format. 
A computer file format that preserves a printed or electronic document's original layout, type fonts and graphics as one unit for electronic transfer and viewing. The recipient uses compatible "reader" software to access and even print the PDF file. Perforating - Punching a row of small holes or incisions into or through a sheet of paper to permit part of it to be detached; to guide in folding; to allow air to escape from signatures; or to prevent wrinkling when folding heavy papers. Pinholes - Tiny areas that are not covered by ink. Pixel Interleave - System of organizing color data within a computer pixel by pixel (i.e., a pixel of yellow, a pixel of magenta, a pixel of cyan, a pixel of black, etc.). See also: pixel. Pixelization - A technique used to represent areas of complex detail as relatively large square or rectangular blocks of discrete, uniform colors or tones. Plate Cylinder - In lithography, the cylinder that holds the printing plate tightly and in register on press. It places the plate in contact with the dampening rollers that wet the nonimage area and the inking rollers that ink the image area, then transfers the inked image to the blanket, which is held on its own cylinder. Platemaking - Preparing a printing plate or other image carrier so that it is ready for the press. Platesetter - A device that images printing plates directly from digital image data; no film or any analog processes are required. Platform - (1) Computer hardware usually incorporating a specific operating system. (2) The underlying hardware or software for a system. For example, the platform might be an Intel 80486 processor running DOS Version 6.0. The platform could also be UNIX machines on an Ethernet network. The platform defines a standard around which a system can be developed. Once the platform has been defined, software developers can produce appropriate software and managers can purchase appropriate hardware and applications. 
The term is often used as a synonym of operating system. The term cross-platform refers to applications, formats, or devices that work on different platforms. For example, a cross-platform programming environment enables a programmer to develop programs for many platforms. Point-and-Click Access - Use of graphical-user-interface (GUI) software and a mouse to execute computer commands. POP - Point of Presence, terminology for local access to a network or telecom service. Also point of purchase. Port - The connecting point between an electronic device and the equipment that transfers data to the rest of the system. Positive - A reproduction which is exactly like the original. PostScript - Adobe Systems, Inc. tradename for a page description language that enables imagesetters and other output devices developed by different companies to interpret electronic files from any number of personal computers ("front ends") and off-the-shelf software programs. PostScript, encapsulated - A file format used to transfer PostScript™ image information from one program to another. Postpress - The final stages in the printing process in which printed sheets are transformed into saleable products, including binding, finishing and delivery. Preflighting - An orderly procedure using a checklist to verify that all components of an electronic file are present and correct prior to submitting the document for high-resolution output. Premakeready - The stage prior to printing in which all production specs are examined, necessary materials are brought to the press, and materials are checked for damage. Presswork - All operations performed on or by a printing press that lead to the transfer of inked images from the image carrier to the paper or other substrate. Printer Control Language - (PCL) the page description language (PDL) developed by Hewlett Packard and used in many of their laser and ink-jet printers. PCL 5 and later versions support a scalable font technology called Intellifont.
Print Quality - The degree to which the appearance and other properties of a print job approach the desired result. Printing Plates - A thin metal, plastic or paper sheet that serves as the image carrier in many printing processes. Printing Unit - The sections on printing presses that house the components for reproducing an image on the substrate. In lithography, a printing unit includes the inking and dampening systems and the plate, blanket and impression cylinders. Process Control - A system using feedback to monitor and manage a certain procedure; input and output data are tabulated according to specific formulas and compared with certain standards and limits, and the process is then adjusted as necessary. Process Photography - (1) Creating line and halftone images for photomechanical reproduction. (2) The equipment, materials and methods used in preparing color-separated printing forms for color reproduction. Proof - A prototype of an image that is supposed to show how it will appear when printed on the press. Property Rights - Metadata recording the ownership of Content and the history of ownership may be stored in the wrapper in order to facilitate the establishment and preservation of copyright. Return to top ↑ Quality Control - The day-to-day operational techniques and activities that are used to fulfill requirements for quality, such as intermediate and final product inspections, testing incoming materials and calibrating instruments used to verify product quality. Return to top ↑ RAW - This may be a Photoshop RAW file, which is a PSD file with no identifying header. Or it may be a minimally formatted image data dump. RTF - Microsoft's Rich Text Format, which is normally used as a well-understood cross-platform word processing document format, but which can store pictures as well as text. As image storage formats go, though, this one is as inefficient as Postscript.
Random Access - A system of data file management in which a record is accessible independent of its file location or the location of the previous record accessed. In other words, records need not be accessed sequentially. Random Proof - A color proof consisting of many images ganged on one substrate and randomly positioned with no relation to the final page imposition. This is a cost-effective way to verify the correctness of completed scans prior to further stripping and color correction work. Also called scatter proof. Raster - An image composed of a set of horizontal scan lines that are formed sequentially by writing each line following the previous line, particularly on a television screen or computer monitor. See also: bitmap. Raster Image Processor (RIP) - The device that interprets all of the page layout information for the marking engine of the imagesetter or platesetter. PostScript or another page description language serves as an interface between the page layout workstation and the RIP. Rasterization - The process of converting mathematical and digital information into a series of variable-density pixels. Resolution - (1) The density of dots or pixels on a page or display, usually measured in dots per inch. The higher the resolution, the smoother the appearance of text or graphics. (2) The precision with which an optical, photographic, or photomechanical system can render visual image detail. Resolution is a measure of image sharpness or the performance of an optical system. It is expressed in lines per inch or millimeter. Rich Text Format - A standard developed by Microsoft Corporation for specifying formatting of documents. RTF files are actually ASCII files with special commands to indicate formatting information, such as fonts and margins.
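The relationship between pixel count, resolution, and physical print size is simple division; a short sketch with hypothetical numbers:

```python
def print_size_inches(pixels_wide, pixels_high, dpi):
    """Physical print size for a given pixel count and output resolution."""
    return pixels_wide / dpi, pixels_high / dpi

# A 3000 x 2400 pixel scan output at 300 dpi prints at 10 x 8 inches;
# the same file output at 150 dpi covers 20 x 16 inches, with coarser detail.
assert print_size_inches(3000, 2400, 300) == (10.0, 8.0)
assert print_size_inches(3000, 2400, 150) == (20.0, 16.0)
```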
Other document formatting languages include the Hypertext Markup Language (HTML), which is used to define documents on the World Wide Web, and the Standard Generalized Markup Language (SGML), which is a more robust version of HTML. Rotogravure - A printing process that uses a cylinder as an image carrier. Image areas are etched below the nonimage area in the form of tiny sunken cells. The cylinder is immersed in ink, and the excess ink is scraped off by a blade. When the substrate contacts the printing cylinder, ink transfers, forming the image. RRED - Right reading, emulsion side down. Return to top ↑ Scanner - Electronic device used to digitize an image. Secondary Colors - Colors created by combining two primary colorants of a color system. Example: red would be the secondary color produced with magenta and yellow. Also referred to as overprint colors. Sharpen - Reducing the dot size in halftones or separations. Sheeter - A device on a printing press that converts continuous forms into smaller sheets. Sheetfed Press - A printing press that feeds and prints on individual sheets of paper (or another substrate), rather than a continuous paper roll or web. Shrink Wrap - Using heat to affix a thin plastic material around printed and bound products to prepare them for shipment. Silhouette Halftone - A halftone with all of the background removed. Slur - A smearing of ink that occurs in printing when there isn’t enough pressure on the blanket. SNAP - Specifications for Nonheatset Advertising Printing, a set of standards for color separations and proofing developed for those printing with uncoated paper and newsprint stock in the United States. Soft Proof - A proof that is viewed on a color-calibrated video monitor as opposed to a hard proof printed on paper. Software - The stored instructions (programs) that initiate the various functions of a computer (the hardware).
Instructions may be written in machine language or in another programming language, then compiled, interpreted, or assembled into machine language. Word processing, page layout, and drawing programs are a few of the software programs used in the graphic arts. There are also other more specialized software programs that control high-end color electronic prepress systems and even some presswork applications. Solvent - A component of the vehicle in printing inks that disperses the pigment and keeps the solid binder liquid enough for use in the printing process. Spectrum - The series of color bands formed when a ray of light is dispersed by refraction; the rainbow-like band of colors resulting when a ray of white light is passed through a prism. Splice - The area where two paper rolls are joined to form one continuous roll. Spread - Two facing pages. They can be a reader’s spread or a printer’s spread. Stamping - Using a die and often colored foil or gold leaf to press a design into a book cover, a sheet of paper or another substrate. The die may be used alone (in blank stamping) if no color or other ornamentation is necessary. Special presses fitted with heating devices can stamp designs into book covers. Statistical Process Control (SPC) - Method of understanding and managing production processes by collecting numerical data about each step in the process and all materials used in the production sequence, including output; this data is then analyzed to locate causes of variations. Stock - The paper or other substrate to be printed. Substrate - Any surface on which printing is done. Subtractive Color System - A means of producing a color reproduction or image with combinations of yellow, magenta and cyan colorants, which serve as filters to “remove” colors from a white substrate. Swatch - A small, printed solid used for color matching or measurement. It represents what an ink color might look like after it is printed. 
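The subtractive logic behind secondary (overprint) colors can be modeled as filters removing additive primaries from white light. A simplified sketch (the one-ink-absorbs-one-primary model is an idealization of real process inks):

```python
# White light carries the three additive primaries.
WHITE = {"red", "green", "blue"}

# Idealized model: each process ink filters one primary out of white.
ABSORBS = {"cyan": "red", "magenta": "green", "yellow": "blue"}

def overprint(*inks):
    """Light remaining after the given inks filter white paper."""
    remaining = set(WHITE)
    for ink in inks:
        remaining.discard(ABSORBS[ink])
    return remaining

# Magenta over yellow removes green and blue, leaving red --
# the secondary ("overprint") color described in the glossary.
assert overprint("magenta", "yellow") == {"red"}
# All three inks together absorb every primary: black.
assert overprint("cyan", "magenta", "yellow") == set()
```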
Return to top ↑ TIFF/TIF - TIFF stands for Tag Image File Format; TIFF was a large, unwieldy, 24-bit format until version 6 came out, which supported compression and made it less painful. The fact that its compression was somewhat broken and might or might not be compatible with different programs on different computers somewhat reduced the bonus. The fact that the compression is LZW, and thus owned and licensed out by Unisys (see GIF), is another problem. TIFF is, nonetheless, a very popular professional graphics format. A TIFF file permits the image to be edited in other applications (e.g., QuarkXPress and Macromedia FreeHand). Terms and Conditions - This is metadata that describes the "rules" for use of an object. Terms and conditions might include an access list of who can view the object, a "conditions of use" statement that might be displayed before access to the object is allowed, a schedule (tariff) of prices and fees for use of the object, or a definition of permitted uses of an object (viewing, printing, copying, etc.). Tonal Compression - The reduction of an original’s tonal range to a tonal range achievable through the reproduction process. Type 1 - A format for storing digital typefaces developed by Adobe Systems. The most popular typeface format for PostScript printers. Typesetting - Composing type into words and lines in accordance with the manuscript and typographic specifications. Typography - The art and craft of creating and/or setting type professionally. Return to top ↑ Uncoated Paper - Paper that has not been coated with clay. URL - The Uniform Resource Locator is the address of a page on the Web. Return to top ↑ Vacuum Frame - A device that holds film or plates in place by withdrawing air through small holes in a rubber supporting surface. Varnish - A thin, protective liquid coating applied to the printed sheet for protection or appearance. Vector - Mathematical descriptions of images and their placement. Vehicle - The liquid component of a printing ink.
Visible Spectrum - That portion of the electromagnetic spectrum to which the human eye is sensitive; wavelengths of approximately 400 through 700 nanometers. Because of the characteristics of cone sensing (color-reading mechanism of the retina), it is generally agreed that humans detect only red, green, and blue. All perceived colors are combinations of those sensitivities (hue) in relation to the strength of the transmitted or reflected light (brightness) and the intensity of the light hitting the retina (saturation). Ultraviolet wavelengths are shorter and infrared wavelengths are longer than the sensitivity range of the eye and are invisible as a result. Return to top ↑ WMF - Windows Metafile format, which is an intermediate vector format for Windows programs to use when interchanging data and, generally speaking, should never be seen anywhere else. WPG - WordPerfect metafile format, used by WordPerfect software on various platforms. It supports bitmapped, vector and Encapsulated Postscript data. Washup - The process of cleaning the rollers, form or plate and fountain of a press with solvents to remove ink as required after a day’s run, or during a run for ink color changes. Waterless Lithography Sheetfed - Water-free offset lithographic capability on a sheetfed press that allows ultrafine reproduction and improved, almost continuous-looking halftones. Waterless Lithography Web - Water-free offset lithographic capability on a web press that allows ultrafine reproduction and improved, almost continuous-looking halftones. White Light - Theoretically, light that emits all wavelengths of the visible spectrum at uniform intensity. In reality, most light sources cannot achieve such perfection. Widow - A single word in a line by itself, ending a paragraph, or starting a page, frowned upon in good typography. Wire Side - The side of a sheet next to the wire in paper manufacturing; opposite the felt or top side.
With the Grain - Folding or feeding paper into a press parallel to the grain of the paper. Return to top ↑ Xerography - An electrostatic nonimpact printing process in which heat fuses dry ink toner particles in electrically charged areas of the substrate, forming a permanent image. The charged areas of the substrate appear dark on the reproduction, while uncharged areas remain white. X-Height - The height of lowercase letters in a font (not including ascenders or descenders). XML - eXtensible Markup Language is designed especially for Web documents. It enables Web authors and designers to create their own customized tags to provide functionality not available with HTML. Return to top ↑
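XML's customized tags can be produced and read with any standard parser. A minimal sketch using Python's standard library; the tag and attribute names here are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A document author is free to define tags like <printJob> or <stock>,
# which plain HTML does not provide (names invented for this example).
doc = """
<printJob>
  <title>Spring Catalog</title>
  <stock coated="true">80lb gloss</stock>
  <quantity>5000</quantity>
</printJob>
"""

root = ET.fromstring(doc)
assert root.tag == "printJob"
assert root.find("quantity").text == "5000"
assert root.find("stock").get("coated") == "true"
```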
SD council considers plastic bag ban Plastic carryout shopping bags would be prohibited in San Diego if an ordinance advanced by a City Council committee today ultimately passes. The Natural Resources and Culture Committee voted 2-1 to forward the proposed law to the incoming city attorney for a legal analysis before the issue goes before the full City Council. Councilman Kevin Faulconer cast the dissenting vote, saying he wants to explore alternatives that favor recycling over an outright ban on plastic carryout shopping bags. "I do have concerns on how this will impact consumers and how this will impact cost,'' Faulconer said. Councilwoman Donna Frye argued that something needs to be done to help stop the proliferation of plastic in the environment. "The fact of the matter is that when you think that almost every single piece of the planet probably has a piece of plastic on it, at some level you have to start asking yourself 'is that the kind of planet we want to pass on to our kids?' " Frye said. The proposed law would prohibit supermarkets and pharmacies from providing plastic carryout bags to customers, beginning July 1, to encourage the use of reusable shopping bags. Customers could also opt to pay a 25-cent per-bag fee for paper carryout bags. Similar laws have already been enacted in San Francisco, Los Angeles, Malibu and Manhattan Beach. Encinitas is the only other city in San Diego County considering prohibiting plastic carryout bags. Outgoing City Council President Scott Peters said he supports a ban on plastic bags to help keep San Diego's beaches clean, but expressed concern about exposing the city to litigation. "This is a difficult time for the city to be taking on costs, and I take very seriously the threat of litigation and the CEQA (California Environmental Quality Act) compliance issues," Peters said. A ban on carryout plastic bags in Oakland was recently overturned due to the lack of an environmental review. 
Mayor Jerry Sanders does not support the proposed law due to the threat of litigation and because of the potential impacts it will have on small businesses, a mayoral policy adviser testified. Danielle Miller, with San Diego Coastkeeper, said plastic bags often end up in the ocean, where they damage the ecosystem. She said volunteers with Coastkeeper have removed 12,000 plastic bags over the past two years from city beaches. "Many of these bags actually get blown into the ocean or they get washed down storm drains, where they impact the ocean environment negatively," Miller told the City Council committee. The American Chemistry Council's Jennifer Forkish called for more time to consider alternatives to a ban on plastic bags. She said plastic bags use far less energy to create than paper bags, which are also much more costly. "A ban on plastic retail bags would naturally force retailers to use alternative bags, namely paper," Forkish said. "These products are three to five times more expensive, which means higher operating costs for stores and higher prices for their customers. "Businesses and consumers cannot afford higher costs during these difficult economic times,'' she said. It's unclear when the ordinance will go before the full City Council for consideration, but Frye requested that it be heard within 90 days after the new city attorney has had a chance to review the issue.
Elsa Marston children's author Fiction, young adult An adventure in ancient Egypt poems inspired by animals depicted in Southwestern rock art history, political science Stories in Y-A collections Quick Links Find Authors THE OLIVE TREE (illustrated by Claire Ewart) A story for all ages about an old olive tree in Lebanon, a source of pleasure, of conflict, and much more. Based on the author's award-winning and much reprinted short story, with exquisitely rendered illustrations. In a mountain village, following many years of devastating civil strife, an old olive tree still bears fruit that Sameer's family calls the best in Lebanon. But there's a curious thing about that tree. While some of its twisting branches drop their olives on the land of Sameer's family, its roots grow in the yard of the neighboring family--who, because of the war, have not lived in their house for many years. Now they're coming back. Will they have children, especially a boy who can be Sameer's buddy? It turns out they do have a child Sameer's age--a girl. But Muna is not friendly, and she makes it clear that she doesn't want to share the best olives in Lebanon. No olives, no buddy, not even a friendly smile from the other side of the stone wall between the two houses. Sameer is disappointed and resentful. Then something heartbreaking happens. Will Sameer find that the olive tree may have one last gift, before it's too late? For group discussion, here are some background information and suggested questions. A bit of background . . . . Lebanon, a Middle Eastern country along the Mediterranean Sea, part of the Arab world, is unique in many ways. It's famous for beautiful mountains covered with pine trees and fruit trees, houses built of warm-colored stone with orange roofs, fascinating history going back thousands of years, and talented and enterprising people.
Because the population of Lebanon includes large numbers of people from several different religious sects--Muslim, Christian, Druze, and others--the people have traditionally tried to share power in ways that reduced friction. From 1975 to 1991, however, Lebanon was torn by civil war, with fighting forces from many different groups within the country and from neighboring countries as well. The war intensified anxieties and anger, which led to violence and heartbreak. In some villages where people of different religions had lived together peaceably for generations, neighbors turned against neighbors. (And in some villages, the people refused to give in to fear and hatred.) I was in Lebanon at the outbreak of violence and again when things had more or less settled down. Of course I was deeply distressed by all that happened during those sixteen long years of destruction. But because I try, in my writing, to search for some signs of hope, I decided to write a story about people who are trying to find ways to get past their differences and their memories of hatred and anger. The result is this story, "The Olive Tree," which I first wrote in 1994. It has won a number of national awards and reprintings, and at last--and still timely, I believe!--it is a book, with illustrations that convey both the beauty of the land and life in Lebanon, and the emotional complexities that still lie below the surface. Some questions for discussion . . . . THE OLIVE TREE takes place in Lebanon, a country in the part of the Middle East that some people call "the Holy Land." What did you learn about Lebanon, from the story and pictures, that may have been different from what you were expecting (such as the land, houses, animals, people's clothes)? The story says that Muna's family moved away from the village during a time of trouble, because they were "different" from most of their neighbors. In what ways do you think they may have felt different? Have you ever seen olive trees?
What sort of land and climate do you think they like? When Sameer climbs the olive tree, do you suppose he picks the ripe olives and eats them, like cherries? Why do you think Muna doesn't want to make friends with Sameer? Sameer starts picking up pieces of the shattered tree and taking them to Muna's house. Do you think he really needs to do that, after Muna has behaved so selfishly? What might you have done, in Sameer's shoes? Most stories have a "hero" of some sort. Does this one? Why do you think so? There seem to be lots of trees in this village, but the houses don't appear to be made of wood. Why do you suppose the people build their houses in a different way? Have you ever seen something carved from olive wood? Do you remember anything unusual about the wood? Maybe you can act out this story! Make up two or three short scenes, like a play, with three or four characters--Sameer, Muna, the olive tree, and maybe a grownup. Let's see what these characters say and think. (Don't forget the olive tree!) You could either write down some of their speeches, or make them up as you go along. Here are some possible scenes: Muna sees Sameer picking up the fallen olives, and decides to talk to him. Sameer starts to pick up the shattered wood. What goes on in Muna's head--and in Sameer's? What does Sameer say to Muna the next time he sees her?
Every tourist has heard the rule: only drink bottled water in foreign countries. That maxim is especially true for travelers in Tanzania, a still-developing nation where lack of infrastructure often means bottled water is the only safe drinking water. But the reliance on bottled water comes with one significant drawback: huge amounts of plastic waste in a nation that is only now taking the first steps towards recycling. One group, however, is turning trash into treasure for their community. Meeting Point Tanga (TICC), a community development organization with a particular focus on education, decided to build an internet café for students entirely out of discarded bottles. [Image: the building in progress. Photo: TICC Meetingpoint Tanga] According to TICC director Rugh Nesje, the decision to build using bottles served two purposes: showcasing a better alternative to burning plastic waste, tossing it in dumps, or simply throwing it along the roadside (all of which are common practices in Tanzania); and creating a community space where students can gather safely outside of school hours. The group used a total of 3,450 bottles of different sizes to create the structure. It’s a small dent in the worldwide issue of plastic waste—the US alone goes through a staggering 29 billion water bottles a year—but it’s a start in a country where thousands of bottles are used every day. Best of all, the building has some structural advantages; the relative flexibility of the bottles means they can take heavier loads and withstand shocks more easily than rigid building materials like brick. This low-cost project will have a huge impact on youth in the community, who will now be able to access the internet freely outside of school hours. Hopefully it can also serve as a model throughout Tanzania, and the rest of the developing world.
Significant investment in agricultural education, followed by targeted micro-loans, is having a hugely positive effect on Tanzania’s agricultural sector, according to Robert Pascal, the head of agri-business for Tanzania’s National Microfinance Bank (NMB). Operating primarily in parts of the country where farming is still the main source of income, the NMB currently has over 170 branch offices in 90 different regions nationwide. Thanks in part to a $588 million loan from the U.S., the bank has been able to issue micro-loans to over 261,000 farmers throughout Tanzania. Before issuing these loans, however, the bank invests in a more far-reaching, structural fashion, providing educational opportunities on topics ranging from fertilizer benefits to improving overall farm efficiency. About 500,000 farmers nationwide have attended the seminars. The seminars are helping spread best practices in a sector that still comprises some 30%+ of Tanzania’s GDP. Currently just 16-17% of Tanzanian farmers employ basic fertilizers to improve crop yields, and 92% of farmers still rely on hand hoes to harvest their crops. Training on how to employ more modern technologies, followed by loans that allow the farmers to invest in them, have already been showing positive results. The NMB has been working closely with agricultural research institutions and farm equipment suppliers to ensure the program is functioning smoothly from start to finish. To date, the NMB has issued upwards of $150 million in targeted micro-loans at every level of the agricultural production chain. “Substantial finance to agriculture, particularly in the latest and most relevant technology, is an important component in commercializing” the business, Pascal said. 
“The NMB will continue to support farmers to make farming a highly paying business.” As part of ongoing efforts to improve worldwide health, in particular in poor and underserved areas, Global Handwashing Day was held on October 15th. Tanzania was one of 100 countries worldwide to participate in the event, which aims to bring awareness to the necessity of this often neglected (or poorly understood), but highly effective practice in reducing the spread of disease. Over 200 million people worldwide participated in events and education programs related to Global Handwashing Day. Especially in poor areas where access to clean water and adequate sanitation may be limited, the importance of thorough handwashing with soap and water is critical. Studies have shown that even with unclean water, washing with soap greatly reduces the spread of illness. Studies have also shown that even in countries where the average standard of living is high, and the link between human feces and disease is well understood, individuals often neglect to wash their hands. WaterAid (formerly WaterCan) was one of the most prominent NGOs participating in the worldwide event. The group, which focuses on improving access to clean water and safe sanitation worldwide, considers the event an integral part of their annual efforts. Past efforts by WaterAid to draw attention to clean water issues and to raise funds to help improve access have included awareness walks, philanthropic gala events, and worldwide adventure trips that double as fundraisers (including a Kilimanjaro trek with Thomson Safaris).
One of these shots made it to The ART FOR AIDS, Rumah Cemara X Lomonesia BDG Exhibition. Can you guess which one?
Anna Kirsten Lemnitzer It can be said that going through various mental states and conditions is universal in the human race and therefore a reflection of the societies in which we live and participate. This continuum in turn allows us to be capable of understanding others at a deep and profound level. Thematically, my work examines perceptions of “self” and “psyche”. These explorations often reference an individual’s domestic and very personal spaces (including the body), which allows for a physical awareness and appreciation of another’s personality and psychological characteristics. In my artistic practice, the content dictates the materials and methods necessary for the formulation of an idea. The approaches I use include performance, video and installation, all of which ride the backbone of drawing and traditional applications. The manipulation of ideas through color, processes, space, sound and the body to fabricate visual metaphors is a rich area of exploration. The psychological, physical and often intimate experience that is manufactured through art creates understanding and awareness, allowing me to communicate an aesthetic experience to my viewer.
A letter to Michael Shermer This morning I was pointed toward a post written by Dr. Michael Shermer, a prominent skeptic, author, and neuroscientist. In it, he responds to an article by author and fellow FTBorg Ophelia Benson in which she sharply critiques the acceptance of stereotypes about the agency and willingness of women to speak up in skeptical circles, using a snippet of a statement that Dr. Shermer made in an interview: that while the gender ratio of non-belief is probably roughly even, it may be that men are more willing to speak up about it, which is one explanation of why it is more difficult to book female atheists for interviews. I encourage you to read both Ophelia’s article and Dr. Shermer’s response first. My response is below: Hello Dr. Shermer, I remember watching the interview in question and being annoyed by your response to the question of why it was more difficult to find female atheists to join discussions. Your response, that speaking out might simply be “a guy thing”, was non-controversial but nonetheless disappointing, because this is not a question about which there is no information. You are, by your own admission, aware of the growing role that feminist discourse has been playing in the skeptic community overall in the past number of years. And yet, despite your awareness of its existence, your response betrayed no hint that you had listened to or understood anything that had been said by those voices – which is not to say that you haven’t, but there was certainly nothing in your “guy thing” response that suggests you have.
Let’s rewind the clock a bit and look at the context into which your statement was spoken. A mysterious and puzzling mystery There are some things, for all our vaunted expertise and powerful scientific tools, that we can simply not seem to answer. We may never be able to figure them out. They are the mysteries of the universe. And this is one of them: A new poll released by the charitable organization Samara suggests Canadians are less satisfied with their democracy compared to eight years ago. Last spring, researchers conducted a poll using a question identical to one used in 2004, asking respondents about their level of satisfaction “with the way democracy works in Canada.” It’s weird. Why would people’s confidence in the Parliamentary system decline so precipitously since 2004? What has changed since then? Anything? I certainly can’t think of an answer. Feedback on the new layout Those of you who browse from the main page will probably have noticed that Freethoughtblogs (including this page) has a new layout. What may be apparent to you, but not to me, is how that has changed functionality with commenting, or the readability, or any number of other factors that affect how you use the site. If there’s anything missing, inaccessible, or otherwise seriously problematic about the new format, please let me know in the comments or by e-mail. You can also tell me stuff like “silver is stupid” or “the new logo is for dorks” or “I like the old site better”, but please rest assured that I am ignoring you, because I can’t change that stuff. A response to Larry Moran As I mentioned in my summary of my experience at Eschaton2012 in Ottawa, I had a brief exchange after my presentation with biologist and blogger Larry Moran. He took me to task for a statement that I made during my presentation, in which I asserted that race is not a biologically-defined reality, but rather a socially-derived construct.
In response, Larry had this to say: My position is that the term “race” is used frequently to describe sub-populations of species, or groups that have been genetically isolated from each other for many generations. By this definition, races exist in humans just as they do in many other species. The genetic evidence shows clearly that Africans form a distinctive, but somewhat polyphyletic, group that differs from the people living outside of Africa. Amongst the non-Africans, we recognize two major sub-groups: Europeans and Asians. I see no reason why these major sub-populations don’t qualify as races in the biological sense. Please read: Do Human Races Exist? I don’t think that denying the existence of races is going to make racism go away. Nor do I think that accepting the existence of biological races is going to foster racism. I think that most of my disagreement with Dr. Moran (or perhaps more accurately his disagreement with me) is a product of a number of things. The first and most obvious one is my lack of familiarity with the full scope of the genetic literature when it comes to human beings and their (our) descent trees. The second seems to be an unfortunate result of the time limit of the presentation and the imprecision of the language I chose. The third one is a bit more complicated, but has largely to do with what evidence we are using to arrive at a definition. I will discuss each of these issues in detail, with the hope of clarifying the problem.
A retelling of any fairy tale in English, 5-6 sentences • Cinderella (the beginning) Once upon a time there lived a happy family: a father, a mother, and their only daughter, whom her parents loved very much. For many years they lived happily and joyfully. Unfortunately, one autumn day, when she was sixteen years old, her mother fell seriously ill and died a week later. A deep sadness reigned in the house. Two years passed. The girl's father met a widow who had two daughters, and soon married her. • The Tale of the Golden Cockerel    In a faraway kingdom, in a distant land, there lived the glorious Tsar Dadon. His neighbors attacked him readily, and in his old age he wanted to rest from military affairs. But his neighbors kept troubling him, and he could not protect the kingdom, so he turned to a wise man. The wise man took a golden cockerel out of his bag and told the tsar to set the bird on a spire: as long as everything around was peaceful, it would sit still, but if war threatened, the cockerel would raise its comb, flap its wings, crow, and turn toward the danger.    The tsar thanked him and promised to grant his every wish. The cockerel began to guard the borders, and the neighbors grew quiet.    A year or two passed peacefully. Then the cockerel crowed again. The tsar sent an army to the east, led by his eldest son, and the cockerel calmed down.    Eight days passed with no news from the army, and the cockerel crowed again. The tsar called up more men and sent his youngest son to the rescue. The cockerel quieted once more. Again eight days went by with no word from them! The cockerel crowed a third time, and the tsar gathered his men and led the third army to the east himself.    The troops marched day and night but found no battlefield. The tsar led his army into the mountains, and among the high peaks he saw a silk tent. Around the tent lay a defeated army. Tsar Dadon hurried to the tent...
What a terrible picture! Before him lay his two sons, without helmets or armor, both dead, each having plunged his sword into the other. Suddenly the tent flew open... and the maiden Queen of Shamakhan quietly came out to meet the tsar. Before her, he forgot the death of his two sons. She took him by the arm and led him into the tent, and for a week Dadon feasted with her.    He set off home with the girl. Amid the crowd appeared the eunuch sage, who asked for the maiden as his promised reward. The tsar refused. The tsar struck him on the forehead with his staff; the old man fell on his face and gave up the ghost. Just as the tsar entered the city, there was a light ringing: the cockerel fluttered down from its spire, flew to the chariot, perched on the crown of the tsar's head, pecked him on the crown, and soared up... and at that very moment Dadon fell from the chariot! He gasped once, and died. And the queen suddenly disappeared, as if she had never existed. The tale is a lie, but there is a hint in it: a lesson for good fellows.
After the Meiji Restoration As a result of the abolition of feudal domains and the establishment of a centralized prefectural system put in place by the Meiji government, the castle site fell under the control of the War Ministry, and all unnecessary buildings were torn down. For a period of time the main buildings in the secondary enclosure were used as regimental headquarters before they burnt down in an accidental fire, which also destroyed many of the other remaining buildings. Although barracks were still standing on various parts of the site, after the Second World War it became the campus of Kanazawa University.
Bridging the Literacy Gap in Science How do you teach the new science standards to students who struggle with reading and writing? Many middle school science teachers face this dilemma, which is especially challenging for science educators at my school. Like many urban schools across the country, we have a large population of students who are behind academically; many have specific learning difficulties. We also serve a large population of English Language Learners. How can we ensure the success of these students and bridge their literacy gaps? The South Carolina Department of Education (SCDE) developed new science standards in 2014 that are based on the Framework for K-12 Science Education. These new standards require students to develop and use models, obtain and use information, analyze and interpret data, and construct scientific arguments. SCDE also expects students to use reading, writing, speaking, and listening skills throughout their scientific processes. Meeting these requirements can be difficult for students who struggle with reading and writing. They also challenge teachers to bridge the learning gap and develop students' abilities to successfully use these higher-order literacy skills. Scaffolding Learning Students are expected not only to be immersed in authentic science investigations, but also to effectively communicate what they learned during those investigations using observable and measurable information and accurate science terminology. With that in mind, I combined several tried and true, research-supported strategies to scaffold my students' learning and bridge their literacy gaps. Prior to doing an investigation, students learn key vocabulary terms in order to communicate their findings. They use graphic organizers to compare and contrast concepts, and they engage in argument writing using evidence to support their claims.
According to Nicole Stants in her 2013 NSTA Science Scope article, "Parts Cards: Using Morphemes to Teach Science Vocabulary," teachers can use morphemes—prefixes, suffixes, and root words—to help students learn vocabulary. So, we break down words into these basic components. When students become familiar with common morphemes, they can use that knowledge to determine the meaning of unfamiliar words. Each week, I also present four new science stems that are associated with our current vocabulary terms. The scholars keep a master list of the stems in the front of their science journal; an entry includes the stem, its definition, the concept that it is related to, and a vocabulary term. They also keep an index card book that includes modified Frayer models for each stem (see Figure 1) that they've constructed. Another learning strategy is graphic organizers. Students use them to organize notes, classify or categorize information, or compare and contrast concepts. Some graphic organizers also require the students to draw visual models of concepts. BrainPOP is a great source for a variety of activities that reinforce science concepts. Following the 5E Model All of the lessons follow the 5E Model of Instruction: engage, explore, explain, elaborate, and evaluate. We begin with an engagement activity such as a video or question related to the content being taught, then follow up with an exploration activity. If the engagement or exploration involves text, I use a variety of strategies to help bridge the literacy gap. For example, sometimes I think aloud while reading the text and have the students follow along. I may divide the text among the students and have them do a jigsaw activity. Each group of students reads and annotates the text using annotation marks that have been standardized and used school-wide. Then, each group presents what they have learned to the rest of the class.
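The morpheme strategy described above can be pictured as a small lookup: match known prefixes and suffixes against an unfamiliar word to recover its parts. This is only a hypothetical illustration in Python; the stems and meanings shown are common Greek/Latin examples, not the article's weekly list.

```python
# Illustrative sketch of morpheme decomposition (not from the article):
# students check a word against a list of familiar stems to guess its meaning.
PREFIXES = {"bio": "life", "geo": "earth", "therm": "heat",
            "photo": "light", "hydro": "water"}
SUFFIXES = {"logy": "study of", "meter": "measure",
            "sphere": "ball/layer", "graph": "write/record"}

def decompose(word):
    """Return (stem, meaning) pairs for any known prefix or suffix in the word."""
    word = word.lower()
    parts = []
    for stem, meaning in PREFIXES.items():
        if word.startswith(stem):
            parts.append((stem, meaning))
    for stem, meaning in SUFFIXES.items():
        if word.endswith(stem):
            parts.append((stem, meaning))
    return parts

print(decompose("thermometer"))  # [('therm', 'heat'), ('meter', 'measure')]
print(decompose("biology"))      # [('bio', 'life'), ('logy', 'study of')]
```

A word with no known stems simply returns an empty list, which mirrors what happens in class: the student falls back on context clues instead.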
As the other groups are "teaching" the rest of the class, students are writing information in their journals. When the students have completed their presentations, I provide direct instruction related to the text, followed by a check for understanding, such as exit slips or a quick write. Quick writes may require students to answer a question, write a summary, complete a Venn diagram, or write a question of their own. Next, the students engage in an authentic lab experience or performance assessment. For example, when studying acids and bases, the students participate in an investigation using indicators to identify solutions. They then demonstrate a neutralization reaction between an acid and a base. The students record their observations and write an explanation of what happened when the acid and base were combined. Writing a descriptive explanation is the most difficult part of a lab experience for my students. I have been working with them on writing from the third-person point of view. Some educators make the argument for using the first-person in argumentative writing because they believe texts using "I" can be just as well-supported as those that don't. However, my students struggle with starting every sentence with "I think." If they take themselves out of the experience, they are forced to think more critically and objectively about what they want to communicate. I also encourage them to avoid the use of pronouns wherever possible and to write technically using proper science terminology. Finally, to reinforce the concept, students write an argument based on a real-world scenario. They copy the scenario into their notebook and create a graphic organizer. I provide them with sticky notes on which they write down the evidence to show what happens when an acid and base are combined. All the students place their "evidence" sticky notes in a designated area on the board, and we group them with other similar responses. 
As a class, we review all of the responses and determine what evidence we want to use. Step two requires students to repeat the process by writing an inference as to how the scenario could be solved based on the evidence we chose to use. The final step requires them to make a claim by combining their evidence and inference. We review all of the responses as a class. After the group activity is completed, the students work independently to construct an explanation that solves the scenario. Again, students are required to write in the third person using technical vocabulary. Bridging the Gap It is still too early to tell what effect these strategies will have on bridging the literacy gap and helping all my students learn science concepts, but I am confident they will make gains. The strategy extends and expands their scientific reasoning and motivates and focuses their learning, helping all scholars bridge the learning gap. Theressa Varner is a seventh grade science teacher at the ARMS Academy at Morningside Middle School in Charleston, South Carolina. Published in AMLE Magazine, August 2016. Author: Theressa Varner One Text—Many Perspectives Writing and reading "I Am" poetry through different lenses. • Where did the pandemic start? • How did the pandemic spread? • When did the pandemic end? • How many Americans died from influenza? • From the article, what do you think a pandemic is? Some students began to read; some peered at the questions on the board and skimmed the article for the answers; some stared at the board, not knowing how to begin. When they finished the assignment, they had gained neither deep understanding of the pandemic nor empathy for its victims. They had all approached the subject from one perspective—that of middle school students reading about an event that took place a century ago and affected people with whom they felt no connection.
Had students read from a variety of perspectives through the eyes of citizens, doctors, and even President Woodrow Wilson, they might have developed a deeper, broader understanding of the significance of the influenza pandemic that swept the globe, killing an estimated 675,000 people in the United States alone. They also would have begun to develop life skills, such as empathy, openness to new ways of thinking, and the ability and willingness to think reflectively—all skills that support the Common Core State Standards. "I Am" Poems One of the most effective ways to engage students with a text is through "I Am" poems. The I Am poetry format (see chart above) puts the readers into someone else's shoes, so to speak, requiring them to read more deeply, closely, and critically as they explore text from a particular point of view. I Am poems can be used in all disciplines. In English-Language Arts texts, readers can take on the perspective of major and minor characters and even characters who don't directly appear in the text, such as the residents in the convent across the street from Mr. Pignati's house in The Pigman. Social studies offers countless opportunities for students to consider the perspective of persons in history, from General William Tecumseh Sherman to a nameless Confederate soldier or a native child forced to walk the Trail of Tears. Science students can write as a famous scientist, as a scientific phenomenon, as someone affected by a scientific event, or even as a tree. In health classes, students can respond to articles about issues such as concussion in "I Am a Victim of Chronic Traumatic Encephalopathy" or "I Am a Football Coach." Incorporating I Am Poetry Use the following steps to incorporate reading and responding in the I Am poetry format: 1. Distribute the text to be read. 2. Assign the reading. 3. Brainstorm with the class perspectives from which the text can be viewed. 4. 
Explain how students will choose the perspective(s) from which they will re-read the text. 5. Assign students to re-read the text from their chosen perspectives, marking details important to them from that viewpoint. This includes text evidence and inferences based on the text. 6. Explain the I Am poem format and examine and analyze the text for ideas. 7. Invite students to revise any verbs that may better fit their interpretations and responses and to add research from other sources. I distributed the article, "The Great Pandemic of 1918–19," to a class of eighth graders. As they read, they used a during-reading response strategy that I refer to in my book, The Write to Read: Response Journals That Increase Comprehension, as marginal notes. They marked in the right margin of the article: √ = I knew this N = new information (I didn't know this) ! = important information about the pandemic I then assigned the activity: Write an I Am poem, "I Am a Philadelphian in 1918–1919." The class reflected on the content of the article and brainstormed various perspectives from which readers could re-read the article and write a response. • I Am a Man/Woman Living in Philadelphia in 1918–1919 • I Am a Child Living in Philadelphia in 1918–1919 • I Am a Victim of The Great Pandemic of 1918–1919 • I Am a Funeral Director in Philadelphia in 1918–1919 • I Am the Mayor of Philadelphia, 1918–1919 I added an option to encourage creativity: I Am The Great Pandemic of 1918–1919. Students based their poems on the facts given in the article, plus personal knowledge and research about influenza, Philadelphia, or the time period. Students read, wrote, highlighted the facts they used in their poems, added "because" statements wherever appropriate, and shared their favorite lines. Some took their poems home to revise and further research.
As they read each other's poetry, the students observed that some classmates focused on the same facts in the same way, some perspectives interpreted the same facts in dissimilar ways, and some regarded different facts in distinct ways. Sample Perspectives Hinton wrote from the perspective of a victim. Here is an excerpt: I am one of the many victims of the Great Pandemic of 1918-1919. I wonder about the other 675,000 Americans who died, leaving orphans or widows. I hear about the eighteen cases of influenza that were reported in Kansas. I see the results of the three waves of the Pandemic that occurred in late spring and summer of 1918, the fall of 1918, and the spring of 1919. I don't want the recovered men to develop secondary pneumonia, "the most virulent, deadly type." I am starting to become fearful for the world. I pretend to be strong. I feel that the Pandemic shouldn't have spread from the military to the civilian population. I touch my chest to make sure my heart is still beating. I worry it will be too late before this outbreak is over. I cry at the fact it has spread to Asia, Africa, South America, and back to North America. I am trying to believe that everything will be all right. The same format in a sixth grade science class required students to choose a famous scientist and conduct online research about that person. Integrating their class notes, they wrote I Am poetry from the perspective of the scientists. Phoenix wrote as Maria Tharp, a female scientist born in 1920. Here is an excerpt: I am Maria Tharp. I wonder if the ocean floor is really flat. I see girl scientists being neglected. I want girl scientists to be excepted [sic] and respected. I am Maria Tharp. I pretend the ocean floor is rugged and bumpy. I feel rejected because I was not allowed to board a research vessel that was going to cross the sea when all the men did. I touch the maps that I create. I worry that my theory of the ocean floor is incorrect. 
I cry because girls are not being able to become great scientists even if they are smart. I am a mapmaker of the ocean floor. From this student's poem, it is evident that she not only learned what Maria Tharp contributed to society but also recognized the struggles Dr. Tharp endured at that time in order to make those contributions, something this young reader may have missed if she had written from the perspective of a young woman living in 2016. Choice, Creativity, Comprehension In addition to encouraging and training readers to read and interpret from multiple perspectives, I Am poetry can be used as an after-reading response strategy for readers to take themselves back to the text multiple times, comprehending at a deeper level as they analyze to synthesize and manipulate text. In that way readers actually learn material. And because writing I Am poems allows for choice, creativity, and fun, more students are engaged, which is the point of any academic activity. Lesley Roessing, a middle level ELA-humanities teacher for over 20 years, is a senior lecturer in the College of Education at Armstrong State University, director of the Coastal Savannah Writing Project, and editor of Connections, the journal of the Georgia Council of Teachers of English. The ideas for this article were taken from strategies included in her book, The Write to Read: Response Journals That Increase Comprehension. Published in AMLE Magazine, August 2016. Author: Lesley Roessing Literature Circles: Student-Driven Instruction Literature circles bring reading to life for these sixth graders. What did we just read? I am so bored! Why are we reading this? What's the point? Sound familiar? These were some of the responses to reading that I received during my first year teaching bright and talented sixth graders. How could 24 students, who for the most part liked reading, have those attitudes?
It had to be the way I taught. How could I engage all 24 students? I needed a reading instruction strategy that engaged the students and held them accountable. Literature circles seemed to fit the bill. Literature circles keep students accountable within their group, incorporate discussions about what they read at each meeting, and provide a choice that will keep them interested and invested. When I decided to implement literature circles I kept five things in mind: (1) students get a choice in what they read, (2) the teacher models literature circles to the class, (3) students remain accountable due to roles assigned within the group, (4) students have discussions about what they read and complete journal entries, and (5) students have a final project by which the teacher can assess comprehension and understanding. 1. Giving students a choice. I asked my students what types of books they were interested in reading. I had an idea, but wanted to show them that they had a voice in their learning and had a choice of what to read. After discussing the types of books the students were interested in, which included dystopia/utopia novels, I began to research new and forthcoming novels that my students had not read and that seemed interesting to their age group. I presented a list of books to our librarian. Not only was she enthusiastic about finding the books on my list, she gave me other options from schools around the district. We came up with the following books based in part on being able to have enough copies for the students in my class: • Barcode Tattoo by Suzanne Weyn • The Uglies by Scott Westerfeld • The People of Sparks by Jeanne DuPrau • Among the Hidden by Margaret Peterson Haddix • Life As We Knew It by Susan Beth Pfeffer • Enclave by Ann Aguirre I presented the titles and authors as a book talk and asked the students to rank the books in order from those they most wanted to read to least favorite or already read.
Based on the results of the survey, students were assigned to their literature circle group. 2. Modeling the literature circle. To help students understand the basics of book discussions I modeled the roles within a literature circle by reading a picture book to the class and then leading a discussion about the book using a fishbowl strategy. In a fishbowl strategy, a small group of students are in the middle of a circle discussing the book while the majority of the class members are listening from outside the circle. I followed the demonstration with a video of a literature circle. As a class, we talked about the positive and negative aspects of the literature circle discussion presented on the video and how they could improve it. Now that the students understood how the discussion should work in a literature circle, I modeled the roles of connector, discussion director, summarizer, literary luminary, and illustrator, which the students would rotate through during each meeting. 3. Holding students accountable. Students each received a schedule indicating which roles they were to play each day. Having assigned roles not only made the students accountable for reading the assignment, but also helped them prepare for discussions within their group. Students had 15 days to read their book front to back and were responsible for calculating how many pages they had to read each day to complete the novel in the time allowed. Role sheets helped prompt discussions at the beginning, but toward the end of their literature circle, students were having open discussions about what they had read and did not need their role sheet for guidance. 4. Discussing and journaling. Before each literature circle meeting, students spent 10–15 minutes in their small groups discussing what they had read and asking questions before jumping into their book. Every student had a voice and was accountable to discuss within the group.
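The reading-pace calculation from step 3 amounts to dividing the book's page count by the 15-day window and rounding up. A minimal sketch, assuming a hypothetical helper name and sample page count (neither comes from the article):

```python
import math

def pages_per_day(total_pages, days=15):
    """Minimum whole pages a student must read each day to finish on time."""
    if days <= 0:
        raise ValueError("days must be positive")
    # Round up so a partial page on the last day still counts as a full day of reading.
    return math.ceil(total_pages / days)

# A hypothetical 425-page novel over the 15-day window:
print(pages_per_day(425))  # 29
```

Rounding up matters here: with 425 pages, dividing evenly gives about 28.3 pages per day, but a student reading only 28 pages a day would fall short of the deadline.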
Students made journal entries five times during those 15 days, writing about the characters, setting, plot, problems, and resolutions. Across the five journal entries, all students increased their higher-order thinking skills. They credited the discussions with helping them understand what they had read and their ability to write about it. 5. Assessing reading comprehension. At the end of the 15 days, students completed two final projects based on their novel. One project was a tic-tac-toe board where students completed three small projects relating to character, setting, and plot. Students had a choice of activities such as write a newspaper, create a wanted poster, write a skit, write a song, or develop a sketchbook with captions. Given choices, students were invested in the project and had a voice in their learning. They had the motivation to complete their projects and were enthusiastic about presenting them to the class. For the second project, students created their own society and tried to persuade their classmates to be part of it. Since these students were reading utopia/dystopia novels, the society project fit nicely. They could implement what they knew about societies and incorporate the information they gained from their novel, including social studies, technology, math, reading, and science. Students had the option of working independently or in pairs. To "recruit" their classmates, students prepared a Google slideshow to present to their peers. Students' slides addressed all elements of a society, including money, government, family, education, recreation, and transportation. Students developed the presentations using Google Classroom, which made it possible for me to see their progress as they added slides. Above and Beyond Literature circles not only improved my students' reading comprehension, they also increased their love for reading and their motivation to read. 
During the 15 days we spent with literature circles, every group finished the assigned book and three finished their assigned book plus the next one in the series. One group not only finished their assigned novel, The Uglies, but they finished the whole series! I didn't suggest that they read the whole series; I simply told the class to have their novel read front to back in 15 days. They enthusiastically read not one, not two, but four novels in 15 days! With literature circles, the students take control and the learning is driven by the students, for the students. The teacher takes on the role of facilitator—and isn't that what we really want? Samantha Schnoor is a fifth grade teacher at Century Elementary School in Grand Forks, North Dakota. Published in AMLE Magazine, August 2016. Author: Samantha Schnoor Making a Middle School Magazine A student-produced magazine celebrates middle level student voice. In October 2015, a team of seven editors—all eighth grade boys at St. Christopher's School in Richmond, Virginia—met during lunch to compare two digital publishing platforms. They judged entries for the Cover Art Contest and debated the potential of QR codes. By mid-November, a staff of 36 students released the fall edition of PaperBoy, a 58-page magazine focused on student culture. Within a month, the digital magazine had been read by 700 viewers in the United States, plus a dozen viewers globally, including several from Australia, Thailand, and the UK. Why a Middle School Magazine? Authentic writing experiences have been credited with motivating students to compose their best work. Nancie Atwell's In The Middle, along with her subsequent publications, has validated the importance of empowering student voices. Writing for a publication allows students to explore a choice topic, serve as an "expert in residence," and build social connections with peers who share common interests.
Students write in an authentic way, collaborate meaningfully, and often strengthen their personal identity. Nate reviews classic movies, Brett writes humorous pieces, Lane views video games with an intellectual lens. Voices emerge with increased confidence when students have the safety net of a team initiative. Although a school newspaper offers those same features, a magazine offers many more and varied advantages for young adolescents. Consider that if each student is given a page for free expression, that student has complete ownership of the space. Some may choose to tell their story with images, simply select a suitable background color, or designate fonts to customize the appearance of their pages. And, there's plenty of room for partnerships: a student writer can pair with a friend who is a page designer. A savvy mathematician can collect survey data and analyze it with a peer journalist who translates findings into narrative form. A "newspaper column publication" requires time-consuming page formatting to ensure consistency, yet each magazine page is formatted independently. Cohesion is achieved by shuffling pages into a reader-friendly sequence. Starting Small PaperBoy wasn't always this big or this popular. Five years ago a staff of seven boys worked the entire school year to produce a 20-page publication. Our first edition was a recap of an event we call Activities Day. During this biannual event, all students sign up for an extracurricular field trip or focused project that runs for half of a school day. For our first edition, our student reporters each chose a different activity, carried old-school digital card cameras, and wrote a short synopsis of their chosen activity. They loaded their content into Glogster Edu pages and hyperlinked the pages together. The first edition went live online.
The next year, three boys who were already contributing to a library-inspired book blog were invited to try something new: input their book reviews into a Glogster page and add some images. What would they think about publishing with the PaperBoy staff? Not only did they accept the invitation, they harnessed animation tools to create Harry Potter-like moving-news images. By pairing Activities Day articles with reviews, PaperBoy increased its readership. Merging these two small groups also helped cultivate new friendships and generate more recognition within our school. In addition to releasing the magazine digitally, we printed each page (about 12–15 pages at that time) and hung them on a hallway bulletin board with the URL address printed in large type. Students gathered around the display to note who was caught by the camera, to chat about Activities Day, or to point out books they had read. Recruiting Writers We were able to quickly recruit book reviewers and serious writers, but staff growth exploded when we asked, "Would anyone be interested in writing movie and video game reviews?" We initiated a policy that limited reviewing games to those rated for teens or younger audiences; movie reviews covered those marketed as PG-13 or younger. Other popular features now include technology reviews, top ten lists, student survey results (favorite products, music, or hobbies), and teacher interviews. Offering a menu of categories can spark student interest, but individuals always feel free to propose original story ideas. This is not a club and there are no cuts. Students "join" the staff by submitting artwork, a creative story, a feature story, or a review. Platforms for Publishing Students need a canvas on which they can work. Any digital word document can serve this purpose. If you select a specialized publishing platform, first ensure that it complies with the Children's Online Privacy Protection Act. Then, check end-user agreements and subscription costs.
We started with Glogster Edu, which offers a multitude of design options but requires a subscription for a "class account" where multiple students can create. The class account is key for its faculty-editing privileges. This year, many students opted to work with Lucidpress. Our director of academic technology, Hiram Cuevas, added Lucidpress to our Google School account as part of our suite of apps. In addition to being free, Lucidpress allows us to share pages with digital collaborators. We collate our finished pages into a single PDF document and upload them to Issuu. Issuu requires a subscription, but it is far less expensive than printing paper copies and allows student work to be shared in the digital domain.

Journalism 101

We do not offer a journalism class. We do, however, ask writers to model best practices. Reviewers are required to read professional reviews on Amazon or video game websites. Feature story writers are reminded to cover the 5Ws and 1H (who, what, when, where, why, and how). All interview questions and student surveys are submitted to the faculty advisor before they are sent to their target audience. We publish what students write as long as it is appropriate for our middle school community and respectful of the values and ideas of others. If a story needs extra work and doesn't make it into the upcoming edition, editors help the student revise it so it can be published in the future. With regard to reviews, if a student dislikes a product, he is welcome to share his views in an objective way based on details. He may be asked to balance his perspective by citing a few strengths or by sharing a marketing quote from a vendor. When we started the magazine, our school had two computer labs that housed a total of 30 desktops. We are now a 1:1 school, which has been an asset. If your school isn't there yet, allot more time for students to complete their pages.

Magazine Mechanics

How does our process work for staff members?
A "draft deadline" is set for each edition, and a Google Document is established for story proposals. Students sign up digitally to request a story and/or to complete a page design. Drafts are submitted to the faculty advisor and a section editor via Google Drive. Feedback is returned within a week so students can make revisions and resubmit by the following week. At that time, page design begins. Students search for copyright-free images and record all image links for the advisor to check. Text is pasted into the designed page and then shared with the advisor and editors for final review. Study halls and lunch periods have provided ample time for team communication and collaboration. If a one-to-one writing workshop is needed, a student can meet with the advisor or an editor during recess. To celebrate the release of each edition, all contributors are invited to the library for a pizza party during lunch. We've never asked English teachers to give extra credit to staff. I do share the publication directly with teachers and parents to ensure that students' efforts are recognized. Our fall 2015 edition is available online.

A Match with Middle School Culture

Launching a student publication presents challenges, but those very challenges empower student growth. With flexibility and creativity, the entire staff develops problem-solving skills to meet team goals. From the start, we have been able to offer a meaningful realm for adolescent development. Ownership, leadership, peer relationships, and collaboration skills are all cultivated in authentic ways. Digital citizenship, writing skills, and technological savvy are embraced purposefully. If you want to engage students in active literacy and celebrate middle school voices, a magazine may be the perfect match.

Lisa Brennan is the middle school librarian at St. Christopher's School in Richmond, Virginia. Published in AMLE Magazine, May 2016.
Freewriting Middle Grades Math

Incorporating writing into math helps students understand their thinking.

If you'd asked me not too long ago about the challenges of teaching pre-algebra to adolescents, I would have talked about procedural questions such as "What is the square root of 121?" or "What is the formula for the volume of a sphere?" Today my response is very different because what counts as math in the middle grades is also shaped by notions of academic language within and across content areas.

Beyond the Numbers

More than strictly procedural fluency and factual recall, students today solve language-rich, complex, real-world application problems. For example, a problem from a unit on the laws of exponents today might read something like: "A shipping box is in the shape of a cube. Each side measures 3c^2d^2 inches. Express the volume of the cube as a monomial." In short, adolescent learners need to not only know the exponent laws, they also need to apply prior knowledge about volume as they interpret and process a multi-step word problem. Finally, and perhaps at the center of CCSS math reform, learners need to articulate on paper the thinking behind their solutions—step by step. Common Core math pushes students to think deeply and apply what they know and what they are learning. Freewriting on paper can help take adolescent learners there. Sometimes students reject writing when it comes to solving mathematical problems. However, exploratory writing—thinking aloud on paper—can provide access to higher-level questions and word problems. Or, to paraphrase a quote by British novelist E. M. Forster, "I'll know what I am thinking when I see what I say." Discipline-specific exploratory writing allows problem solvers to tease out their reasoning and the work behind each step of a solution with words.

Stop and Jot

One technique that I use is called "Stop and Jot."
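As an aside, readers who want to check the shipping-box problem above can work it out by combining the cube volume formula with the power-of-a-power law. This worked solution is ours, not the article's:

```latex
% Volume of a cube with side s = 3c^2 d^2 inches
V = s^3 = \left(3c^2 d^2\right)^3
        = 3^3 \cdot c^{2 \cdot 3} \cdot d^{2 \cdot 3}
        = 27\,c^6 d^6 \ \text{cubic inches}
```

So the monomial the problem asks for is 27c^6d^6 cubic inches, which is the target the student responses below are reaching toward.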
The Stop and Jot strategy is a brief moment when everybody pauses and writes about what we've been learning. I tell my students that they don't need to worry about correct grammar, spelling, or punctuation—they simply need to write their thoughts about the day's math activities: what they understood or what they are still trying to figure out. The idea is for students to see what they and their classmates are thinking. For example, when a September lesson centered on the laws of exponents, the Stop and Jot prompt went like this: "Describe what law of exponents would help solve this problem. What do you already know about this question?" To guide them, I provide the following instructions: "Write between 3–5 sentences about the day's problems. You can write about your understanding or you can extend a problem. You can write a question you have or you can explain how you solved the problem. Write silently and independently. We'll share our thinking on paper with desk mates." Here's what three students wrote about the shipping box problem:

1. "The question is asking me to multiply, so I use the power to a power law."
2. "I know it is multiplication because in the volume formula you multiply; I know the volume of that shape is length x width x height."
3. "I am not sure how to write the exponent law using those numbers."

Talking About Math

The great thing about Stop and Jot is that we can choose to freewrite as often or as little as we need. Students only need a journal, a pen or pencil, and a quiet place where they can write and then share with peers. Having my math students see what they think on paper has made them confident about doing math and talking about math!

Rebecca Stelfox is an eighth grade math teacher at Northeast Middle School in Charlotte, North Carolina. Published in AMLE Magazine, April 2016.
Humanities in a STEM World

Engler pointed out the United States' reliance on more qualified workers to fill the increased number of jobs in the science, technology, engineering, and mathematics fields, or STEM. "We need STEM-related talent to compete globally, and we will need even more in the future. It is not a matter of choice: For the United States to remain the global innovation leader, we must make the most of all of the potential STEM talent this country has to offer," Engler said. While students are being encouraged to explore the STEM field, it would be unfortunate if the humanities fields and skills were deemed to be less important. It is precisely skills like effective written and oral communication, written expression, and interpersonal skills that can make a qualified STEM candidate stand out from the crowd. According to the U.S. Department of Commerce Economics and Statistics Administration report, STEM: Good Jobs Now and for the Future, STEM jobs are projected to grow at a rate of 18% from 2008 to 2018, compared to 9.8% for non-STEM occupations. The job force needs prepared college graduates to fill these jobs, and the communication, problem-solving, and interpersonal skills candidates learned in the humanities are more vital than ever. For example, in a LinkedIn job posting for a senior software engineer search algorithms and data analytics position at The Home Depot, the first skills listed are "strong interpersonal skills, written and verbal communication" and "strong decision-making, problem-solving skills, critical thinking, and testing skills." Clearly, even the most technical STEM jobs require verbal and written communication skills.

STEM and Beyond

Holcomb Bridge Middle School in Alpharetta, Georgia, operates a STEM Academy as a "school within a school" model.
Teachers and administrators select students with high science and math scores and place them in a cohort for integrated science, math, and engineering classes. Students participate in a science fair, hear guest speakers from STEM professions, and complete a STEM portfolio with artifacts from their three STEM classes. The STEM Academy gives high-achieving students with an interest in math and science a great place to grow their talents and gain experience in these high-demand areas. Beyond the STEM Academy, to prepare all students to be effective communicators, the humanities department at Holcomb Bridge promotes the continuous development of our readers and writers. This year, our language arts and reading classes focused heavily on reading and analyzing technical documents. Teachers challenged students to read and use data from technical documents in their writing in order to better understand the often-difficult nonfiction texts. Our school also implemented a writing portfolio. Students save one example of persuasive, narrative, and expository writing from throughout the year, along with the prewriting and preliminary drafts. The portfolio travels with them from sixth to eighth grade and serves as an artifact of writing improvement for high school admissions. Students have the chance to reflect on their improvement throughout the year, as well as throughout their middle school career. To encourage our students' love of reading, the humanities department created a monthly book club. Teachers in all the humanities content areas (reading, English language arts, world language, and social studies) volunteer to sponsor a book club session once a month. Teachers choose books that are high interest and do not necessarily apply to the curriculum. Students are more interested and involved when they are given the chance to work with students from different grade levels and share their thoughts on the complex plots and themes.
Finally, the eighth grade language arts teachers started a program they call Literacy 4 Life. Students follow "Lacey," a cartoon girl posted on the classroom wall, through her life, using literacy skills to navigate complicated documents. Students scour documents such as the Georgia Driver's Manual, a college application, and job applications. They learn the importance of being able to apply an understanding of nonfiction documents in real-life situations. Many of our students speak English as a second language; some of their parents do not speak English at all. As such, they often are challenged to read and interpret nonfiction texts for their families. As young adolescents, they are tasked with reading, interpreting, and sometimes writing for their parents. Understanding student and family needs makes the Literacy 4 Life initiative even more applicable and important.

Maintaining Importance

English language arts, reading, and social studies teachers: have no fear for your future in education. Although technology is rising in importance and the STEM areas are gaining in popularity, there will always be a place for the humanities. Students still need to learn history. They need to know how to communicate effectively, and they must learn strategies for deciphering difficult nonfiction texts. That is where English language arts, reading, social studies, and foreign language come in. The humanities give students a foundation on which to attach new information and build a deeper level of knowledge.

Brittany Durkin is an eighth grade reading and language arts teacher and humanities department chair at Holcomb Bridge Middle School in Alpharetta, Georgia. Published in AMLE Magazine, April 2016.

Encouraging Student Voice Through Writing

These simple strategies put students' voice in their writing.
Some students come to class ready to engage and offer their voice, but others wish to go unnoticed and remain silent observers of the educational scene. The following is an assortment of creative writing, image writing, and cross-curricular writing ideas that may encourage young adolescents to become actively involved in their learning.

Creative Writing

To encourage freedom in narrative and deeper thinking, involve students in a What If writing activity. This strategy lends itself to a variety of texts; the example here involves pairing the strategy with the Jack London story "Up the Slide," found in many middle grades textbooks. In "Up the Slide," the protagonist must overcome several man-versus-nature struggles. Teachers may ask students to imagine the story in an alternative setting and to reapply the narrative and protagonist to a new set of struggles revolving around nature or another type of conflict. Students may also rewrite the narrative from a first-person or third-person point of view, addressing the Common Core point of view standards. An Open-Ended Narrative activity provides part of a story, then asks students to complete it—like a choose-your-own-adventure story. This open-ended strategy can be applied to other writing genres, such as explanatory or argumentative writing. In that case, teachers provide portions of an already-created brainstorm web, essay outline, or a roughly drafted essay and ask students to use these elements to create a more fully developed writing product. For the Genre Optional strategy, teachers create a list of writing styles, such as poems, memos, and digital texts, and ask students to reach objectives through the chosen genre. For example, students may use a digital text or poem to teach three to five vocabulary terms related to algebraic equations. Or, they may create a memo to demonstrate their understanding of a scientific process or historical event.
Image Writing

With Single-Image Writing, teachers select an image of a character or setting and ask students to make inferences in order to describe or respond to the image. These descriptions and responses can take the shape of a character web, a dialogue, or a paragraph structure. Multiple-Image Writing requires students to use their cognitive skills to connect images. These images can revolve around an already-identified theme, or teachers can help students construct a narrative or expository response to the image set. Created-Image Writing requires students to visualize aspects of a single character or story element or create an extensive storyboard design based on the events of a longer text. This storyboard design can emphasize discrete language concepts, or it can be used to demonstrate the overall structure of a typical plot, requiring students to identify and explain which panels in the storyboard relate to the exposition, resolution, and other segments of the plot.

Expanding Writing

When technology tools are available, students can create Informational Websites using digital tools like Blogspot or Weebly. Teachers can model this strategy using their own informational websites. Reading and Responding to a variety of texts across the curriculum is also important. The more students explore a variety of texts, the better prepared they will be for standardized assessments. Some students are eager to share their thoughts in the classroom; others lack the confidence, interest, or even skill set to immediately dive into the world of language. These strategies may give them their voice.

Jason D. DeHart, an eighth grade English teacher, is also a student in the Department of Theory and Practice in Teacher Education at the University of Tennessee, Knoxville. Published in AMLE Magazine, March 2016.

Reaching the Reluctant Reader

Turning negative perceptions into positive outlooks.
Common Core State Standards require that students read at complex levels. Guiding students through these increasingly complex materials can be daunting for teachers of mixed-ability students, special education students, English Language Learners, and students considered to be Level 1 and Level 2 readers. Some students do not have the same ability as their classmates; other students lack the motivation needed to read complex texts. Still others are hampered by negative attitudes toward reading. Among the several strategies teachers can use to motivate reluctant readers is keeping a growth mindset at the forefront of their thinking.

Reading Perceptions

Carol Dweck's work on growth mindset is described in her book, Mindset. Dweck describes the growth mindset as the belief that regardless of talents, aptitudes, interests, or temperaments, "everyone can change and grow through application and experience." Students must be explicitly taught how to embrace this mindset in the content areas. Unless they have fully embraced the growth mindset, they are vulnerable to academic and social stagnation, or worse, their abilities may decline in these areas. By the time students reach middle school, the enjoyment of reading, or lack thereof, has been instilled. As their minds become full of technology and social media, and academic expectations grow more complex, we must teach them how to approach reading in a positive way. Tackling a lackluster attitude may be enough to light the fire and give adolescents at least some desire to engage with the reading materials in each class. Each student is unique and has a different approach to reading. Teachers can begin to adjust their instruction to emphasize positive viewpoints on reading. Overemphasizing the difficulty of the text may shut down apprehensive readers. Instead, teachers might say a text is challenging, but then explain ways the class is going to strategize to understand the text.
Rather than saying, "This is a really difficult text, so we need to pay attention to understand it," try saying, "This text is a challenging text, but we are going to look at different strategies to help us understand the content. These strategies will help us understand this text and make it easier to read other texts later this year because we all will know how to apply these strategies effectively." The latter statement helps students see how they can be successful. The language is more positive, which transfers to a positive classroom environment. Another way to address students' negative attitudes toward reading is to refuse to allow them to permeate the classroom environment. When students make statements such as, "I don't like reading" or "Reading is boring" or "I'm not good at reading," teachers can introduce positive statements that help students see value in what they are reading: "It's okay not to like everything that you have read. Today, however, I would like us to think about how this text can help us understand the world. This will allow us to see why our textbook might have included this selection and help us locate other texts that may answer questions we have about the content of what we are reading." This generic statement can be modified for a specific text or content, but it may change a negative comment into a yearning for knowledge. Relating the text to something in an adolescent's world helps alter his or her mindset and delivers a sense of intrigue about a topic. Some texts in a prescribed curriculum may not relate to adolescents. If teachers take time to find additional texts, related to the first text or serving as alternates, that appeal to the interests of students in their classroom, a reading resister may be more likely to engage with the content. Appealing to the interests of students is key to creating an equitable classroom as teachers form positive relationships with students and get to know them as individuals.
This is also a way teachers can differentiate for the needs in the classroom. However, appealing to interests alone may not be enough to engage students who resist reading.

Low-Skill Readers

A fixed mindset is the opposite of a growth mindset. In a fixed mindset, students believe they have only what Dweck describes as a "certain amount of intelligence, a certain personality, and a certain moral character." Once this mindset takes root in a student's mind, it is difficult to shake. When a student determines he or she has failed at something, this belief tends to stick, and the "I am not good at reading" and "I dislike reading" comments become reality statements rather than avoidance techniques. The belief of not being good at reading typically takes root in third or fourth grade and is particularly problematic at the middle school level. By the time students reach seventh or eighth grade, this mindset has laid a foundation that is academically dismal. However, challenging the adolescent fixed mindset regarding reading gives students the opportunity to move from a defeatist approach to learning to embracing the tools needed to be successful in the future. To address this mindset, educators must first recognize it exists. They also must believe a student is capable of reading complex material at the appropriate grade level. It's important to recognize that a fixed mindset will not change immediately. It takes persistence and patience to work with a student who has a "failure" response to reading. Providing students with adequate feedback can help them adjust their thinking patterns. It takes work to provide positive feedback, but it will pay off. Teachers can follow a simple formula to provide effective feedback: Area Addressed + Present Behavior + Future Implication. This formula can be adapted to any situation for any student. Before giving this type of feedback, the teacher must understand the root of the problem.
For example, let's say a student is struggling to comprehend a particular text. The teacher may say, "It seems that you are having some trouble identifying the main idea of this text. I notice that when you read, you are skimming through one section and then moving on to another section. Try slowing down, and when you come to the end of a section, identify any words or phrases that you may not know. I can help you understand these terms. If you continue to use this strategy, you will begin to answer some of these questions on your own, and texts similar to this one will become easier to understand later in the semester." A teacher using the feedback formula might say, "I hear you say you are not good at reading. When we read in class, you seem to be able to follow along and you ask some great questions about the characters. Sometimes you don't know all the answers to these questions, and I think that is what is troubling you. When you can't find an answer, try re-reading the text. If you still don't know the answer after you have read the text, continue reading. The answer may come later in the text. If you become confused, let me know. Together, we can find these answers. When you are able to find these answers, and if you continue to question characters, you will be better prepared for the narrative we will be writing in our next unit."

Closing Thoughts

There are many reasons students enter our classrooms as reluctant readers. Initially, addressing these readers can be taxing; however, with the appropriate tools, educators can begin to change the mindset of resistant readers.

John Helgeson has taught middle school students for 17 years. He is currently the Secondary English Instructional Specialist in the Northshore School District in Bothell, Washington. Published in AMLE Magazine, March 2016.
Necessary Noise: The Importance of Collaborative Learning

An old-fashioned radio broadcast encourages deeper reading.

Connections between students, connections between texts and students, and connections between texts and the real world are vital to student learning. In classrooms, one way to make connections is by linking people, ideas, behaviors, and activities through projects. My first experience teaching in a classroom was when I substituted for 10 days for a foreign language teacher who taught one English class: a ninth grade basic reading class. She was assigned this class because there were not enough language classes to fill her roster and, frankly, no one else wanted these students. This was a class of "reluctant readers." The students were supposed to be reading Dandelion Wine by Ray Bradbury. Most of Ray Bradbury's works would have been an appropriate choice for this class, but Dandelion Wine was based on his childhood summer with his grandfather in a small American town in 1928. This was 1988, and the remedial English class was made up primarily of 15-year-old urban males. Nothing could have been further from their reality. After a few minutes with the students, even I, brand-new and idealistic as I was, could tell they had no intention of reading the novel. In front of me were the endless lists of vocabulary words, end-of-chapter "discussion" questions, and quizzes I was pretty sure they all would fail—and they wouldn't care. I had to find a way to interest them in the book—something social, something active, something that would entice them to read the book and engage them in academic discussions about characters, setting, plot elements, theme, diction, and author's purpose.

On the Air

I cleared my throat and announced: "We are going to turn this book into a radio news show." Now I had their attention.
We discussed the components of a 30-minute news show: lead stories, local news, human interest or feature stories (the "kicker"), sports, the economy, lifestyle stories, weather, and commercials. Then, for the next few days, we partitioned the class into two parts: (1) learning about the structure and content of each type of news story, reading newspapers, and listening to broadcasts as "mentor texts"; and (2) reading the novel and looking for the news stories within. Readers now had a purpose for reading. Even though they did not personally connect to the characters and events in the novel, they had a purpose for learning about them: to report on them. This novel became more of a window than a mirror. Each day I gave a 15-minute focus-lesson during which we listened to a clip of a radio news show or read a newspaper column as our "mentor texts." Sometimes we had a mini-lesson on research and reading for details, since they were conducting some light research on the time period. During the 45-minute workshop time (although this was long before I had heard of reading-writing workshop), students individually caught up with their reading and used sticky notes to jot ideas for news segments. The students divided themselves into broadcast groups, each planning its specific part of the radio show. Within their groups, they flipped back and forth through the pages of the novel, reading and re-reading; questioning and explaining and arguing over events and dialogue; analyzing details and events and setting. They searched for newsworthy events, wrote scripts, and played with word choice. They created jingles and ads to advertise dandelion wine and dandelion wine recipe books, green apple pie, and sneakers. They drafted summer weather reports and human interest stories based on events and characters from 1928. 
During the development of the program segments, I continued 15-minute focus lessons on skills such as reading for—and writing to include—details, leads, script writing, interviewing, and persuasive advertising techniques, which they would be able to apply to reading, writing, and speaking during the year. The classroom was abuzz with laughter and singing. Students were reading the novel, local newspapers, and research articles and primary sources about the time period of the book—all at an adolescent decibel. Absenteeism was at an all-time low.

Radio Show Day

Radio Show Day arrived. All the students had read most, if not all, of Dandelion Wine, drafted and practiced their scripts, and arrived prepared to perform. The class was loud, boisterous, committed. When I looked around, I saw boys perched on desks arranged in a circle, rather than in neat rows. It was their newscast, and they were ready to go. The students put on a wonderful news show for each other, filled with facts and creative commercials, and easily filled a half hour with content based on the book and the time period. However, it wasn't the quality—or quantity—of the product that mattered; it was the quality of the process—the reading, analyzing, and synthesizing of information read and its application to "real" situations. It also was the quality of the learning community that was built during those two weeks. Since I was not the teacher of record, I left copious notes for their regular teacher and gave the students feedback on their preparation, content, and delivery. If I were assessing the project, I would make sure that each student was responsible for researching, writing, and presenting a part of his group segment and would have given the students a content and delivery rubric in advance.
As far as reading and comprehending the book, all students demonstrated that they read all or most (or at least more than previously) and were able to analyze and apply what they learned from the book and the informational multimedia texts they "researched" in their synthesis. I must say, they successfully climbed Bloom's Taxonomy of Thinking, which is what teachers hope for in any classroom lesson.

Reading, Writing, and Collaboration

After 27 years of a lot of reading, research, and writing about best practices in teaching and literacy and engagement strategies, as well as studying male literacy, I can look back and truly analyze the success of the radio news project. It was based on purposeful reading, writing, and speaking in a collaborative workshop format. This single project encompassed at least a dozen best practices:

• Writing (in narrative, informative, and persuasive modes) for authentic purpose and audience and making reading-writing connections
• Talking (and singing) and listening that is on-task, purposeful, and academic
• Reader response, or writing to learn, as readers talk and write about what they read
• Synthesis of text, taking students back to the book to re-read for deeper meaning and learning
• Active, experiential, project-based learning
• Use of supplemental mentor and research materials (newspapers, newscasts, and primary sources) to support learning
• Higher-order thinking, such as analysis, application, and synthesis
• Student responsibility, choice, and, therefore, engagement
• Democratic principles (students decide what, how, and why)
• Differentiation and individualization; valuation of different strengths and talents
• Multiple intelligences
• Interdisciplinary approach to teaching. Even though the project took place within English class, students studied and incorporated elements of history, science, and math.
For example, students used math to figure out the timing of the newscast segments, to analyze the percentage of the newscast devoted to different topics, and to perform statistical analyses for their advertisements.

The most apparent best practice was collaboration. As Kathryn Wentzel says in her contributed chapter to The Handbook of Competence and Motivation, "When teachers support [the] need for collaboration by allowing students to share ideas and build knowledge together, a sense of belongingness to the classroom community is established and the extension and elaboration of existing knowledge is facilitated." This and similar reading-writing-speaking-listening projects that combine independence with interdependence and collaboration can be used in each content area or across the team as an interdisciplinary unit, incorporating all content areas.

Lessons Learned

Twenty-seven years ago I may not have been able to articulate my rationale in academic terms based on the research of others. But I would have been able to explain why collaboration and noise were necessary to the learning of these adolescents.

Helping Administrators Understand

Recently, I worked with administrators who wanted to know what they should be looking for in a class in any discipline to determine whether reading and writing are being effectively taught. Many of the "look-fors" I suggested were not what they expected to hear—especially those administrators who value quiet students, rows of desks, pacing guides and scripted lessons, and everyone on the same page at the same time. Some key general look-fors in any content area classroom might be:
• Direct instruction moving toward release of responsibility with teacher as facilitator.
• An expressed purpose for reading.
• Lessons that connect reading and writing standards.
• A classroom arrangement that is conducive to the particular instruction or lesson.
• All students actively engaged on task.
• Talk that is on-task, purposeful, and academic.
• True differentiation based on students' needs and strengths, not merely teaching a lesson at different "levels."

However, observing these elements is not enough. To determine effective reading and writing instruction, administrators might ask students questions to determine if they understand what they are doing, why they are doing it, and what they are learning. Administrators might ask teachers to explain what they are doing and what the rationale, objectives, and goals of the lesson are, and to initiate a critical self-reflection on their own teaching practices. Administrators might ask teachers:
• What am I observing?
• What teaching methods are you using and what are your objectives and goals? What outcomes are you seeing from this teaching method?
• What literacy/comprehension strategies are you teaching? Why? On what research or readings do you base this teaching/lesson?
• How are you delivering this instruction? Why? On what research or readings do you base this method of teaching this lesson?
• How can you determine on what level your students comprehend the material? How are they responding to text?
• How are you supporting the reading of text that may be too challenging for or uninteresting to students? How can your students apply it to other text or lessons?
• What changes have you observed in your readers and their reading? How have you observed these changes?

—Lesley Roessing

Lesley Roessing is senior lecturer in the College of Education and director of the Coastal Savannah Writing Project, Armstrong State University, Savannah, Georgia. She is editor of Connections, the GCTE journal, and author of several books and articles. The strategies in this article were incorporated in additional projects, in a variety of content areas, included in No More "Us" and "Them": Classroom Lessons & Activities to Promote Peer Respect (Rowman & Littlefield, 2012).

Published in AMLE Magazine, February 2016.
Preventing Plagiarism: Three Proactive Paraphrasing Lessons

You've read it … that one passage in a student research paper that startles you. The sentence structure and vocabulary exceed middle school norms. You raise your eyebrows, shake your head, and take a deep breath. How many times did you say, "Don't copy word for word"? As middle school educators, we're used to saying things more than once; we're also comfortable with learning from mistakes. Middle school plagiarism is often unintentional. Plagiarism prevention, however, needs to be explicitly intentional. The excerpt below, from, lists six types of plagiarism. Three of them can be avoided when students can confidently paraphrase.

What is plagiarism? All of the following are considered plagiarism:
• turning in someone else's work as your own
• copying words or ideas from someone else without giving credit
• failing to put a quotation in quotation marks
• giving incorrect information about the source of a quotation

So what can teachers do? Be proactive. Introduce your students to effective paraphrasing strategies before they begin a research project. Once a research project begins, students need to navigate a number of new routines, platforms, and skills. Short-term assignments are replaced by longer-term deadlines. Textbooks are temporarily traded for database articles. Bibliographies, website evaluation, and citation styles (which may not have been mentioned for a few months) are suddenly essential again. The lessons shared below are designed to require minimal class time while targeting concepts middle schoolers need to paraphrase successfully.

Lesson #1 - Introduction: What is Paraphrasing?

Share the three images below and ask students to explain how they might be sequenced into a simple story. After listening to their ideas, introduce the term "paraphrasing" and its definition.
Reveal the image captions, and guide students to apply the image "story line" to the concept of paraphrasing. In this way, you're bridging the concrete to the abstract and offering a visual anchor that can be referenced when the research project begins.

Gather information. Copyright © Encyclopedia Britannica ImageQuest
Assess what you have. Copyright © Wells Fargo Bank
Create your own work. Copyright © DwellStudio

Why are image attributions listed under each picture? Three different artists contributed to my illustration of a new message. That's exactly what students will do when research and writing ensue. Each artist (or author) deserves credit for their work. Not citing sources is one type of plagiarism. For a research paper, full citations would be included in a formatted bibliography.

Lesson #2 - Practice Active Reading and Note Taking

Prior to research, class time is always at a minimum. A review of note-taking skills, however, can be paired with a regularly scheduled textbook reading as the groundwork for paraphrasing. Selecting key facts is critical to an effective paraphrase. Remind students to ask questions before they read, highlight only valuable keywords (these are "golden"), and apply a note-taking strategy such as two-column notes, bullet notes, or an outline. If your students are well versed in these study skills, a homework assignment is likely to offer sufficient practice. If your students aren't familiar with note-taking and highlighting strategies, consider inviting your school's literacy coach to lead a lesson.

Lesson #3 - Paraphrasing Dos and Don'ts

After students have practiced highlighting and note taking, devote some time for paraphrasing practice. Modeling paraphrasing from existing notes is ideal. Offering time for students to work in groups as they paraphrase notes can build confidence. Still, some students rely on "rephrasing" strategies that are within the realm of plagiarism.
Sentence-level plagiarism frequently occurs when students reverse the structure of the sentence, substitute a synonym, or delete a conjunction to create two sentences. Try this system to support student understanding. Keep a sense of humor. The reminders are intentionally "light hearted," but middle school students are sure to recognize mistakes they've made in the past and to remember these "labels" in the future.

It is not ok to…
• "Flip Flop"
Original sentence: When the cell is ready to divide, the nuclear membrane dissolves (Mitosis, UXL).
Still plagiarism: The nuclear membrane dissolves when the cell is ready to divide.
• "Quick Swap"
Original sentence: The chromosomes are exactly replicated and the two copies distributed to identical daughter nuclei (Mitosis, Columbia).
Still plagiarism: The chromosomes are duplicated and the two copies are sent to identical daughter nuclei.
• "Chop Chop"
Original sentence: In animal cells the centrioles separate and move apart, and radiating bundles of fibers, called asters, appear around them (Mitosis, Columbia).
Still plagiarism: In animal cells the centrioles separate and move apart. Radiating bundles of fibers, called asters, appear around them.
(see bibliography below)

Plagiarism occurs any time an idea or work is borrowed from someone else without giving proper credit. While proactive paraphrasing lessons enable students to approach research writing with increased confidence, a comprehensive understanding of plagiarism and guidelines for its prevention remain part of an ongoing dialogue. The lessons above, designed in response to a history teacher's request, are now reviewed prior to English class research projects as well. Collaboration between departments, grade levels, and librarians can ensure that plagiarism prevention is dovetailed with meaningful student practice.

DwellStudio. Gold Horse Figurine. N.d. DwellStudio. Web. 18 Feb. 2016.
Gold pan full of bullion. N.d. Wells Fargo Bank. Guided By History. Web. 18 Feb. 2016.
"Mitosis." UXL Encyclopedia of Science. Ed.
Amy Hackney Blackwell and Elizabeth Manar. 3rd ed. Farmington Hills, MI: UXL, 2015. Research in Context. Web. 14 Feb. 2016.
"Mitosis." The Columbia Electronic Encyclopedia. New York: Columbia University Press, 2015. Research in Context. Web. 14 Feb. 2016.
Prospector pans for gold. Photography. Encyclopædia Britannica ImageQuest. Web. 14 Feb. 2016.
"What Is Plagiarism?" IParadigms, LLC, 2014. Web. 14 Feb. 2016.

Lisa Brennan is a librarian at St. Christopher's School in Richmond, Virginia.
Music transcription

Dear Colleagues,

I am currently studying automatic music transcription but need to find research papers other than those written in the framework of Music Information Retrieval. However, I could not find much material about manual transcription from the perspectives of psychology of music, music education, music theory, etc. I could only find papers on "notation reading," such as the one by John Sloboda (in his book Exploring the Musical Mind). I would be grateful if anyone could suggest papers on manual transcription. (I have enough papers on the topic written by ethnomusicologists.)

Best regards,
Raisin Bran Muffins

Yield: 12 muffins
Prep Time: 15 minutes (active), 45 minutes (inactive)
Cook Time: 15 to 20 minutes
Total Time: about 1 hour 20 minutes

1½ cups all-purpose flour
½ teaspoon baking soda
¼ teaspoon salt
1/3 cup granulated sugar
3 tablespoons light brown sugar
2½ cups Raisin Bran cereal
1/3 cup vegetable oil
1 egg
1¼ cups buttermilk
½ teaspoon vanilla extract

1. Preheat oven to 375 degrees F. Line a muffin tin with liners, or spray with non-stick cooking spray.
2. In a large bowl, whisk together the flour, baking soda, salt, and sugars. Stir in the cereal; set aside.
3. In a medium bowl, whisk together the vegetable oil, egg, buttermilk, and vanilla extract.
4. Pour the wet ingredients over the dry ingredients, and stir until completely combined. Allow the mixture to sit at room temperature for at least 45 minutes (this allows the cereal to soften in the batter). If you don't want to use it immediately, you can refrigerate it in an airtight container for up to three days.
5. When ready to bake, divide the batter among the 12 muffin cups. Sprinkle the tops with granulated sugar.
6. Bake for 15 to 20 minutes, or until a thin knife inserted in the center comes out clean. Store in an airtight container at room temperature for up to two days, then in the refrigerator for up to another week.

(Recipe adapted from Joy the Baker)
Christ And The Adulteress (1508-10)
Titian
Kelvingrove Art Gallery and Museum, Glasgow

Absurd comparisons can be telling. Try this. "He had just come to the bridge; and not looking where he was going, he tripped over something, and the fir-cone jerked out of his paw into the river. "'Bother,' said Pooh, as it floated under the bridge, and he went to get another fir-cone. But then he thought he would just look at the river instead, because it was a peaceful sort of day, so he lay down and looked at it, and it slipped away beneath him... and suddenly there was his fir-cone slipping away too. "'That's funny,' said Pooh. 'I dropped it on the other side'." This is the origin of Poohsticks. (Sticks soon replace fir-cones.) It's strange that this elemental game should have been created only in the early 20th century. It's not so strange that Pooh's revelation should hold various basic lessons about the mind and the world. For example, played as a competitive race between two dropped sticks Poohsticks demonstrates the power of delay. You could play the game without a bridge, with only a stream and a marked finish line, and it would work just as well. But the sticks would stay in view all through – and without the element of vanishing, the interval of invisibility, there would be no tension, no waiting, no surprise result. Played with a solo stick it's equally enlightening. It teaches the continuousness of things, how something that has disappeared can reappear elsewhere. But it also teaches the discontinuousness of things. Something that's expected to appear may not appear when or where it's expected. However much you might try to gauge the speed and direction of your Poohstick, you will fail. It always arrives out of nowhere. It shouldn't be surprising that Poohsticks has analogies with pictures either. It's a very visual game. It involves things vanishing and appearing, things going behind something else and emerging again.
Pictures are made up of things overlapping other things. They're a tissue of hidings and showings. And occasionally something emerges from behind something else with no cue or apparent connection. The "out of nowhere" effect is one of painting's tricks. The work on this page, Christ and the Adulteress, is a sumptuous Venetian painting with a changeable history. It was once credited to Giorgione. Now it is normally thought to be by Titian. The subject has been disputed too. It looks like Christ and the adulteress, though a few have seen Daniel and Susannah. But, most seriously, the painting has been literally carved up. A copy by another artist shows what, roughly, it originally looked like. A narrow strip has been cut off across the bottom. More drastically, the whole figure of a standing soldier has been removed from the right side – you can still see the tip of his knee, in blue and white hose, just poking in. Most of this soldier remains lost, but a section with his head and shoulders has turned up, and is in Kelvingrove too. Fortunately no slices have been taken from the top. The scene, Christian-Venetian, has divided loyalties. It has its figurative drama, a relay of turns and gestures, telling a story of accusation and repentance, rebuke and forgiveness. It is also a surface of rich material delights, a quilting of colours and textures, folds and gleams, whose pleasures seem indifferent to subject matter. The accusing man is just as gorgeously clothed as the accused woman, and Jesus himself hardly less. But it is one small detail that becomes the emotional and sensational high point of the painting. It isn't directly a part of this human business. It's the background. The only bit of background that – as it stands – the picture has. (The surviving section of the soldier shows that in the original there was also a distant glimpse of the sea at the very right side.) 
In other words, it's that green grassy knoll, with trees and sheep, that appears above the woman's head. It appears out of nowhere. We can suppose that somehow this mound of ground is continuous with the green grass verge of the foreground. But there is no visible path between them, nor are the two divided areas near enough to establish a smooth pick-up between them. This foreground disappears behind the group of figures, and so conclusively that when it reappears at the top of the picture, behind their heads, you aren't expecting it. There is a visual delay. It vanishes. It suddenly emerges. Poohsticks. Other factors intensify this knoll. It provides the picture's maximum contrast between light and shade: the dark edges of building and cap against its bright green. It shows a little idyll of sheep, which have all kinds of significance in Christian parables and classical pastoral. It is a pillow of softness, on which the woman, leaning inward and slightly backwards, seems to be laying her head – a gesture that goes with the rather somnambulistic action of the whole scene, and seems to give her a blessing. But it's the pause, the interval, the effect of delay and sudden appearance, that gives this detail its piercing force.

About the artist

Titian (1485-c1576) recently made a surprising appearance in British politics. The issue came down to how long he had lived. We don't know, is the answer. But in his long life he discovered the full powers of oil painting, using it with unprecedented sumptuousness, tenderness, sexiness – and unprecedented roughness and bleakness. He turned his hand to religious ecstasy and pagan orgies, pampered nudes and meditative portraits, and in old age developed the first "late style" in European art – free and unfinished-looking, barely articulate, "painted more with his fingers than his brushes", as contemporaries said, made of "broad and bold strokes and smudges, so that from nearby nothing can be seen".
Perhaps Mr Brown would say the same.
your daily reminder that taiwan was never meant to be an “east asian” country - it was and still is the homeland of indigenous austronesian tribes that share language roots and ancestry with filipinx and other pacific islander groups. taiwan is colonized land. taiwan is colonized land. stop using “taiwanese people” as synonymous with sino/Han/east asian. taiwan being grouped with china/the rest of east asia is NOT A NATURAL PHENOMENON. it is the result of the near-complete erasure and decimation of taiwanese aboriginal people, cultures, and languages. taiwanese as synonymous with Han ppl and culture only perpetuates a legacy of genocide, colonialism, erasure. if u want to learn more about this: google the Dutch Pacification campaign, spanish imperialism, indigenous resistance to brutal Japanese colonialism, forced sinicization. remember that taiwan was never supposed to be a mandarin-speaking, Chinese/Japanese-influenced, Han-inhabited place. to all the brown taiwanese ppl deemed savage/lesser in their own country: i see u!!!!!!!!!!!!!
Digital Attachments

XCOM sniper (not Dmitri)

It's tough to lose a soldier. Especially one like Dmitri. A fine sniper, with a good kill record. I had trained him for so long, raised him from a lowly private to sergeant, then to lieutenant. He was equipped with the best gear. His accuracy had sharpened into a deadly asset. He was a cornerstone of my tactical approach. He was also an investment in time and materiel. And as such, he was headed for greatness. Captain, maybe major. Until the aliens got him. That was nasty. Three of them swarmed his position, flanking his protection and taking him down with close melee attacks while the rest of the squad was busy defending citizens, too far to help. Not a pretty sight. The same battle took out Matt, the heavy weapons corporal who blasted whole blocks with his rocket launcher. Matt was caught in the blast of an exploding car outside a mall where the aliens had landed. Damn, I hadn't counted on that when I moved him up to an overwatch position. But the aliens set the car on fire and that was that. Our assault got caught in an ambush. We won, eventually, but it was a long fight with every inch bitterly contested. Coming back to base we were a solemn group. Two dead. Not a good thing. Now the squad looks awfully thin, two down with rookies in their place. Big shoes to fill. And it's not getting any easier out there, with the aliens ramping up their own technology, and getting tougher and smarter all the time. Winning this war won't be easy. Matt I could almost afford to lose, being relatively new, but Dmitri was my best sniper. I need to start training someone, fast. But who? Of course it's a game (XCOM: Enemy Unknown to be precise). Playing it this week has made me ponder the nature of attachment, in particular our attachment to characters in games or online. Why does it matter to us when a digital character "dies"? Or how he/she "lives"? How do we get so attached to virtual beings? After all, it's not like real life or death.
Just a game. But yet… Losing Dmitri irked me, but it also bothered me on a deeper level. Not simply because I had customized him, changed his suit colours, his facial hair, and imagined a background for him. He was mine. Or me. I'm not sure which. There was an emotional link. Not the easiest thing for a person who values logic and skepticism. When the aliens gutted Dmitri, I was torn between restarting at the last save-game position and playing the deus ex machina role to save him, or letting the narrative run as it played out. Starting again felt like cheating. Letting him die felt like I had failed him. It. Dmitri wasn't real, of course. But he/it felt like he was, at times. The narrative won, but not without misgivings.

Narrative and free agency in game design

World of Warcraft

As a former World of Warcraft player, I can attest to how compelling it is to play an immersive, massive, 3D role-playing game. Acting out scenarios in a fantasy world is more involving than merely reading a fantasy novel. You get addicted to being part of the narrative, to swinging the sword instead of just reading about it. Just as when you're reading a good novel and can't stop turning the pages, you keep playing to see how the next chapter/adventure/scenario plays out, especially when you don't always have to follow the script. It's not so much about the gameplay, as much as it is being part of the story. Well-designed games compel you to continue playing through a combination of action, puzzle solving, rewards and group activities. WOW is an MMO – massive multiplayer online game – set in a fantasy world that draws much of its substance from Tolkien and other fantasy writers. Many of the role-playing games (RPGs) follow the pseudo-Tolkien model, but most follow paths laid out in fantasy literature (i.e. characters and novels by Robert Howard, Edgar Rice Burroughs, H.P. Lovecraft or more modern writers).
WOW is, of course, not the only game that offers that sort of setting, but at eight years old, with about 12 million subscribers, it's both the largest and longest-lasting of them. It thus becomes the yardstick for measuring any other game in the genre. None of its competitors – Rift, Guild Wars, Lord of the Rings, Star Wars, etc. – have a fraction of the players. RPGs owe their ancestry to a small box set of rules published in 1974, called Dungeons and Dragons. Written by Gary Gygax and Dave Arneson (whose name subsequently disappears from the list of authors in later printings), it essentially created the standards for fantasy role playing that are still in use today. This is documented in great detail in Jon Peterson's 700-page tome, Playing at the World (his blog is here). It was reading this book that got me thinking about game design again (and to dig through what few old wargames and rules books I have in the basement…). In his introduction, Peterson identifies "freedom of agency" as one of the key components, "as much a necessary condition for inclusion in the genre of role-playing games as is role assumption." The ability to make choices of action, goal, and behaviour is central to a compelling game. In the Wired interview, linked above, Peterson defends gaming, a position similar to what I've been writing about for a few decades.* Gaming, at least in the simulation-style games, is not merely a pointless pastime, but rather an intellectual exercise. Computer games have both redefined entertainment and set the bar for hardware and software development. Games are incredibly demanding of computer resources compared to, say, a spreadsheet. Consider the processing required to keep track of dozens, even hundreds of players who are interacting in 3D space in realtime, plus all of the geography, terrain, in-game trades and purchases, combat, weather and environmental effects.
And to keep everyone in the game fully informed of all the events, locations and activities of their characters, pets, party members, resources, movement paths, mail… it's a stunning amount of work.

Beyond the coding, there are some basic components any game needs to be successful:
• Clearly defined purpose and goals;
• Challenge;
• Identifiable opponents to overcome;
• Reward for accomplishing goals or overcoming challenges;
• An understandable and accessible board geography where the game is played;
• Clear and concise rules.

RPGs add other elements to create that immersive experience, including:
• Connecting story/narrative;
• Character choice, advancement and development;
• Consequences of actions or behaviour;
• Alternate races (orcs, elves, dwarves, etc.);
• Role assumption (taking on the persona of a character in the story);
• Free agency (the ability to move and act independent of the script);
• Believable fantasy, alternate or futuristic world environment;
• Clear sides with which the races align and which have competing goals.

Computer games have additional components:
• Good graphics and visual appeal;
• Good AI (artificial intelligence) and NPCs (non-player characters);
• Believable environmental interactions, simulated physics and effects;
• Appropriate sound (and sometimes music);
• Interactivity with NPCs, environment;
• Social activity (in MMOs).

Some RPGs (i.e. Fable, Witcher, Fallout 3) have more complex "consequences" built into decision making within their games. Certain choices – such as attacking or stealing from non-player characters (computer-controlled NPCs) or how you answer their questions – affect the way others relate to your character. How well these mimic actual social or personal behaviour is debatable. Mostly they seem to me merely designed to add chrome to role assumption. In some cases, they don't really affect the game or quests.
Since these are solo games, rather than MMOs, you can usually save your game before you make a choice, then replay it with a different choice if you don't like an outcome. That tends to dissipate the suspension of disbelief necessary for immersion. I don't include "fun" in any of these lists because fun, like beauty or taste, is subjective. Players will gravitate to the games that provide the highest entertainment value for their own interests and aptitudes. I, for example, never found WOW's battlegrounds "fun" but always enjoyed questing and exploring (solo and in parties). Others eschew the quests for the PvP combat in the battlegrounds. Can the storyline absorb the players sufficiently, for long enough to suspend disbelief, deeply enough to make you care about both the characters and the action? It depends on how well the narrative is scripted. A good storyline has to be crafted as carefully as a good novel and needs to generate a similar emotional response. Clearly, however, game narrative is very different from a storyline in a book, since choice is a key element in gaming. Quests can also be seen as 'micro-narratives.' In many games, the plot or story is merely a shell that contains numerous micro-stories presented as quests. Sometimes these are dynamic, so that the nature or goals of quest B depend on how or how well you accomplished quest A. However, the shell story needs to be coherent so players don't simply feel they're moving from one mini-game to the next for no reason. A lot of games fall down with thin stories, pointless quests (collect X eyeballs or Y spleens), and predictable A-to-B-to-C plots. And too many depend more on action to move them along rather than plot or participatory narrative (i.e. Diablo III). Patrick Holleman, on the Game Design forum, writes, "…the difference between traditional games and videogames is that videogames have a world in which everything about the game, except for controller input, takes place.
This world is created, controlled, and sometimes populated by continuous and discrete artificial intelligence. The player is a guest in that world, the central participant in its mechanics. Even still, the world is usually not driven by the player; it is the designer's world, and should be studied as such." Holleman also asks, "…whether or not videogames are similar enough to traditional narratives that we should study them the same way." In response, he adds, "To begin, it makes sense to admit that some portions of videogame narratives are exactly like books; the player reads them without interacting except to turn the 'page'. Some narrative segments in videogames are exactly like movies; the player watches them without doing anything except pausing and unpausing. No decent videogame is entirely like movies or books. A movie creates a fictional world that one can see and hear, but viewers are locked into a guided tour that the filmmakers have scheduled for the viewer, and viewers can never deviate from that tour. In a videogame, on the other hand, the player is presented with a world that can be accessed largely at their own discretion. Videogames that are too linear—too much like the guided tours of movies—are often deprecated by critics and gamers." Interactivity is essential, but it is not synonymous with narrative, a point Ernest Adams makes in "Three Problems For Interactive Storytellers." In that same article, Henry Jenkins writes, "You say narrative to the average gamer and what they are apt to imagine is something on the order of a choose-your-own adventure book, a form noted for its lifelessness and mechanical exposition rather than enthralling entertainment, thematic sophistication, or character complexity… Most often, when we discuss games as stories, we are referring to games that either enable players to perform or witness narrative events – for example, to grab a lightsabre and dispatch Darth Maul in the case of a Star Wars game.
Narrative enters such games on two levels – in terms of broadly defined goals or conflicts and on the level of localized incidents." Immersiveness also depends heavily on how much free will the player has, and the ability to write ourselves into the script. In games like Diablo III, the action is very linear, with little flexibility to explore or act outside the prescribed plot and territory. These games have little immersive value (and, at least with D3, little replay value, either). Others, like Mass Effect and Dragon Age, combine limited freedom with scripted activities and plots. Morrowind, Skyrim and the post-apocalyptic Fallout 3 provide a generally freely roamable world, and in some cases, the ability to attempt quests well beyond your character's level (some MMOs offer this, as well).** While few solo RPG games offer such significant free agency, it is the hallmark of most MMOs. Holleman writes, "World of Warcraft is another game heavily dependent on the depth and persuasiveness of its world; it has the benefit of being an ever-expanding world as well, with content updates and expansion packs. The first time through the game tends to be the best, from a narrative perspective. The structure of the quests (tasks with completion rewards) that guide gameplay are heavy on exploration, but often a bit short on variety, i.e. collect 10 quest items, for the millionth time. What makes these quests and dungeons compelling—at least the first time through the game—is that they are driven by a strong, interesting setting." Because RPGs have a character-building ladder system, the reason many players don't explore the MMO environments more fully is usually that their characters are graded too low to survive in higher-level zones. Some sort of safe passage is sometimes offered (i.e. roads where hostile NPCs don't patrol), or sometimes swift transportation is available (riding or flying mounts in WOW) to encourage more exploration.
Most MMOs have graduated zones for each race. These offer playable regions challenging for characters within a given range, such as levels 1-5, 6-10, 11-15, etc. You play your character in a zone until it levels up enough to enter (and survive in) the next zone. Each zone has level-related quests to fulfill to aid your advancement.*** Completing all available quests is also part of the achievement ladder. Players are encouraged to complete all quests in all zones, regardless of their level. The problem with this system is that, in many MMOs, when your high-level character enters a low-level zone (for example, one for another race), the quests are ridiculously easy but you still want to complete them. On the other hand, quests are designed to get players to explore the entire zone while questing, which increases the sense of immersion. Where most games have a defined end (in RPGs, usually the defeat of a final boss character), MMOs are often open-ended: they can be played after the characters have reached their highest level, accomplished all available quests and defeated all the boss characters. Usually such activities are social: group raids, battlegrounds and dungeons outside the formal narrative and questing lines (essentially making them into fantasy variants on the FPS-PvP line of gaming). It’s also possible to create new characters and start again from level one, often choosing a different race, type/class (warrior, priest, hunter, etc.) or even alliance. As the goal of game design, immersion is difficult to achieve: it depends on the interaction of several factors, as well as the independent activities of players outside the scripted narrative. It’s a challenge that, so far, no single game has fully met, but it’s always interesting to examine the results.
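The zone-gating and level-scaling mechanics described above can be sketched in a few lines. This is a minimal illustration with made-up zone names and a deliberately simplified scaling rule; it is not the actual formula used by WOW, GW2 or any other game:

```python
# Hypothetical zones: (name, minimum level, maximum level).
ZONES = [
    ("Starting Vale", 1, 5),
    ("Border Hills", 6, 10),
    ("Dread Marsh", 11, 15),
]

def accessible_zones(level):
    """Zones a character of this level can enter and expect to survive in."""
    return [name for name, lo, _hi in ZONES if level >= lo]

def effective_level(player_level, zone_max):
    """Guild Wars 2-style downscaling, simplified: a player above the
    zone's cap plays at the cap; lower-level players are unaffected."""
    return min(player_level, zone_max)

print(accessible_zones(7))     # ['Starting Vale', 'Border Hills']
print(effective_level(35, 5))  # a level-35 character plays at 5 in a 1-5 zone
```

A real implementation would scale stats, gear and skills rather than simply clamping the level; the essay notes, for instance, that GW2 lets accumulated buffs and unlocked skills partly offset the reduction.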
* I started wargaming in the mid-1970s, bought a computer in 1977, and by around 1980 was writing a regular column on computer games for Moves magazine, as well as writing articles for contemporary programming magazines. I wrote about computers and game design for several magazines in the 1980s, including Antic and ST Log, and published a column on technology in Canadian newspapers for a decade from the mid-1990s, which often looked at game developments.
** First-person shooters (FPS) like Call of Duty and Medal of Honor usually combine scripted scenarios with open or semi-open gameplay in a small environment. Very few have a fully open environment (Far Cry, however, was one).
*** Level grinding is when you rush through all the available quests solely to get your characters up to a reasonable level of strength, to be able to use powers or traits unlocked at higher levels, and then to engage in multiplayer activities like dungeons and raids. It’s common in WOW to see level 60-80 characters doing level 1-10 quests to complete their achievement ladders. For lower-level players, this can be frustrating as you watch a higher-level character blaze through an area, taking quest items or killing quest characters with ease, forcing you to wait for them to respawn. Guild Wars 2 has a different approach: when the player’s level is higher than the zone’s, it is reduced in that zone to keep repeat and collective quests competitive. A level 35 character, playing in a 1-5 zone, will play at level 3-5. Weapon and armour strengths are decreased accordingly. This is somewhat offset by the character’s accumulated buffs, unlocked skills and so on, so it is easier but still a challenge. This heightens the immersive value of GW2.
**** One of the things in WOW that, for me, detracts from immersion is the cartoonish style of characters and buildings. Games like Rift and GW2 have tried to make the player feel less distanced through more realistic graphics and animation.
However, none of them match the detail or lifelike characters we see in Call of Duty or Medal of Honor. Some licence must be allowed, of course, for fantasy races and characters.
Company Name: High Moon / Activision
Location: Carlsbad, California, United States
Job Type: Visual Arts
Position type: Full Time
Experience Level: Entry Level

Art Intern

Great Games Start with Great People! High Moon Studios, part of Activision Blizzard, is developing a high-profile, action-packed title for Xbox 360(TM) and PlayStation®3. Based in Carlsbad, we offer a creative and challenging environment that fosters collaboration, mentorship and camaraderie. We're dedicated to making successful, high-quality games, and we are looking for outstanding candidates to join our talented team.

Art Intern (Temporary)

An entry-level contributor whose primary duties involve art wrangling and asset management. This position does not involve art creation.

· Works with the environment and props artists to import art assets into the game engine.
· Sets up base shaders and materials within the game engine.
· Organizes art assets into proper folder locations and validates them.
· Provides updates to the art team regarding the locations of art assets in the art pipeline.
· Builds collision for 3D game assets.
· Reports to the Art Director.
· Familiarity with polygon modeling and UVW unwrapping in 3ds Max or Maya.
· Proficient in the UI for either 3ds Max or Maya.
· Proficient with Photoshop.
· Perforce experience a plus, but not mandatory.
· Communicates and collaborates effectively with the development team.
· Meticulous attention to detail.
· Able to adapt to change, rapid iterations, and revisions.
· Handles constructive feedback well.
· Strong problem-solving and troubleshooting skills.
· Continuous learner.

Activision Blizzard and its affiliated companies are equal employment opportunity and affirmative action employers.
Transmission rear seal leak on 2004 Nissan Titan

About how much does it cost to repair a rear seal leak?

2 answers

cost of repair for rear seal leak for Ford Explorer 4x4 Eddie Bauer edition

04 Titans have this problem; it's almost always an indication of impending differential failure. RARELY is it just a bad seal on 04 - early 05 Titans. The dealer will probably say it's a vent issue. BULL SH$%!!! As an auto tech of 15 years and a Titan owner, I've seen more bad rear Titan differentials than I care to. NISSAN NEEDS A RECALL. Leakage happens when bearings start to fail and allow the gear set, and thus the axles, to move around more than allowable, in turn putting excessive loading and heat on components. It's common to see spider gears with enough clearance between teeth to throw a wrench through. If it leaks oil at the rear axle, the axle shafts must be R&R'd. To do so, the differential needs to be opened up, AND SHOULD BE INSPECTED FOR PREMATURE WEAR. Nissan doesn't rebuild differentials, for liability reasons. The new differential is an UPDATED UNIT (if that doesn't scream that Nissan is aware of this problem while they play blind and deaf to it). Also, the updated gear lube spec is 75W-140 SYNTHETIC LUBE ONLY! MOST fail around 50-60k miles. Front differentials also fail around the same mileage with the same wear. In my professional opinion it's due to insufficient-grade gear lube installed at the factory (and, if serviced at the dealer, following Nissan's spec lube at service intervals), CAUSING EXCESSIVE HEAT AND PREMATURE WEAR. Hence the new updated differential with the "heat dissipating" aluminum cover and 75W-140 synthetic lube. It's a $2,800 repair, depending on your local dealership's labor rate. Front differential: $1,300.
Review: Bone regeneration: current concepts and future directions. Department of Trauma and Orthopaedics, Academic Unit, Clarendon Wing, Leeds Teaching Hospitals NHS Trust, Great George Street, Leeds LS1 3EX, UK; Leeds NIHR Biomedical Research Unit, Leeds Institute of Molecular Medicine, Beckett Street, Leeds, LS9 7TF, UK; Section of Musculoskeletal Disease, Leeds Institute of Molecular Medicine, University of Leeds and Chapel Allerton Hospital, Chapeltown Road, Leeds, UK. BMC Medicine 2011, 9:66. http://www.biomedcentral.com/1741-7015/9/66. doi:10.1186/1741-7015-9-66.

Bone possesses the intrinsic capacity for regeneration as part of the repair process in response to injury, as well as during skeletal development or continuous remodelling throughout adult life 1 2 . Bone regeneration is comprised of a well-orchestrated series of biological events of bone induction and conduction, involving a number of cell types and intracellular and extracellular molecular-signalling pathways, with a definable temporal and spatial sequence, in an effort to optimise skeletal repair and restore skeletal function 2 3 . In the clinical setting, the most common form of bone regeneration is fracture healing, during which the pathway of normal fetal skeletogenesis, including intramembranous and endochondral ossification, is recapitulated 4 . Unlike in other tissues, the majority of bony injuries (fractures) heal without the formation of scar tissue, and bone is regenerated with its pre-existing properties largely restored, and with the newly formed bone being eventually indistinguishable from the adjacent uninjured bone 2 . However, there are cases of fracture healing in which bone regeneration is impaired, with, for example, up to 13% of fractures occurring in the tibia being associated with delayed union or fracture non-union 5 .
In addition, there are other conditions in orthopaedic surgery and in oral and maxillofacial surgery in which bone regeneration is required in large quantity (beyond the normal potential for self-healing), such as for skeletal reconstruction of large bone defects created by trauma, infection, tumour resection and skeletal abnormalities, or cases in which the regenerative process is compromised, including avascular necrosis and osteoporosis. Current clinical approaches to enhance bone regeneration For all the aforementioned cases in which the normal process of bone regeneration is either impaired or simply insufficient, there are currently a number of treatment methods available in the surgeon's armamentarium, which can be used either alone or in combination for the enhancement or management of these complex clinical situations, which can often be recalcitrant to treatment, representing a medical and socioeconomic challenge. Standard approaches widely used in clinical practice to stimulate or augment bone regeneration include distraction osteogenesis and bone transport 6 7 , and the use of a number of different bone-grafting methods, such as autologous bone grafts, allografts, and bone-graft substitutes or growth factors 8 9 . An alternative method for bone regeneration and reconstruction of long-bone defects is a two-stage procedure, known as the Masquelet technique. It is based on the concept of a "biological" membrane, which is induced after application of a cement spacer at the first stage and acts as a 'chamber' for the insertion of non-vascularised autograft at the second stage 10 . There are even non-invasive methods of biophysical stimulation, such as low-intensity pulsed ultrasound (LIPUS) and pulsed electromagnetic fields (PEMF) 11 12 13 , which are used as adjuncts to enhance bone regeneration. During distraction osteogenesis and bone transport, bone regeneration is induced between the gradually distracted osseous surfaces. 
A variety of methods are currently used to treat bone loss or limb-length discrepancies and deformities, including external fixators and the Ilizarov technique 6 7 , combined unreamed intramedullary nails with external monorail distraction devices 14 , or intramedullary lengthening devices 15 . However, these methods are technically demanding and have several disadvantages, including associated complications, requirement for lengthy treatment for both the distraction (1 mm per day) and the consolidation period (usually twice the distraction phase), and effects on the patient's psychology and well-being 6 7 . Bone grafting is a commonly performed surgical procedure to augment bone regeneration in a variety of orthopaedic and maxillofacial procedures, with autologous bone being considered as the 'gold standard' bone-grafting material, as it combines all properties required in a bone-graft material: osteoinduction (bone morphogenetic proteins (BMPs) and other growth factors), osteogenesis (osteoprogenitor cells) and osteoconduction (scaffold) 16 . It can also be harvested as a tricortical graft for structural support 16 , or as a vascularised bone graft for restoration of large bone defects 17 or avascular necrosis 18 . A variety of sites can be used for bone-graft harvesting, with the anterior and posterior iliac crests of the pelvis being the commonly used donor sites. Recently, with the development of a new reaming system, the reamer-irrigator-aspirator (RIA), initially developed to minimise the adverse effects of reaming during nailing of long-bone fractures, the intramedullary canal of long bones has been used as an alternative harvesting site, providing a large volume of autologous bone graft 19 . Furthermore, because it is the patient's own tissue, autologous bone is histocompatible and non-immunogenic, reducing to a minimum the likelihood of immunoreactions and transmission of infections. 
Nevertheless, harvesting requires an additional surgical procedure, with well-documented complications and discomfort for the patient, and has the additional disadvantages of quantity restrictions and substantial costs 20 21 22 . An alternative is allogeneic bone grafting, obtained from human cadavers or living donors, which bypasses the problems associated with harvesting and quantity of graft material. Allogeneic bone is available in many preparations, including demineralised bone matrix (DBM), morcellised and cancellous chips, corticocancellous and cortical grafts, and osteochondral and whole-bone segments, depending on the recipient site requirements. Their biological properties vary, but overall, they possess reduced osteoinductive properties and no cellular component, because donor grafts are devitalised via irradiation or freeze-drying processing 23 . There are issues of immunogenicity and rejection reactions, possibility of infection transmission, and cost 8 23 . Bone-graft substitutes have also been developed as alternatives to autologous or allogeneic bone grafts. They consist of scaffolds made of synthetic or natural biomaterials that promote the migration, proliferation and differentiation of bone cells for bone regeneration. A wide range of biomaterials and synthetic bone substitutes are currently used as scaffolds, including collagen, hydroxyapatite (HA), β-tricalcium phosphate (β-TCP) and calcium-phosphate cements, and glass ceramics 8 23 , and the research into this field is ongoing. Especially for reconstruction of large bone defects, for which there is a need for a substantial structural scaffold, an alternative to massive cortical auto- or allografts is the use of cylindrical metallic or titanium mesh cages as a scaffold combined with cancellous bone allograft, DBM or autologous bone 24 25 . Limitations of current strategies to enhance bone regeneration Most of the current strategies for bone regeneration exhibit relatively satisfactory results. 
However, there are associated drawbacks and limitations to their use and availability, and even controversial reports about their efficacy and cost-effectiveness. Furthermore, at present there are no heterologous or synthetic bone substitutes available that have superior or even the same biological or mechanical properties compared with bone. Therefore, there is a necessity to develop novel treatments as alternatives or adjuncts to the standard methods used for bone regeneration, in an effort to overcome these limitations, which has been a goal for many decades. Even back in the 1950s, Professor Sir John Charnley, a pioneering British orthopaedic surgeon, stated that 'practically all classical operations of surgery have now been explored, and unless some revolutionary discovery is made which will put the control of osteogenesis in the surgeon's power, no great advance is likely to come from modification of their detail' 26 . Since then, our understanding of bone regeneration at the cellular and molecular level has advanced enormously, and is still ongoing. New methods for studying this process, such as quantitative three-dimensional microcomputed tomography analyses, finite element modelling, and nanotechnology have been developed to further evaluate the mechanical properties of bone regenerate at the microscopic level. In addition, advances made in cellular and molecular biology have allowed detailed histological analyses, in vitro and in vivo characterisation of bone-forming cells, identification of transcriptional and translational profiles of the genes and proteins involved in the process of bone regeneration and fracture repair, and development of transgenic animals to explore the role of a number of genes expressed during bone repair, and their temporal and tissue-specific expression patterns 27 . With the ongoing research in all related fields, novel therapies have been used as adjuncts or alternatives to traditional bone-regeneration methods.
Nevertheless, the basic concept for managing all clinical situations requiring bone regeneration, particularly the complex and recalcitrant cases, remains the same, and must be applied. Treatment strategies should aim to address all (or those that require enhancement) prerequisites for optimal bone healing, including osteoconductive matrices, osteoinductive factors, osteogenic cells and mechanical stability, following the 'diamond concept' suggested for fracture healing (Figure 1) 28 . Figure 1. Male patient, 19 years of age, with infected non-union after intramedullary nailing of an open tibial fracture. (A) Anteroposterior (AP) and lateral X-rays of the tibia illustrating osteolysis (white arrow) secondary to infection. The patient underwent removal of the nail, extensive debridement and a two-staged reconstruction of the bone defect, using the induced membrane technique for bone regeneration (the Masquelet technique). (B) Intraoperative pictures showing: (1) a 60 mm defect of the tibia (black arrow) at the second stage of the procedure; (2) adequate mechanical stability was provided with internal fixation (locking plate) bridging the defect, while the length was maintained (black arrow); (3) maximum biological stimulation was provided using autologous bone graft harvested from the femoral canal (black arrow, right), bone-marrow mesenchymal stem cells (broken arrow, left) and the osteoinductive factor bone morphogenetic protein-7 (centre); (4) the graft was placed to fill the bone defect (black arrow). (C) Intraoperative fluoroscopic images showing the bone defect after fixation. (D) Postoperative AP and lateral X-rays at 3 months, showing the evolution of the bone regeneration process with satisfactory incorporation and mineralisation of the graft (photographs courtesy of PVG).
BMPs and other growth factors With improved understanding of fracture healing and bone regeneration at the molecular level 29 , a number of key molecules that regulate this complex physiological process have been identified, and are already in clinical use or under investigation to enhance bone repair. Of these molecules, BMPs have been the most extensively studied, as they are potent osteoinductive factors. They induce the mitogenesis of mesenchymal stem cells (MSCs) and other osteoprogenitors, and their differentiation towards osteoblasts. Since the discovery of BMPs, a number of experimental and clinical trials have supported the safety and efficacy of their use as osteoinductive bone-graft substitutes for bone regeneration. With the use of recombinant DNA technology, BMP-2 and BMP-7 have been licensed for clinical use since 2002 and 2001 respectively 30 . These two molecules have been used in a variety of clinical conditions including non-union, open fractures, joint fusions, aseptic bone necrosis and critical bone defects 9 . Extensive research is ongoing to develop injectable formulations for minimally invasive application, and/or novel carriers for prolonged and targeted local delivery 31 . Other growth factors besides BMPs that have been implicated during bone regeneration, with different functions in terms of cell proliferation, chemotaxis and angiogenesis, are also being investigated or are currently being used to augment bone repair 32 33 , including platelet-derived growth factor, transforming growth factor-β, insulin-like growth factor-1, vascular endothelial growth factor and fibroblast growth factor, among others 29 . These have been used either alone or in combinations in a number of in vitro and in vivo studies, with controversial results 32 33 . 
One current approach to enhance bone regeneration and soft-tissue healing by local application of growth factors is the use of platelet-rich plasma, a volume of the plasma fraction of autologous blood with platelet concentrations above baseline, which is rich in many of the aforementioned molecules 34 . 'Orthobiologics' and the overall concept to stimulate the local 'biology' by applying growth factors (especially BMPs, because these are the most potent osteoinductive molecules) could be advantageous for bone regeneration or even for acceleration of normal bone healing to reduce the length of fracture treatment. Their clinical use, either alone or combined with bone grafts, is constantly increasing. However, there are several issues about their use, including safety (because of the supraphysiological concentrations of growth factors needed to obtain the desired osteoinductive effects), the high cost of treatment, and more importantly, the potential for ectopic bone formation 35 . Currently BMPs are also being used in bone-tissue engineering, but several issues need to be further examined, such as optimum dosage and provision of a sustained, biologically appropriate concentration at the site of bone regeneration, and the use of a 'cocktail' of other growth factors that have shown significant promising results in preclinical and early clinical investigation 32 or even the use of inhibitory molecules in an effort to mimic the endogenous 'normal' growth-factor production. Nanoparticle technology seems to be a promising approach for optimum growth-factor delivery in the future of bone-tissue engineering 36 . Nevertheless, owing to gaps in the current understanding of these factors, it has not been possible to reproduce in vivo bone regeneration in the laboratory. An adequate supply of cells (MSCs and osteoprogenitors) is important for efficient bone regeneration. 
The current approach of delivering osteogenic cells directly to the regeneration site includes use of bone-marrow aspirate from the iliac crest, which also contains growth factors. It is a minimally invasive procedure to enhance bone repair, and produces satisfactory results 37 . However, the concentration and quality of MSCs may vary significantly, depending on the individual (especially in older people) 38 39 , the aspiration sites and techniques used 39 , and whether further concentration of the bone marrow has been performed 37 , as bone-marrow aspiration concentrate (BMAC) is considered to be an effective product to augment bone grafting and support bone regeneration 40 41 . Overall, however, there are significant ongoing issues with quality control with respect to delivering the requisite number of MSCs/osteoprogenitors to effect adequate repair responses 40 . Issues of quantity and alternative sources of MSCs are being extensively investigated. Novel approaches in terms of cell harvesting, in vitro expansion and subsequent implantation are promising 42 43 44 , because in vitro expansion can generate a large number of progenitor cells. However, such techniques add substantial cost and risks (such as contamination with bacteria or viruses), may reduce the proliferative capacity of the cells and are time-consuming requiring two-stage surgery 45 . This strategy is already applied for cartilage regeneration 46 . Alternative sources of cells, which are less invasive, such as peripheral blood 47 and mesenchymal progenitor cells from fat 48 , muscle, or even traumatised muscle tissue after debridement 49 , are also under extensive research. However, the utility of fat-derived MSCs for bone-regeneration applications is debatable, with some studies showing them to be inferior to bone-marrow-derived MSCs in animal models 50 51 , and the evidence for a clinically relevant or meaningful population of circulating MSCs also remains very contentious 52 . 
It is fair to say that the role of MSCs in fracture repair is still in its infancy, largely due to a lack of studies into the biology of MSCs in vivo in the fracture environment. This to a large extent relates to the historical perceived rarity of 'in vivo MSCs' and also to a lack of knowledge about in vivo phenotypes. The in vivo phenotype of bone-marrow MSCs has been recently reported 53 and, even more recently, it has been shown that this population was actually fairly abundant in vivo in normal and pathological bone 54 . This knowledge opens up novel approaches for the characterisation and molecular profiling of MSCs in vivo in the fracture environment. This could be used to ultimately improve outcomes of fracture non-union based on the biology of these key MSC reparative cells. Scaffolds and bone substitutes Although they lack osteoinductive or osteogenic properties, synthetic bone substitutes and biomaterials are already widely used in clinical practice for osteoconduction. DBM and collagen are biomaterials, used mainly as bone-graft extenders, as they provide minimal structural support 8 . A large number of synthetic bone substitutes are currently available, such as HA, β-TCP and calcium-phosphate cements, and glass ceramics 8 23 . These are being used as adjuncts or alternatives to autologous bone grafts, as they promote the migration, proliferation and differentiation of bone cells for bone regeneration. Especially for regeneration of large bone defects, where the requirements for grafting material are substantial, these synthetics can be used in combination with autologous bone graft, growth factors or cells 8 . Furthermore, there are also non-biological osteoconductive substrates, such as fabricated biocompatible metals (for example, porous tantalum) that offer the potential for absolute control of the final structure without any immunogenicity 8 . 
Research is ongoing to improve the mechanical properties and biocompatibility of scaffolds, to promote osteoblast adhesion, growth and differentiation, and to allow vascular ingrowth and bone-tissue formation. Improved biodegradable and bioactive three-dimensional porous scaffolds 55 are being investigated, as well as novel approaches using nanotechnology, such as magnetic biohybrid porous scaffolds acting as a crosslinking agent for collagen for bone regeneration guided by an external magnetic field 56 , or injectable scaffolds for easier application 57 . Tissue engineering The tissue-engineering approach is a promising addition to the field of bone regenerative medicine, which aims to generate new, cell-driven, functional tissues, rather than just to implant non-living scaffolds 58 . This alternative treatment of conditions requiring bone regeneration could overcome the limitations of current therapies, by combining the principles of orthopaedic surgery with knowledge from biology, physics, materials science and engineering, and its clinical application offers great potential 58 59 . In essence, bone-tissue engineering combines progenitor cells, such as MSCs (native or expanded) or mature cells (for osteogenesis) seeded in biocompatible scaffolds and ideally in three-dimensional tissue-like structures (for osteoconduction and vascular ingrowth), with appropriate growth factors (for osteoinduction), in order to generate and maintain bone 60 . The need for such improved composite grafts is obvious, especially for the management of large bone defects, for which the requirements for grafting material are substantial 8 . At present, composite grafts that are available include bone synthetic or bioabsorbable scaffolds seeded with bone-marrow aspirate or growth factors (BMPs), providing a competitive alternative to autologous bone graft 8 .
Several major technical advances have been achieved in the field of bone-tissue engineering during the past decade, especially with the increased understanding of bone healing at the molecular and cellular level, allowing the conduct of numerous animal studies and of the first pilot clinical studies using tissue-engineered constructs for local bone regeneration. To date, seven human studies have been conducted using culture-expanded, non-genetically modified MSCs for regeneration of bone defects: two studies reporting on long bones and five on maxillofacial conditions 61 . Even though they are rather heterogeneous studies and it is difficult to draw conclusive evidence from them, bone apposition by the grafted MSCs was seen, but it was not sufficient to bridge large bone defects. Furthermore, the tissue-engineering approach has been used to accelerate the fracture-healing process or to augment the bone-prosthesis interface and prevent aseptic loosening in total joint arthroplasty, with promising results regarding its efficacy and safety 62 63 . Recently, an animal study has shown the potential for prefabrication of vascularised bioartificial bone grafts in vivo using β-TCP scaffolds intraoperatively filled with autogenous bone marrow for cell loading, and implanted into the latissimus dorsi muscle for potential application at a later stage for segmental bone reconstruction, introducing the principles of bone transplantation with minimal donor-site morbidity and no quantity restrictions 64 . Overall, bone-tissue engineering is in its infancy, and there are many issues of efficacy, safety and cost to be addressed before general clinical application can be achieved. Culture-expanded cells may have mutations or epigenetic changes that could confer a tumour-forming potential 44 . However, in vitro and in vivo evidence suggests that the risk of tumour formation is minimal 65 .
No cases of tumour transformation were reported in 41 patients (45 joints) after autologous bone-marrow-derived MSC transplantation for cartilage repair, who were followed for up to 11 years and 5 months 46 . Another important issue is the difficulty of achieving an effective and high-density cell population within three-dimensional biodegradable scaffolds 66 . Consequently, numerous bioreactor technologies have been investigated, and many others should be developed 67 . Their degradation properties should also preserve the health of local tissues and the continuous remodelling of bone 44 . Three-dimensional porous scaffolds with specific architectures at the nano, micro and macro scale for optimum surface porosity and chemistry, which enhance cellular attachment, migration, proliferation and differentiation, are undergoing a continuous evaluation process. Gene therapy Another promising method of growth-factor delivery in the field of bone-tissue engineering is the application of gene therapy 68 69 . This involves the transfer of genetic material into the genome of the target cell, allowing expression of bioactive factors from the cells themselves for a prolonged time. Gene transfer can be performed using a viral (transfection) or a non-viral (transduction) vector, and by either an in vivo or ex vivo gene-transfer strategy. With the in vivo method, which is technically relatively easier, the genetic material is transferred directly into the host; however, there are safety concerns with this approach. The indirect ex vivo technique requires the collection of cells by tissue harvest, and their genetic modification in vitro before transfer back into the host. Although technically more demanding, it is a safer method, allowing testing of the cells for any abnormal behaviour before reimplantation, and selection of those with the highest gene expression 69 . 
Although issues of cost, efficacy and biological safety must still be answered before this strategy of genetic manipulation is applied in humans, delivery of growth factors, particularly BMPs, using gene therapy for bone regeneration has already produced promising results in animal studies 70 71 .
Mechanical stability and the role of mechanical stimulation in bone regeneration
In addition to the intrinsic potential of bone to regenerate and to the aforementioned methods used to enhance bone regeneration, adequate mechanical stability, achieved by various means of stabilisation and fixation devices, is also an important element of optimal bone repair, especially in challenging cases involving large bone defects or impaired bone healing. The mechanical environment constitutes the fourth factor of the 'diamond concept' of fracture healing, along with osteoconductive scaffolds, growth factors and osteogenic cells, all interacting during the repair process 28 . During bone regeneration, intermediate tissues, such as fibrous connective tissue, cartilage and woven bone, precede final bone formation, providing initial mechanical stability and a scaffold for tissue differentiation. Mechanical loading affects the regeneration process, with different stress distributions favouring or inhibiting differentiation of particular tissue phenotypes 72 . High shear strain and fluid flows are thought to stimulate formation of fibrous connective tissue, whereas lower levels stimulate formation of cartilage, and even lower levels favour ossification 72 . The interfragmentary strain concept of Perren has been used to describe the different patterns of bone repair (primary or secondary fracture healing), suggesting that the strain that causes healthy bone to fail is the upper limit that can be tolerated by the regenerating tissue 73 .
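Perren's interfragmentary strain concept and the strain-dependent tissue differentiation described above can be sketched numerically. A minimal sketch, with caveats: the thresholds below (roughly 2% strain tolerated by lamellar bone, 10% by cartilage and fibrous tissue, 100% by granulation tissue) are commonly quoted approximations used for illustration only, not values taken from this review, and the function names are invented for the sketch.

```python
# Minimal sketch of Perren's interfragmentary strain concept.
# Thresholds are illustrative approximations, not values from this review:
# lamellar bone tolerates only ~2% strain, cartilage/fibrous tissue ~10%,
# and granulation tissue ~100%.

def interfragmentary_strain(movement_mm, gap_mm):
    """Strain = interfragmentary movement divided by fracture-gap size."""
    return movement_mm / gap_mm

def predicted_tissue(strain):
    """Map a strain level to the tissue phenotype it is thought to favour."""
    if strain <= 0.02:
        return "lamellar bone (ossification favoured)"
    if strain <= 0.10:
        return "cartilage / fibrous tissue (endochondral route)"
    if strain <= 1.00:
        return "granulation / fibrous connective tissue"
    return "non-union risk (strain exceeds tissue tolerance)"

# Example: 0.5 mm of movement across a 2 mm gap gives 25% strain.
strain = interfragmentary_strain(0.5, 2.0)
print(f"{strain:.0%} -> {predicted_tissue(strain)}")
```

Note that the same 0.5 mm of movement across a 10 mm gap would give only 5% strain, compatible with cartilage formation and subsequent endochondral ossification, which is one way the concept explains why gap geometry matters as much as absolute motion.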
Since then, extensive research in this field has further refined the effects of mechanical stability and mechanical stimulation on bone regeneration and fracture healing 74 . Numerous in vivo studies have shown contradictory results regarding the contribution of strain and mechanical stimulation, in terms of compression or distraction, to bone formation during fracture healing. In early fracture healing, mechanical stimulation seems to enhance callus formation, but the amount of callus formation does not correspond to stiffness 74 . During the initial stages of bone healing, a less rigid mechanical environment resulted in a prolonged chondral bone-regeneration phase, whereas the process of intramembranous ossification appeared to be independent of mechanical stability 75 . By contrast, a more rigid mechanical environment resulted in a smaller callus and a reduced fibrous-tissue component 76 . For later stages of bone regeneration, lower mechanical stability was found to inhibit callus bridging and stiffness 74 . Finally, in vitro studies have also shown the role of the mechanical environment on different cell types involved in bone regeneration. It has been demonstrated using cell-culture systems that the different cellular responses in terms of proliferation and differentiation after mechanical stimulation depend on the strain magnitude and the cell phenotype 74 . Mechanical stability is also important for local vascularisation and angiogenesis during bone regeneration. In an in vivo study, smaller interfragmentary movements led to the formation of a greater number of vessels within the callus, particularly in areas close to the periosteum, compared with larger movements 77 , whereas increased interfragmentary shear was associated with reduced vascularisation, a higher amount of fibrous-tissue formation and a lower percentage of mineralised bone during early bone healing 78 .
Finally, the presence of a mechanically stable environment throughout the bone-regeneration process is also essential when additional methods are being used to enhance bone repair 28 79 . Optimal instrumentation with minimal disruption of the local blood supply is required to supplement and protect the mechanical properties of the implanted grafts or scaffolds, allowing incorporation, vascularisation and subsequent remodelling 79 .
Systemic enhancement of bone regeneration
As an alternative to local augmentation of the bone-regeneration process, the use of systemic agents, including growth hormone (GH) 80 and parathyroid hormone (PTH) 81 , is also under extensive research. Current evidence suggests a positive role for GH in fracture healing, but there are concerns about its safety profile and optimal dose when it is systemically administered to enhance bone repair 80 . There are also numerous animal studies and clinical trials showing that intermittent PTH administration induces both cancellous and cortical bone regeneration, enhances bone mass, and increases mechanical bone strength and bone-mineral density, with a relatively satisfactory safety profile 81 82 . Currently, two PTH analogues, PTH 1-34 (teriparatide) and PTH 1-84, are already used in clinical practice as anabolic agents for the treatment of osteoporosis 81 83 , and research is being carried out into their off-label use as bone-forming agents in complex conditions requiring enhancement of bone repair, such as complicated fractures and non-unions. In addition to anabolic agents for bone regeneration, current antiresorptive therapies that are already in clinical use for the management of osteoporosis could be used to increase bone-mineral density during bone regeneration and remodelling by reducing bone resorption.
Bisphosphonates, known to reduce the recruitment and activity of osteoclasts and to increase their apoptosis, and strontium ranelate, known to inhibit bone resorption and stimulate bone formation, could be beneficial adjuncts to bone repair by altering bone turnover 84 . In addition, a new pharmaceutical agent called denosumab, a fully human monoclonal antibody that targets receptor activator of nuclear factor-κB ligand (RANKL) and thereby selectively inhibits osteoclastogenesis, might not only decrease bone turnover and increase bone-mineral density in osteoporosis, but also indirectly improve bone regeneration in other conditions requiring enhancement 85 . Recently, another signalling pathway, the Wnt pathway, was found to play a role in bone regeneration 86 . Impaired Wnt signalling is associated with osteogenic pathologies, such as osteoporosis and osteopenia. Thus, novel strategies that systemically induce the Wnt signalling pathway or inhibit its antagonists, such as sclerostin, could improve bone regeneration; however, there are concerns about carcinogenesis 87 . Another approach to systemic enhancement of bone regeneration is the use of agonists of the prostaglandin receptors EP2 and EP4, which were found to be skeletally anabolic at cortical and cancellous sites. Promising results have been seen in animal models, without adverse effects, and these receptors may therefore represent targets of novel anabolic agents for the treatment of osteoporosis and for augmentation of bone healing 27 . Finally, new treatments for systemic augmentation of bone regeneration may come to light as researchers elucidate the alterations seen at the cellular and molecular level in conditions with increased bone-formation capacity.
Fibrodysplasia ossificans progressiva, a rare genetic disorder, is an example of how an abnormal metabolic condition can be viewed as evidence for systemic regeneration of large amounts of bone secondary to alterations within the BMP signalling pathway 88 , and may point to unique treatment possibilities. There are several clinical conditions that require enhancement of bone regeneration, either locally or systemically, and various methods are currently used to augment or accelerate bone repair, depending on the healing potential and the specific requirements of each case. Knowledge of bone biology has expanded vastly with increased understanding at the molecular level, resulting in the development of many new treatment methods, with many others (or improvements to current ones) anticipated in the years to come. However, gaps remain; in particular, there is still surprisingly little information available about the cellular basis of MSC-mediated fracture repair and bone regeneration in vivo in humans. Further understanding in this area could be the key to an improved and integrated strategy for skeletal repair. In the future, control of bone regeneration with strategies that mimic the normal cascade of bone formation will offer successful management of conditions requiring enhancement of bone regeneration, and reduce their morbidity and cost in the long term. Research is ongoing within all relevant fields, and it is hoped that many bone-disease processes secondary to trauma, bone resection due to ablative surgery, ageing, and metabolic or genetic skeletal disorders will be successfully treated with novel bone-regeneration protocols that may address both local and systemic enhancement to optimise outcome.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
RD contributed to the literature review and writing.
EJ, DMcG and PVG contributed to the writing of specific sections of the manuscript within their main scientific interest, and critically revised the manuscript for important intellectual content. All authors read and approved the final manuscript.
References
1. Bates P, Ramachandran M. Bone injury, healing and grafting. In: Ramachandran M (ed). Basic Orthopaedic Sciences: The Stanmore Guide. London: Hodder Arnold; 2007:123-134.
2. Einhorn TA. The cell and molecular biology of fracture healing. Clin Orthop Relat Res 1998;(355 Suppl):S7-21.
3. Cho TJ, Gerstenfeld LC, Einhorn TA. Differential temporal expression of members of the transforming growth factor beta superfamily during murine fracture healing. J Bone Miner Res 2002;17:513-520.
4. Ferguson C, Alpern E, Miclau T, Helms JA. Does adult fracture repair recapitulate embryonic skeletal formation? Mech Dev 1999;87:57-66.
5. Audigé L, Griffin D, Bhandari M, Kellam J, Rüedi TP. Path analysis of factors for delayed healing and nonunion in 416 operatively treated tibial shaft fractures. Clin Orthop Relat Res 2005;438:221-232.
6. Aronson J. Limb-lengthening, skeletal reconstruction, and bone transport with the Ilizarov method. J Bone Joint Surg Am 1997;79:1243-1258.
7. Green SA, Jackson JM, Wall DM, Marinow H, Ishkanian J. Management of segmental defects by the Ilizarov intercalary bone transport method. Clin Orthop Relat Res 1992;280:136-142.
8. Giannoudis PV, Dinopoulos H, Tsiridis E. Bone substitutes: an update. Injury 2005;36 Suppl 3:S20-27.
9. Giannoudis PV, Einhorn TA. Bone morphogenetic proteins in musculoskeletal medicine. Injury 2009;40 Suppl 3:S1-3.
10. Masquelet AC, Begue T. The concept of induced membrane for reconstruction of long bone defects. Orthop Clin North Am 2010;41:27-37.
11. Busse JW, Bhandari M, Kulkarni AV, Tunks E. The effect of low-intensity pulsed ultrasound therapy on time to fracture healing: a meta-analysis. CMAJ 2002;166:437-441.
12. Schofer MD, Block JE, Aigner J, Schmelz A. Improved healing response in delayed unions of the tibia with low-intensity pulsed ultrasound: results of a randomized sham-controlled trial. BMC Musculoskelet Disord 2010;11:229.
13. Walker NA, Denegar CR, Preische J. Low-intensity pulsed ultrasound and pulsed electromagnetic field in the treatment of tibial fractures: a systematic review. J Athl Train 2007;42:530-535.
14. Raschke M, Oedekoven G, Ficke J, Claudi BF. The monorail method for segment bone transport. Injury 1993;24 Suppl 2:S54-61.
15. Cole JD, Justin D, Kasparis T, DeVlught D, Knobloch C. The intramedullary skeletal kinetic distractor (ISKD): first clinical results of a new intramedullary nail for lengthening of the femur and tibia. Injury 2001;32 Suppl 4:129-139.
16. Bauer TW, Muschler GF. Bone graft materials. An overview of the basic science. Clin Orthop Relat Res 2000;371:10-27.
17. Pederson WC, Person DW. Long bone reconstruction with vascularized bone grafts. Orthop Clin North Am 2007;38:23-35.
18. Korompilias AV, Beris AE, Lykissas MG, Kostas-Agnantis IP, Soucacos PN. Femoral head osteonecrosis: why choose free vascularized fibula grafting. Microsurgery 2010.
19. Giannoudis PV, Tzioupis C, Green J. Surgical techniques: how I do it? The reamer/irrigator/aspirator (RIA) system. Injury 2009;40:1231-1236.
20. Ahlmann E, Patzakis M, Roidis N, Shepherd L, Holtom P. Comparison of anterior and posterior iliac crest bone graft in terms of harvest-site morbidity and functional outcomes. J Bone Joint Surg Am 2002;84:716-720.
21. St John TA, Vaccaro AR, Sah AP, et al. Physical and monetary costs associated with autogenous bone graft harvesting. Am J Orthop 2003;32:18-23.
22. Younger EM, Chapman MW. Morbidity at bone graft donor sites. J Orthop Trauma 1989;3:192-195.
23. Finkemeier CG. Bone-grafting and bone-graft substitutes. J Bone Joint Surg Am 2002;84:454-464.
24. Bullens PH, Bart Schreuder HW, de Waal Malefijt MC, Verdonschot N, Buma P. Is an impacted morselized graft in a cage an alternative for reconstructing segmental diaphyseal defects? Clin Orthop Relat Res 2009;467:783-791.
25. Ostermann PA, Haase N, Rübberdt A, Wich M, Ekkernkamp A. Management of a long segmental defect at the proximal meta-diaphyseal junction of the tibia using a cylindrical titanium mesh cage. J Orthop Trauma 2002;16:597-601.
26. Urist MR, O'Connor BT, Burwell RG. Bone Graft Derivatives and Substitutes. Oxford: Butterworth-Heinemann; 1994.
27. Komatsu DE, Warden SJ. The control of fracture healing and its therapeutic targeting: improving upon nature. J Cell Biochem 2010;109:302-311.
28. Giannoudis PV, Einhorn TA, Marsh D. Fracture healing: the diamond concept. Injury 2007;38 Suppl 4:S3-6.
29. Dimitriou R, Tsiridis E, Giannoudis PV. Current concepts of molecular aspects of bone healing. Injury 2005;36:1392-1404.
30. Food and Drug Administration. Medical devices. [http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/DeviceApprovalsandClearances/Recently-ApprovedDevices/default.htm]
31. Blokhuis TJ. Formulations and delivery vehicles for bone morphogenetic proteins: latest advances and future directions. Injury 2009;40 Suppl 3:S8-11.
32. Nauth A, Giannoudis PV, Einhorn TA, et al. Growth factors: beyond bone morphogenetic proteins. J Orthop Trauma 2010;24:543-546.
33. Simpson AH, Mills L, Noble B. The role of growth factors and related agents in accelerating fracture healing. J Bone Joint Surg Br 2006;88:701-705.
34. Alsousou J, Thompson M, Hulley P, Noble A, Willett K. The biology of platelet-rich plasma and its application in trauma and orthopaedic surgery: a review of the literature. J Bone Joint Surg Br 2009;91:987-996.
35. Argintar E, Edwards S, Delahay J. Bone morphogenetic proteins in orthopaedic trauma surgery. Injury 2010.
36. Chen FM, Ma ZW, Dong GY, Wu ZF. Composite glycidyl methacrylated dextran (Dex-GMA)/gelatin nanoparticles for localized protein delivery. Acta Pharmacol Sin 2009;30:485-493.
37. Pountos I, Georgouli T, Kontakis G, Giannoudis PV. Efficacy of minimally invasive techniques for enhancement of fracture healing: evidence today. Int Orthop 2010;34:3-12.
38. D'Ippolito G, Schiller PC, Ricordi C, Roos BA, Howard GA. Age-related osteogenic potential of mesenchymal stromal stem cells from human vertebral bone marrow. J Bone Miner Res 1999;14:1115-1122.
39. Huibregtse BA, Johnstone B, Goldberg VM, Caplan AI. Effect of age and sampling site on the chondro-osteogenic potential of rabbit marrow-derived mesenchymal progenitor cells. J Orthop Res 2000;18:18-24.
40. Hernigou P, Poignard A, Beaujean F, Rouard H. Percutaneous autologous bone-marrow grafting for nonunions. Influence of the number and concentration of progenitor cells. J Bone Joint Surg Am 2005;87:1430-1437.
41. Jäger M, Herten M, Fochtmann U, et al. Bridging the gap: bone marrow aspiration concentrate reduces autologous bone grafting in osseous defects. J Orthop Res 2011;29:173-180.
42. Bianchi G, Banfi A, Mastrogiacomo M, et al. Ex vivo enrichment of mesenchymal cell progenitors by fibroblast growth factor 2. Exp Cell Res 2003;287:98-105.
43. D'Ippolito G, Diabira S, Howard GA, Menei P, Roos BA, Schiller PC. Marrow-isolated adult multilineage inducible (MIAMI) cells, a unique population of postnatal young and old human cells with extensive expansion and differentiation potential. J Cell Sci 2004;117:2971-2981.
44. Patterson TE, Kumagai K, Griffith L, Muschler GF. Cellular strategies for enhancement of fracture repair. J Bone Joint Surg Am 2008;90 Suppl 1:111-119.
45. McGonagle D, English A, Jones EA. The relevance of mesenchymal stem cells in vivo for future orthopaedic strategies aimed at fracture repair. Curr Orthop 2007;21:262-267.
46. Wakitani S, Okabe T, Horibe S, et al. Safety of autologous bone marrow-derived mesenchymal stem cell transplantation for cartilage repair in 41 patients with 45 joints followed for up to 11 years and 5 months. J Tissue Eng Regen Med 2011;5:146-150.
47. Matsumoto T, Kawamoto A, Kuroda R, et al. Therapeutic potential of vasculogenesis and osteogenesis promoted by peripheral blood CD34-positive cells for functional bone healing. Am J Pathol 2006;169:1440-1457.
48. Zuk PA, Zhu M, Mizuno H, et al. Multilineage cells from human adipose tissue: implications for cell-based therapies. Tissue Eng 2001;7:211-228.
49. Jackson WM, Aragon AB, Djouad F, et al. Mesenchymal progenitor cells derived from traumatized human muscle. J Tissue Eng Regen Med 2009;3:129-138.
50. Im GI, Shin YW, Lee KB. Do adipose tissue-derived mesenchymal stem cells have the same osteogenic and chondrogenic potential as bone marrow-derived cells? Osteoarthritis Cartilage 2005;13:845-853.
51. Niemeyer P, Fechner K, Milz S, et al. Comparison of mesenchymal stem cells from bone marrow and adipose tissue for bone regeneration in a critical size defect of the sheep tibia and the influence of platelet-rich plasma. Biomaterials 2010;31:3572-3579.
52. Jones E, McGonagle D. Human bone marrow mesenchymal stem cells in vivo. Rheumatology (Oxford) 2008;47:126-131.
53. Jones EA, Kinsey SE, English A, et al. Isolation and characterization of bone marrow multipotential mesenchymal progenitor cells. Arthritis Rheum 2002;46:3349-3360.
54. Jones E, English A, Churchman SM, et al. Large-scale extraction and characterization of CD271+ multipotential stromal cells from trabecular bone in health and osteoarthritis: implications for bone regeneration strategies based on uncultured or minimally cultured multipotential stromal cells. Arthritis Rheum 2010;62:1944-1954.
55. Akkouch A, Zhang Z, Rouabhia M. A novel collagen/hydroxyapatite/poly(lactide-co-ε-caprolactone) biodegradable and bioactive 3D porous scaffold for bone regeneration. J Biomed Mater Res A 2011;96A:693-704.
56. Tampieri A, Landi E, Valentini F, et al. A conceptually new type of bio-hybrid scaffold for bone regeneration. Nanotechnology 2011;22:015104.
57. Laschke MW, Witt K, Pohlemann T, Menger MD. Injectable nanocrystalline hydroxyapatite paste for bone substitution: in vivo analysis of biocompatibility and vascularization. J Biomed Mater Res B Appl Biomater 2007;82:494-505.
58. Salgado AJ, Coutinho OP, Reis RL. Bone tissue engineering: state of the art and future trends. Macromol Biosci 2004;4:743-765.
59. Rose FR, Oreffo RO. Bone tissue engineering: hope vs hype. Biochem Biophys Res Commun 2002;292:1-7.
60. Jones EA, Yang XB. Mesenchymal stem cells and their future in bone repair. Int J Adv Rheumatol 2005;3:15-21.
61. Chatterjea A, Meijer G, van Blitterswijk C, de Boer J. Clinical application of human mesenchymal stromal cells for bone tissue engineering. Stem Cells Int 2010;2010:215625.
62. Kim SJ, Shin YW, Yang KH, et al. A multi-center, randomized, clinical study to compare the effect and safety of autologous cultured osteoblast (Ossron) injection to treat fractures. BMC Musculoskelet Disord 2009;10:20.
63. Ohgushi H, Kotobuki N, Funaoka H, et al. Tissue engineered ceramic artificial joint: ex vivo osteogenic differentiation of patient mesenchymal cells on total ankle joints for treatment of osteoarthritis. Biomaterials 2005;26:4654-4661.
64. Kokemueller H, Spalthoff S, Nolff M, et al. Prefabrication of vascularized bioartificial bone grafts in vivo for segmental mandibular reconstruction: experimental pilot study in sheep and first clinical application. Int J Oral Maxillofac Surg 2010;39:379-387.
65. Tarte K, Gaillard J, Lataillade JJ, et al. Clinical-grade production of human mesenchymal stromal cells: occurrence of aneuploidy without transformation. Blood 2010;115:1549-1553.
66. Weinand C, Xu JW, Peretti GM, Bonassar LJ, Gill TJ. Conditions affecting cell seeding onto three-dimensional scaffolds for cellular-based biodegradable implants. J Biomed Mater Res B Appl Biomater 2009;91:80-87.
67. Yoshioka T, Mishima H, Ohyabu Y, et al. Repair of large osteochondral defects with allogeneic cartilaginous aggregates formed from bone marrow-derived cells using RWV bioreactor. J Orthop Res 2007;25:1291-1298.
68. Caplan AI. Mesenchymal stem cells and gene therapy. Clin Orthop Relat Res 2000;(379 Suppl):S67-70.
69. Chen Y. Orthopaedic application of gene therapy. J Orthop Sci 2001;6:199-207.
70. Calori GM, Donati D, Di Bella C, Tagliabue L. Bone morphogenetic proteins and tissue engineering: future directions. Injury 2009;40 Suppl 3:S67-76.
71. Tang Y, Tang W, Lin Y, et al. Combination of bone tissue engineering and BMP-2 gene transfection promotes bone healing in osteoporotic rats. Cell Biol Int 2008;32:1150-1157.
72. Lacroix D, Prendergast PJ. A mechano-regulation model for tissue differentiation during fracture healing: analysis of gap size and loading. J Biomech 2002;35:1163-1171.
73. Perren SM. Physical and biological aspects of fracture healing with special reference to internal fixation. Clin Orthop Relat Res 1979;138:175-196.
74. Jagodzinski M, Krettek C. Effect of mechanical stability on fracture healing: an update. Injury 2007;38 Suppl 1:S3-10.
75. Epari DR, Schell H, Bail HJ, Duda GN. Instability prolongs the chondral phase during bone healing in sheep. Bone 2006;38:864-870.
76. Schell H, Epari DR, Kassi JP, Bragulla H, Bail HJ, Duda GN. The course of bone healing is influenced by the initial shear fixation stability. J Orthop Res 2005;23:1022-1028.
77. Claes LE, Eckert-Hübner K, Augat P. The effect of mechanical stability on local vascularization and tissue differentiation in callus healing. J Orthop Res 2002;20:1099-1105.
78. Lienau J, Schell H, Duda GN, Seebeck P, Muchow S, Bail HJ. Initial vascularization and tissue differentiation are influenced by fixation stability. J Orthop Res 2005;23:639-645.
79. Babis GC, Soucacos PN. Bone scaffolds: the role of mechanical stability and instrumentation. Injury 2005;36 Suppl:S38-44.
80. Tran GT, Pagkalos J, Tsiridis E, et al. Growth hormone: does it have a therapeutic role in fracture healing? Expert Opin Investig Drugs 2009;18:887-911.
81. Rubin MR, Bilezikian JP. Parathyroid hormone as an anabolic skeletal therapy. Drugs 2005;65:2481-2498.
82. Tzioupis CC, Giannoudis PV. The safety and efficacy of parathyroid hormone (PTH) as a biological response modifier for the enhancement of bone regeneration. Curr Drug Saf 2006;1:189-203.
83. Verhaar HJ, Lems WF. PTH analogues and osteoporotic fractures. Expert Opin Biol Ther 2010;10:1387-1394.
84. Kanis JA, Burlet N, Cooper C, et al. European Society for Clinical and Economic Aspects of Osteoporosis and Osteoarthritis (ESCEO): European guidance for the diagnosis and management of osteoporosis in postmenopausal women. Osteoporos Int 2008;19:399-428.
85. Charopoulos I, Orme S, Giannoudis PV. The role and efficacy of denosumab in the treatment of osteoporosis: an update. Expert Opin Drug Saf 2011.
86. Chen Y, Alman BA. Wnt pathway, an essential role in bone regeneration. J Cell Biochem 2009;106:353-362.
87. Wagner ER, Zhu G, Zhang BQ, et al. The therapeutic potential of the Wnt signaling pathway in bone disorders. Curr Mol Pharmacol 2011;4:14-25.
88. Lucotte G, Houzet A, Hubans C, Lagarde JP, Lenoir G. Mutations of the noggin (NOG) and of the activin A type I receptor (ACVR1) genes in a series of twenty-seven French fibrodysplasia ossificans progressiva (FOP) patients. Genet Couns 2009;20:53-62.
Pre-publication history
Public Release: Climate change projected to drive species northward
New study predicts eastern Pacific species shifting poleward 30 km per decade
NOAA Fisheries West Coast Region
The study suggests that shifting species will likely move into the habitats of other marine life to the north, especially in the Gulf of Alaska and Bering Sea. Some will simultaneously disappear from areas at the southern end of their ranges, especially off Oregon and California. "As the climate warms, the species will follow the conditions they're adapted to," said Richard Brodeur, a NOAA Fisheries senior scientist at the Northwest Fisheries Science Center's Newport Research Station and coauthor of the study. "We're going to see more interactions between species and there will be winners and losers that we cannot foresee." The study, led by William Cheung of the University of British Columbia, estimated changes in the distribution of 28 near-surface fish species commonly collected by research surveys in the northeast Pacific Ocean. The researchers used established global climate models to project how the distribution of the fish would shift by 2050 as greenhouse gases warm the atmosphere and, in turn, the ocean surface. Brodeur cautioned that like any models, climate models carry uncertainty. While they provide a glimpse of the most likely changes in global climate, they may be less accurate when estimating more fine-scale, local changes. "Nothing is certain," he said, "but we think we have a picture of the most likely changes." Some species shifts are already being documented as West Coast waters warm: predatory Humboldt squid from Central and South America have invaded the West Coast of North America in recent years, albacore have shifted to more northerly waters, and eulachon have disappeared from warming waters at the southern end of their range.
"Thinking more broadly, this re-shuffling of marine species across the whole biological community may lead to declines in the beneficial functions of marine and coastal ecosystems," said Tom Okey, a Pew Fellow in Marine Conservation at the University of Victoria and a coauthor of the study. "These declines may occur much more rapidly and in more surprising ways than our expected changes in species alone." The study anticipates warm-water species such as thresher sharks and chub mackerel becoming more prominent in the Gulf of Alaska and off British Columbia. Some predators such as sea lions and seabirds, which rear their young in fixed rookeries or colonies, may find the fish they usually prey on moving beyond predators' usual foraging ranges. "If their prey moves farther north, they either have to travel farther and expend more energy to get to them, or find something else to eat," Brodeur said. "It's the same thing for fishermen. If it gets warmer, the fish they depend on are going to move up north and that means more travel time and more fuel will be needed to follow them, or else they may need to switch to different target species. It may not happen right away but we are likely to see that kind of a trend." El Niño years, when tropical influences temporarily warm the eastern Pacific, offer a preview of what to expect as the climate warms. Shifts in marine communities may be most pronounced in high-latitude regions such as the Gulf of Alaska and Bering Sea, which the study identifies as "hotspots" of change. Cold-water species such as salmon and capelin have narrower temperature preferences than warmer water species, making them more sensitive to ocean warming and likely to respond more quickly. An intrusion of warm-water species into cooler areas could lead to significant changes in marine communities and ecosystems.
The diversity of northern fish communities, now often dominated by a few very prolific species such as walleye pollock, may increase as southern species enter the region, leading to new food web and species interactions.
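The headline projection of roughly 30 km of poleward shift per decade can be turned into a back-of-the-envelope estimate. This is a simplifying sketch only: the constant rate and the 2013 baseline year are assumptions for illustration, not details from the study itself.

```python
# Back-of-the-envelope projection of poleward range shift, assuming the
# reported average rate of ~30 km per decade holds constant over time.
# The 2013 baseline year is an assumption for illustration only.

RATE_KM_PER_DECADE = 30.0

def projected_shift_km(start_year, end_year, rate=RATE_KM_PER_DECADE):
    """Total poleward displacement over the interval at a constant rate."""
    decades = (end_year - start_year) / 10.0
    return decades * rate

# From a 2013 baseline to the study's 2050 horizon: about 111 km northward.
print(f"{projected_shift_km(2013, 2050):.0f} km")
```

Even this crude linear estimate of roughly 100 km by mid-century makes the practical consequences quoted above concrete: fixed rookeries and home ports cannot move with the fish.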
08-22-2012, 09:44 AM   #11
Rebecca Ednie (Insane Embellisher). Location: Mount Albert, near Toronto, Ontario, Canada. Joined: Jul 2005. Posts: 792.
I use a cheap light box with white fabric draped inside, set up with large construction lights. They are very yellow, so I always set my white balance to tungsten (the light bulb symbol) and also colour correct the photos later. I'd recommend daylight bulbs if you can get them.
I stand my cards up and try to photograph them head on, meaning as straight in front as possible, so my angle is lower for a side-fold card than for a top-fold one. This prevents distortion of the lines. I love seeing other people's cards taken at an angle; for some reason that doesn't work for me unless I am photographing the details.
I recommend macro mode on a point-and-shoot camera, or a fast macro lens (low maximum f-number, like f/1.8 or f/2.4) for a DSLR. If you have a DSLR, setting your aperture makes a huge difference in how your picture looks. If you normally shoot in auto, try out aperture priority mode! It lets you control the ISO and aperture but figures out the rest. Low f-numbers like 1.8 or 3.5 will give you that tight focus on your focal point, with the rest of the card blurring away from that centre point. At about f/4 and up, you will start getting more of the card in focus. All of this depends on how far away you are, too: the further back you are, the more of the card is in focus, and vice versa. These are just the settings I've used. I use small numbers for details, larger ones for the whole card, and even higher numbers, like f/8, for photographing a 3-D project or a group of cards. If you have a card with very dimensional embellishments on it, treat it like a 3-D project if you want it all in focus. This principle works with groups of people too: use higher f-numbers for larger groups to get everyone in focus.
If you find your photos are blurry, especially at higher aperture numbers, use a mini tripod and the self timer to take your pictures. The camera compensates for the low light caused by a small aperture (large numbers mean a small opening to let light through) by leaving the shutter open longer, and any hand shake is recorded as blur; with a faster shutter there isn't time for the camera to record that movement.

A too-small aperture can also cause blurriness. Look carefully at your photo. Is anything in focus? Even a tiny element? Perhaps the rest is blurry because you have too much of that artistic blurriness (called bokeh), where the focus is tight on one element and fades out around it. Your tight focus area may be smaller than you want or in the wrong place. A damaged lens can also cause this, but if some photos are OK and others aren't, that clearly isn't the problem.

If you like to add props to your photos, you can use the principles above to decide where to place them based on whether you want them in focus or not. The further away you place them, to the side or back to front, the blurrier they will appear!

Rebecca Ednie
Live Well, Laugh often, Love much
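The aperture advice above follows the standard thin-lens depth-of-field relationship: bigger f-numbers and greater subject distance both widen the zone in focus. A rough sketch (not from the original post; the 50 mm focal length and 0.03 mm circle of confusion are illustrative assumptions):

```python
def depth_of_field(f_number, distance_mm, focal_mm=50.0, coc_mm=0.03):
    """Approximate depth of field in cm, via the standard hyperfocal formulas."""
    # Hyperfocal distance: focus here and everything to infinity is acceptably sharp.
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = distance_mm * (h - focal_mm) / (h + distance_mm - 2 * focal_mm)
    if distance_mm >= h:              # subject beyond hyperfocal: infinite far limit
        return float("inf")
    far = distance_mm * (h - focal_mm) / (h - distance_mm)
    return (far - near) / 10.0        # mm -> cm

# Card detail at half a metre: f/8 gives a zone several times deeper than f/1.8.
print(round(depth_of_field(1.8, 500), 2))
print(round(depth_of_field(8.0, 500), 2))
```

Stepping back to a metre at the same f/8 deepens the in-focus zone further still, which is why "further back, more of the card is in focus" holds.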
About the Chorus

The Yale Slavic Chorus is a performance group composed of women from a variety of cultural and academic backgrounds who share a common passion for Slavic music. The Chorus sings a diverse repertoire that spans the traditions of Bulgaria, Russia, Ukraine, and Georgia, among others. Though many members of the Chorus are not native speakers of Slavic languages, the Yale Slavic Chorus works extensively with native folk musicians and vocalists to practice accurate pronunciation and style. Although primarily an undergraduate group, the Slavs, as we are affectionately known around Yale’s campus, also include other members of the Yale community. The Slavic Chorus has always been dedicated to maintaining musical vigor and excellence, seeking to empower women through song.

Recent Press: The Slavs in the Republic of Georgia!

A Brief History

The Chorus was founded in 1969, the first year of undergraduate co-education at Yale, and was the first all-women’s group on campus. It was originally conducted by William Robbins, Jr., then a music major in Yale College and a member of the all-male Yale Russian Chorus. In 2015, the Chorus celebrated our 45th anniversary with a reunion concert that drew alumnae from all across the globe.

Our Method and Music

We are an entirely student-run, student-directed ensemble. We transcribe and arrange much of our own music, but are always looking for new songs to sing. We also learn much of our music via an oral tradition and by listening to original source recordings. During our performances, we strive to maintain the original mode of presentation, which often includes the dissonant harmonies, unusual rhythms, and distinctive vocal qualities that make Slavic and Eastern European music unique and exciting. We have also begun incorporating more traditional folk dances into our performances.

Amanda Crego-Emley and Charlotte Finegold
Athena Wheaton and Claire Gottsegen
Most Recent Articles

Flower Remedies
Flower remedies are said to use the vibrational essence of flowers to balance the negative emotions that make us more vulnerable to disease. They provide a simple, natural method of establishing personal equilibrium and harmony.

Reflexology
Reflexology is a natural therapy that treats health problems, or potential problems, through massage of the feet and sometimes the hands.

Reiki
Reiki is a form of therapy that uses hands-on, no-touch, and visualisation techniques to improve the flow of life energy throughout the body. The word ‘Reiki’ is Japanese and literally means ‘universal life energy’.

Emotional Freedom Techniques
Emotional Freedom Techniques, or EFT (often known as Tapping), is a healing approach that rebalances the body’s energy system, and has been shown to be especially effective in treating unresolved emotional issues.

Naturopathy
Rather than being a single therapy, naturopathy can more accurately be described as a combination of natural therapies, including nutrition therapy, dietary supplements, hydrotherapy, and lifestyle coaching. The approaches used vary depending on the unique needs of the individual client.
Travel Experts Alliance

When you travel the world in search of authenticity, culture and tradition, you come to expect a certain standard of excellence from those you choose to guide you through the experience. When a professional exceeds that standard, they attain a new level of distinction. Each of the individuals below is a specialist in their region of the world: they have garnered multiple nominations and awards for the quality of their work and the level of their commitment to the preservation of the Earth, her people and resources. They have contributed long hours, sweat, and the uniqueness of their particular creative processes to ensure that your journey exceeds your expectations, allows the region they operate within to retain its cultural significance, and keeps the impact on the local environment minimal. When you choose from among the Travel Experts Alliance, know that the trip of a lifetime they propose will truly be an exceptional adventure.

Jalsa Urubshurow
Nomadic Expeditions
Specializing in Mongolia

A pioneer in environmental conservation through for-profit sustainable tourism initiatives, Jalsa Urubshurow’s story began in 1952 when his parents immigrated to the US after a seemingly endless search for safe haven from Stalinist persecution in Russia. Born on the other side of the world from his ancestral heritage, Jalsa grew up speaking Mongolian and listening to the legends his father recited to him. Jalsa fought his way toward a hugely successful career as a carpenter, founding a framing business in 1989 that grew into one of the nation’s leading framing contractors. The company was eventually awarded the National Housing Quality Award, the highest honor given to any trade contractor. One year after the establishment of his framing business, Mongolia transitioned to democracy, and Jalsa’s dream of visiting his father’s homeland became a reality.
During his first exploration of Mongolia’s astounding beauty, the nation’s first democratically elected Prime Minister, His Excellency Dash Byambasuren, recruited him to advise on expanding accessibility to Western travelers. Jalsa eagerly took up the challenge, and Nomadic Expeditions became a full-fledged operation in 1992. Two decades later, Nomadic Expeditions has become the premier tour operator for the region and a strong force within the sustainable tourism sector. From introducing this unique type of travel as an alternative source of income for the country, to creating and maintaining the Three Camel Lodge, the Gobi Desert’s only luxury eco-lodge focused on the preservation of the region and its inhabitants, the company has seen significant growth in the areas it calls home. As an operation Nomadic Expeditions has garnered multiple awards over the years, including a place on National Geographic Adventure’s Best Adventure Travel Companies on Earth list in 2009, with Jalsa himself named Condé Nast Traveler’s only Top Travel Specialist for Mongolia for six consecutive seasons (2008-2013). The company continues to grow and expand, adding several trips a year to new and exciting destinations. Jalsa’s continued exploration of Mongolia, especially now through the eyes of his son, has brought him great joy and an enduring desire to engage others personally in initiating and developing sustainable practices wherever they travel. A co-founder of the North America-Mongolia Business Council (NAMBC), a non-profit organization comprised of the CEOs of American and Canadian corporations active in Mongolia, Jalsa and his colleagues are dedicated to expanding bilateral economic, investment and trade relations between Mongolia and North America. He has also previously served on the board of the Bodhi Tree Foundation, and continues to support numerous charities and nonprofits around the world.

Cherri Briggs
Explore, Inc.
Specializing in Africa

Cherri is a member of the Explorers Club, a meeting point for explorers and scientists worldwide. Her passion for exploration was best exemplified in the expedition she organized and led in 2003, which made the first descent of Mozambique’s 450-mile Lugenda River. When not in Africa she operates from Explore headquarters in Steamboat Springs, Colorado.

Mei Zhang
WildChina
Specializing in China

Today, WildChina is an award-winning tour operator that creates bespoke trips highlighted by rich personal interactions and superior access to experts and venues. Travel to China goes beyond a glimpse of the Great Wall, the Forbidden City, and the Terracotta Warriors; it’s about understanding what you see. A skyscraper surrounded by a multitude of cranes is an opportunity to learn about the vast differences in the lives of the migrant worker who is operating the crane and the young couple who will soon live in the skyscraper. Travel to a remote corner of one of China’s provinces is a chance to share a cup of tea with a local Yi shaman in his home, and listen to his stories of learning the centuries-old practice from his father.

Andrea Ross
Journeys Within
Specializing in Cambodia

March 6, 2011: LA Times article “Beyond Angkor, Cambodia, a Khmer kingdom emerges from the jungle”

Sanjay Saxena
Destination Himalaya
Specializing in India

Sanjay’s deep insider’s knowledge of his homeland, together with his talent for creating unique itineraries to traditional and remote destinations, makes him one of the travel world’s top-ranking India and Tibet specialists. For 11 consecutive years (2003-2013), he has received Condé Nast’s “Top Travel Specialist” award for his exemplary work in South Asia, specifically India, Tibet, Nepal and Sri Lanka. Under his leadership Destination Himalaya has also been chosen by National Geographic Adventure for a “Best Outfitters on Earth” award.
By Amanda Deering

Amanda Deering is an undergraduate student at UCLA pursuing a degree in Psychology. She has been working at C.A.R.D. as a Research Assistant since summer of 2010. Before C.A.R.D., Amanda first learned about Autism Spectrum Disorders at the UCSB Koegel Autism Center. Outside of C.A.R.D., she is an aspiring photographer and works as a DJ on UCLA radio.

What is ABA? Natural Environment Training

The goal of any ABA therapist is to help improve the life of the child they work with and their family using the principles of behavior analysis. By using NET, these real life applications are addressed. Throughout this past month I have learned that as you get to know a child better, you are more equipped for NET because you know what generalized skills the child could really use and are able to use reinforcers that you understand are the child’s favorite to create these effective changes and improvements in their life. Such skills as answering the phone when it rings, asking for the child’s favorite movie when they want to watch it, and responding appropriately to a sibling make life for the child and the family a whole lot easier.

What is ABA? Discrete Trial Training

What is ABA? Part 1

Some Thoughts on Emotional Communication in Asperger’s Syndrome

People with AS also understand emotions differently. While a typical person can quickly perceive sadness, anger, and happiness, some with AS find these emotions more difficult to comprehend. The identification of emotions may take longer, making communication more difficult. But in no way does this difference exclude them from feeling emotion.

Perfect Movie Experience for Children with Autism

‘Star Wars’, ‘Back to the Future’, and ‘Wizard of Oz’ were a few of my favorite movies as a kid. There is no avoiding the fact that these movies are best watched at the theater.
There is something about watching a movie on a big screen, with great sound effects, and being able to share the experience with a crowd that makes you feel you are part of the movie.

Therapy for Autism with Dogs

Ernie Els, Making a Difference in the Autism Community with Golf

Yoga: The New Autism Therapy

Do What You Love and Love What You Do

iPads. The New Behavior Therapist?

When I first saw the iPad, I must admit, I thought it was a completely unnecessary and awkward-looking device. It wasn’t until I heard how helpful it was for children with autism and other communication disorders that I understood how useful this technology could be. This makes complete sense, too; the iPad, which is simple enough for children to use, is also very appealing to the young ones who love to feel ‘adult’ by using new technology.
1. Manga Mondays ~ Hikari Shimoda

2. Fashion Fridays ~ Harumi Yamaguchi
Harumi Yamaguchi is a seminal Japanese artist, whose illustrations evoked female equality in an era of great political and social reforms. When she embarked on her career, in the early 1970s, Yamaguchi was the only eminent illustrator working with the airbrush medium.

3. Manga Mondays ~ loundraw

4. Manga Mondays ~ Keiko Takemiya
If you follow the Illustrator’s Lounge on Facebook or Twitter, you may have seen us mention the latest exhibition being held at the House of Illustration, Shojo Manga: The World of Japanese Girls’ Comics. Shojo is manga aimed at a teenage female readership (shojo literally means “young woman”). Shojo Manga will be the first major exhibition of the genre in the UK. So, I thought I would take this opportunity to look at one of the pioneers of shojo and a highlight of the exhibition, Keiko Takemiya.

5. Manga Mondays ~ Tomioka Jiro
Tomioka Jiro is an illustrator from Japan. He has worked on covers for multiple Japanese magazines, including Tokyo Ziggrat, Touch, and Frontier. He has collaborated with several Vocaloid producers, including takamatt, YM, and effe.

6. Manga Mondays ~ Hajime Isayama
Hajime Isayama is the Japanese manga artist of Shingeki no Kyojin (Attack on Titan). The series has become a phenomenal commercial success. As of July 2015, the manga has 52.5 million copies in circulation. Its popularity was spurred on with the release of the anime adaptation of the same name.
"I can see that there's something over there and I know it's a chair, because I've sat in it," said Hamilton, giving her companion a pat as he sat by her side. "He's the best pair of eyes in the world." "He was the first wheelchair guide trained in the west," Hamilton said. "I'm happy for him because he's truly recognized for all he's accomplished." "No, the cats do not become jealous of him because of the awards or the honors," laughed Hamilton. A third inductee is Hunter, a German short-haired pointer who helped save a woman's life. The Veterinary Medical Association said he was recognized posthumously, as he was recently euthanized because of an advanced medical condition.
WORLD TO END MAY 2011 claims an organisation called eBibleFellowship. From their tract titled Judgment Day we learn that: “On May 21st, Judgment Day will begin and the rapture (the taking up into heaven of God’s elect people) will occur at the end of the 23-year great tribulation. On October 21st the world will be destroyed by fire (7000 years from the flood; 13,023 years from creation).” Link: eBibleFellowship ED. COM. As we make no apologies for being a Christian creationist site, who firmly believe Jesus is the Creator God of Genesis as well as Saviour and Lord of the Gospels, we do get asked about end times, so here’s our final comment on eBible’s claim. When Jesus’ disciples asked Him about the end of the world He told them what kinds of things would happen before the end comes and what it will be like, but He clearly stated (see Matthew 24:36) that only the Father in Heaven knows when the end will come. No human beings or angels, i.e. no created beings, including the folk at eBible, know the day or hour. Therefore WE PREDICT the world will NOT end on either 21st May or 21st October 2011, and we look forward with interest to what eBible will say on 22nd October 2011. Until then check out our other Creation Research predictions in our Predictions File. (Ref. eschatology, time, prophecy) UNIVERSE IS RUN DOWN AND COMING TO ITS END according to reports in ScienceDaily 27 Jan 2010 and ABC News in Science 8 Oct 2010. Chas Egan and Charles Lineweaver from the Australian National University (ANU) Research School of Astronomy and Astrophysics have calculated the entropy of the universe and concluded that the universe is 30 times more run down than previously thought. Egan explained: "The universe started out in a low entropy state and, in accordance with the second law of thermodynamics, the entropy has been increasing ever since.
This is important because the amount of energy available to life in the universe, including terrestrial life, depends on the entropy of the universe. We'd like to know how much energy will be available to life forms anywhere in the universe, and where this energy is. The first step in this procedure was to determine the entropy of the universe. That is what we did. The next step in the research is to work out how close the universe is to maximum entropy, how much entropy is being produced and how much time is left before the universe and all life in it dies in the inevitable heat death." In the meantime a team of physicists led by Raphael Bousso from the University of California, Berkeley, claims there is "a 50-50 chance of the universe ending in the next 3.7 billion years." However, Lineweaver of ANU is not impressed. He claims they are simply imposing a catastrophe for statistical reasons to fit a cosmology model of multiple universes popping in and out of existence like bubbles in boiling water. Links: ScienceDaily ABC ED. COM. The increase in entropy, or running down, of the universe is a reminder that as time goes on things in the real world actually go from complex and organised to chaotic and disorganised. Despite the false claims that energy can come to places like earth and thereby increase available energy and therefore order, the real result is still the opposite of evolution. The One who made the universe, the Lord Jesus Christ, tells us the universe is coming to an end, and it will end with a heat death, but not the kind of slow dissipation of energy physicists mean by "heat death". The Apostle Peter tells us the earth will be destroyed by a massive fire, when everything is burnt up and the universe will disappear with a great noise. (2 Peter 3:4-10) Thus, the Big Bang theory has the explosion in the wrong place - the big bang will occur at the end, not the beginning of the universe.
Jesus also said that Heaven and Earth will pass away but His word would not. Therefore, rather than trying to calculate when the universe will end, people should be preparing to live in the New Heavens and Earth that will replace the current run down universe by submitting themselves to Christ the Creator and Saviour. (Ref. astronomy, physics, thermodynamics)
Lake Titicaca

Lake Titicaca, surrounded by glaciers

Lake Titicaca, Peru, is located in the department of Puno, bordering Bolivia in the Andes Mountains; its surface is evenly distributed between Bolivia and Peru. The lake is surrounded by Andean mountain ridges and slopes varying in altitude between 4,000 and 4,200 meters or 13,100 and 13,800 feet above sea level. The lake itself is located on a high plateau ranging from 3,657 to 4,000 meters or 11,200 to 13,100 feet above sea level. Lake Titicaca is known for the deep blue beauty of its water. At this altitude temperatures average less than 15C or 59F and remain fairly constant throughout the year; temperatures do not drop at night or in winter as much as in other places at similar altitudes. Lake Titicaca is divided into two sub-basins: the larger is Lago Grande and the smaller is Lago Pequeño, and the two are connected by the Strait of Tiquina. The bright luminescent sunlight permeates the highland Altiplano, making it feel spiritual and magical. It is necessary to bring sun block, as the sunlight is very intense at high altitude and the rays bouncing off the lake can cause severe sunburn. About 25 rivers deposit their water in the lake, but only one, the Desaguadero River, drains it; 95% of the incoming water is lost through evaporation. To the local population the lake has mystical properties, as it is surrounded by fertile land in the otherwise dry and windswept Altiplano. The Inca Civilization considered the lake a sacred place.
Lake Titicaca History

Deep blue water of Lake Titicaca

According to one of the legends of the origin of the Incas, the first Inca, Manco Capac, and his wife Mama Ocllo emerged from the depths of Lake Titicaca on the sacred rock on Isla del Sol to look for a place to build an empire. Lake Titicaca was a sacred lake to the Incas and the cradle of Peru’s ancient civilizations. The Pukara culture settled in this fertile land around 200 BC, and a millennium later the Tiwanaku culture emerged and spread throughout the Altiplano and into Bolivia. Warlike tribes like the Aymaras and the Collas emerged, only to be absorbed by the Incas. It was the Inca civilization that unified the many cultures and spread into this land, forming the Inca Empire. The current local population is the Uros people, who have populated this territory for hundreds of years; they come from the Aymara and the Quechua populations and speak the ancient language of Aymara.

It is believed that the name Titicaca means “Rock Puma”, derived from the Aymara words titi (wild cat) and karka (rock). Legend tells that titis used to live on the rocky islands of the lake and that they swam from the islands to the mainland in search of food. Nowadays the titi cat, or Andean cat, is the most endangered cat species in the Americas.

Legend of the lost city

Divers from the “Atahualpa 2000” expedition by Akakor Geographical Exploring. BBC News

Ever since the Inca civilization inhabited this area, the lake has drawn fascination, and according to the local population it has mythic, almost sacred powers. Stories of Inca treasures lost by the Spanish and of an underwater city have attracted many expeditions. In 1968 French explorer Jacques Cousteau undertook a one-and-a-half-month underwater exploration. The expedition did not find the lost city but brought out animal varieties not found anywhere else in the world.
Related Information about Lake Titicaca

Lake Titicaca Facts
Lake Titicaca is the highest navigable lake in the world. It is the largest lake in South America in terms of volume.

Lake Titicaca Islands
Local indigenous people, the Uros, have settled on the shores of the lake for thousands of years, but they also live on about 40 manmade floating islands.

Puno Peru
Puno is located in the Altiplano of Peru, surrounded by the Andean mountain range.

The origin of the Incas
There are two legends explaining the origin of the Incas: the legend of Lake Titicaca and the Ayar Brothers.
Distance Between Elkhart, KS and Lamar, CO

How many miles? 123 miles / 199 km
How long? 2 hours 8 mins

Distance, Gas Consumption and Emission Notes

The distance from Elkhart, KS to Lamar, CO is 123 miles or 199 km, which you can drive in about 2 hours 8 mins. If you plan to travel by plane, the flight distance is 85 miles or 137 km, which takes about 40 mins. A car with average MPG will need 5.69 gallons of gas for the route between these points. The estimated cost of gas to get between Elkhart, KS and Lamar, CO is $12.92. During the route, an average car will release 111.48 pounds of CO2 into the atmosphere. Your carbon footprint is 0.91 pounds of CO2 per mile.

* Average US MPG used for calculations is 21.6 MPG.
* Coordinates of Elkhart, KS: 37.0080784, -101.8900393; coordinates of Lamar, CO: 38.0870787, -102.6207387
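The figures above follow directly from the stated 21.6 MPG average. A quick sketch of the arithmetic; note the $2.27-per-gallon gas price is back-calculated from the page's own $12.92 total and the roughly 19.6 lb of CO2 per gallon of gasoline is an assumed EPA-style figure, neither stated on the page:

```python
MPG = 21.6                # average US fuel economy used by the page
GAS_PRICE = 2.27          # $/gallon (assumption, inferred from $12.92 / 5.69 gal)
CO2_LB_PER_GALLON = 19.6  # lbs CO2 per gallon of gasoline (assumed figure)

def trip_stats(miles):
    """Return (gallons used, fuel cost in $, lbs CO2 per mile) for a trip."""
    gallons = miles / MPG
    cost = gallons * GAS_PRICE
    co2_per_mile = gallons * CO2_LB_PER_GALLON / miles
    return round(gallons, 2), round(cost, 2), round(co2_per_mile, 2)

gallons, cost, footprint = trip_stats(123)
print(gallons, cost, footprint)  # about 5.69 gallons, roughly $12.9, 0.91 lb/mile
```

The per-mile footprint is just CO2-per-gallon divided by MPG, so it is the same for any trip length at a fixed fuel economy.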
Camellias – Close planting Q: I recently purchased several camellias. Each of the camellias seemed to have two trunks. I investigated and was able to separate the tangled roots into two plants. Is it OK to have them planted so close together? A: This is common nursery practice. In order to make sure that they have at least one strong plant in a pot, the nursery folks will stick two cuttings in it. Sometimes one dies but often both form roots and grow well. I saw a ‘Yuletide’ camellia last week that had three rooted cuttings in it. There’s no harm in having the plants grow close together. There’s also no harm in separating them, carefully, so you have multiple plants.
Can an Auto Accident Cause a Chiari Malformation?

A Chiari malformation, also commonly referred to as cerebellar ectopia or Arnold-Chiari malformation, is a head injury long thought to be only congenital in nature. However, several recent studies have shown a correlation between a traumatic episode (i.e., an auto accident) and symptomatic Chiari. In other words, a Chiari malformation can be asymptomatic for an indefinite period of time; in fact, most individuals with a Chiari malformation have no idea they have one.

What is a Chiari Malformation?

A Chiari I malformation is a structural defect in the cerebellum, the part of the brain that controls balance. Normally, the cerebellum and parts of the brain stem sit in an indented space at the lower rear of the skull, above the foramen magnum. When part of the cerebellum is located below the foramen magnum, it is called a Chiari malformation or tonsillar ectopia. Chiari malformations may develop when the bony space is smaller than normal, causing the cerebellum and brain stem to be pushed downward into the foramen magnum and into the upper spinal canal. The resulting pressure on the cerebellum and brain stem may affect functions controlled by these areas and block the flow of cerebrospinal fluid (the clear liquid that surrounds and cushions the brain and spinal cord) to and from the brain.

Chiari Malformation Symptoms

The symptoms most often associated with Chiari I malformations include occipital headache, neck pain, upper extremity numbness, paresthesias (“pins and needles”) and weakness. In some cases there can also be lower extremity weakness and signs of cerebellar dysfunction.

Can a Chiari Malformation be Caused by Trauma?

In recent years, increased research about the causes of Chiari I malformations has produced some interesting results.
Initially, Chiari malformation was thought to be an exclusively congenital issue caused by structural defects in the brain and spinal cord that occur during fetal development. However, recent research has revealed that Chiari malformations can also be related to trauma, especially whiplash. Several studies have suggested that a previously undetected Chiari I malformation can be symptomatically awakened by trauma sustained during a motor vehicle crash. While these studies determined that head or neck trauma is capable of “triggering” symptoms relating to Chiari I malformations, in a 2010 study Michael D. Freeman and a number of other experts set out to answer an even more intriguing question: could motor vehicle crash trauma actually be the sole cause of a Chiari I malformation? The answer is that it’s definitely possible.

Chiari Malformation Following an Auto Accident

Regardless of whether crash trauma triggers a pre-existing asymptomatic Chiari I malformation or actually causes it, research indicates that symptoms of Chiari I malformations are substantially more prevalent in whiplash-injured patients. The important takeaway is that if you suffer head or neck trauma in an accident, especially whiplash, you may develop symptoms resulting from a Chiari I malformation. In a whiplash mechanism accident, the head moves violently forward then backwards; this is known as an acceleration-deceleration mechanism injury. During such an episode the cerebellar tonsils can pass through the opening at the bottom of the skull (the foramen magnum) into the upper part of the neck. In a seminal study published in the Journal of Brain Surgery by Professor Michael Freeman and Dr. Ezriel Kornel, a correlation between acceleration/deceleration injuries and symptomatic Chiari was found after reviewing 1,200 cervical MRI films.
The study illustrates that a pre-existing congenital Chiari often becomes symptomatic following a motor vehicle collision. Disruption of the CSF results in symptoms including dizziness and disorientation. We have been retained on a number of Chiari malformation cases, including several referred by other law firms. Insurance carriers often rely on outdated science depicting such injuries as congenital and will assert that trauma played no role in the pathology. However, we focus on the symptoms, and on the absence of any symptomatology related to Chiari pre-dating an accident. In other cases we focus on an exacerbation or aggravation of a pre-existing injury, where it can be shown that the symptoms were dramatically altered as a result of a traumatic episode. In any event, Chiari cases are rarely if ever resolved pre-suit; a claimant is generally left with only one potential recourse, which is filing a lawsuit. In our experience, Chiari cases are heavily laden with expert testimony, and we often retain a radiologist, neurosurgeon and epidemiologist to illustrate the significance of this injury and the likelihood of our client presenting with such symptoms absent a traumatic event. Symptomatic Chiari is a very serious condition that often will not resolve without surgical intervention, which generally consists of a craniotomy. A craniotomy is an extremely invasive procedure with a number of associated risks. Due to the long term prognosis of individuals with symptomatic Chiari, we must account for the need for potential future surgical intervention.
Photography Tours

China holds great allure for many travelers, from its picturesque natural views and special landscapes to its cultural treasures. For photographers and shutterbugs, China offers an amazing array of subjects, such as the Great Wall, the mountains and rivers of Guilin, the wild west of Sichuan, giant pandas, ancient towns, mysterious Tibet, and the ethnic flavor of the southwest. Besides these regular and must-see destinations, China photography tours also take you into the urban crush to shoot a side of China which is rarely seen by most travelers. So take your camera equipment and come with us to capture the beautiful image of natural and ancient China.