Dataset columns: text (string, 237 to 516k chars), id (string, 47 chars), dump (1 class), url (string, 17 to 499 chars), date (string, 20 chars), file_path (370 classes), language (1 class), language_score (float64, 0.65 to 1), token_count (int64, 58 to 105k).
There are some other good answers which provide part of the picture, but I think there is a fundamental organising principle which has been missed. Konrad has touched on it in his answer. The reason trees, and most plants, tend to grow equally in all directions is that they have iteratively generated branching and radial symmetry which is controlled in a feedback loop of the growth promoting hormone auxin and auxin-sensitive auxin transporters. This is an elegant biological algorithm which explains all branching growth. The things Konrad identifies (phototropism, gravitropism, etc.) serve as orientation cues which help the plant determine which axes to grow along, but fundamentally the process is about auxin gradients. There are exceptions, as others have pointed out in their answers, and they usually result from severe imbalances in the orientation cues. I'll try to explain the growth process clearly (and it gives me an opportunity to try my hand at diagramming again ^_^)...

Auxin is a plant hormone (actually a class of hormones, but mostly when people say auxin, they mean indole-3-acetic acid) which promotes cell elongation and division. The basic principle which allows auxin to act in the organising way it does is that auxin is produced inside cells, and proteins which export auxin from a cell develop on the side of the cell which has the highest auxin concentration (see figure below). So auxin gets transported up the concentration gradient of auxin! Thus if you get an area of high auxin concentration developing somehow, more auxin is then transported towards that area. An area of high auxin concentration relative to the surrounding tissue is called an auxin maximum (plural 'maxima').

For most of the life of the plant, auxin is produced pretty much equally in most cells. However, at the very early stages of embryo development, it gets produced preferentially along the embryonic axis (see figure below, part 1). That creates a meristem - a group of cells where cell division is taking place - at the auxin maximum at each end of the embryo. Since this particular meristem is at the apex of the plant, it is called the apical meristem, and it is usually the strongest one in the plant. So by having a meristem at each end, the embryo then elongates as cell division is only taking place at those points.

This leads to part 2 of the image above, where the two meristems get so far apart that the auxin gradient is so weak as to no longer have its organising effect (area in the red square). When that happens, the auxin produced in cells in that area concentrates in a chaotic way for a short time until another center of transport is created. This happens, as the first one did, when a particular area of the tissue has a slightly higher concentration of auxin, and so auxin in the surrounding tissue is transported towards it. This leads to part 3 of the figure, in which two new meristems are created on the sides of the plant (called lateral meristems). Lateral meristems are where branches occur on plants.

If you then imagine this process continuing to iterate over and over, you will see that the branches, as they elongate, will develop meristems at the tips and along the sides. The main stem will also continue elongating, and develop more lateral stems. The root will begin to branch, and those branches will branch, etc. If you can understand how this elegant system works, you understand how plants grow, and why they grow in repeating units as opposed to in a body plan like animals.
It also explains why, if you cut off the tip of a stem, it promotes branching. By removing the apical meristem, you get rid of the auxin gradient and enable the creation of multiple smaller meristems which each develop into branches.

So far I've explained regular branching, but the same system causes the radial symmetry which makes trees (usually) grow in all directions equally... Imagine taking a cross section through a stem and looking down all the way through it (as depicted crudely above). Just as auxin gradients act to coordinate growth along the length of the plant, they also coordinate it radially, as the maxima will tend to space themselves out as far from one another as possible. That leads to branches growing in all directions equally (on average).

I welcome comments on this answer, as I think it's so important to understanding plant growth that I'd like to hone my answer to make it as good as possible.
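To make the feedback loop above concrete, here is a minimal sketch assuming a toy one-dimensional row of cells in which every cell produces a little auxin, loses some to turnover, and exports a fraction of what it holds to whichever neighbour currently holds more (the "transporters accumulate on the high-auxin side" rule). The cell count, rates, and step numbers are illustrative guesses, not measured biological values, and real PIN-transporter biology is deliberately abstracted away.

```python
import random

def simulate_auxin(n_cells=30, steps=400, production=0.05, decay=0.05, export=0.2, seed=1):
    """Toy 1-D sketch of up-the-gradient auxin transport.

    Each step, every cell makes a little auxin, loses a fixed fraction to
    turnover, and ships a fraction of what it holds to whichever neighbour
    currently holds more than it does.  All rates are illustrative guesses.
    """
    random.seed(seed)
    auxin = [1.0 + random.uniform(-0.01, 0.01) for _ in range(n_cells)]  # near-uniform start
    for _ in range(steps):
        flux = [0.0] * n_cells
        for i, a in enumerate(auxin):
            neighbours = [j for j in (i - 1, i + 1) if 0 <= j < n_cells]
            richest = max(neighbours, key=lambda j: auxin[j])
            if auxin[richest] > a:                    # export up the concentration gradient
                moved = export * a
                flux[i] -= moved
                flux[richest] += moved
        auxin = [a + f + production - decay * a for a, f in zip(auxin, flux)]
    peaks = [i for i in range(1, n_cells - 1)
             if auxin[i] > auxin[i - 1] and auxin[i] > auxin[i + 1]]
    return auxin, peaks

if __name__ == "__main__":
    levels, maxima = simulate_auxin()
    print("auxin maxima (candidate meristem positions):", maxima)
```

Run as written, the near-uniform starting field self-organises into a few discrete peaks, which is the point of the answer: no global blueprint is needed, only the local export-up-the-gradient rule.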
<urn:uuid:221a069c-9604-4e48-bd93-ba0406d86ad9>
CC-MAIN-2013-20
http://biology.stackexchange.com/questions/1869/how-do-trees-manage-to-grow-equally-in-all-directions/1887
2013-05-22T15:07:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.960646
954
Many people don’t know that flash isn’t really all that new. While it’s only gained popularity in the past decade, flash was invented more than 25 years ago to be used as a new form of memory. Over the years though, developers instead implemented it as storage because flash memory is persistent, like disk drives. The history of flash is so interesting, our team at Fusion-io wanted to share a bit of it with the world. So we created this whiteboard video to show how flash has changed the world we live in and is now powering the digital age. Along the way, we also reveal the secrets behind the Fusion ioMemory difference. So take a few minutes and learn how flash is powering both sides of the Internet, how you can unlock its true potential, and how it’s changing the world.
<urn:uuid:a404eeb2-f537-4139-919f-eeebb0d67679>
CC-MAIN-2013-20
http://blog.c24.co.uk/2012/08/28/the-history-of-flash-in-a-flash/
2013-05-22T15:21:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.94865
174
Kakadu NP lies about 75 km south of Darwin and covers about 22,000 sq km. The park encompasses six different landforms - Savanna Woodlands, Monsoon Forests, Southern Hills and Ridges, Stone Country, Tidal Flats and Coast - as well as the world-renowned Floodplains and Billabongs. Each is different from the others, and each provides habitat for a huge range of plants and animals, many found only in this area.
The land the park now covers was once home to a number of different Aboriginal clans, a number of whom have been wiped out by disease, the impact of being displaced to the settlements, and assimilation into other clans. The land is once again owned by the Aboriginal people, who manage the park in trust with the Australian NP.
Patchwork burning of the land has long been used to control the spread of unwanted plants and to clear the soil for new growth, encouraging the plants and wildlife that the Aboriginal people harvest for food to return each year. The European settlers now realise how important this technique is for controlling the spread of "hot fires" that not only clear the undergrowth but also burn large trees and sometimes local
As we are now in the DRY season, much of the land has been burned off in June and early July. The WET season monsoons bring torrential rains that cover almost all of the NP, closing off much of the area and most of the few roads that criss-cross the park. Flood indicators showing 2 m depths are not unusual, and the roads are usually at least a couple of metres above the surrounding land.
The Savanna Woodlands are notable for a wide variety of termite mounds of all shapes and sizes, some up to 20 ft high. The park also has a number of visitor centres, as well as Aboriginally run indigenous culture centres.
<urn:uuid:0bb4e1ea-1532-4daa-ac4a-308773dd5aa8>
CC-MAIN-2013-20
http://blog.mailasail.com/curious/337
2013-05-22T15:14:07Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.951549
405
Role in aerosol formation could aid modeling of Central Valley temps, air quality
NASA Earth Observatory: Aerosols—and clouds seeded by them—reflect about a quarter of the Sun's energy back to space.
For all we know about climate change and the Earth's atmosphere, it's amazing how much more there is to learn. Earlier this month, a team of researchers led by University of Colorado's Roy "Lee" Mauldin III announced the discovery of a brand new atmospheric compound tied to both climate change and human health. Above certain parts of the earth, they found, the new compound is at least as prevalent as OH, also called the hydroxyl radical, long thought to be the primary oxidant responsible for turning sulfur dioxide, an industrial pollutant, into sulfuric acid. The new compound, it turns out, can play an equally important role. Sulfuric acid contributes to acid rain and results in the formation of aerosols, airborne particulates associated with a variety of respiratory illnesses in humans and known to seed the formation of clouds.
The study takes an inventory of non-carbon greenhouse gases including methane, which emits from landfills and farms, and nitrous oxide, which primarily comes from soil management and combustion. Per molecule, the study notes that these gases have a stronger muscle for trapping heat compared with carbon dioxide, but they don't last as long in the atmosphere. "This study looks at what would happen if society decided to go after the short-lived greenhouse gases, as well as CO2," said Jim Butler, Director of Global Monitoring at NOAA and author of the study. Short-lived is a relative term in atmospheric science. Butler said it takes decades for methane to fully run its course in the atmosphere, during which its potential to trap heat is much greater, even though its share in the atmosphere is pennies compared to that of CO2. Carbon dioxide sticks around much longer, some of it for thousands of years, said Butler.
California's regional planning authorities need to find new ways to get people to leave their cars at home. Passenger vehicles are the single largest source of greenhouse gases in California, comprising one third of all the state's emissions. Senate Bill 375, passed in 2008, is designed to chip away at those emissions by curbing sprawl and encouraging infrastructure that gets Californians to drive less — or at least, not as far. This week the state Air Resources Board met a milestone (so to speak) in the implementation of the law by sending California's 18 regional planning organizations greenhouse gas reduction targets for cars and light trucks. Now it will be up to the regions to create their own strategies for linking land use and transportation planning in ways that lure Californians out of their cars.
KQED's Los Angeles Bureau Chief and frequent Climate Watch contributor Rob Schmitz is spending six weeks in Japan, as part of the Abe Fellowship for Journalists. In the weeks to come he'll file a series of special reports on Japan's extraordinary strides in energy efficiency–and what we might learn from them. Saturday night, on my way home from an interview, I witnessed one of the more interesting orchestrated movements of humanity the world has to offer. I shot this video when I was changing trains at Shibuya station, one of Tokyo's busiest. The intersection shows how well Japan engineers pedestrian movement–but how well will it engineer its residents' greenhouse gas emissions?
Hatoyama makes his climate change pledge.
Photos: Rob Schmitz
He told a packed house that Japan will aim to reduce its greenhouse gases by 25% from 1990 levels by 2020. "In my personal opinion, that's impossible," Hidetoshi Nakagami told me last week. Nakagami is President of the Jyukankyo Research Institute and holds a coveted seat on the advisory committee to Japan's powerful Ministry of Economy, Trade, and Industry, or METI. "Hatoyama's pledge is pure politics," he said. "It's not practical, it's not possible, and there's not enough time." Nakagami is not a pessimist. He played a large role in creating Japan's very successful Top Runner program, a 1997 policy that searches for the most efficient model of any given electrical appliance and then makes that model the industry standard, requiring other companies to adhere to it when making new models of the same appliance. The program was one of Japan's most ambitious energy efficiency measures, and Nakagami had to fight against Japan's largest companies in order to help craft the policy into law. While Nakagami would like to see a one-quarter reduction in greenhouse gases from 1990 levels in the next decade, he says it'll cost the average Japanese dearly. When former Prime Minister Taro Aso pledged to cut Japan's greenhouse gases by 15% of 2005 levels, Nakagami's institute estimated that the effort would cost each Japanese household, on average, 70,000 yen–a little over USD $700–a year. Even that, says Nakagami, would be a tall order in this economy. In the end, Hatoyama may not fill this order. His historic pledge, which, during his campaign, seemed to have no strings attached to it, now has an important caveat. At Monday's forum, he told the audience that Japan will embark on this journey as long as other major countries also set similarly ambitious targets.
Japan's future hanging in the balance.
After the forum concluded, I walked outside into Tokyo's rush hour: pedestrians everywhere, taxis speeding by me. I stopped at a Shinto shrine built among enormous glass skyscrapers. In front stood an Omikuji shrine, where believers tie a paper copy of their fortune, with hopes that it'll come true. Hundreds of paper fortunes rattled in the hot, summer wind. I wondered if one of them was Hatoyama's.
Reactions are coming in to the EPA's long-awaited finding today that carbon dioxide and five other greenhouse gases pose a threat to "the public health and welfare." One California environmental group actually used the word "Duh" in its official response. After two years of study, prodded by a Supreme Court decision, the federal agency finds that CO2, methane, oxides of nitrogen and two other industrial gases should be regulated as pollutants under the Clean Air Act. A sampling of reactions:
"'Duh' may not be a scientific term, but it applies here. Today, common sense prevailed over pressure from Big Oil and other big polluters to deny the obvious in order to maintain the status quo on energy. EPA has embraced the basic facts on global warming that scientists around the world have acknowledged for years."
"While the federal government was asleep at the wheel for years, we in California have known greenhouse gases are a threat to our health and to our environment – that's why we have taken such aggressive action to reduce harmful emissions and move toward a greener economy.
Two years after the Supreme Court declared greenhouse gas emissions a pollutant, it's promising to see the new administration in Washington showing signs that it will take an aggressive leadership role in fighting climate change that will lead to reduced emissions, thousands of new green jobs and a healthier future for our children and our planet."
"Today's action by the EPA is the beginning of a regulatory barrage that will destroy jobs, raise energy prices for consumers, and undermine America's global competitiveness," Senator Inhofe said. "It now appears EPA's regulatory reach will find its way into schools, hospitals, assisted living facilities, and just about any activity that meets minimum thresholds in the Clean Air Act. Rep. John Dingell was right: the endangerment finding will produce a 'glorious mess.'"
"This finding was expected, but long overdue because the previous administration respected neither the science nor the law. The consequence of this finding is that EPA will now begin the task of reducing these emissions through the permitting process provided by the Clean Air Act. One way or the other, the clear and present danger of endlessly dumping pollutants into the atmosphere must be confronted. We will either find a way to build a future for our children based on clean energy and sustainable jobs, or we will face a very unsentimental foe unarmed – a climate that makes life unsustainable. The choice is clear, and the new Administration is following the wisest path forward."
California moved to regulate carbon emissions three years ago, when state lawmakers passed the Global Warming Solutions Act of 2006, also known as AB 32. But many specific regulations required by that law have yet to take effect.
Much of the debate over addressing climate change hinges on the cost of proposed mitigation efforts. Some say we can't afford the extraordinary measures required to cut greenhouse gases, particularly in the current economic train wreck. What gets less attention is the cost of doing nothing. This has been a controversial idea since the Stern Review called attention to the issue in 2006. That report concluded that unless one percent of global GDP was diverted to mitigate the worst effects of climate change, the world could lose up to 5% of global GDP each year and the total damage could claim as much as 20%. A set of new reports out of the University of Oregon inserts fresh numbers into the debate. According to researchers, three western states are each likely to lose more than $3 billion a year in climate change-related costs by 2020, if nothing is done to reduce greenhouse gas emissions. By 2080, the projected annual costs range from $9-to-$18 billion for each state. The reports, which focus on Washington, Oregon, and New Mexico, assume a business-as-usual scenario where both carbon emissions and temperature continue to rise at rates similar to those seen in recent years. Under these conditions, these states (and California, according to the prevalent research) can expect more severe droughts and floods, less snowfall, more wildfires and habitat loss, and a higher incidence of climate-associated health problems and deaths. In New Mexico, the study's authors expect summer temperatures to climb 12.6 degrees above current averages by 2080, spiking air-conditioning costs, health-care complications, and the state's death rate. By 2020, annual climate-related health care costs in New Mexico alone are expected to top $1.3 billion.
California's temperatures, under business-as-usual scenarios, are widely expected to rise between six and ten degrees by the end of the century. Even in a relatively cool state like Washington, health care impacts would make up $421 million, or 32%, of total annual climate-related costs, under this projection. The study attributed the largest costs (more than $1 billion annually in each state) to inefficient consumption of energy, a projection that might not pan out, given the Obama Administration's focus on green technology and clean energy efforts. Other costs cited by the study include reduced salmon populations and food production, lost recreational opportunities (sell your snowboard now), and more intense and frequent wildfires and storms.
<urn:uuid:4cb11736-7f60-4b6f-8ddf-7085c3467287>
CC-MAIN-2013-20
http://blogs.kqed.org/climatewatch/tag/greenhouse-gases/
2013-05-22T15:22:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.947982
2,384
Title: A coon alphabet (J)
Author: Kemble, E.W. (Edward Windsor), 1861-1933
Publisher Location: United States -- New York -- New York
Publication Date: 1898
Image Production Process: Planographic prints--lithographs
Notes: Illustrated with uncolored lithographic prints.
An alphabet book with rhymes written in imitation of southern Black English of the 19th century and illustrations that portray blacks in negative stereotypical roles. At the end of each rhyme, the black person usually ends up being hurt somehow, either due to their insinuated stupidity, malice or laziness. "J is fo[r] Joseph, a wicked young lad, he fooled wid [with] his brudder [brother]--and made his Ma mad." The first illustration depicts a woman washing clothes in a tub. In front of her, a very young boy holds onto a rocket while another young boy lights the rocket's fuse with a cigar held in his mouth. The second illustration (not shown) depicts the woman sitting in the wash tub, soaking wet, and shaking the young boy. In the background, the very young boy is flying through the air, still holding onto the rocket.
Contextual Notes: Right after the Civil War, books were published that attempted to refute the "immorality" of slavery. While some of these books purported to document the kind and reasonable treatment of slaves in the South, others, such as this book, portrayed blacks as deserving of their ill-treatment. All of these books relied on racial stereotypes. Racial stereotypes persisted in the United States long after the Civil War and these children's books were also especially popular in England. At about this same time, silent movies were showing white actors in blackface and featuring minstrel shows.
Edward Windsor Kemble was a self-taught artist well-known for his cartoons of soldiers, Indians, and blacks in publications such as Harper's, Century, and Leslie's.
Subjects (LCSH): African Americans -- Southern States -- Caricatures and cartoons
Category: Discrimination and bigotry
Digital Collection: Children's Historical Literature Collection
Digital ID Number: CHL1122
Repository: University of Washington Libraries, Special Collections Division
Repository Collection: Children's Historical Literature Collection. NC1429.K4 C66 1898
Physical Description: leaves: illustrated; 27.5 x 20.5 cm.
Digital Reproduction Information: Photographed from original book in TIFF format using a Canon EOS Digital Rebel XTi/EOS 400D, resized and enhanced using Adobe Photoshop, and imported as JPEG2000 using Contentdm's software JPEG2000 Extension. 2009.
Exhibit Checklist: Exhibit checklist L.142
<urn:uuid:1a921dc2-34e6-46ba-8995-1baf9319924b>
CC-MAIN-2013-20
http://content.lib.washington.edu/cdm4/item_viewer.php?CISOROOT=/childrens&CISOPTR=788&CISOBOX=1&REC=19
2013-05-22T15:00:36Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.903103
600
What kind of rock is the moon made of?
Based on the lunar rock samples that have been studied, the Moon's rocks are volcanic in origin. The rocks are basalts, similar to the kind of volcanic rock found on Earth. The lunar basalts are rich in iron and magnesium, and they also contain glassy structures that are indicative of rapid cooling. However, unlike Earth basalts, the lunar samples contain no water and a lower percentage of volatiles (elements or compounds with low melting and boiling temperatures) relative to refractories (higher melting and boiling temperatures).
Related questions:
- Why is the Moon so bright?
- Why are there more maria on the near side of the Moon?
- How can we tell what the interiors of planets are like?
<urn:uuid:92a45303-93fe-49c5-8af3-a9ffb2f0c038>
CC-MAIN-2013-20
http://curious.astro.cornell.edu/question.php?number=47
2013-05-22T15:27:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.927152
346
HPV (human papillomavirus) is a family of viruses that causes conditions including genital warts and the cervical cell changes that can lead to cervical cancer. The interval between infection with HPV and changes to the cervix, also known as cervical dysplasia, can be a few months to many years. These changes are detectable by Pap and HPV testing. The interval to cervical cancer (if not treated appropriately) is usually 10-20 years. It is only when HPV infects the cervix for many years that it can cause cervical cancer. However, cervical cancer is nearly 100% preventable if you have regular Pap tests, as advised by your medical provider.
Any sexually active person, regardless of gender or sexual orientation, is at risk for HPV. HPV viruses that cause genital warts and HPV-related cervical changes are sexually transmitted—in fact, they are the most common sexually transmitted infection (STI) among college students. Most people contract HPV through sexual intercourse (anal or vaginal), but it can be passed through any sexual contact. Thus, it is possible for even those who have never had vaginal or anal sex to have HPV infections. As many as 75% of sexually active men and women will have an HPV infection during their lifetimes. In a study done here at the University of Washington, over one-third of young women who did not have evidence of HPV infection at the start of the study were infected after 24 months. An equal percentage of women who became sexually active for the first time during the study period were infected by 24 months.
Smoking can greatly increase your risk of abnormal cervical changes, known as cervical dysplasia, and cancer. If you have multiple sexual partners, this also increases your risk of HPV infection. Knowing a new partner less than 8 months before having sexual contact may also increase your chance of contracting HPV.
A vaccine is available that can protect you against some of the most common types of HPV. If you haven't yet been exposed to the virus, it can prevent you from ever getting infected. It can also prevent genital warts.
Yes. Within two years, 90% of those infected will have cleared the virus. However, a few will have HPV for much longer. The average duration of cervical infections is about 8 months.
No. Most often, both partners are infected by the same virus and develop immunity to that strain of the virus. Neither partner is in danger of reinfection by the same virus once their bodies have fought it off. However, all sexually active people are at risk for new infections with different subtypes of HPV.
If you have a cervix, a Pap test is the best way to check for the abnormal cellular changes that can lead to cancer. Talk to your healthcare provider about when to start screening and how often you will need to have Pap testing. Schedule an appointment with Hall Health for a Pap test. There is currently no test to determine whether biological males have HPV.
Our knowledge about HPV infections is growing rapidly. What we know changes constantly as new information is added to our body of knowledge. At the same time, there is a great deal of misinformation about HPV out there, especially on the web.
Below are sources of additional—and reliable—information on HPV infections: American Sexual Health Association (ASHA)'s page on HPV, which includes information for male partners of those diagnosed with HPV The Centers for Disease Control and Prevention's (CDC) page on all things HPV You can call the National STD Hotline for more information about HPV or other STDs at 1-800-227-8922, 24 hours a day, 7 days a week The American College of Obstetricians and Gynecologists' Frequently Asked Questions on HPV For additional information, call one of Hall Health's Consulting Nurses. Authored by: Charles Petty, MD, and Ingrid Helsel, RN
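As a rough consistency check on the clearance figures quoted above (about 90% of infections cleared within two years, with an average cervical infection lasting roughly 8 months), the short sketch below assumes a simple exponential-clearance model; the model choice is an illustrative assumption, not something stated by the sources listed here.

```python
import math

def fraction_cleared(months, mean_duration_months=8.0):
    """Fraction of infections cleared by `months`, assuming exponential clearance.

    The 8-month mean duration is the figure quoted in the text above; treating
    clearance as exponential is an assumption made only for this sketch.
    """
    return 1.0 - math.exp(-months / mean_duration_months)

if __name__ == "__main__":
    for m in (8, 12, 24):
        print(f"cleared by {m:2d} months: {fraction_cleared(m):.0%}")
    # Prints roughly 63%, 78% and 95% -- the 24-month value is in the same
    # ballpark as the "90% within two years" figure quoted above.
```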
<urn:uuid:3c6fda36-e670-4cc7-8ffa-849680f76f06>
CC-MAIN-2013-20
http://depts.washington.edu/hhpccweb/content/clinics/family-health/all-about-hpv-human-papillomavirus
2013-05-22T15:35:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.969676
788
Compiled and edited by Charles J. Kappler. Washington : Government Printing Office, 1904.
Marginal headings: Lands ceded to the United States; Land ceded to be surveyed, etc.; Payment of debts due by Indians; Breaking up ground, etc.; Horses and presents; $200,000 to be invested for Indians; Blacksmiths' and gunsmith's establishments to be removed, etc.; Removal of Indians; United States to pay expenses of making treaty; Treaty binding when ratified.
Articles of a treaty made at the city of Washington, between Carey A. Harris, Commissioner of Indian Affairs, thereto authorized by the President of the United States, and the confederated tribes of Sacs and Foxes, by their chiefs and delegates.
The Sacs and Foxes make to the United States the following cessions:
First. Of a tract of country containing 1,250,000 (one million two hundred and fifty thousand) acres lying west and adjoining the tract conveyed by them to the United States in the treaty of September 21st, 1832. It is understood that the points of termination for the present cession shall be the northern and southern points of said tract as fixed by the survey made under the authority of the United States, and that a line shall be drawn between them, so as to intersect a line extended westwardly from the angle of said tract nearly opposite to Rock Island as laid down in the above survey, so far as may be necessary to include the number of acres hereby ceded, which last mentioned line it is estimated will be about twenty-five miles.
Second. Of all right or interest in the land ceded by said confederated tribes on the 15th of July 1830, which might be claimed by them, under the phraseology of the first article of said treaty.
In consideration of the cessions contained in the preceding article, the United States agree to the following stipulations on their part:
First. To cause the land ceded to be surveyed at the expense of the United States, and permanent and prominent land marks established, in the presence of a deputation of the chiefs of said confederated tribes.
Second. To pay the debts of the confederated tribes, which may be ascertained to be justly due, and which may be admitted by the Indians, to the amount of one hundred thousand dollars ($100,000) provided, that if all their just debts amount to more than this sum, then their creditors are to be paid pro rata upon their giving receipts in full; and if said debts fall short of said sum, then the remainder to be paid to the Indians. And provided also, That no claim for depredations shall be paid out of said sum.
Third. To deliver to them goods, suited to their wants, at cost, to the amount of twenty-eight thousand five hundred dollars ($28,500.)
Fourth. To expend, in the erection of two grist mills, and the support of two millers for five years, ten thousand dollars ($10,000.)
Fifth. To expend in breaking up and fencing in ground on the land retained by said confederated tribes, and for other beneficial objects, twenty-four thousand dollars ($24,000.)
Sixth. To expend in procuring the services of the necessary number of laborers, and for other objects connected with aiding them in agriculture, two thousand dollars ($2,000) a year, for five years.
Seventh. For the purchase of horses and presents, to be delivered to the chiefs and delegates on their arrival at St. Louis, four thousand five hundred dollars ($4,500,) one thousand dollars ($1,000) of which is in full satisfaction of any claim said tribe may have on account of the stipulation for blacksmiths in the treaty of 1832.
Eighth. To invest the sum of two hundred thousand dollars ($200,000) in safe State stocks, and to guarantee to the Indians, an annual income of not less than five per cent. the said interest to be paid to them each year, in the manner annuities are paid, at such time and place, and in money or goods as the tribe may direct. Provided, That it may be competent for the President to direct that a portion of the same may, with the consent of the Indians, be applied to education, or other purposes calculated to improve them. The two blacksmith's establishments, and the gunsmith's establishment, to which the Sacs and Foxes are entitled under treaties prior to this, shall be removed to, and be supported in the country retained by them, and all other stipulations in former treaties, inconsistent with this, or with their residence, and the transaction of their business on their retained land are hereby declared void. The Sacs and Foxes agree to remove from the tract ceded, with the exception of Keokuck's village, possession of which may be retained for two years, within eight months from the ratification of this treaty. The expenses of this negotiation and of the chiefs and delegates signing this treaty to this city, and to their homes, to be paid by the United States. This treaty to be binding upon the contracting parties when the same shall be ratified by the United States. In witness whereof the said Carey A. Harris, and the undersigned chiefs and delegates of the said tribes, have hereunto set their hands at the city of Washington, this 21st October A. D. 1837. C. A. Harris. Sacs or Saukes: Kee-o-kuck, The Watchful Fox, principal chief of the confederated tribes, Wau-cai-chai, Crooked Sturgeon, a chief, A-shee-au-kon, Sun Fish, a chief, Pa-nau-se, Shedding Elk, Wau-wau-to-sa, Great Walker, Pa-sha-ka-se, The Deer, Appan-oze-o-ke-mar, The Hereditary Chief, (or He who was a Chief when a Child,) Waa-co-me, Clear Water, a chief, Kar-ka-no-we-nar, The Long-horned Elk, Nar-nar-he-keit, the Self-made Man, As-ke-puck-a-wau, The Green Track, Wa-pella, the Prince, a principal chief, Qua-qua-naa-pe-pua, the Rolling Eyes, a chief, Paa-ka-kar, the Striker, Waa-pa-shar-kon, the White Skin, Wa-pe-mauk, White Lyon, Nar-nar-wau-ke-hait, the Repenter, (or the Sorrowful,) Po-we-sheek, Shedding Bear, a (principal chief,) Con-no-ma-co, Long Nose Fox, a chief,(wounded,) Waa-co-shaa-shee, Red Nose Fox, a principal chief Fox tribe, (wounded,) An-non-e-wit, The Brave Man, Kau-kau-kee, The Crow, Kish-kee-kosh, The Man with one leg off. Signed in presence of— Chauncey Bush, Secretary. Joseph M. Street, U. S. Indian Agent. Joshua Pilcher, Indian Agent. J. F. A. Sanford. S. C. Stambaugh. P. G. Hambaugh. Antoine Le Claire, U. S. Indian Interpreter. (To the Indian names are subjoined marks.)
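Purely as an aid to the reader, the sketch below tallies the fixed dollar amounts named in the stipulations above and the income guaranteed on the invested fund; all figures come from the treaty text, while reading the "not less than five per cent" guarantee as simple annual interest is an interpretive assumption.

```python
# Amounts named in the treaty stipulations above, in dollars of 1837.
fixed_stipulations = {
    "payment of debts justly due": 100_000,
    "goods delivered at cost": 28_500,
    "grist mills and millers (five years)": 10_000,
    "breaking up and fencing ground": 24_000,
    "laborers and agriculture ($2,000 a year for five years)": 2_000 * 5,
    "horses and presents at St. Louis": 4_500,
}

invested_fund = 200_000
guaranteed_rate = 0.05   # "an annual income of not less than five per cent"

total_fixed = sum(fixed_stipulations.values())
annual_income = invested_fund * guaranteed_rate

print(f"total of the fixed stipulations: ${total_fixed:,}")       # $177,000
print(f"invested fund:                   ${invested_fund:,}")
print(f"guaranteed annual income:        ${annual_income:,.0f}")  # $10,000 per year
```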
<urn:uuid:990d01c4-ec9d-4546-b1af-5455f453026a>
CC-MAIN-2013-20
http://digital.library.okstate.edu/kappler/Vol2/treaties/sau0495.htm
2013-05-22T15:00:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.938135
1,663
19th Annual Sheldon Statewide Exhibition, Sheldon Memorial Art Gallery, University of Nebraska-Lincoln, 2005-2006.
Photographers create images that look like paintings, and painters make paintings that look like photographs. Who is imitating whom and why? Long before photography was invented, painters who could depict realistic imagery were held in high esteem. When photography was first invented, its ability to capture reality was also greatly admired. Over time, however, its status declined and eventually it was viewed as merely a mechanical tool with little artistic value. Henry Peach Robinson (1830-1901) popularized the emulation of painting and encouraged artificiality in photography. It was believed that if a photograph were made to look like a painting it would be more acceptable as a fine art form. This approach, called pictorialism, increased the popularity of photography, and Robinson's followers continued to create sentiment and mood in their work. Alfred Stieglitz (1864-1946) further elevated the status of photography when he established Camera Notes and the Photo-Secession group. Slowly, museums began collecting and exhibiting photography, as did collectors. Some photographs that were being exhibited were mistaken for paintings, which was considered a compliment to those who saw themselves as pictorialists, but for those who had adopted a pure method of photography it was an insult.
With the invention of photography, painters were freed from the need to capture reality, resulting in the exploration of abstraction in art. The Abstract Expressionist movement evolved and flourished. The tables began to turn in the 1960s when the Pop Art movement challenged its predecessor by creating recognizable images that made a more direct comment about American popular culture. Pop artists such as Andy Warhol and Ed Ruscha began to incorporate photography into their art. The Photo-Realist movement directly followed in the footsteps of Pop Art and took its reliance on and reference to the photograph one step further. Emerging in the 1970s, these artists' main objective is to create images of everyday objects that are "photo-real" in their appearance.
This exhibition explores why photographers create art that takes on the qualities of painting and why painters go through the painstaking process of creating a painting that looks more like photography than the photograph itself.
<urn:uuid:b631fd33-d30f-495b-8603-bfcbd734a449>
CC-MAIN-2013-20
http://digitalcommons.unl.edu/sheldonpubs/84/
2013-05-22T15:36:15Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.984057
456
In the U.S., most human cases of tick-borne Lyme disease occur in the Northeast—with a smaller cluster in the Midwest—even though the bacteria that cause it are equally common in ticks in both regions. A new study by researchers at the University of Georgia Odum School of Ecology, published in the August issue of the journal Epidemics, combines ecology and immunology to offer an explanation for this puzzling disparity. The researchers, led by James Haven, a postdoctoral associate in the Odum School, used information about how Lyme disease behaves and ecological data about ticks to create a model that sheds light on this well-documented but poorly understood pattern. They found the timing of the tick lifecycle—which appears to be driven by the local seasonal variability in temperature—plays an important role in determining which of the two types of Lyme disease thrive in a given area and how severe the disease outbreaks tend to be. “In the Northeast, the difference between summer and winter temperatures is not as extreme as it is in the Midwest,” Haven said. In the Midwest, tick larvae and nymphs tend to emerge at roughly the same time, while in the Northeast larvae emerge after nymphs—in some cases more than a month later. “Where the variability is big, there’s a lot of overlap between when the tick stages are active,” he said, citing work done by the Yale School of Public Health’s Anne Gatewood and her colleagues. The other aspect at play is the differences in the two types of Lyme disease strains, said study co-author Andrew Park, assistant professor in the Odum School and College of Veterinary Medicine’s department of infectious diseases. One group of Lyme disease strains is considered persistent and is most commonly found in ticks in the Northeast. When it’s first contracted, it is relatively less infectious. It gains its advantage by remaining in the host’s system for a long time and, as a consequence, has a greater opportunity to spread within the body, leading to classic Lyme disease symptoms. The other type, more prevalent in Midwestern ticks, is just the opposite. It is highly infectious at first, so much so that it alerts the host’s immune system, which attacks and rapidly clears it from the host. This type is less likely to cause severe Lyme disease. Park said the team undertook the study because they wanted to understand how rapidly cleared types of Lyme disease survive and thrive in different areas across North America. By combining information about disease dynamics and ecological data, they were able to do just that. “If we hadn’t taken the time to combine the tick ecology with laboratory-based disease duration data, we wouldn’t have understood what, to many people, is strictly a medical problem,” he said. To understand why the two regions favor different strains of the disease, the researchers delved into the ecology of ticks. The tick lifecycle consists of three stages—larva, nymph and adult. Tick larvae hatch in the late spring and feed once, usually on a small mammal such as a mouse or bird. They then remain dormant for about a year until they emerge as nymphs, at which point they seek their next single blood meal. They become adults several months later, in the fall. If a larval tick’s host happens to be infected with Lyme disease, the tick often becomes infected too. Then the nymph transmits the bacteria to its second host, often a mammal such as a rodent, dog or—increasingly—human. 
The researchers incorporated data about climate, the timing of tick lifecycles and disease behavior—how its infectivity changes over time—into one model to attempt to explain patterns of Lyme disease across regions. The result, Park said, was an explanation of the mechanism behind some of the patterns observed in nature. When the larvae and nymphs appear together, as in the Midwest, the rapidly cleared type of Lyme does well. It starts out highly infectious, so if a larval tick feeds on an animal that's just been infected, before the host's immune system has cleared the disease, it will become infected too. When the nymphs emerge earlier than the larvae, the rapidly cleared type of Lyme disease is at a disadvantage. If a nymph transmits the rapidly cleared bacteria to an animal host, it will have been flushed out of the host's system by the time a larval tick appears on the scene to feed. On the other hand, if a nymph infects an animal with the persistent type, a larval tick that feeds on that host will become infected weeks—or even months—later. The study's other coauthor was Krisztian Magori, a former postdoctoral associate in the Odum School, now at the School of Forestry and Wildlife Sciences, Auburn University. Research funding was provided by the James S. McDonnell Foundation.
Reference: "Ecological and inhost factors promoting distinct parasite life-history strategies in Lyme borreliosis," Epidemics, Volume 4, Issue 3, August 2012, Pages 152-157.
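To make the timing argument concrete, here is a deliberately simplified sketch: it checks whether larvae can still feed on a host while a nymph-acquired infection remains infectious. The activity dates and infectious durations are illustrative placeholders, not the calibrated values from the Haven model.

```python
def larvae_can_acquire(nymph_peak_day, larva_peak_day, infectious_days):
    """True if larvae feed while a nymph-infected host is still infectious.

    Hosts are assumed to become infected around the nymphal activity peak and
    to remain infectious for `infectious_days` afterwards -- short for the
    rapidly cleared strain, effectively unlimited for the persistent strain.
    """
    return larva_peak_day <= nymph_peak_day + infectious_days

# Rough day-of-year activity peaks, made up for illustration only.
regions = {
    "Midwest (stages overlap)": {"nymphs": 160, "larvae": 170},
    "Northeast (larvae lag)": {"nymphs": 160, "larvae": 220},
}
# Days a host stays infectious, also illustrative.
strains = {"rapidly cleared": 21, "persistent": 10_000}

for region, timing in regions.items():
    for strain, duration in strains.items():
        ok = larvae_can_acquire(timing["nymphs"], timing["larvae"], duration)
        print(f"{region:26s} | {strain:15s} | larvae pick it up: {ok}")
```

Under these toy numbers the rapidly cleared strain only completes the nymph-to-larva loop where the stages overlap, which is the pattern the study reports.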
<urn:uuid:73b2f846-4cde-4624-bf6d-e822ac9fa9d3>
CC-MAIN-2013-20
http://ecology.uga.edu/newsItem.php?New_model_combines_ecological_immunological_data_to_explain_puzzling_patterns_of_Lyme_disease-201/
2013-05-22T15:21:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.961599
1,062
Sally Ride Science Camps for Girls (Day Camp) At Caltech and MIT, we offer special day-only sessions of the Sally Ride Science Camp. These programs feature a convenient 5-day schedule, with classes from approximately 9am to 5pm each day. Please note that while day camp students attend a truncated version of the program, they do learn the full curriculum offered at the overnight camp. Beginning and intermediate Majors are offered. (Note: advanced 8th graders should consider the overnight programs rather than the day camps.) 4th - 6th Grade Campers - Intro to Marine Science: Students will experiment with the physical and chemical properties of water, explore major marine ecosystems and phyla, and dissect a fish! - Intro to Engineering: Students will learn the basic principles of physics and design while building bridges, towers, hovercrafts and automobiles that must survive performance trials and unexpected obstacles. - Astronomy: Students will simulate the big bang theory, study the surface of the sun and measure its diameter, explore light diffraction and build their own water rockets! - Marine Biology: Students will investigate animal adaptations in a creative creature project, dissect a fish, learn how temperature and salinity create ocean currents and create an “oil spill” to see how human actions impact the planet. Enrichment Activities outside of class give girls an opportunity for informal science learning, as well as leadership and problem-solving training, through workshops, experiments and recreational activities. Students will also attend a mid-program excursion to a local science venue, and meet many new friends.
<urn:uuid:704898e3-d1b6-494b-9d44-6d468e51873b>
CC-MAIN-2013-20
http://educationunlimited.com/camp/32/sally-ride-science-camp.html
2013-05-22T15:16:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.919833
327
A Fresnel imager is a proposed ultra-lightweight design for a space telescope that uses a Fresnel array as primary optics instead of a typical lens. It focuses light with a thin opaque foil sheet punched with specially shaped holes, concentrating the light at a focal point through diffraction. Such patterned sheets, called Fresnel zone plates, have long been used for focusing laser beams, but have so far not been used for astronomy. No optical material is involved in the focusing process as in traditional telescopes. Rather, the light collected by the Fresnel array is concentrated on smaller classical optics (e.g. 1/20th of the array size), to form a final image. The long focal lengths of the Fresnel imager (a few kilometers) require operation by two-vessel formation flying in space at the L2 Sun-Earth Lagrangian point. In this two-spacecraft formation-flying instrument, one spacecraft holds the focussing element: the Fresnel interferometric array; the other spacecraft holds the field optics, focal instrumentation, and detectors.
- A Fresnel imager with a sheet of a given size has vision just as sharp as a traditional telescope with a mirror of the same size, though it collects about 10% of the light.
- The use of vacuum for the individual subapertures eliminates phase defects and spectral limitations, which would result from the use of a transparent or reflective material.
- It can observe in the ultraviolet and infrared, in addition to visible light.
- It achieves images of high contrast, enabling observation of a very faint object in the close vicinity of a bright one.
- Since it's constructed using foil instead of mirrors, it is expected to be more lightweight, and therefore less expensive to launch, than a traditional telescope.
- A 30-metre Fresnel imager would be powerful enough to see Earth-sized planets within 30 light years of Earth, and measure the planets' light spectrum to look for signs of life, such as atmospheric oxygen. The Fresnel imager could also measure the properties of very young galaxies in the distant universe and take detailed images of objects in our own solar system.
The concept has been successfully tested in the visible, and awaits testing in the UV. An international interest group is being formed, with specialists of the different science cases. A proposal for a 2025-2030 mission has been submitted to the ESA Cosmic Vision call.
In 2008 Laurent Koechlin of the Observatoire Midi-Pyrénées in Toulouse, France, and his team planned to construct a small ground-based Fresnel imager telescope by attaching a 20-centimetre patterned sheet to a telescope mount. Koechlin and his team completed the ground-based prototype in 2012. It uses a piece of copper foil 20 cm square with 696 concentric rings as the zone plate. Its focal length is 18 metres. They were able to resolve the moons of Mars from the parent planet with it; a rough cross-check of these numbers appears after the references below.
References
- L. Koechlin, D. Serre, and P. Duchon. "High resolution imaging with Fresnel interferometric arrays: suitability for exoplanet detection". Laboratoire d'Astrophysique de Toulouse-Tarbes. p. 12 Chapter 9, Paragraph 1. Retrieved 8 September 2009.
- L. Koechlin, D. Serre, and P. Duchon. "High resolution imaging with Fresnel interferometric arrays: suitability for exoplanet detection". Laboratoire d'Astrophysique de Toulouse-Tarbes. p. 1 Chapter 1, Paragraph 2. Retrieved 8 September 2009. "The focal length of such a Fresnel array can vary from 200 m to 20 km, depending on the array type and wavelength used."
- David Shiga. "Telescope could focus light without a mirror or lens". NewScientist.com.
- Laurent Koechlin. "The UV side of galaxy evolution with FRESNEL imagers". Laboratoire d'Astrophysique de Toulouse-Tarbes. Université de Toulouse. Retrieved 8 September 2009.
- "The Fresnel interferometric imager". harvard.edu.
- Laurent Koechlin, Denis Serre, Paul Deba, Truswin Raksasataya, Christelle Peillon. "The Fresnel Interferometric Imager, Proposal to ESA Cosmic Vision 2007". pp. 2–3. Retrieved 9 September 2009.
- "Twinkle, twinkle, little planet", The Economist, Jun 9th 2012. Accessed June 2012.
Further reading
- http://www.ast.obs-mip.fr/users/lkoechli/w3/publisenligne/PropalFresnel-CosmicVision_20070706.pdf The Fresnel Interferometric Imager, Proposal to ESA Cosmic Vision 2007
- http://www.ast.obs-mip.fr/users/lkoechli/w3/FresnelArraysPosterA4V3.pdf Fresnel interferometric Arrays as Imaging interferometers, L.Koechlin, D.Serre, P.Deba, D.Massonnet
- http://www.ast.obs-mip.fr/users/lkoechli/w3/publisenligne/aa2880-05.pdf High resolution imaging with Fresnel interferometric arrays: suitability for exoplanet detection, L. Koechlin, D. Serre, and P. Duchon
- http://www.ast.obs-mip.fr/users/lkoechli/w3/publisenligne/papierFresnelV1.pdf Imageur de Fresnel pour observations à haute Résolution Angulaire et haute dynamique, L.Koechlin, D.Serre, P.Deba
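The sketch below is the cross-check promised above: it plugs the prototype's quoted figures (a 20 cm array with 696 rings and an 18 m focal length) into the standard circular zone-plate relation f = r_N^2 / (N * lambda). Because Koechlin's arrays are orthogonal rather than circular, and the article does not state the design wavelength, the wavelengths used here are assumptions and the result is only an order-of-magnitude illustration.

```python
def zone_plate_focal_length(outer_radius_m, n_zones, wavelength_m):
    """Focal length of an ideal circular Fresnel zone plate: f = r_N**2 / (N * lambda).

    The prototype described above is a square (orthogonal) array, so this
    circular-plate formula only gives an order-of-magnitude estimate.
    """
    return outer_radius_m ** 2 / (n_zones * wavelength_m)

radius_m = 0.10   # half-width of the 20 cm prototype foil
n_rings = 696     # ring count quoted in the article

for wavelength_nm in (500, 650, 800):                  # assumed observing wavelengths
    f = zone_plate_focal_length(radius_m, n_rings, wavelength_nm * 1e-9)
    print(f"lambda = {wavelength_nm} nm  ->  f ~ {f:.1f} m")
# Near 800 nm the estimate lands close to the 18 m focal length quoted above.
```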
<urn:uuid:97b234cf-eb4c-4f67-bbaa-e6c470a01d57>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/Fresnel_Imager
2013-05-22T15:08:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.786129
1,318
In Danish folklore, a helhest (Danish "Hel horse") is a three-legged horse associated with Hel. Various Danish phrases are recorded that refer to the horse. The Helhest is associated with death and illness, and it is mentioned in folklore as having been spotted in various locations in Denmark. The horse figures into a number of Danish phrases recorded as recently as the 19th century, such as "han går som en helhest" ("he walks like a hel-horse") for a male who "blunders in noisily". The helhest is sometimes described as going "around the churchyard on his three legs, he fetches Death", and from Schleswig, a phrase is recorded that, in time of plague, "die (corrected by Grimm from der) Hel rides about on a three-legged horse, destroying men". 19th century scholar Benjamin Thorpe connects the Danish phrase "he gave death a pack of oats" when an individual survives a near-fatal disease to notions of the Helhest, considering the oats either an offering or a bribe. According to folklore, the Aarhus Cathedral yard at times features the Hel-horse. A tale recorded in the 19th century details that, looking through his window at the cathedral one evening, a man yelled "What horse is outside?" A man sitting beside him said "It is perhaps the Hel-horse." "Then I will see it!" exclaimed the man, and upon looking out the window he grew deathly pale, but would not detail afterward what he had seen. Soon thereafter he grew sick and died. At the Roskilde Cathedral, people in former times would spit on a narrow stone where a Helhest was said to be buried. Legend dictates that "in every churchyard in former days, before any human body was buried in it, a living horse was interred. This horse re-appears and is known by the name of 'Hel-horse.'" 19th century scholar Jacob Grimm theorizes that, prior to Christianization, the helhest was originally the steed of the goddess Hel.
See also
- Sleipnir, the eight-legged steed owned by the god Odin, which is ridden to Hel in Norse mythology
- Valravn, a raven of the slain recorded in Danish folklore
- Horse sacrifice, a common ritual among Indo-European peoples
References
- Grimm (1883:844).
- Thorpe (1851:209).
- Vicary (1884:110).
- Grimm, Jacob (James Steven Stallybrass Trans.) (1883). Teutonic Mythology: Translated from the Fourth Edition with Notes and Appendix by James Stallybrass. Volume II. London: George Bell and Sons.
- Thorpe, Benjamin (1851). Northern Mythology, Comprising the Principal Traditions and Superstitions of Scandinavia, North Germany, and the Netherlands: Compiled from Original and Other Sources. In three Volumes. Scandinavian Popular Traditions and Superstitions, Volume 2. Lumley.
- Vicary, J. F. (1884). A Danish Parsonage. London: Kegan Paul, Trench & Co.
<urn:uuid:ee65fa17-4114-4839-a3fd-25c6a3d2d21d>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/Helhest
2013-05-22T15:30:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.953581
660
OPQRST is a mnemonic used by persons performing first aid, or medical providers, in order to facilitate taking a patient's symptoms and history in the event of an acute illness. It is specifically adapted to elicit symptoms of a possible heart attack. Each letter stands for an important line of questioning for the patient assessment. This is usually taken along with vital signs and the SAMPLE history and is typically recorded by the person delivering the aid, such as in the "Subjective" portion of a SOAP note, for later reference. "PQRST" (omitting "O") is sometimes used instead. The term "OPQRST-AAA" adds "aggravating/alleviating factors", "associated symptoms", and "attributions/adaptations". The parts of the mnemonic are:
- Onset of the event - What the patient was doing when it started (active, inactive, stressed), whether the patient believes that activity prompted the pain, and whether the onset was sudden, gradual or part of an ongoing chronic problem.
- Provocation or Palliation - Whether any movement, pressure (such as palpation) or other external factor makes the problem better or worse. This can also include whether the symptoms are relieved by rest.
- Quality of the pain - This is the patient's description of the pain. Questions can be open-ended ("Can you describe it for me?") or leading. Ideally, this will elicit descriptions of the patient's pain: whether it is sharp, dull, crushing, burning, tearing, or some other feeling, along with the pattern, such as intermittent, constant, or throbbing.
- Region and Radiation - Where the pain is on the body and whether it radiates (extends) or moves to any other area. This can give indications for conditions such as a myocardial infarction, whose pain can radiate to the jaw and arms. Other referred pains can provide clues to underlying medical causes.
- Severity - The pain score (usually on a scale of 0 to 10). Zero is no pain and ten is the worst possible pain. This can be comparative (such as "... compared to the worst pain you have ever experienced") or imaginative ("... compared to having your arm ripped off by a bear"). If the pain is compared to a prior event, the nature of that event may be a follow-up question. The clinician must decide whether a score given is realistic within their experience - for instance, a pain score of 10 for a stubbed toe is likely to be exaggerated. This may also be assessed for pain now, compared to pain at time of onset, or pain on movement. There are alternative assessment methods for pain, which can be used where a patient is unable to vocalise a score. One such method is the Wong-Baker faces pain scale.
- Time (history) - How long the condition has been going on and how it has changed since onset (better, worse, different symptoms), whether it has ever happened before, whether and how it may have changed since onset, and when the pain stopped if it is no longer currently being felt.
References
- Pollak, Andrew N.; Benjamin Gulli, Les Chatelain, Chris Stratford (2005). Emergency Care and Transportation of the Sick and Injured, 9th Ed. Sudbury, MA: Jones and Bartlett. pp. 148–149. ISBN 0-7637-4738-6.
- Thomas SA (2003). "Spinal stenosis: history and physical examination". Phys Med Rehabil Clin N Am 14 (1): 29–39. PMID 12622480.
- Richard Lapierre (2005). Kaplan EMT-Basic Exam (Kaplan Emt-Basic Exam). Kaplan. p. 62. ISBN 0-7432-6417-7.
- Montgomery J, Mitty E, Flores S (2008). "Resident condition change: should I call 911?". Geriatr Nurs 29 (1): 15–26. doi:10.1016/j.gerinurse.2007.11.009.
<urn:uuid:a87832a9-50f4-4441-8b4d-2d94885a4917>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/OPQRST
2013-05-22T15:28:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.868367
999
A shell is a payload-carrying projectile which, as opposed to shot, contains an explosive or other filling, though modern usage sometimes includes large solid projectiles properly termed shot (AP, APCR, APCNR, APDS, APFSDS and proof shot). Solid shot may contain a pyrotechnic compound if a tracer or spotting charge is used. Originally it was called a "bombshell", but "shell" has come to be unambiguous in a military context. "Bombshell" is still used figuratively to refer to a shockingly unexpected happening or revelation. All explosive- and incendiary-filled projectiles, particularly for mortars, were originally called grenades, a word derived from the pomegranate, whose seeds are similar to grains of powder. Words cognate with grenade are still used for an artillery or mortar projectile in some European languages.

Solid cannonballs (“shot”) did not need a fuse, but hollow balls (“shells”) filled with something, such as gunpowder to fragment the ball, needed a fuse, either impact (percussion) or time. Percussion fuses with a spherical projectile presented a challenge because there was no way of ensuring that the impact mechanism hit the target. Therefore shells needed a time fuse that was ignited before or during firing and burnt until the shell reached its target.

Early reports of shells include Venetian use at Jadra in 1376 and shells with fuses at the 1421 siege of St Boniface in Corsica. These were two hollowed hemispheres of stone or bronze held together by an iron hoop. Written evidence for early explosive shells in China appears in the early Ming Dynasty (1368–1644) Chinese military manual Huolongjing, compiled by Jiao Yu (fl. 14th to early 15th century) and Liu Ji (1311–1375) sometime before the latter's death, with a preface added by Jiao in 1412. As described in their book, these hollow, gunpowder-packed shells were made of cast iron. In Korea, Yi Jang-son made the Bigyukjincheonroi during the reign of Seonjo of Joseon. The Bigyukjincheonroi was a time shell consisting of a wooden tube wound with a time fuse made of thread, iron scrap and a cap; the delay could be set by the length of the thread. It was used in the Japanese invasions of Korea. Such shells were usually fired from the family of wan-gu (Hangul: 완구; Hanja: 碗口; literally "bowl-mouth") mortars.

An early problem was that until 1672 there was no means of measuring the time precisely enough - clockwork fuses did not yet exist. The burning time of the powder fuse was subject to considerable trial and error. Early powder-burning fuses had to be loaded fuse down, to be ignited by the firing, or a portfire was put down the barrel to light the fuse. Other shells were wrapped in bitumen cloth, which would ignite during the firing and in turn ignite a powder fuse. However, by the 18th Century it was discovered that the fuse towards the muzzle could be lit by the flash through the windage between shell and barrel.

Nevertheless, shells came into regular use in the 16th Century, for example a 1543 English mortar shell filled with 'wildfire'. About 1700, shells began to be employed for horizontal fire from howitzers with a small propelling charge, and in 1779 experiments demonstrated that they could be used from guns with heavier charges. They became usual with field artillery early in the 19th Century. By this time shells were usually cast iron, but bronze, lead, brass and even glass were tried. See the article on Artillery fuse for more information.
Cast-iron spherical common shells (so named because they were used against "common", i.e. usual, targets) were in use up to 1871. Typically the thickness of the metal body was about 1/6 of their diameter, and they were about two-thirds the weight of solid shot of the same calibre. To ensure that shells were loaded with their fuses towards the muzzle, they were attached to wooden bottoms called 'sabots'. In 1819 a committee of British artillery officers recognised that sabots were essential stores, and in 1830 Britain standardised sabot thickness as half an inch. The sabot was also intended to reduce jamming during loading and the rebounding of the shell as it traveled along the bore on discharge. Mortar shells were not fitted with sabots.

Rifling was invented by Jaspard Zoller, a Viennese gun maker, at the end of the 15th Century, and it was realised that twisted rifling to spin an elongated projectile would greatly improve its accuracy. This was known to artillerists, but its application to artillery was beyond the available technology until around the mid 19th Century. Notable English inventors included Armstrong, Whitworth and Lancaster, and the latter's rifled guns were used in the Crimean War. Armstrong's rifled breech-loading cannon was a key innovation and was adopted for British service in 1859. Also in the 1850s, rifled guns were developed by Major Cavelli in Italy and by Baron Wahrendorff and Krupp in Germany, and the Wiard gun appeared in the United States.

However, rifled barrels required some means of engaging the shell with the rifling. Lead-coated shells were used with Armstrong guns, but were not satisfactory, so studded projectiles were adopted. However, these did not seal the gap between shell and barrel. Wads at the shell base were tried without success. In 1878 the British adopted a copper 'gas-check' at the base of their studded projectiles, in 1879 they tried a rotating gas-check to replace the studs, and this led to the 1881 automatic gas-check. This was soon followed by the Vavaseur copper driving band as part of the projectile. The driving band rotated the projectile, centred it in the bore and prevented gas escaping forwards. A driving band has to be soft but tough enough to prevent stripping by rotational and engraving stresses. Copper is generally most suitable, but cupro-nickel or gilding metal are also used.

The first pointed armour-piercing shell was introduced by Major Palliser in 1863; it was made of chilled cast iron with an ogival head of 1½ calibres radius. However, during 1880–1890 steel shells and armour began to appear, and it was realised that steel bodies for explosive-filled shells had advantages - better fragmentation and resistance to the stresses of firing. These were of cast and forged steel.

Shells have never been limited to an explosive filling. An incendiary shell was invented by Valturio in 1460. The carcass was invented in 1672 by a gunner serving Christoph van Galen, Prince Bishop of Munster; initially oblong in an iron frame or carcass (with poor ballistic properties), it evolved into a spherical shell. Their use continued well into the 19th Century. In 1857 the British introduced an incendiary shell (Martin's) filled with molten iron, which replaced the red-hot shot used against ships, most notably at Gibraltar in 1782. Two patterns of incendiary shell were used by the British in World War I, one designed for use against Zeppelins. Similar to incendiary shells were star shells, designed for illumination rather than arson. Sometimes called lightballs, they were in use from the 17th Century onwards.
The British adopted parachute lightballs in 1866 for 10, 8 and 5½ inch calibres. The 10-inch wasn't officially declared obsolete until 1920! Smoke balls also date back to the 17th Century; British ones contained a mix of saltpetre, coal, pitch, tar, resin, sawdust, crude antimony and sulphur. They produced a 'noisome smoke in abundance that is impossible to bear'. In 19th Century British service they were made of concentric paper with a thickness of about 1/15th of the total diameter and filled with powder, saltpetre, pitch, coal and tallow. They were used to 'suffocate or expel the enemy in casemates, mines or between decks; for concealing operations; and as signals'.

During the First World War, shrapnel shells and explosive shells inflicted terrible casualties on infantry, accounting for nearly 70% of all war casualties and leading to the adoption of steel helmets on both sides. Shells filled with poison gas were used from 1917 onwards. Frequent problems with shells led to many military disasters when shells failed to explode, most notably during the 1916 Battle of the Somme.

The calibre of a shell is its diameter. Depending on the historical period and national preferences, this may be specified in millimetres, centimetres, or inches. The length of gun barrels for large cartridges and shells (naval) is frequently quoted in terms of the ratio of the barrel length to the bore size, also called calibre. For example, the 16"/50 caliber Mark 7 gun is 50 calibers long, that is, 16"×50 = 800" = 66.7 feet long. Some guns, mainly British, were specified by the weight of their shells (see below). Due to manufacturing difficulties, the smallest shells commonly used are around 20 mm calibre, used in aircraft cannon and on armoured vehicles. Smaller shells are only rarely used, as they are difficult to manufacture and can only carry a small explosive charge. The largest shells ever fired were those from the German super-railway guns, Gustav and Dora, which were 800 mm (31.5") in calibre. Very large shells have been replaced by rockets, guided missiles, and bombs, and today the largest shells in common use are 155 mm (6.1").

Gun calibres have standardized around a few common sizes, especially in the larger range, mainly due to the uniformity required for efficient military logistics. Shells of 105, 120, and 155 mm diameter are common for NATO forces' artillery and tank guns. Artillery shells of 122, 130 and 152 mm, and tank gun ammunition of 100, 115, or 125 mm calibre, remain in use in Eastern Europe and China. Most common calibres have been in use for many years, since it is logistically complex to change the calibre of all guns and ammunition stores.

The weight of shells increases by and large with calibre. A typical 150 mm (5.9") shell weighs about 50 kg, a common 203 mm (8") shell about 100 kg, a concrete-demolition 203 mm (8") shell 146 kg, a 280 mm (11") battleship shell about 300 kg, and a 460 mm (18") battleship shell over 1500 kg. The Schwerer Gustav supergun fired 4.8 and 7.1 tonne shells.

Old-style British classification by weight
During the 19th Century the British adopted a particular form of designating artillery. Guns were designated by the nominal weight of their standard projectile, while howitzers were designated by barrel calibre. British guns and their ammunition were designated in pounds, e.g., as "two-pounder", shortened to "2-pr" or "2-pdr". Usually this referred to the actual weight of the standard projectile (shot, shrapnel or HE), but, confusingly, this was not always the case.
Some were named after the weights of obsolete projectile types of the same calibre, or even after obsolete types that were considered to have been functionally equivalent. Also, projectiles fired from the same gun, but of non-standard weight, took their name from the gun. Thus, conversion from "pounds" to an actual barrel diameter requires consulting a historical reference. Since the creation of NATO, new British guns have been designated by calibre.

There are many different types of shell; the principal ones are described below. The most common shell type is high explosive, commonly referred to simply as HE. Such shells have a strong steel case, a bursting charge, and a fuse. The fuse detonates the bursting charge, which shatters the case and scatters hot, sharp case pieces (fragments, splinters) at high velocity. Most of the damage to soft targets such as unprotected personnel is caused by shell pieces rather than by the blast. The term "shrapnel" is sometimes used to describe the shell pieces, but shrapnel shells functioned very differently and are long obsolete. Depending on the type of fuse used, the HE shell can be set to burst on the ground (percussion), in the air above the ground (time or proximity), or after penetrating a short distance into the ground (percussion with delay, either to transmit more ground shock to covered positions, or to reduce the spread of fragments).

Early high explosives used in HE shells before and during World War I were Lyddite (picric acid), PETN and TNT. However, pure TNT was expensive to produce, and most nations made some use of mixtures of cruder TNT and ammonium nitrate, some with other compounds included. These fills included Ammonal, Schneiderite and Amatol; the latter was still in wide use in World War II. From 1944 to 1945, RDX and TNT mixtures became standard, notably "Composition B" (a cyclotol). The introduction of 'insensitive munition' requirements, agreements and regulations in the 1990s caused modern western designs to use various types of plastic bonded explosives (PBX) based on RDX.

The percentage of shell weight taken up by its explosive fill increased steadily throughout the 20th Century. Less than 10% was usual in the first few decades; by World War II leading designs were around 15%. However, British researchers in that war identified 25% as the optimum for anti-personnel purposes, based on the recognition that far smaller fragments than hitherto would give the required effects. This was achieved with the 155 mm L15 shell, designed in the 1960s as part of the FH-70 program. The key requirement for increasing the HE content without increasing shell weight was to reduce the thickness of the shell walls, which required improvements in high-tensile steel.

Mine shell
The mine shell is a particular form of HE shell developed for use in small-calibre weapons such as 20 mm to 30 mm cannon. Small HE shells of conventional design can contain only a limited amount of explosive. By using a thin-walled steel casing of high tensile strength, a larger explosive charge can be used. Most commonly the explosive charge was also a more expensive, higher-detonation-energy type. The mine shell concept was invented by the Germans in the Second World War, primarily for use in aircraft guns intended to be fired at opposing aircraft. Mine shells produced relatively little damage from fragments, but a much more powerful blast. The aluminium structures and skins of Second World War aircraft were readily damaged by this greater level of blast.
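To see why thinner walls matter so much, the shell body can be treated very crudely as a hollow cylinder of fixed outer diameter: the internal volume available for explosive grows quickly as the wall thins. The short Python sketch below uses made-up dimensions chosen purely for illustration; it is not based on any real shell design.

from math import pi

def fill_volume(outer_diameter_mm, wall_mm, length_mm):
    """Internal volume (cm^3) of an idealised open-ended cylindrical shell body."""
    inner_radius_mm = outer_diameter_mm / 2.0 - wall_mm
    volume_mm3 = pi * inner_radius_mm ** 2 * length_mm
    return volume_mm3 / 1000.0  # convert mm^3 to cm^3

# Assumed, illustrative dimensions loosely inspired by a 155 mm shell body.
for wall in (20.0, 12.0):  # thick-walled vs thin-walled, in mm
    print("wall %.0f mm: about %.0f cm^3 available for fill"
          % (wall, fill_volume(155.0, wall, 600.0)))

With these invented numbers, thinning the wall from 20 mm to 12 mm adds roughly 30% more internal volume for the same outer diameter, which is the geometric effect the thin-walled mine shell exploits.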
The earliest naval and anti-tank shells had to withstand the extreme shock of punching through armour plate. Shells designed for this purpose sometimes had a greatly strengthened case with a small bursting charge, and sometimes were solid metal, i.e. shot. In either case, they almost always had a specially hardened and shaped nose to facilitate penetration. This resulted in armour-piercing (AP) projectiles. A further refinement improved penetration by adding a softer metal cap to the penetrating nose, giving the armour-piercing, capped (APC) design. The softer cap dampens the initial shock that would otherwise shatter the round. The best profile for the cap is not the most aerodynamic; this can be remedied by adding a further hollow cap of suitable shape: APCBC (APC + ballistic cap). AP shells with a bursting charge were sometimes distinguished by appending the suffix "HE". At the beginning of the Second World War, solid-shot AP projectiles were common. As the war progressed, ordnance design evolved so that APHE became the more common design approach for anti-tank shells of 75 mm calibre and larger, and more common in naval shell design as well. In modern ordnance, most full-calibre AP shells are APHE designs.

Armour-piercing, discarding-sabot
Armour-piercing, discarding-sabot (APDS) was developed by engineers working for the French Edgar Brandt company, and was fielded in two calibres (75 mm/57 mm for the Mle 1897/33 anti-tank cannon, 37 mm/25 mm for several 37 mm gun types) just before the French-German armistice of 1940. The Edgar Brandt engineers, having been evacuated to the United Kingdom, joined ongoing APDS development efforts there, culminating in significant improvements to the concept and its realization. British APDS ordnance for the QF 6 pdr and 17 pdr anti-tank guns was fielded in March 1944.

The armour-piercing concept calls for more penetration capability than the thickness of the target's armour. Generally, the penetration capability of an armour-piercing round increases with the projectile's kinetic energy and with the concentration of that energy in a small area. Thus an efficient means of achieving increased penetrating power is increased velocity for the projectile. However, projectile impact against armour at higher velocity causes greater levels of shock, and materials have characteristic maximum levels of shock capacity beyond which they may shatter or otherwise disintegrate. At relatively high impact velocities, steel is no longer an adequate material for armour-piercing rounds. Tungsten and tungsten alloys are suitable for use in even higher-velocity armour-piercing rounds, due to their very high shock tolerance and shatter resistance, and to their high melting and boiling temperatures. They also have very high density.

Energy is concentrated by using a reduced-diameter tungsten shot, surrounded by a lightweight outer carrier, the sabot (a French word for a wooden shoe). This combination allows the firing of a smaller-diameter projectile (thus with lower mass, aerodynamic resistance and penetration resistance) with a larger area of expanding-propellant "push", and thus a greater propelling force and resulting kinetic energy. Once outside the barrel, the sabot is stripped off by a combination of centrifugal force and aerodynamic force, giving the shot low drag in flight. For a given calibre, the use of APDS ammunition can effectively double the anti-tank performance of a gun.
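As a rough back-of-the-envelope illustration of that reasoning, the Python sketch below compares the muzzle energy per unit of frontal area for a notional full-calibre AP shot and a lighter, faster sub-calibre APDS shot fired from the same gun. All of the masses, velocities and diameters are invented for illustration only and do not describe any real gun or projectile.

from math import pi

def energy_per_area(mass_kg, velocity_ms, diameter_mm):
    """Kinetic energy (MJ) divided by frontal area (cm^2)."""
    energy_mj = 0.5 * mass_kg * velocity_ms ** 2 / 1.0e6
    area_cm2 = pi * (diameter_mm / 20.0) ** 2  # mm diameter -> cm radius, then circle area
    return energy_mj / area_cm2

# Invented, illustrative figures for a notional 76 mm gun.
full_bore = energy_per_area(7.0, 800.0, 76.0)   # conventional full-calibre AP shot
apds      = energy_per_area(3.5, 1200.0, 50.0)  # lighter sub-projectile at higher velocity
print("full-bore shot: %.2f MJ per cm^2" % full_bore)
print("APDS shot:      %.2f MJ per cm^2" % apds)

With these assumed figures the sub-calibre shot delivers roughly two and a half times the energy per unit of frontal area, which is consistent with the order-of-magnitude claim that APDS can roughly double a gun's anti-tank performance.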
Armour-piercing, fin-stabilized, discarding-sabot
An armour-piercing, fin-stabilised, discarding-sabot (APFSDS) projectile uses the sabot principle with fin (drag) stabilisation. A long, thin sub-projectile has increased sectional density and thus penetration potential. However, once a projectile has a length-to-diameter ratio greater than 10 (less for higher-density projectiles), spin stabilisation becomes ineffective. Instead, drag stabilisation is used, by means of fins attached to the base of the sub-projectile, making it look like a large metal arrow. Large-calibre APFSDS projectiles are usually fired from smooth-bore (unrifled) barrels, though they can be and often are fired from rifled guns; this is especially true of small to medium calibre weapon systems.

APFSDS projectiles are usually made from high-density metal alloys such as tungsten heavy alloys (WHA) or depleted uranium (DU); maraging steel was used for some early Soviet projectiles. DU alloys are cheaper and have better penetration than the others, as they are denser and self-sharpening. Uranium is also pyrophoric and can act as an incidental incendiary, especially as the round shears past the armour and exposes non-oxidized metal, but both the metal's fragments and its dust contaminate the battlefield with toxic hazards. The less toxic WHAs are preferred in most countries other than the USA, UK, and Russia.

Armour-piercing, composite rigid
Armour-piercing, composite rigid (APCR) is the British term; the US term for the design is high velocity armor piercing (HVAP), and the German term is Hartkernmunition. The APCR projectile has a core of a high-density hard material, such as tungsten carbide, surrounded by a full-bore shell of a lighter material (e.g., an aluminium alloy). Most APCR projectiles are shaped like the standard APCBC shot (although some of the German Pzgr. 40 and some Soviet designs resemble a stubby arrow), but the projectile is lighter: up to half the weight of a standard AP shot of the same calibre. The lighter weight allows a higher velocity. The kinetic energy of the shot is concentrated in the core and hence on a smaller impact area, improving the penetration of the target armour. To prevent shattering on impact, a shock-buffering cap is placed between the core and the outer ballistic shell, as with APC rounds. However, because the shot is lighter but still the same overall size, it has poorer ballistic qualities and loses velocity and accuracy at longer ranges. The APCR was superseded by the APDS, which dispensed with the outer light alloy shell once the shot had left the barrel.

The Germans used an APCR round, the Panzergranate 40 (Pzgr. 40) "arrowhead" shot, for their 5 cm Pak 38 anti-tank guns in 1942, and it was also developed for their 75 and 88 mm anti-tank and tank guns, and for anti-tank guns mounted in German aircraft. Shortages of the key component, tungsten, led to the Germans dropping the use of APCR late in World War II, because tungsten was more efficiently used in industrial applications such as machine tools. The concept of a heavy, small-diameter penetrator encased in light metal was later employed in small-arms armor-piercing incendiary and HEIAP rounds.
Armour-piercing, composite non-rigid
Armour-piercing, composite non-rigid (APCNR) is the British term; the more common terms are squeeze-bore and tapered bore. It is based on the same projectile design as the APCR - a high-density core within a shell of soft iron or other alloy - but it is fired by a gun with a tapered barrel: either a taper in a fixed barrel (the Gerlich design in German use; original development efforts took place in the late 1930s in Germany, Denmark and France) or a final added section, as in the British Littlejohn adaptor. The projectile is initially full-bore, but the outer shell is deformed as it passes through the taper. Flanges or studs are swaged down in the tapered section, so that as it leaves the muzzle the projectile has a smaller overall cross-section. This gives it better flight characteristics, with a higher sectional density, and the projectile retains velocity better at longer ranges than an undeformed shell of the same weight. As with the APCR, the kinetic energy of the round is concentrated at the core on impact. The initial velocity of the round is greatly increased by the decrease of barrel cross-sectional area toward the muzzle, resulting in a commensurate increase in velocity of the expanding propellant gases.

The Germans deployed their initial design as a light anti-tank weapon, the 2.8 cm schwere Panzerbüchse 41, early in the Second World War, and followed on with the 4.2 cm Pak 41 and 7.5 cm Pak 41. Although HE rounds were also put into service, they weighed only 93 grams and had low effectiveness. The British used the Littlejohn squeeze-bore adaptor, which could be attached or removed as necessary. The adaptor extended the usefulness of armoured cars and light tanks which could not fit any gun larger than the QF 2 pdr. Although a full range of shells and shot could be used, changing the adaptor in the heat of battle was highly impractical. The APCNR was superseded by the APDS design, which was compatible with non-tapered barrels.

High-explosive, anti-tank
HEAT shells are a type of shaped charge used to defeat armoured vehicles. They are extremely efficient at defeating plain steel armour but less so against later composite and reactive armour. The effectiveness of the shell is independent of its velocity, and hence of range: it is as effective at 1000 metres as at 100 metres. The speed can even be zero, as in the case where a soldier simply places a magnetic mine onto a tank's armour plate. A HEAT charge is most effective when detonated at a certain, optimal, distance in front of the target, and HEAT shells are usually distinguished by a long, thin nose probe sticking out in front of the rest of the shell, which detonates it at the correct distance, e.g., the PIAT bomb. HEAT shells are less effective if spun (i.e., fired from a rifled gun).

Discarding-sabot shell
A discarding-sabot shell (DSS) is (in principle) the same as the APDS shot but applied to high-explosive shells. It is a means to deliver a shell to a greater range. The design of the sub-projectile carried inside the sabot can be optimised for aerodynamic properties, and the sabot can be built for best performance within the barrel of the gun. The principle was developed by a Frenchman, Edgar Brandt, in the 1930s. With the occupation of France, the Germans took the idea for application to anti-aircraft guns - a DSS projectile could be fired at a higher muzzle velocity and reach the target altitude more quickly, simplifying aiming and allowing the target aircraft less time to change course.
High-explosive, squash-head or high-explosive plastic
High-explosive, squash-head (HESH) is another anti-tank shell based on the use of explosive. It was developed by the British inventor Sir Charles Dennistoun Burney in World War II for use against fortifications. A thin-walled shell case contains a large charge of plastic explosive. On impact the explosive flattens, without detonating, against the face of the armour, and is then detonated by a fuze in the base of the shell. Energy is transferred through the armour plate: when the compressive shock reflects off the air/metal interface on the inner face of the armour, it is transformed into a tension wave which spalls a "scab" of metal off into the tank, damaging the equipment and crew without actually penetrating the armour. HESH is completely defeated by spaced armour, so long as the plates are individually able to withstand the explosion. It is still considered useful, as not all vehicles are equipped with spaced armour, and it is also the most effective munition for demolishing brick and concrete. HESH shells, unlike HEAT shells, are best fired from rifled guns. Another variant is the high-explosive plastic (HEP) shell.

Proof shot
A proof shot is not used in combat but to confirm that a new gun barrel can withstand operational stresses. The proof shot is heavier than a normal shot or shell, and an oversize propelling charge is used, subjecting the barrel to greater than normal stress. The proof shot is inert (it has no explosive or functioning filling) and is often a solid unit, although water-, sand- or iron-powder-filled versions may be used for testing the gun mounting. Although the proof shot resembles a functioning shell (of whatever sort), so that it behaves as a real shell in the barrel, it is not aerodynamic, as its job is over once it has left the muzzle of the gun. Consequently it travels a much shorter distance and is usually stopped by an earth bank as a safety measure. The gun, operated remotely for safety in case it fails, fires the proof shot and is then inspected for damage. If the barrel passes the examination, "proof marks" are added to it. The gun can then be expected to handle normal ammunition, which subjects it to less stress than the proof shot, without being damaged.

Shrapnel shells
Shrapnel shells were an early (1784) anti-personnel munition which delivered large numbers of bullets at ranges far greater than rifles or machine guns could attain - up to 6,500 yards by 1914. A typical shrapnel shell as used in World War I was streamlined, 75 mm (3 inch) in diameter and contained approximately 300 lead-antimony balls (bullets), each around 1/2 inch in diameter. Shrapnel used the principle that the bullets encountered much less air resistance if they travelled most of their journey packed together in a single streamlined shell than they would if they travelled individually, and could hence attain a far greater range. The gunner set the shell's time fuze so that it burst as the shell was angling down towards the ground, just before it reached its target (ideally about 150 yards before, and 60–100 feet above the ground). The fuze then ignited a small "bursting charge" in the base of the shell which fired the balls forward out of the front of the shell case, adding 200–250 ft/second to the existing velocity of 750–1200 ft/second.
The shell body dropped to the ground mostly intact, and the bullets continued in an expanding cone shape before striking the ground over an area approximately 250 yards × 30 yards in the case of the US 3 inch shell. The effect was that of a large shotgun blast just in front of and above the target, and was deadly against troops in the open. A trained gun team could fire 20 such shells per minute, with a total of 6,000 balls, which compared very favourably with rifles and machine-guns. However, shrapnel's relatively flat trajectory (it depended mainly on the shell's velocity for its lethality, and was only lethal in a forward direction) meant that it could not strike trained troops who avoided open spaces and instead used dead ground (dips), shelters, trenches, buildings, and trees for cover. It was of no use in destroying buildings or shelters. Hence it was replaced during World War I by the high-explosive shell, which threw its fragments in all directions, could be fired by high-angle weapons such as howitzers, and was therefore far more difficult to avoid.

Cluster shells
Cluster shells are a type of carrier shell or cargo munition. Like cluster bombs, an artillery shell may be used to scatter smaller submunitions, including anti-personnel grenades, anti-tank top-attack munitions, and landmines. These are generally far more lethal against both armor and infantry than simple high-explosive shells, since the multiple munitions create a larger kill zone and increase the chance of achieving the direct hit necessary to kill armor. Most modern armies make significant use of cluster munitions in their artillery batteries. However, in operational use submunitions have demonstrated a far higher malfunction rate than previously claimed, including those that have self-destruct mechanisms. This problem, the 'dirty battlefield', led to the Ottawa Treaty. Artillery-scattered mines allow for the quick deployment of minefields into the path of the enemy without placing engineering units at risk, but artillery delivery may lead to an irregular and unpredictable minefield with more unexploded ordnance than if the mines were individually placed. Signatories of the Ottawa Treaty have renounced the use of cluster munitions of all types where the carrier contains more than ten submunitions.

Chemical shells contain just a small explosive charge to burst the shell, and a larger quantity of a chemical agent such as a poison gas. Signatories of the Chemical Weapons Convention have renounced such shells.

Non-lethal shells
Not all shells are designed to kill or destroy. The following types are designed to achieve particular non-lethal effects. They are not completely harmless: smoke and illumination shells can accidentally start fires, and impact by the discarded carrier of all three types can wound or kill personnel, or cause minor damage to property.

The smoke shell is designed to create a smoke screen. The main types are bursting (those filled with white phosphorus (WP) and a small HE bursting charge are best known) and base ejection (delivering three or four smoke canisters, or material impregnated with white phosphorus). Base-ejection shells are a type of carrier shell or cargo munition. Base-ejection smoke is usually white; however, coloured smoke has been used for marking purposes. The original canisters were non-burning, being filled with a compound that created smoke when it reacted with atmospheric moisture; modern ones use red phosphorus because of its multi-spectral properties.
However, other compounds have also been used; in World War II Germany used oleum (fuming sulphuric acid) and pumice.

Modern illuminating shells are a type of carrier shell or cargo munition. Those used in World War I were shrapnel-pattern shells ejecting small burning 'pots'. A modern illumination shell has a fuze which ejects the "candle" (a pyrotechnic flare emitting white or infrared light) at a calculated altitude, where it slowly drifts down beneath a heat-resistant parachute, illuminating the area below. These are also known as starshell or star shell. Coloured flare shells have also been used for target marking purposes.

The carrier shell is simply a hollow carrier equipped with a fuze which ejects the contents at a calculated time. Carrier shells are often filled with propaganda leaflets, but can be filled with anything that meets the weight restrictions and is able to withstand the shock of firing. Famously, on Christmas Day 1899, during the siege of Ladysmith, the Boers fired into Ladysmith a carrier shell without a fuze, which contained a Christmas pudding, two Union Flags and the message "compliments of the season". The shell is still kept in the museum at Ladysmith.

Aerial firework bursts are created by shells. In the United States, consumer firework shells may not exceed 1.75 inches in diameter.

Unexploded shells
The fuze of a shell has to keep the shell safe from accidental functioning during storage and from (possibly) rough handling, fire, etc.; it also has to survive the violent launch through the barrel and then reliably function at the correct time. To do this it has a number of arming mechanisms, which are successively enabled under the influence of the firing sequence. Sometimes one or more of these arming mechanisms fails, and if the fuze is installed on an HE shell, it fails to detonate on impact. More worrying, and potentially far more hazardous, are fully armed shells on which the fuze fails to initiate the HE filling. This may be due to shallow, low-velocity or soft impact conditions. Whatever the reason for failure, such a shell is called a blind or unexploded ordnance (UXO). The older term, "dud", is discouraged because it implies that the shell cannot detonate. Blind shells often litter old battlefields and, depending on the impact velocity, may be buried some distance into the earth; all remain potentially hazardous. For example, antitank ammunition with a piezoelectric fuze can be detonated by relatively light impact to the piezoelectric element, and others, depending on the type of fuze used, can be detonated by even a small movement. The battlefields of the First World War still claim casualties today from leftover munitions. Modern electrical and mechanical fuzes are highly reliable: if they do not arm correctly they keep the initiation train out of line, or, if electrical in nature, they discharge any stored electrical energy.

Guided shells
Guided or "smart" ammunition has been developed in recent years but has yet to supplant unguided munitions in all applications. Examples include the M982 Excalibur, a GPS-guided artillery shell; the M712 Copperhead; and the SMArt 155, an anti-armor shell containing two autonomous, sensor-guided, fire-and-forget submunitions.
<urn:uuid:835ecff9-631a-4cfa-bc8e-bd3b512d8a49>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/Shell_(projectile)
2013-05-22T15:07:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.952231
8,069
Stimulation is the action of various agents (stimuli) on nerves, muscles, or a sensory end organ, by which activity is evoked; especially, the nervous impulse produced by various agents on nerves, or a sensory end organ, by which the part connected with the nerve is thrown into a state of activity. The word is also often used metaphorically. For example, an interesting or fun activity can be described as "stimulating", regardless of its physical effects on nerves. (Note: to stimulate means to act as a stimulus; a stimulus is something that rouses to activity, that is, excites or stirs.)

Stimulation in general refers to how organisms perceive incoming stimuli. As such it is part of the stimulus-response mechanism. Simple organisms broadly react in three ways to stimulation: too little stimulation causes them to stagnate, too much to die from stress or inability to adapt, and a medium amount causes them to adapt and grow as they overcome it. Similar categories or effects are noted with psychological stress in people. Thus, stimulation may be described as how external events provoke a response by an individual in the attempt to cope.

Psychologically, it is possible to become habituated to a degree of stimulation, and then find it uncomfortable to have significantly more or less. Thus one can become used to an intense life, or to television, and suffer withdrawal from lack of stimulation when they are removed; it is also possible to become unhappy and stressed by additional, abnormal stimulation. It is hypothesized, and commonly believed by some, that psychological habituation to a high level of stimulation ("over-stimulation") can lead to psychological problems. For example, some food additives are hypothesized to cause children to become prone to over-stimulation, and over-stimulation is, in some theories, part of ADHD. However, ADHD is believed to be a heterogeneous disorder with genetic, environmental and psychosocial causes, among which food additives may play a part. It is also hypothesized that long-term over-stimulation can eventually result in a phenomenon called "adrenal exhaustion", but this is neither medically accepted nor proven at this time.
<urn:uuid:f06fdf73-53ee-4a5f-89ac-8b80596aaeeb>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/Stimulation
2013-05-22T15:28:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.961104
457
Alternate name: Carpet Foxtail Cactus
Family: Cactaceae, Cactus
Description This small, rare cactus, whose dense spines make the stem look cobwebbed, may comprise up to 100 stems in a single mounded plant. Habit: succulent native perennial shrub. Height: to 6 in (15 cm) tall. Stem: spherical to cylindrical, grooved when mature, 0.6-3 in (15-75 mm) tall, 0.4-1 in (10-25 mm) diameter. Leaf: tiny spine, white or cream with brown tip, becoming gray, held close to stem, 0.05-0.1 in (1-2.5 mm) long; 30-90 radial spines per areole, central spines often absent. Flower: brownish-pink funnel, to 0.6 in (15 mm) wide, to 0.5 in (12 mm) tall; opening in the morning. Fruit: small berry, cylindrical to club-shaped, about 0.5 in (12 mm) long, green or red.
Endangered Status The Lee Pincushion Cactus is on the U.S. Endangered Species List. It is classified as threatened in New Mexico. This plant has been the victim of poaching in the Guadalupe Mountains where it lives. Despite its spines, this cactus is a highly prized plant, often taken from its desert habitat by rare-plant collectors. A second threat to the species, which has now been addressed, came from maintenance of nearby Carlsbad Caverns National Park.
Flower March to June.
Habitat Chihuahuan desert scrub to conifer woodlands, rock outcrops (rarely alluvial rubble), usually narrowly confined to cracks in limestone; 2000-8500 ft (600-2600 m); also cultivated as an ornamental.
Range Found only in the Guadalupe Mountains of New Mexico, near Carlsbad Caverns National Park.
Discussion Some authorities list this plant at the species level as Coryphantha sneedii, Escobaria sneedii or Escobaria leei. Also known as Lee's pincushion cactus. A federal threatened plant; listed as endangered in New Mexico.
<urn:uuid:67feb3c8-70df-45a4-9720-42b55827daf9>
CC-MAIN-2013-20
http://enature.com/fieldguides/detail.asp?shapeID=1155&curGroupID=11&lgfromWhere=&curPageNum=8
2013-05-22T15:29:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.898383
493
Given the wide use of BPA in so many products we encounter every day, it is probably impossible to completely eliminate your exposure to this potentially harmful chemical. Still, you can lower your exposure—and your risk of possible health problems associated with BPA—by taking a few simple precautions. In 2007, the Environmental Working Group hired an independent laboratory to conduct an analysis of BPA in many different canned foods and beverages. The study found that the amount of BPA in canned food varies widely. Chicken soup, infant formula and ravioli had the highest concentrations of BPA, for example, while condensed milk, soda and canned fruit contained much less of the chemical. Here are a few tips to help you lower your exposure to BPA: - Eat Fewer Canned Foods The easiest way to lower your intake of BPA is to stop eating so many foods that come into contact with the chemical. Eat fresh or frozen fruits and vegetables, which usually have more nutrients and fewer preservatives than canned foods, and taste better, too. - Choose Cardboard and Glass Containers Over Cans Highly acidic foods, such as tomato sauce and canned pasta, leach more BPA from the lining of cans, so it’s best to choose brands that come in glass containers. Soups, juices and other foods packaged in cardboard cartons made of layers of aluminum and polyethylene plastic (labeled with a number 2 recycling code) are safer than cans with plastic linings containing BPA. - Don't Microwave Polycarbonate Plastic Food Containers Polycarbonate plastic, which is used in packaging for many microwaveable foods, may break down at high temperatures and release BPA. Although manufacturers are not required to say whether a product contains BPA, polycarbonate containers that do are usually marked with a number 7 recycling code on the bottom of the package. - Choose Plastic or Glass Bottles for Beverages Canned juice and soda often contain some BPA, especially if they come in cans lined with BPA-laden plastic. Glass or plastic bottles are safer choices. For portable water bottles, stainless steel is best, but most recyclable plastic water bottles do not contain BPA. Plastic bottles with BPA are usually marked with a number 7 recycling code. - Turn Down the Heat To avoid BPA in your hot foods and liquids, switch to glass or porcelain containers, or stainless steel containers without plastic liners. - Use Baby Bottles That Are BPA-Free As a general rule, hard, clear plastic contains BPA while soft or cloudy plastic does not. Most major manufacturers now offer baby bottles made without BPA. - Use Powdered Infant Formula Instead of Pre-mixed Liquid A study by the Environmental Working Group found that liquid formulas contain more BPA than powdered versions. - Practice Moderation The fewer canned foods and beverages you consume, the less your exposure to BPA, but you don’t have to cut out canned foods altogether to reduce your exposure and lower your potential health risks. In addition to eating less canned food overall, limit your intake of canned foods that are high in BPA.
<urn:uuid:3446624e-f74b-4d1c-84d4-56e5f3a9333d>
CC-MAIN-2013-20
http://environment.about.com/od/healthenvironment/a/bpa_tips.htm
2013-05-22T15:27:32Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.926647
656
The first thing that anyone ever learns in a programming language is to make a program that spits out "Hello World". I must admit that I think that this is pretty dumb. How about, for your first program, we instead make a program that spits out:

Hello, my name is _____

Oh my god. This actually worked!

It's a little more accurate with regard to the feelings inspired by running a program you wrote yourself, and, since that's what you'll probably say anyway, now you'll sound like you are reading your program's output instead of rejoicing.

Writing this program might be deceptively difficult. Not because it is a difficult thing to understand, but because of all of the infrastructure that goes along with this first program. You have to have Python working, you have to edit a text file to give to Python, and then you have to execute that program you wrote. I regret to inform you that this step has the potential to be the most frustrating one. There's a lot of things that people take for granted with computers, and I've been doing them for a while, and it's easy to forget what is initially unintuitive after everything becomes more second nature. Some of the below will only apply to you if you aren't working on a Macintosh. The Mac has a decidedly different setup as compared to a Windows computer or a Linux computer.

Let's start editing the file... Start your editor (notepad or edit on Windows; pico, gnotepad, emacs or vi on Linux) and create a new file called first.py. Then, type or paste in the following:

print "My name is Jongleur"
print
print "Oh my god.", "This actually worked!"

You might want to consider substituting your name in for mine, though. To execute your program, go to the directory you saved it in, and type python first.py. If everything worked properly, you should see the output above appear on your screen. If not, then something has gone wrong with your setup. This kind of thing can be tricky to debug in a general way, but if the window just flashes on and then disappears, add a line reading raw_input() to the bottom of the file so that the window waits for a key press before closing. If that's not the problem, then the easiest thing for you to do would be to /msg me and I'll try to help you.

If you did get output, then hooray! Read it aloud. You just made the computer do something that it couldn't do previously. It's not very useful, but it could be! Now try changing around the text between the quotes and rerunning your program. If you would like to make the program execute without having to type python in front, you can do one of the following:

- On Linux (or any other variant of Unix) - Add the following line to the top of your program: #!/usr/bin/env python. Then, on the command line, type chmod +x first.py
- On Windows - This can get complicated, unfortunately. What I recommend is creating a file called first.bat that contains one line: python first.py. Now, try typing first in that directory. It should run your program.
- On a Mac - This is a harder question than you think. It is difficult because you have no CLI. For now, stick to the IDE that came with python. Sorry. That advice was for Mac OS9 and below. With OSX, just follow the Linux instructions, because OSX is a UNIX, just like Linux.

Alright, how could this be useful? Well, perhaps you use the command line a lot. If so, then there are probably some commands you always mistype. (I know that I often type sl instead of ls or dri instead of dir.) You could create a file called dri. Then whenever you type dri, instead of seeing an error message, you could see something like "Learn to type!!!!
You meant to type dir". Now all of a sudden, you have created something useful! You now have a program that will give you negative feedback every time you mistype a command.

Let's analyze what's going on here after you hit enter on the command line...
- First, you are calling the python interpreter on your file. This seems like cheating, but it's easier than compiling and it gives more instant feedback.
- Now, the interpreter looks at the first line of your program and runs it. It is pretty clear what it does - it prints out everything between the quotes.
- We tell it to print nothing. So it spits out a blank line.
- We tell it to print two things on the same line. We separate those two things with commas. You can actually pass an (almost) infinite number of things to print as long as you put commas between them. Note that it automatically put a space between the first thing and the second thing.
- It reaches the end of the file and so quits.

Now for some terminology. The things between the quotes are referred to as strings. A string is simply a bunch of characters between two quotes. You can actually use ' instead of " as long as you use the same quote mark at the other end of the string. print is a function or command. The way python works is by you telling it "do this, then do this". "this" is almost always a function. You can define your own functions, but not for a few lessons.

Now, play around! Change the strings! Try printing numbers! Try printing numbers without using quotes! Go mad with your newfound power! At some point in your playing around, you will probably screw up. This is natural and good. Fear of failure is something that is completely unacceptable in coding. When you screw up you should see something like the following:

File "first.py", line 2
SyntaxError: invalid syntax

Notice how python tries to help you. It tells you what line it got confused at, and it points out the exact character it got confused at. Look at that line - I screwed up by putting a : instead of a ". You should also notice how painless failure was. This is good. Most programs take a while to get right, and if failure was painful, the only coders would be members of the Jim Rose Circus.

It's initially hard to see the relationship between what you just did, and, say, Microsoft Word, Linux, or Mozilla. I promise, however, that the steps you have taken to get to your current point have gotten you more than halfway to being able to understand how those giant behemoths work. After playing around for a little while longer, proceed to the next lesson.

In case you are out of ideas, here are some things you can try. Put \n, \t, \a, and \" in the string you print out. Try putting them in one at a time. The \ character is called an escape character. It, combined with the character that follows it, does something special. Since there is no "make a beep" key on your keyboard, we have to use the backslash to generate one. It also allows you to put quote marks in your string without prematurely ending it by using \". Now that you know how to make newlines, tabs, quotes, and beeps, your negative-feedback program can get even more annoying!
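Putting those escape characters to work, here is one way the mistyped-command nag script from earlier might look. This is only an illustrative sketch in the Python 2 style used by this lesson; the file name dri.py and the exact wording of the message are my own choices, not part of the lesson.

# dri.py - a hypothetical "you mistyped dir" nag script.
# \a sounds the terminal bell, \t indents the line,
# \n adds an extra blank line, and \" embeds a literal quote mark.
print "\aLearn to type!!!!"
print "\tYou meant to type \"dir\", not \"dri\".\n"

On Windows you could pair this with a dri.bat containing python dri.py, exactly as described for first.py above, so the message (and the beep) greet you whenever you make that particular typo.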
<urn:uuid:001a9cd0-6262-4424-b9eb-5509ab51d0c4>
CC-MAIN-2013-20
http://everything2.com/title/Learn+to+Program%253A+Producing+Output
2013-05-22T15:29:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.953031
1,548
"The National Socialist German Workers' Party (German:Nationalsozialistische Deutsche Arbeiterpartei (help·info), abbreviated NSDAP), commonly known in English as the Nazi Party, was a political party in Germany between 1920 and 1945. Its predecessor, the German Workers' Party (DAP), existed from 1919 to 1920. The term Nazi is German and stems from Nationalsozialist, due to the pronunciation of Latin -tion- as -tsion- in German (rather than -shon- as it is in English), with German Z being pronounced as 'ts'. The party was founded out of the far-right racist völkisch German nationalist movement and the violent anti-communist Freikorps paramilitary culture that fought against the uprisings of communist revolutionaries in post-World War I Germany. Advocacy of a form of socialism by right-wing figures and movements in Germany became common during and after World War I, influencing Nazism. Arthur Moeller van den Bruck of the Conservative Revolutionary movement coined the term "Third Reich", and advocated an ideology combining the nationalism of the right and the socialism of the left. Prominent Conservative Revolutionary member Oswald Spengler's conception of a "Prussian Socialism" influenced the Nazis. The party was created as a means to draw workers away from communism and into völkisch nationalism. Initially, Nazi political strategy focused on anti-big business, anti-bourgeois, and anti-capitalist rhetoric, although such aspects were later downplayed in order to gain the support of industrial entities, and in 1930s the party's focus shifted to anti-Semitic and anti-Marxist themes. I just thought I would give you, the reader, some information from Wikipedia about the Nationalist movement and its origins. I also felt a need to educate you about the Nationalist movement in this country because of the recent pronouncements of a prominent conservative. Michael Savage,conservative radio host, told host Aaron Klein Sunday, “We need a nationalist party in the United States of America.” The Tea Party, he said, isn’t cutting it: “They need to restructure their party. They need a charismatic leader, which they don’t have.” Savage went on to underscore the notion that the Tea Party isn’t as strong as it could be because of a lack of leadership. “When you say Tea Party no one knows who the leader is because there is no leader,” he said. “No man has stepped forward who can lead that party.” Unfortunately for Savage, he doesn’t think he could take the helm. He believes the role will require “enormous resources and enormous energy,” which he doesn’t have at his age. The new party, he said, needs to be united: Somebody has to bring them all together, unite them like King David did the ancient tribes of Israel. And there is no King David out there. Who’s the King David? Tell me who is going to do it?” Saved blasted the current Republican party, calling it an “appendage of the Democrat machine.” “It’s a game being played against the American people,” he said. “You’ve got the drunk Boehner on the one side, and the quasi-pseudo-crypto Marxist on the other, who is really just enjoying the ride in Hawaii right now, representing his factions.” [Source] Someone should tell Mr. Savage that there is a nationalist party in America. He is upset with the Tea Party because they lack a "charismatic leader", but everything else is in place. Still, I predict that the Tea Party will die a slow death after 2016. Why?Because Barack Obama will no longer be the president of these divided states. 
Sadly, I think that we all know what drove Tea Party passions over the past four years. The Nationalists know what drove it (the passion) as well.

So I am watching the Deadskins Seahawks playoff game, and I swear that Washington's coach is going to get RG3 killed. Dude is taking hit after hit and he is still in the game on bad wheels. Now that we know that this same coach lied about RG3's condition and sent him into a game earlier this year, watching him lie there on the field just makes me kind of shake my head. One thing we do know: with RG3 being the "good guy" that he is, we will not hear a single complaint from him.
<urn:uuid:9c739a33-376d-4013-810f-ba39d82b10c0>
CC-MAIN-2013-20
http://field-negro.blogspot.co.uk/2013/01/the-new-nationalist-and-coach-lies.html
2013-05-22T15:20:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.96909
964
The Rolling Stones, well before they became seniors themselves, may have said it best: “What a drag it is getting old.” Time undeniably takes a toll on the organs, brain, and muscles. Fortunately, though, there are ways to get an edge on age. Research into what scientists call “optimizing function” in old age is an especially hot topic now as the first of nearly 80 million baby boomers begin their march this year into 65-plus seniordom. Learning tactics for aging well is particularly critical for people with diabetes, who are at a high risk of becoming disabled. Here is what science has to say about staying able-bodied for the long haul. The goal for many people as they age is maintaining their independence—in other words, avoiding disability. Being able to cook, shop, clean, and do all of life’s other little chores is essential for people who want to stay at home and on their own. The basic body functions that are necessary to accomplish these tasks include reaching, grasping, stooping, lifting, and, probably most important, walking. Functional disability means having trouble performing these activities of daily living and is a common side effect of aging in all living creatures. “Declining function of aging is a phenomenon that is consistent across species,” says Marco Pahor, MD, professor and chair of the Department of Aging and Geriatric Research at the University of Florida. “Worms, as they age, move slower. Rats move slower. It’s likely due to a very basic biological mechanism related to energy metabolism.” These age-related biological changes lead to the loss and shrinkage of muscle that makes basic tasks more difficult. Another reality of getting older is that it makes people more likely to develop diabetes, itself a leading cause of functional disability. A 2004 study in Diabetes Care found that people with diabetes are 2.4 times as likely to become functionally disabled as those without the disease, probably because of the effects of diabetes complications. “You may develop diabetic neuropathy [nerve damage], so you don’t walk so well,” says Eric Rackow, MD, professor of medicine at the New York University School of Medicine. People with diabetes “can develop eye problems, retinopathy, so they don’t see so well,” he adds. “If you don’t walk so well and you don’t see so well, those are functional limitations.” Another factor in disability is depression, which also is more common among people with diabetes. The 2004 study found that people with both diabetes and depression are 7.2 times as likely to develop functional disability as people without either diabetes or depression. A Two-Way Street Yet a common misconception about aging is that life is just a plodding march down a one-way path of deterioration. Just because someone becomes disabled in old age doesn’t mean he or she will stay that way. “About 70 to 80 percent of older persons will regain independence after an illness or hospitalization,” says Thomas Gill, MD, director of the Yale Program on Aging. “These same folks will likely have subsequent disability episodes. Older persons are moving in and out of disability.” So rehabilitation is a possible outcome, even a likely one. The truth is that for most people who become disabled, there’s typically a trigger. “A lot of the work in the field is focused on risk factors that make a person vulnerable for developing bad functional outcomes,” says Gill. “It’s not those risk factors themselves that cause disability; bad things happen. 
It’s usually an illness or an injury, or something along those lines.” About 7 in 10 cases of disability among older people can be attributed to a particular event, says Gill, usually something that requires hospitalization. Heart attacks and strokes commonly lead to being hospitalized and subsequently disabled. This is one more reason that people with diabetes, who are prone to cardiovascular disease, are at such high risk. Gill adds that falls, while less common, are the event most likely to precipitate disability. How to avoid these unpleasant events and outcomes? The first step is to accept the real possibility that they can happen. “It’s a common phenomenon in health that people tend not to recognize their problems,” says Rackow. “Number 1, people really need to understand their status . . . either by self-assessment or by going to a doctor.” One way to measure functionality is the “get up and go” test. The person being assessed sits in a chair, crosses his or her arms so they can’t be used for support, and then tries to stand up. Someone who can’t stand has “substantial weakness in the lower extremities,” says Gill. After standing up, the person is told to walk. How far, fast, and steadily the person is able to walk can be used to predict disability risk. After evaluation, it’s time to make a plan for preventing disability. For people with diabetes, keeping blood glucose under control can help keep the nerves and eyes healthy, which in turn reduces the risk of taking a nasty spill. Blood fats and blood pressure should also be kept within target ranges to lower risk for the heart attacks and strokes that typically end in hospitalization. Avoiding falls is a key to aging well. A big contributor to falls is a condition called postural hypotension. This basically means getting dizzy upon standing or sitting up because of a drop in blood pressure. Medications can cause postural hypotension, and since older people with diabetes often take several medications, they can be at risk for the condition. To reduce their risk of falling, Gill recommends that his patients flex their ankles 10 times before standing, and then hold still for a count of 10 before taking a step. Creating a safe home environment can also prevent injuries that lead to disability. “Don’t have a throw rug in the hallway,” says Rackow, who is also president and CEO of SeniorBridge, a company that manages the care of older people in their own homes. “Have a grab bar in the bathroom and a toilet seat that is high, so people can have an easier time getting up.” Reducing clutter, having a good stepping stool, and keeping walking areas free of cords or other obstacles can also increase safety, as can using a cane or walker. The best way to stay strong and independent, most experts agree, is to exercise. “Exercise slows the consequences of aging. In terms of physical function, disability, and mobility, physical activity is the most promising intervention,” says Pahor. No medication being tested to slow the effects of aging, he says, has been “as astonishing as physical activity.” And Gill says that people who exercise regularly before becoming disabled are the ones more likely to eventually regain their independence. A major National Institute on Aging clinical trial, the Life (Lifestyle Interventions and Independence for Elders) study, is under way to test the hypothesis that exercise is king. The target population is seniors at high risk for becoming disabled; a quarter of those recruited so far have diabetes. 
Participants will either simply receive education on successful aging or undergo an exercise program that includes 30 minutes of walking five days a week, plus resistance training with ankle weights. After about four years, the researchers will count how many people have become disabled to see if the exercise helped. “This is designed to be the definitive study,” Gill says. Whether your idea of exercise for health is walking around your neighborhood or practicing yoga or tai chi, any foray into physical activity by older people should be initiated gradually, says Pahor, who is heading up the Life study. Participants in the study are encouraged to walk at a rate they feel is challenging but not too strenuous. “For younger people, you can look for a target heart rate” to set a walking pace, Pahor says. “In older people, there is no objective measure; heart rate regulation is compromised for many reasons, so it is not a reliable indicator of exertion.” Exercise may one day be joined by medication as a means of staving off disability. Trials are currently testing whether testosterone (in men) or aspirin can keep seniors functional by preventing muscle loss or easing blood flow, respectively. Other drugs are in the pipeline, too, but all are in an early stage of development. Gill says researchers want to know whether medications alone can be effective or need to be paired with exercise to work. Aging is inevitable, of course, but people can still exercise, so to speak, some control over their future. Good old physical activity remains science’s best formula for preventing, slowing, and reversing disability in old age. Staying strong may prove the Rolling Stones wrong after all.
<urn:uuid:cf5bfaa7-22a7-40ab-a551-e92a1dc5f3e1>
CC-MAIN-2013-20
http://forecast.diabetes.org/magazine/features/secrets-aging-well
2013-05-22T15:28:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966337
1,861
Shavuot, the biblical Festival of Weeks, arrives on May 29 this year, with a special urgency. Holidays on the Jewish calendar often speak to us with particular force at pivotal moments in our communal lives – Passover, for example, with its theme of freedom, or Yom Kippur with its call for repentance. This year, we need to be reminded of Shavuot, the spring harvest festival with its often-overlooked — or suppressed — teachings about the rights of the poor and the dangerous seduction of wealth. Tradition teaches that Shavuot marks the anniversary of the giving of the Torah at Mount Sinai. In fact, the biblical text that introduces Shavuot makes no mention of Sinai. As mandated in Leviticus, chapter 23, the holiday is a celebration of the “first fruits” of the wheat harvest, to be observed seven weeks plus a day after the Passover sacrifice. Traditional commentaries also teach that Shavuot is unique among the major biblical festivals because it is not assigned any distinctive ritual comparable to the Seder meal of Passover or the makeshift booths of Sukkot. But that, too, has it backward. When Leviticus lays out the five major holy days of the year, only Shavuot comes with a specific code of behavior. The text spells it out as plain as day: “When you reap the harvest of your land, you shall not reap all the way to the edges of your field, or gather the gleanings” — the bits that fall to the ground — “of your harvest; you shall leave them for the poor and the stranger. I am the Lord.” The message of Shavuot is that the harvest you’re celebrating isn’t yours alone. Part of your crop belongs by right to people you don’t even know, simply because they don’t have as much. And if we restate this as a broad principle, as most of us agree the Bible is supposed to be read, the rule is this: A portion of one’s income shall be redistributed to the poor. Nor is this to be taken as a recommendation of charity or generosity. It’s intended as a legal obligation, “a law for all time, in all your dwellings” — not just on the farm, and not just in the Middle East — “throughout your generations.” It’s almost as if the ancients knew we were going to try to wiggle out of it. If you listen to the sort of folks who like to thump their Bibles, you might wonder what happened to the sacred principle of private property. Leviticus takes that on a chapter later, in the portion that’s read in synagogues a few days before Shavuot. Every 50 years, the book says, all land purchases are annulled and the property reverts to its original owners. Large property holdings may be amassed only temporarily. Enduring wealth, like enduring poverty, is impermissible. Conservatives in Washington these days like to dismiss taxes and regulation as “socialism.” But if you read your Bible, that’s just a fancy name for traditional values.
<urn:uuid:a76bf1cf-4fd0-4d65-ba25-e45560c0bfa9>
CC-MAIN-2013-20
http://forward.com/articles/106300/leave-the-gleanings/
2013-05-22T15:14:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.947451
682
More than three hundred members of Hancock’s six communal families met here on Sundays in good weather at the height of Shakerism in this area. They worshiped privately in the mornings but welcomed visitors to public afternoon meetings, always hoping to attract new converts. Many famous people, including Charles Dickens and James Fenimore Cooper, observed and wrote about Shaker worship, which consisted of singing and dancing. The sect’s name came from motions that early members made in spiritual ecstasy. Men in rows on the east faced women in rows on the west. Hymns were traditionally sung without instruments for the sake of simplicity. The original Meetinghouse was begun in 1786, and subsequently enlarged with a gable roof. The Ministry’s living quarters were on the second floor, and a large open space allowed for worship services on the first floor. Two doors on the front of the building served as separate entrances for Brothers and Sisters coming to meeting. No longer used, the building was razed in 1938. The present Meetinghouse was built for the Shirley, Massachusetts Shaker community in 1792-93 by Moses Johnson, who built the Hancock Meetinghouse to a similar gambrel-roofed design; it was purchased and moved to Hancock in 1962. Shaker religious laws stipulated that Meetinghouses “should be painted white without, and of a bluish shade within.” Historic analysis has revealed seven layers of paint. The 1793 Prussian blue paint color was replicated in 2005. The building’s construction enabled the large open dance floor. Hanging platforms allowed the raising or lowering of candles. Note the absence of a cross, altar, stained glass, or other religious symbols. Built-in benches around the walls were for worldly visitors.
<urn:uuid:7f2d73ca-5302-44b8-8a7e-08f5fe54b591>
CC-MAIN-2013-20
http://hancockshakervillage.org/museum/historic-architecture/meetinghouse/
2013-05-22T15:28:18Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.967205
362
Starting from this claim that the king of Persia respected the Jews' God: the only surviving text is the Jewish one. Are there independent ancient Persian records corroborating this? If you are expecting a source on Darius the Mede, as noted in the Bible, then you may be sorely disappointed, since there are no primary sources that have made any connection between the king as noted in the story of Daniel and any living king of Persia. If you can't find a primary source on the king who issued the edict, then you are going to have a hard time finding material for or against something like that proclaimed by any king of the time. Flavius Josephus is the only one who mentions Darius the Mede; Josephus' research is more focused on Jewish history of the time, so it's not going to be an independent corroboration, and even his mention cannot be linked to a known ruler. The only record of the time I have seen that you might be referring to is for Cyrus the Great, who allowed the Jews to return home and practice their religion. His general attitude was to allow worship in the countries he conquered and ruled. There is an overview of the Jews in the Achaemenid period at Iranica Online which covers their history in Babylon. If you read between the lines of the Jewish sources you can see the other historical records there, and the overall picture seems to be one of general neglect of Jewish communities so long as they went along with the government of the time.
<urn:uuid:802f5c41-b3f3-4f7f-aee0-516f75289205>
CC-MAIN-2013-20
http://history.stackexchange.com/questions/810/are-there-any-independent-historical-records-of-ancient-persia-allowing-freedom
2013-05-22T15:27:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.97626
306
Easter Games and Activities
Pen and Paper Activities
By Alecia Dixon

How Many Eggs?
- Large glass jar filled with chocolate eggs
- Small pieces of paper
- Easter basket
How To Play: Have children write their name on a small piece of paper along with their guess as to how many chocolate eggs are in the jar. The child who guesses the right number or comes closest takes the jar of candy home to share with their family.

- Easter basket
- Papers with Easter objects written on them
How To Play: Put papers with Easter objects written on them into the basket. Divide children into two groups. Flip a coin to see which team goes first. Invite a child from the playing team to approach the chalkboard, draw a slip of paper and read it to themselves. On your mark, the child should then draw the object in hopes that his/her team members will guess the object on their paper. If the team guesses correctly before time runs out, they score a point. If the playing team does not guess correctly, the other team has five seconds to try to come up with the correct answer. If they guess correctly, they score a point and it is their turn to play. This game can be simplified for young players by utilizing words such as carrot, bunny, or candy. Make it more difficult for older players by using short phrases: chocolate bunnies taste good, marshmallow chicks are yellow, etc.

- A cut-out bunny for each child
- Cotton balls (for the tail)
How To Play: Everyone makes their own version of the Easter Bunny. Hang them up for decoration and, if you wish, have a Beauty Contest. Recognize the most creative, prettiest, funniest, etc.
<urn:uuid:32e6340f-b0b7-4449-8dab-40a1de7c789f>
CC-MAIN-2013-20
http://holidays.kaboose.com/easter-pen-paper-activities.html
2013-05-22T15:35:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.943573
357
Demystifying the Chinese Economy
Abstract: China was the largest and one of the most advanced economies in the world before the eighteenth century, yet declined precipitately thereafter and degenerated into one of the world's poorest economies by the late nineteenth century. Despite generations' efforts for national rejuvenation, China did not reverse its fate until it introduced market-oriented reforms in 1979. Since then it has been the most dynamic economy in the world and is likely to regain its position as the world's largest economy before 2030. Based on economic analysis and personal reflection on policy debates, Justin Yifu Lin provides insightful answers to why China was so advanced in pre-modern times, what caused it to become so poor for almost two centuries, how it grew into a market economy, where its potential is for continuing dynamic growth and what further reforms are needed to complete the transition to a well-functioning, advanced market economy.
Bibliographic Info: This book is provided by Cambridge University Press in its series Cambridge Books with number 9780521181747 and published in 2011. Provider web page: http://www.cambridge.org. To our knowledge, this item is not available for download.
Related research:
- Prasad, Eswar & Ye, Lei (Sandy), 2012. "The Renminbi's Role in the Global Monetary System," IZA Discussion Papers 6335, Institute for the Study of Labor (IZA).
- Leon Berkelmans & Hao Wang, 2012. "Chinese Urban Residential Construction to 2040," RBA Research Discussion Papers rdp2012-04, Reserve Bank of Australia.
- Justin Yifu Lin, 2012. "New Structural Economics: A Framework for Rethinking Development and Policy," World Bank Publications, The World Bank, number 2232.
<urn:uuid:94d7d028-4208-416f-b91d-9393c56ceb3c>
CC-MAIN-2013-20
http://ideas.repec.org/b/cup/cbooks/9780521181747.html
2013-05-22T15:07:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.883081
519
Body Fat % Calculator: Body fat measurements are now recognized as superior methods for measuring "weight loss." When a person declares that they want to "lose weight," what they often mean is that they want to lose fat. So now that you've had your body fat percentage measured, what does the number really mean? First, your body fat percentage is simply the percentage of fat that your body contains. If you are 150 pounds and 10% fat, it means that your body consists of 15 pounds fat and 135 pounds lean body mass (bone, muscle, organ tissue, blood and everything else). A certain amount of fat is essential to bodily functions. Fat regulates body temperature, cushions and insulates organs and tissues and is the main form of the body's energy storage. The following table describes body fat ranges and their associated categories:
Basal Metabolic Rate (BMR) Calculator: Your BMR, or basal metabolic rate (metabolism), is the energy (measured in calories) expended by the body at rest to maintain normal bodily functions.
Food & Calorie Search: Search our food database, containing over 50,000 food items, for their calorie content.
Calculate your pace per mile or per kilometer.
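The site does not say which formulas its calculators use, so the following is only a minimal illustrative sketch of the arithmetic described above; the function names and the pace example are assumptions, not the site's implementation.

```python
# Minimal sketch of the arithmetic described above. The site's actual
# calculator formulas are not specified, so these helpers are illustrative only.

def fat_and_lean_mass(weight_lbs: float, body_fat_pct: float) -> tuple[float, float]:
    """Split total body weight (pounds) into fat mass and lean body mass."""
    fat_lbs = weight_lbs * body_fat_pct / 100.0
    lean_lbs = weight_lbs - fat_lbs
    return fat_lbs, lean_lbs

def pace_per_mile(total_minutes: float, miles: float) -> float:
    """Average pace in minutes per mile."""
    return total_minutes / miles

if __name__ == "__main__":
    fat, lean = fat_and_lean_mass(150, 10)   # matches the example in the text
    print(fat, lean)                         # 15.0 135.0
    print(pace_per_mile(30, 3.1))            # ~9.68 min/mile for a 30-minute 5K
```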
<urn:uuid:8fc61820-fe5a-4096-a144-6fefe1fa32a2>
CC-MAIN-2013-20
http://iron90.com/iron_system.php?page_id=54
2013-05-22T15:28:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.93979
256
Part two in Latoya's series: Because so many women of color have such little wealth other than the value of a vehicle, the rest of the paper uses the definition of wealth that excludes vehicles in order to capture the economic vulnerability experienced by women of color. Excluding vehicles, single black women have a median wealth of $100 and Hispanic women $120 respectively, while their same-race male counterparts have $7,900 and $9,730. The median wealth of single white women is $41,500. To put it another way, single black and Hispanic women have one penny of wealth for every dollar of wealth owned by their male counterparts and a tiny fraction of a penny for every dollar of wealth owned by white women. With so little in reserve, half of all single black and Hispanic women could not afford to take an unpaid sick day or to even have a major appliance repaired without going into debt. The precarious financial situation of women of color is also evident when looking at those with zero or negative wealth (negative wealth occurs when the value of one's assets is lower than the value of one's debts). Nearly half of all single black and Hispanic women have zero or negative wealth (see Figure 2). Pre-retirement wealth disparities for women of color affect them drastically in their retirement years. According to federal poverty standards, poverty rates for people age 65 and over are highest for women of color. In 2007, 16.7% of white women living alone were poor, but 26% of Asian women living alone, 38.5% of black women living alone, and 41.1% of Hispanic women living alone were poor. What does it mean when we talk about the difference between wealth and income? These two terms are not to be conflated. Someone can be a high earner, but still have no wealth at all – it is as simple as spending more than you earn. It doesn't matter what the money is spent on – it can go up your nose, on your feet, to your landlord or be thrown in mass amounts on a stage. However, if you manage to make a million dollars a year, and you spend $1.5 million, you are not wealthy. Not even close. This is why this median figure of $5 is so important to understand. At various points in the course of the report, the data for women of color (again, defined as black and Latina, unless otherwise indicated) tend to fall around zero or five dollars, depending on the unit of measurement. It is also important to understand the difference between a median number and an average number. I emailed report author Mariko Chang to clarify why the median number was generally used in the report: In wealth research, it is conventional to use the median instead of the average for the following reason: Because wealth is so unequally distributed, with a few people owning extremely large amounts of wealth and the rest owning much smaller amounts, the few very wealthy people pull the average higher. The median, on the other hand, is a better indicator of the wealth of the more "typical" case. (If we rank people or households on a continuum from least wealth to most wealth, the median is the point at which half have more wealth and half have less.) Because the median is a better indicator of the more typical case, people and organizations that study wealth report the median (although some report both). Since today is Friday, we are going to ease up on the data and instead take a moment to reflect: how did you learn your lessons about wealth, income, and money?
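To make the median-versus-average distinction above concrete, here is a tiny illustrative sketch with invented numbers (not figures from the report): a single very large holding drags the mean far upward while barely moving the median.

```python
# Illustrative only: invented wealth figures showing why a few very large
# holdings pull the average (mean) up while the median stays near the typical case.
from statistics import mean, median

wealth = [0, 100, 120, 5000, 7900, 9730, 41500, 5_000_000]

print(f"mean:   ${mean(wealth):,.0f}")    # dominated by the single large value
print(f"median: ${median(wealth):,.0f}")  # the 'typical' case in the middle
```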
Monday: Differences in financial starting points and class mobility This post was republished from the blog Racialicious, with permission.
<urn:uuid:6119070e-246b-4799-9140-cb0c8fb97878>
CC-MAIN-2013-20
http://jezebel.com/5492099/women-of-color-and-wealth--looking-at-the-wealth-gap-part-2?tag=women-of-color
2013-05-22T15:14:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.967431
754
Why do Hurricanes Move?
Hurricanes are "steered" by the prevailing wind currents that surround the storm from the surface to 50,000 feet or more. The storms move in the direction of these currents and with their average speed. The movement of a hurricane affects the speed of the winds that circulate about the center. On one side of the storm, where the circulating winds and the entire storm are moving in the same direction, the wind speed is increased by the forward movement of the storm. On the opposite side of the storm, the circulating wind speed is decreased by the forward motion. In the Northern Hemisphere, the right side of a hurricane, looking in the direction in which it is moving, has the higher wind speeds and thus is the more dangerous part of the storm. The average tropical cyclone moves from east to west in the tropical trade winds that blow near the equator. When a storm starts to move northward, it exchanges easterly winds for the westerly winds that dominate the temperate region. When the steering winds are strong, it is easier to predict where a hurricane will go. When the steering winds are weak, a storm seems to take on a mind of its own, following an erratic path that makes forecasting very difficult. The major steering wind influence of most U.S. hurricanes is an area of high pressure known as the Bermuda High. This high pressure dome is over the eastern Atlantic Ocean in the winter, but shifts westward during the summer months. The clockwise rotation of air associated with high pressure zones is the driving force that causes many hurricanes to deviate from their east-to-west movement and start northward. Sometimes this is favorable: the hurricane never reaches the shore and blows out into the Atlantic Ocean. Other times, hurricanes south of the U.S. are steered northward directly towards the coastline. Because hurricane movement can be very erratic, scientists are increasingly called on to track these storms. NASA has been at the forefront of the design, development, and deployment of Earth remote sensing spacecraft designed to do just this.
GOES - I/M Missions
Over the past 30 years scientists have stated a need for continuous, dependable, and high-quality observations of the Earth and its environment. The new generation Geostationary Operational Environmental Satellites (GOES I through M) provide half-hourly observations to fill that need. The instruments on board the satellites measure atmospheric temperature, winds, moisture, and cloud cover. The GOES I-M series of satellites is owned and operated by the National Oceanic and Atmospheric Administration (NOAA). NASA manages the design, development, and launch of the spacecraft. Once the satellite is launched and checked out, NOAA assumes responsibility for it. Each satellite in the series carries two major instruments: an Imager and a Sounder. These instruments acquire high resolution visible and infrared data, as well as temperature and moisture readings from the atmosphere. They continuously transmit this information to ground terminals where it is processed for rebroadcast to primary weather services, both in the US and around the world. For more information about GOES visit their website. The following movie shows Hurricane Andrew moving from the Atlantic Ocean to the Gulf of Mexico in 1992.
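As a rough illustration of the left/right asymmetry described above, here is a minimal sketch that simply adds or subtracts the storm's forward speed from its circulating wind speed. The numbers are made up and the simple addition is a first-order approximation for illustration only, not NOAA's or NASA's actual analysis method.

```python
# Illustrative sketch only: first-order estimate of the left/right wind
# asymmetry of a Northern Hemisphere hurricane. Values are invented.

def side_winds(rotational_mph: float, forward_mph: float) -> dict:
    """Approximate sustained winds on each side of a moving storm.

    Looking along the direction of motion, the storm's forward motion adds
    to the circulating wind on the right side and subtracts on the left.
    """
    return {
        "right_side_mph": rotational_mph + forward_mph,
        "left_side_mph": rotational_mph - forward_mph,
    }

if __name__ == "__main__":
    winds = side_winds(rotational_mph=100.0, forward_mph=15.0)
    print(winds)  # {'right_side_mph': 115.0, 'left_side_mph': 85.0}
```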
<urn:uuid:168c609e-1a5b-44ed-9e19-e4093a1d4684>
CC-MAIN-2013-20
http://kids.mtpe.hq.nasa.gov/archive/hurricane/movement.html
2013-05-22T15:36:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.941706
704
Universal emotions like anger, sadness and happiness are expressed nearly the same in both music and movement across cultures, according to new research. The researchers found that when Dartmouth undergraduates and members of a remote Cambodian hill tribe were asked to use sliding bars to adjust traits such as the speed, pitch, or regularity of music, they used the same types of characteristics to express primal emotions. What’s more, the same types of patterns were used to express the same emotions in animations of movement in both cultures. “The kinds of dynamics you find in movement, you find also in music and they’re used in the same way to provide the same kind of meaning,” said study co-author Thalia Wheatley, a neuroscientist at Dartmouth University. The findings suggest music’s intense power may lie in the fact it is processed by ancient brain circuitry used to read emotion in our movement. “The study suggests why music is so fundamental and engaging for us,” said Jonathan Schooler, a professor of brain and psychological sciences at the University of California at Santa Barbara, who was not involved in the study. “It takes advantage of some very, very basic and, in some sense, primitive systems that understand how motion relates to emotion.” Why people love music has been an enduring mystery. Scientists have found that animals like different music than humans and that brain regions stimulated by food, sex and love also light up when we listen to music. Musicians even read emotions better than nonmusicians. Past studies showed that the same brain areas were activated when people read emotion in both music and movement. That made Wheatley wonder how the two were connected. To find out, Wheatley and her colleagues asked 50 Dartmouth undergraduates to manipulate five slider bars to change characteristics of an animated bouncy ball to make it look happy, sad, angry, peaceful or scared. “We just say ‘Make Mr. Ball look angry or make Mr. Ball look happy,’” she told LiveScience. To create different emotions in “Mr. Ball,” the students could use the slider bars to affect how often the ball bounced, how often it made big bounces, whether it went up or down more often and how smoothly it moved. Another 50 students could use similar slider bars to adjust the pitch trajectory, tempo, consonance (repetition), musical jumps and jitteriness of music to capture those same emotions. The students tended to put the slider bars in roughly the same positions whether they were creating angry music or angry moving balls. To see if these trends held across cultures, Wheatley’s team traveled to the remote highlands of Cambodia and asked about 85 members of the Kreung tribe to perform the same task. Kreung music sounds radically different from Western music, with gongs and an instrument called a mem that sounds a bit like an insect buzzing, Wheatley said. None of the tribespeople had any exposure to Western music or media, she added. Interestingly, the Kreung tended to put the slider bars in roughly the same positions as Americans did to capture different emotions, and the position of the sliders was very similar for both music and emotions. The findings suggest that music taps into the brain networks and regions that we use to understand emotion in people’s movements. That may explain why music has such power to move us — it’s activating deep-seated brain regions that are used to process emotion, Wheatley said. 
“Emotion is the same thing no matter whether it’s coming in through our eyes or ears,” she said.

I was working with children painting big pictures on walls in a Palestinian refugee camp in Beirut last week, and this was the view from my window. If you wanna, check out http://turpsmagazine.tumblr.com/ for other drawings and nightmares.

Where has this been all my life!?

“Said” is an invisible word. People don’t notice it. They notice quotes and the nouns that address who is saying it. My personal rule is to use a word other than “said” if you otherwise can’t tell the emotion that is being portrayed in speech. Overuse of alternatives just makes you look like you’re trying too hard. Writing is about style but, like art, you have to know some basics before delving into your novel.
<urn:uuid:eab4ec7c-8e9c-47bf-b0ea-84426dc249ce>
CC-MAIN-2013-20
http://kodiakpendragon.tumblr.com/page/2
2013-05-22T15:15:27Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.954732
946
Winthrop envisioned for his "City on a Hill" a tightly knit, unified community centered around Boston. But almost immediately upon arrival, the colonists began to disperse along the hills and rivers of New England. One of the most important migrations from the mother colony was led by the great Thomas Hooker to Hartford, Connecticut. Hooker was the most famous of all the English preachers to make the journey to New England. He was a learned scholar, widely published, and his preaching had electrified the English countryside, winning converts by the thousands. As Perry Miller recounts in his book, Errand Into the Wilderness, Samuel Collins, an agent of Archbishop Laud, warned in 1629 that Hooker had become too powerful, and threatened to undermine the established church: "I . . . have seen the people idolizing many new ministers and lecturers; but this man surpasses them all for learning. . . [and] gains more and far greater followers than all before him." Hooker was forced into exile. He traveled first to Holland and then, following the example of the Mayflower Pilgrims, made the holy pilgrimage to Massachusetts Bay. But even after his departure from England, Collins acknowledged that Hooker's "genius" still "haunts all the pulpits." Hooker and Winthrop were good friends, which is why Winthrop was so bitterly disappointed when Hooker petitioned the General Court to allow his congregation to move to Connecticut. Winthrop argued that Hooker was breaking the covenant by leaving the colony. Moreover, said Winthrop, it was unwise for Christians to so divide themselves, leaving themselves open to attack from the Indians and perhaps even the British Navy: "The departure of Mr. Hooker would not only draw many from us, but also divert many friends who would come to us." But Winthrop apparently lost the argument, at least as far as Hooker's people were concerned. Whether Hooker got permission from the Massachusetts General Court to leave is not clear. In his May 1636 journal entry, Winthrop notes, without elaboration, that "Mr. Hooker, Pastor of the Church at Newtown, and most of his congregation, went to Connecticut." As Perry Miller points out, Hooker was more radical in his Protestant beliefs than Winthrop. Though he was not himself an avowed Separatist, he had many Separatist followers. Hooker's views on the congregational polity were essentially democratic, and are explained in his great work, A Survey of the Summe of Church Discipline. We have only fragments of his Survey, as the bulk of the manuscript was lost in a trip across the Atlantic on the way to England for publication. It was intended as a manifesto, explaining Hooker's views on a congregational polity. His hope was to persuade the Church of England to organize itself along congregational instead of episcopal lines, and also to explain to English Church officials what he was doing in New England, which in essence was to demonstrate how he believed a truly Christian society ought to operate. The English Church, predictably, condemned as heretical Hooker's apologetics on behalf of "liberty of conscience." Thomas Hooker is considered by many to have played the role of John the Baptist for Thomas Jefferson in the sense that he laid the foundation for American republican democracy. Again, though, Hooker's primary concern was not politics, but the establishment of assemblies of worship resembling the churches found in the Book of Acts. 
Indeed, this was the consistent pattern behind the settlement of New England, with each colony attempting to create a more pristine Christian society, and each founder, usually a minister, trying to "out-Protestantize" everyone else. Hooker, for example, apparently felt that Winthrop's efforts in Massachusetts Bay had fallen short of the mark. According to Cotton Mather, "The very spirit of his [Hooker's] ministry lay in the points of the most practical religion, and the grand concern of a sinner's preparation for, and implantation in, and salvation by, the glorious Lord Jesus Christ." By May 1637, the inhabitants of Connecticut were holding their own General Court. Hooker, unlike Bradford and Winthrop, did not keep a journal. So the facts of his Hartford ministry are fragmentary, derived from letters and notes taken by those who heard him. His most famous sermon, delivered before the Connecticut General Court on May 31, 1638, inspired the Fundamental Orders of Connecticut, which was the first written constitution in America, and very much resembles our own Federal Constitution. Direct quotes are impossible to reconstruct exactly, as they exist in a barely decipherable journal, written by 28-year-old Henry Wolcott. But the essence of Hooker's Election Day sermon was as follows: On January 14, 1639, the Fundamental Orders of Connecticut were adopted. The deliberations of the assembly have perished, but, as Marion Starkey points out in her book, The Congregational Way, the principles are a mirror of the mind of Thomas Hooker. The Fundamental Orders included many provisions essential to free and open government. Each town was to have proportional representation, and each was to send its elected representatives to the government in Hartford. In the event that the governor failed to call a meeting of the General Court, or attempted to govern contrary to established laws, the freemen were entitled to "meet together and choose to themselves a moderator," after which they "may proceed to do any act of power which any other general court may do." This was an important affirmation that the power of government resided with the people, not the magistrates. Always stressed was the voluntary nature of the covenant the people were entering into, that the purpose of government was to serve, not rule, the people, and that to do this the administration of government must be regular and orderly, not arbitrary: "The Word of God requires that to maintain the peace and union of such people there should be an orderly and decent government established according to God, in order to dispose of the affairs of the people." The Fundamental Orders of Connecticut was the most advanced government charter the world had ever seen in terms of guaranteeing individual rights. 
But while certainly the effect of the charter was to ensure the establishment of free and democratic government, its primary purpose in the minds of the people of Connecticut was to establish a commonwealth according to God's laws and to create an environment conducive to spreading the Gospel: We "therefore associate and conjoin ourselves to be as one public state or commonwealth; and to do for ourselves and our successors and such as shall be adjoined to us at any time hereafter, enter into a combination and confederation together, to maintain and preserve the liberty and purity of the Gospel of our Lord Jesus which we now profess, as also the discipline of the churches, which according to the truth of the said Gospel is now practiced amongst us; as also in our civil affairs to be guided and governed according to such laws..." The Fundamental Orders of Connecticut represented the first known time when a working government was framed completely independently, without a charter or some other concession from a previously existing regime, but by the people themselves. It provided for regular elections, while setting strict limits on the power of those elected. In Massachusetts, the franchise was limited to proven church members, "visible saints." In Connecticut, however, voters merely had to be inhabitants of "honest conversation," according to Perry Miller, though they could not be Quakers, Jews, or Atheists. Elected officials had to be property owners, believers in the Trinity, and of good behavior. And the governor had to be a member in good standing of an approved congregation. Today, these requirements would seem severe, but in the 17th century such an easygoing regime was unprecedented. Hooker criticized other New England congregations for being too quick to censure and excommunicate. His impulse was always to lower standards for church membership, believing it was far better to let in a few "hypocrites" than mistakenly to exclude true Christians. "He that will estrange his affection, because of the difference of apprehension in things difficult, he must be a stranger to himself one time or other," wrote Hooker in the preface to his Survey. "If men would be tender and careful to keep off offensive expressions, they might keep some distance in opinion in some things without hazard to truth or love." Hooker thought church discipline should be as uncoercive as possible. During his entire ministry only one person was excommunicated. All this, however, does not mean that Hooker was a theological liberal; far from it. Hooker believed with Paul the Apostle that Scripture was absolutely inerrant - "All Scripture is inspired by God . . ." (2 Tim. 3:16) - and that for every act of church government a specific chapter and verse must be cited. No church officer was to act according to his own discretion, but had to point to a biblical mandate. This approach lent itself to the creation of constitutional government. For God is not a capricious ruler, but spells out in clear terms the laws His people are to follow and the conditions for eternal life. By reading the Bible, one can know exactly where one stands with regard to salvation, or damnation. Though Hooker was a democrat who believed fervently in protecting the people's right to be wrong, he was not anything approaching a moral relativist. Not only did he believe there was a definite right and wrong, he also believed there was one, and only one, way to Heaven. 
In a long sermon on the prodigal son, Hooker described man without Christ as damned and undeserving of mercy. "If the Lord may damn him, He may, and if He will save him, He may." Moreover, good works could not save man from his fiery fate, because every good work would be canceled out by a hundred sins: "You cleave to these poor beggarly duties and (alas) you will perish for hunger." "The Devil slides into the heart unexpected and unseen because he comes under a color of duties exactly performed . . ." Salvation comes from faith in Christ, or as John 14:6 says: "Jesus said to him, 'I am the way, and the truth, and the life; no one comes to the Father, but through Me.'" But, as Hooker put it, Christ came "not to call the righteous, that is, men who look loftily in regard to what they do. Christ came to call and save the poor broken-hearted sinners." Hooker was convinced, however, that more sinners would be saved in a forgiving society than under the more regimented Massachusetts Bay, and certainly more than in England under the watchful eye of Archbishop Laud, where outward appearance (rather than genuine conversion) determined whether one was an Anglican in good standing. Though God Himself is infinitely humane, as illustrated in the Gospel when Jesus forgives Mary Magdalene and heals the sick, Hooker warned that "mercy will never save you unless it rules you too." This idea became the underlying principle of Connecticut's government. It also established a tradition of American generosity and mercy unparalleled in history. German soldiers at the end of World War II, for example, threw their rifles down and eagerly surrendered to the American side rather than risk capture by the Russians. And after defeating them in war, America rebuilt Japan's and West Germany's industrial bases at enormous expense. No other people has been as magnanimous toward its enemies as the American people, though we have often paid dearly for what most of the world would consider hopeless naivete. Mercy, forgiveness, and almost limitless charity are distinctly American characteristics that can be traced to the heart of Puritan society in 17th-century New England. The Puritan respect for the importance of the individual soul - which included the non-Christian soul - was essential for the development of American constitutional democracy. Jesus taught that with God not one sparrow is forgotten, which is a major difference between Christianity and the pantheistic religions of the East, such as Hinduism, in which nature is god. In pantheistic religions, all of nature is part of one unified living organism, in contrast to the Christian view in which every individual is sacred and distinct, and is, therefore, to be treated with utmost reverence. This was especially true of Puritanism, the focus of which was the conversion experience, and the personal relationship to Christ. For the Puritan, the soul was the stage on which the spiritual drama occurred, and where, in the end, he was saved or damned depending on the decision he made to accept or reject Christ's offer of salvation. The decision for Christ marked a crucial turning point, and was the beginning of a transformation of the individual. Some, indeed Christ Himself, called this transforming experience being "born again." It was the first step on a radically new journey. Thus, we can see why Thomas Hooker thought "liberty of conscience" so critical to the biblical commonwealth. Salvation was a matter between the individual and God, not the individual and the state. 
One did not move one inch closer to Heaven when forced to pray. The role of the state, in Hooker's mind, was to permit God's grace to easily penetrate the individual heart, to create social conditions in which all Christians could be priests; and this brings up another point of vital importance. These Protestants did not want to eliminate the priesthood, as is commonly suggested; they wanted to expand the franchise to include all believers. The Protestant believes he has direct access to God through Scripture and prayer, and that the Holy Spirit will steer him clear of serious error. As John's Gospel states: "When He, the Spirit of truth, comes, He will guide you into all the truth" (John 16:13). It is everyone's duty, thought Hooker, to preach and baptize. Moreover, if all believers could be priests, then all people could also be rulers. If the spiritual franchise could be expanded, then so could the political franchise. The existence of Hooker's colony, its ability to attract settlers, and its constitutional protection of individual liberties put pressure on Massachusetts Bay to adopt a more formal constitution of its own. Winthrop, of course, wanted regular elections. But once elected, he thought it was up to the magistrates to make decisions according to their own interpretation of the Bible. Hooker, however, took issue with Winthrop, thinking it gave the Bay government too much discretion. Hooker recognized that many civil matters were not explicitly covered in Scripture, which meant that laws had to be carefully crafted after much deliberation and by following Scriptural principles: "That in the matter which is referred to the judge, the sentence should lie in his breast, or be left to his discretion, according to which he should go, I am afraid it is a course which wants for both safety and warrant," said Hooker. "I must confess, I look at it as a way which leads directly to tyranny, and so to confusion, and must plainly profess, it was my liberty, I should choose neither to live nor leave my posterity under such a government." The voters of Massachusetts agreed with Hooker that the colony needed to codify a formal body of law, provide for due process, and delineate specific penalties for particular offenses. As John Cotton put it, "If you tether a beast at night, he knows the length of the tether by morning." At first Winthrop resisted the movement to further restrict the government on the grounds that magistrates ought to have flexibility to deal with situations as they came up. But in the end he gave in, acknowledging in a journal entry in 1639 that "the people . . . desired a body of laws, and thought their condition very unsafe while so much power rested in the discretion of magistrates." The Massachusetts "Body of Liberties," drawn up by Nathaniel Ward of Ipswich, was passed in 1641. It included 98 specific propositions, the purpose of which was to protect what they considered to be the sanctity of life, liberty, property, and reputation - foreshadowing the Bill of Rights. This historic charter declared it a violation of common law to impose taxation without representation; said that no one shall be deprived of life, liberty, or property without due process of law; and guaranteed the right of the accused to be tried by a jury of one's peers. The "Body of Liberties" also forbade cruel and unusual punishment, the mistreatment of animals, and the beating of one's wife, "unless it be in his own defense upon her assault"! 
In 1644, the Bay adopted the secret ballot, with Indian corn representing aye votes and beans signifying the nayes. The immense contribution the Puritans of New England made to the world's understanding of how to write a constitution cannot be overstated. When one studies the precise nature of the laws crafted by these early assemblies, and considers the sophisticated level of political discourse, one cringes in shame to witness the sheer ignorance displayed in congressional debates today. We hear many references to lofty phrases like "inalienable rights" and "general welfare" by our demagogic politicians who have little demonstrated understanding of what these terms mean. The Puritans of colonial America had an understanding of freedom that was far in advance of our own. They saw, for example, that the spirit of freedom and the spirit of Christianity reinforced each other. But they also understood that religious and civil authority operated in separate spheres. They had started to recognize, before any other nation, that the Holy Spirit did not need His power enhanced by government officials; that to have the government engaged in the regulation of matters religious was more often than not to put hypocrites in charge of the moral health of the people, and thus actually undermined the cause of Christianity. Winthrop and Hooker consistently pointed out that Jesus is perfectly content with the power He already wields. He did not ride into Jerusalem with an army, He came on a donkey. His authority rests in His Spirit convicting men's hearts, not in His wish to see people burned at the stake for disputing an ecclesiastical pronouncement. Though it is true that much New England law came directly from the Old and New Testaments,1 the tendency of the Puritans was to erect a wall of separation between the responsibilities of church and state, to paraphrase Jefferson. They saw the roles of magistrates and clergy as distinct, which is why ministers in Puritan New England were prohibited from holding a civil office. But the Puritans also believed that liberty would not survive unless it was firmly grounded in a healthy fear of God and a spirit of Christian charity. For if a man is not restrained by fear for his soul, what is to prevent him from pursuing his own interests at the expense of everyone else? A large chasm exists between what is lawful and what is ethical. And while the policeman and the courts can punish people for committing egregious offenses against society, only religion can regulate the more subtle area of morals. Government's highest responsibility is to safeguard liberty, while salvation is the supreme aim of the individual. Moreover, for a people to remain free, they must vigilantly attend to matters concerning their character and souls. The citizen must pursue with diligence not just his own interests, but the interests of his neighbor. John Winthrop, as much as any, embodied this ideal. He was charitable beyond reasonable expectation, and frequently sacrificed his own welfare for the good of the community. When the Massachusetts treasury was out of funds, he donated the proceeds from the sale of his Groton Manor to pay public expenses. When he saw others in need, he gave them money, food, and shelter from his own resources. 
His generosity toward others was unsurpassed, yet he was so frugal and austere concerning his own comforts that his friends often called his attention to Paul's admonition to Timothy, who apparently had a similar disposition: "No longer drink water," says Paul, "but use a little wine for the sake of your stomach . . . " (1 Tim. 5:23). John Winthrop was a superior man of impeccable character. William Hubbard, an early historian of Massachusetts Bay, provided us an appropriate summary of Winthrop's life: He "had done good in Israel, having spent not only his whole estate. . . but his bodily strength and life in the service of the country." Winthrop saw clearly that as self-sacrifice was an essential Christian trait, so self-sacrifice was also vital to the preservation of liberty and independence. What we see emerging in early New England, almost unnoticed, was an utterly new political culture. Alexis de Tocqueville saw this clearly, and compares the conditions of Europe with those of New England in 1650: "Everywhere on the Continent at the beginning of the 17th century absolute monarchies stood triumphantly on the ruins of the feudal or oligarchic freedom of the Middle Ages. Amid the brilliance of the literary achievements of Europe, then, the conception of rights was perhaps never more completely misunderstood at any other time; liberty had never been less in men's minds. And just at that time these very principles, unknown to or scorned by the nations of Europe, were proclaimed in the wilderness of the New World, where they were to become the watchwords of a great people." * * * The Puritans knew that conditions of political and economic well-being depended on an educated population. The American belief that every citizen must have a certain amount of education, and a certain degree of literacy and mathematical competency, is a Puritan legacy. In Europe, education, especially advanced education, was limited to the extreme upper crust of society. The lower classes, it was thought, were unfit to be put through schools. Education in Europe was to be reserved for the ruling class. Oxford and Cambridge were England's only two universities. In the Puritan mind literacy was important not only to ensure a reasonably informed electorate, essential for the survival of democratic government, but it also played an important role in the individual's walk with the Lord. The Puritans stressed the individual's personal relationship with Jesus. To read the Bible or follow the logic of a sermon requires a certain familiarity with basic concepts. That a religious movement, which shunned philosophy, was strictly fundamentalist, and believed completely in the inerrancy of Scripture, produced the most educated nation of people the world had ever seen is one of the remarkable paradoxes, and lessons, of history. The building of schools was one of the first orders of business when Winthrop and his followers arrived in Massachusetts. Puritan dissidents from Cambridge and Oxford provided excellent teachers. By 1640, there were 113 men with university educations living in New England, and 71 in Massachusetts. This was a much larger concentration of educated men than could be found in England, or anywhere else. The Puritans thought it vitally important that every congregation have a learned pastor who could inspire, point out doctrinal errors, and defeat the forces of darkness. 
The Massachusetts School Act of 1647 stated: "It being the chief project of that old deluder Satan to keep men from knowledge of the Scriptures, as in former times keeping them in an unknown tongue, so that in these latter times, by persuading them from the use of tongues, so that at least, the true sense and meaning of the original might be clouded with false glosses of saint seeming deceivers; and that learning may not be buried in the grave of our forefathers, in church and commonwealth, the Lord assisting our endeavors." Harvard was founded in 1636, after newly arrived John Harvard donated 777 pounds and a library of 400 volumes for the purpose of training Puritan ministers. King Charles had effectively purged the Puritans from the English universities. Hence the need to establish a second Cambridge. "The main end of the scholar's life and studies," said the Harvard Rules and Precepts, "is to know God and Jesus Christ which is eternal life. Therefore, to lay Christ in the bottom is the only foundation of all sound knowledge and harmony." In addition to reading the Scriptures, history, literature, and theology, Harvard students were expected to achieve proficiency in mathematics and the sciences: calculus, geometry, astronomy. Knowledge in every area, for the Puritan, far from undermining his religious faith, served to magnify God's glory and always shed new light on the magnificence of His creation. Scholarship and scientific inquiry aided the Harvard student in searching out the Holy Spirit. The founding of Harvard was yet another Puritan challenge to royal authority. In England, the king had monopolistic power over the granting of degrees, as Oxford and Cambridge were arms of the government and private alternatives were illegal. Harvard had no royal charter, and hence no authority in King Charles' mind to award college diplomas. But the Puritans in the Bay did not recognize the King's education monopoly, and Harvard granted its first degree in 1642. The fact that Harvard continued to award college diplomas despite official protests from England was, in effect, an affirmation of New England's independence from Crown rule. Harvard's existence was a perpetual source of irritation to royal authorities, particularly since more than half of its graduates during the 17th century became ministers of a dissident faith. The construction of a printing press in 1639 in Harvard Yard allowed the proliferation of publications, mostly sermons, Psalm books, and almanacs. This material was viewed as subversive by officials in England, as it clearly did not conform to the Book of Common Prayer. Obvious from the start was the Puritan penchant for rebellion against British rule, and particularly against impositions from the English Church. But King Charles could do little about transgressions in New England, as he had his hands more than occupied with the increasingly powerful Puritan movement at home. As a result, New England operated for the most part as an independent nation, and continued to evolve into a new and distinctive society, consciously and defiantly creating itself. New England's reputation as a center of American learning is a legacy of the Puritan stress on diffusing knowledge as broadly as possible. 
Yale was established in 1701, also for the purpose of training Congregational clergy, in response to the emergence at Harvard2 of what some thought to be erroneous Arminian theology (that opposed strict Calvinist predestination, but favored election and salvation by grace).3 The two universities found themselves in competition with each other for students and introduced for the first time market forces to higher learning. The effect was to encourage very low tuition. It is more profitable for an institution to educate many at less cost per student than to admit only the few who could afford an expensive education. Hence, college tuition in the colonies was about one-tenth what it cost to educate someone at Oxford or Cambridge. At Dartmouth, established by another Congregational minister to bring the Gospel to the Indians, it was common for students to work their way through college. The result of this uniquely American approach to education was to spread, rather than deepen, knowledge. And while the quality of education was less than what, say, an earl or duke might receive from Oxford, it brought the possibility of higher education to the general population, in contrast to England and the Old World, where the opportunity for schooling of even the most elementary sort was non-existent for the great majority of people. In the Puritan view, the purpose of education was not to groom the children of a ruling aristocracy in order to set them apart from the general population, but to supply the communities with knowledgeable ministers, doctors, teachers, lawyers, businessmen, and civic leaders. Education in New England was not to be a special privilege limited to a few, but a vehicle to elevate everyone in the community. The proliferation of schools and the availability of instruction made it impossible for a powerful aristocracy to establish itself in New England. It should be obvious why this development, Puritan in origin, was so essential to the emergence of democracy in America. As Tocqueville observed, "In America it is religion which leads to enlightenment and the observance of divine laws which leads men to liberty." Government by the people requires an educated people, which helps explain why democratic experiments in the Third World in recent years have mostly failed. Published by the Christian Defense Fund. © Copyright 1997 by the Christian Defense Fund. All rights reserved. © Copyright 1988, Benjamin Hart
<urn:uuid:c9c32aa0-4ef5-4889-9b8a-e5f78ac00a76>
CC-MAIN-2013-20
http://leaderu.com/orgs/cdf/ff/chap07.html
2013-05-22T15:15:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.97734
5,869
I am an amateur naturalist trying to learn something about everything living in my garden. My garden, along with the rest of the UK, is currently buried beneath a thick blanket of snow. Aside from a collection of tits on my bird feeder there's little sign of life. For this posting therefore, I'm falling back to a photo of a crane fly I took in summer (photo 1. Click on photos to enlarge). Dozens of them swarmed out from amongst some garden ivy I happened to disturb at the time. I know almost nothing about flies (diptera). Along with beetles, ichneumon wasps and various other orders of insect however, they strike me as offering rather "good value" to any amateur naturalist keen to make a genuine contribution to science since a) there are very large numbers of them (15,000 species of fly in Europe alone) b) they can display a rich and complex behaviour (see my posting on P. nobilitatus for example) and c) for countless species almost nothing is known. Take hoverflies for example. After hundreds of years of intense study by armies of naturalists there can be few countries in the world whose natural history has been so well catalogued as Britain's. Further, there are few flies as conspicuous as hoverflies. Yet even for these, a staggering 40% of the larvae of the 265 British hoverfly species are simply unknown. In the U.S. it's 93%. (This, at least, was the situation persisting in 1993 when my copy of 'Colour Guide to Hoverfly Larvae' (G.E.Rotheray) was published). Anyway, we're not here to discuss hoverflies. On to the star of today's show... What makes a crane fly a crane fly? Well, firstly flies (=the order diptera) are separated from other insects by the presence of two vestigial wings called halteres. I've ringed these in photo 2. Next, the order diptera is separated into two sub-orders of flies called the nematocera and the brachycera. The split is based on the structure of the antennae - the nematocera all have long, thread-like antennae with more than five segments. You can see this in photo 3. The sub-order nematocera gets further subdivided into more than 70 families of fly, the crane flies amongst them. If you want a fuller flavour of how these families are separated, the redoubtable Field Studies Council has made a key to the families of fly by D.M. Unwin freely available here. It used to be that all crane flies were clumped together in one family called the Tipulidae. At some point however it was decided to split this family into four, the Tipulidae, Pediciidae, Limoniidae and the Cylindrotomidae. I read on the (searchable) Catalog of the Craneflies of the World that worldwide there are more than 10,000 species of Limoniidae, 4000 Tipulidae, nearly 500 Pediciidae and 70-odd Cylindrotomidae. Separating these families is a tricky job. To complicate matters further, recent DNA studies are casting doubt on the very existence of the Limoniidae as a family. The whole thing is rather confusing for the amateur (=me!) and for a long time whilst preparing this posting I despaired of being able to identify my crane fly. Fortunately, rescue was at hand in the form of an excellent set of test keys from Alan Stubbs I found on the Dipterists Forum website. Before discussing these keys I need to say something about how dipterists characterise the wings of flies: An influential theory, originally due to Comstock and Needham in the 1890's, is that way back in prehistory, a first primeval insect wing evolved. 
Exactly how this first wing appeared is still uncertain, but a current theory is that it evolved from the multi-branched external gills seen on the larvae of some aquatic insects such as mayflies. The veins in insect wings are hypothesised to be modifications of these gill 'tubes' (trachea). Anyway, assuming the existence of this ancestral wing, Comstock and Needham named the veins in it the Costa (C), the Radius (R), the Media (M), the Cubitus (Cu) and the Anal veins (A). As these veins fanned out through the ancestral wing they forked. So, for example, the radius vein, R, is supposed to have forked into five sub-veins called (logically enough) R1 to R5. No modern fly has retained all the veins of the ancestral wing; over millennia, evolution has caused different families of fly to lose different veins. Which veins a fly has retained however is a very important clue to its identification. Photo 4 shows the wing of my crane fly labelled up with the help of Alan Stubbs' keys above according to the Comstock-Needham system. There was one more piece of information I needed, namely whether my crane fly's palps (=little facial appendages) were long or short. Photo 5 shows they're short. Armed with this information I was finally able to identify my cranefly as Limonia nebeculosa. Final 'clinchers' were the presence of 3-coloured bands on my fly's femurs (enlarge to see these in photo 1) and the sort of smudgy 'hoop' on the wing I've delineated with the dashed white line in photo 4. Today's posting was a bit technical in places but I am pleased to have identified my first fly to species level. Only another 250,000 to go! Averof M., Cohen S.M., Nature 385, 627-630, 1997. Evolutionary origin of insect wings from ancestral gills. Matthew J., et al. Phylogenetic synthesis of morphological and molecular data reveals new insights into the higher-level classification of Tipuloidea (Diptera), Systematic Entomology (2010), DOI: 10.1111/j.1365-3113.2010.00524.x
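Since the identification above proceeds as a series of either/or character checks (two wings plus halteres, then antennal structure, then palp length and wing markings), a toy dichotomous key can be sketched in code. The following Python fragment is purely illustrative: the couplets and the final family call are drastically simplified assumptions based on this post, not a replacement for Unwin's or Stubbs' keys.

# A toy dichotomous key, loosely following the characters discussed above.
# The couplets are drastically simplified and are illustrative only.

def identify(specimen):
    """Walk a simplified key; `specimen` is a dict of observed characters."""
    if specimen.get("wings") != 2 or not specimen.get("halteres"):
        return "not Diptera"
    # Suborder split: long, thread-like antennae with more than five segments
    if specimen.get("antennae") == "thread-like" and specimen.get("antenna_segments", 0) > 5:
        suborder = "Nematocera"
    else:
        return "Diptera: Brachycera (outside this key)"
    # Crude family hint used here: short palps plus a clouded band on the wing
    if specimen.get("palps") == "short" and specimen.get("wing_marking") == "clouded band":
        return f"Diptera: {suborder}: a limoniid cranefly (cf. Limonia)"
    return f"Diptera: {suborder}: some other cranefly family"

print(identify({"wings": 2, "halteres": True, "antennae": "thread-like",
                "antenna_segments": 14, "palps": "short",
                "wing_marking": "clouded band"}))

A real key, of course, works through many more couplets, most of them based on the wing venation labelled in photo 4.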
<urn:uuid:4dd762c4-d37c-4522-aedf-655a859b1ff8>
CC-MAIN-2013-20
http://lifeonanoxfordlawn.blogspot.com/2010/12/crane-fly-in-family-limonia-nebeculosa.html
2013-05-22T15:00:28Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.945409
1,294
India's melting Himalayan glaciers are a sign of India's booming coal industry, but a technology partnership with the US would be high-impact and low-carbon. Boston; and Ahmedabad, India As the world focuses on climate change this week in Copenhagen, Denmark, delegates are set to negotiate complex deals to avoid a future world of melted Arctic ice sheets and significantly higher sea levels. But there is another ice-rich region the world must also take into consideration: the Himalayan glaciers. India's "water towers" are beginning to melt. For the subcontinent already challenged by growing water needs and on the heels of a boom in coal use, the stakes are high. As the Indian economy continues to grow despite global conditions, Indian coal power (largely driving this growth) is expected to increase 600 percent by the 2030s. This will put India on a path to rival the United States as one of the world's largest users of coal-based power generation, paving the way for a surge in carbon dioxide emissions. Given this daunting scenario, and regardless of the outcome in Copenhagen, India has no choice but to transform itself into one of the world's most innovative nations in climate technology and clean energy. Cash isn't the problem – it's the lack of a comprehensive, long-term plan and India's long-term use of coal power. India and the US should collaborate to actively leverage and focus engineering talent and financial resources to create cleaner low-cost energy technology. Both governments should promote collaborations spurring both the demonstration of US intellectual property in India and the deployment in the US of technological innovations from India.
<urn:uuid:4244f1c2-27d4-477f-9d8a-7fd37042a915>
CC-MAIN-2013-20
http://m.csmonitor.com/Commentary/Opinion/2009/1207/p09s01-coop.html
2013-05-22T15:01:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.942681
328
Yet the humans’ skin could not be too realistic. It was well known that as depictions of humans became more lifelike, audiences would perceive them as more appealing — until the realism reached a certain point, close to human but not quite, when suddenly the depictions would be perceived as repulsive. The phenomenon, known as the "uncanny valley," had been hypothesized by a Japanese robotics researcher, Masahiro Mori, as early as 1970. No one knew precisely why it happened, but the sight of nearly human forms seemed to trigger some primeval aversion in onlookers. Thus, the minute details of human skin, such as pores and hair follicles, were left out of The Incredibles’ characters in favor of a deliberately cartoonlike appearance.
<urn:uuid:772f9d00-28a5-4f3c-bd56-5b755618eb4f>
CC-MAIN-2013-20
http://marginalrevolution.com/marginalrevolution/2008/05/the-uncanny-val.html
2013-05-22T15:07:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.99064
155
Today's column answers a question from a reader that I thought others might be interested in too. She was looking through her recipes and was going to bake a product that called for cream of tartar and wondered what role cream of tartar played in the baking and cooking process. She wondered if it was a leavening agent and if it was similar to other leavening agents, like baking soda and baking powder. The chemical name for cream of tartar is potassium hydrogen tartrate. It's an acid salt and is obtained when tartaric acid is half neutralized with potassium hydroxide, transforming it into a salt. Cream of tartar is most often used as a stabilizer in our baking and cooking processes. It gives more volume to beaten egg whites and it produces a creamier texture in sugary desserts such as candy and frosting because it inhibits the formation of crystals. If you are baking or cooking and don't have cream of tartar, you could substitute white vinegar; however, you may not be satisfied with the results, since the substitution of a liquid for a dry ingredient may not work in some recipes. Cream of tartar is also the acidic ingredient in some brands of baking powder. That leads to another question: is there a difference between baking powder and baking soda? The answer to that is "yes." Baking powder contains sodium bicarbonate, the acidifying agent (cream of tartar) and also a drying agent to keep it from clumping and reacting until you are ready to use it. Baking powder becomes active when moisture is incorporated into it. Baking soda, on the other hand, is pure sodium bicarbonate. When baking soda is combined with moisture and an acidic ingredient (such as yogurt, chocolate, buttermilk, honey), the resulting chemical reaction produces carbon dioxide, which expands with heat and causes baked goods to rise. The reaction begins immediately upon mixing the ingredients, so you need to bake the product right away, or it will not rise properly. You can substitute baking powder for baking soda, although you will need more baking powder and it may affect the taste. You can't substitute baking soda when a recipe calls for baking powder, though, because baking soda lacks the acidity. You can make your own baking powder though by combining baking soda and cream of tartar. Mix two parts of cream of tartar with one part of baking soda to make baking powder. So, there's your food science tidbit for the day! Happy baking! Thank you, reader, for an interesting question.
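As a quick worked example of the two-to-one ratio just described, here is a small Python sketch; the assumption that the homemade mixture is used measure-for-measure in place of commercial baking powder is mine, not the columnist's.

# Split a required amount of baking powder into its homemade ingredients,
# using two parts cream of tartar to one part baking soda (the 2:1 ratio above).
def homemade_baking_powder(amount_tsp):
    cream_of_tartar = amount_tsp * 2 / 3
    baking_soda = amount_tsp / 3
    return {"cream of tartar (tsp)": round(cream_of_tartar, 2),
            "baking soda (tsp)": round(baking_soda, 2)}

print(homemade_baking_powder(1.5))
# -> {'cream of tartar (tsp)': 1.0, 'baking soda (tsp)': 0.5}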
<urn:uuid:79c26d50-6a9e-4279-8afa-4b2aef890b8f>
CC-MAIN-2013-20
http://marshallindependent.com/page/content.detail/id/533588/Leavening.html?nav=5007
2013-05-22T15:36:08Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.955613
521
Begun in 1971 and formalized into a month-long celebration in 2005, National Preservation Month celebrates America's diverse and irreplaceable heritage. This year's theme, This Place Matters, encourages citizens to visit historic places and buildings in their neighborhoods to learn the history of the spaces and what is being done to preserve these areas. Through the National Trust for Historic Preservation's web site citizens can upload pictures and videos of places that matter to them. The State Library of Massachusetts is fortunate to reside within an historic place - The Massachusetts State House. Citizens can celebrate National Preservation Month by visiting the historic building, in person or virtually. The State Library holds many resources about the history of the State House building, the most prolific being The State House Historic Structure Report, a three volume set that details the various additions to the current State House, as well as preservation goals for the future. This resource can be viewed in the Special Collections Department of the library, located in room 55 of the State House. - A Tour of the Massachusetts State House - Evolution of the State House - National Trust for Historic Preservation
<urn:uuid:e8c1e009-c7ad-4f8b-a07c-db4821f8a4a6>
CC-MAIN-2013-20
http://mastatelibrary.blogspot.com/2008/05/may-is-national-preservation-month.html
2013-05-22T15:27:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.934447
222
Three Hands on a Clock Date: 07/03/2001 at 10:01:57 From: Lucas Subject: A question from trig principle At what time after 12:00 are the three hands on the clock on top of each other? (At 12:00, the hour hand, the minute hand, and the second hand are all pointing to 12). I need to know all the way to a fraction of a second. Help! Date: 07/03/2001 at 14:39:34 From: Doctor Rob Subject: Re: A question from trig principle Thanks for writing to Ask Dr. Math, Lucas. The minute hand moves 12 times as fast as the hour hand, and the second hand moves 60 times as fast as the minute hand. That leads to the formulas for the angle traversed by the three hands after t hours' time: Hour hand: 30*t degrees Minute hand: 360*t degrees Second hand: 21600*t degrees For the hour hand and the minute hand to coincide, it must be that, for some integer n, 30*t = 360*t - 360*n. n is the number of whole revolutions the minute hand makes in t hours. It is the greatest integer less than or equal to t. Solving for t, one gets t = 12*n/11, for some integer n. For the minute and second hand to coincide, it must be that, for some integer m, 360*t - 360*n = 21600*t - 360*m. m is the number of whole revolutions the second hand makes in t hours. It is the greatest integer less than or equal to 60*t. Solving for t, one gets t = (m-n)/59, for some integer m. That means that when all three hands coincide, there must be integers m and n such that 12*n/11 = (m-n)/59, 12*59*n = 11*(m-n). Now since 11 is a prime number and isn't a divisor of either 12 or 59, it must be that 11 is a divisor of n, so n = 11*k for some integer k. I leave the rest to you. - Doctor Rob, The Math Forum http://mathforum.org/dr.math/ Date: 08/20/2001 at 11:43:35 From: David Helinek Subject: 3-Handed Clock Crossings Given an analog clock with three very thin moving hands (hour, minute and second), how many times during the day will they line up exactly (to within observable capabilities) other than the trivial 12 o'clock case? It's been a long time since high school algebra, so I worked out the answer using brute force, but I know there must be a more formal yet elegant proof of the answer. The answer I came up with was that there are no other times during the day when this happens. One can think of the obvious time, approximately 6 minutes after 1 am, 12 minutes after 2 am, etc., but in each case, the second hand is nowhere near the minute and hour hands during this overlap. Date: 08/20/2001 at 13:32:19 From: Doctor Rob Subject: Re: 3-Handed Clock Crossings Thanks for writing to Ask Dr. Math, David. Let the angle which the three hands make with vertical at time t in hours be h, m, and s, all in degrees. Then h = 30*t, m = 360*t, and s = 21600*t. (Why?) Then if, after a time t hours, they all point in the same direction, you have that m - h = 360*n and s - h = 360*p for some integers n and p. (Why?) Putting these together, you get 360*t - 30*t = 330*t = 360*n and 21600*t - 30*t = 21570*t = 360*p, so 11*t = 12*n and 719*t = 12*p. Then 12*n/11 = t = 12*p/719, and hence 719*n = 11*p. Now 11 and 719 have no common factor, so that n must be a multiple of 11, say n = 11*x, for some integer x. Then p = 719*x, and t = 12*x. That shows that the only time when this happens is after an integer multiple of 12 hours, that is, at 12 o'clock. (The equation 11*t = 12*n gives you the times when the hour and minute hand coincide, t = (12/11)*n hours after 12 o'clock.) Is that what you had in mind? 
- Doctor Rob, The Math Forum http://mathforum.org/dr.math/ Date: 08/20/2001 at 14:02:31 From: David Helinek Subject: Re: 3-Handed Clock Crossings Yes, although I thought you would go about it looking at the periodicity and ratios of sine waves, as each hand sweeps out an integer number of cycles throughout the day. Yours is a more numerical approach, which works well. Do we know immediately that 11 and 719 have no common factor because 11 is prime, and 719 is not a multiple of 11? I'm assuming so. It would be interesting to compute the possible crossings for a 24-hour clock. Thanks for the refresher. David Helinek Date: 08/20/2001 at 15:05:59 From: Doctor Rob Subject: Re: 3-Handed Clock Crossings Thanks for writing back, David. 11 and 719 have no common factor because both 11 and 719 are prime (and so, in particular, 719 = 11*65 + 4 is not a multiple of 11). To modify the above argument for a 24-hour clock, replace the formula for h by h = 15*t, and proceed as before. This has the effect of replacing 11 by 23, 12 by 24, and 719 by 1439. Both of these are primes, so the answer remains the same: only at midnight do the hands coincide. - Doctor Rob, The Math Forum http://mathforum.org/dr.math/ Date: 08/20/2001 at 15:54:38 From: David Helinek Subject: Re: 3-Handed Clock Crossings What's funny to me is that everyone I've ever asked this question of has always answered that there are many times during the day that all three hands line up, without even thinking too hard about it. People intuitively think that somehow clocks are magically 'resonant'. Again, thanks for the answer to a somewhat interesting question. David H. Date: 08/21/2001 at 09:11:58 From: Doctor Rob Subject: Re: 3-Handed Clock Crossings One could ask for the closest bunching not at 12 o'clock. I find that this occurs at about 5:27:27.3, when all the hands are within a 1.0014 degree sector. I believe that this is distinguishable to the naked eye, being 1/3 of the distance between two consecutive second marks around the clock (and so not "within observable capabilities"), but quite close, nonetheless. - Doctor Rob, The Math Forum http://mathforum.org/dr.math/ Ask Dr. Math™ © 1994-2013 The Math Forum
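To make the algebra in this exchange concrete, here is a short Python sketch of my own (it is not part of the original correspondence). It uses exact fractions to check the eleven moments per 12-hour period at which the hour and minute hands coincide, and reports how far away the second hand is at each of them; the output format is an arbitrary choice.

# Verify the claim above: the hour and minute hands meet every 12/11 hours,
# but the second hand only joins them at 12:00 itself.
from fractions import Fraction

def hand_angles(t):
    # t is the time in hours; angles in degrees, measured from 12, reduced mod 360
    return (30 * t) % 360, (360 * t) % 360, (21600 * t) % 360

for n in range(11):
    t = Fraction(12, 11) * n                   # n-th hour/minute coincidence, t = 12n/11 hours
    h, m, s = hand_angles(t)
    gap = min((s - h) % 360, (h - s) % 360)    # circular distance of second hand from the others
    hh = int(t)                                # hour 0 stands for 12 o'clock
    mm = int((t - hh) * 60)
    ss = float((t - hh) * 3600 - mm * 60)
    print(f"{hh:2d}:{mm:02d}:{ss:05.2f}  hour and minute hands coincide; "
          f"second hand is {float(gap):6.2f} degrees away")

Running it shows a zero gap only at 12:00:00; at every other coincidence the second hand is tens of degrees away, which is exactly what the modular argument predicts.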
<urn:uuid:736c6257-b8aa-4d94-8e73-47d2070ae37b>
CC-MAIN-2013-20
http://mathforum.org/library/drmath/view/56819.html
2013-05-22T15:07:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.92666
1,546
Indian-European Exchange in the Colonial Southeast In 1540, Zamumo, the chief of the Altamahas in central Georgia, exchanged gifts with the Spanish conquistador Hernando de Soto. With these gifts began two centuries of exchanges that bound American Indians and the Spanish, English, and French who colonized the region. Whether they gave gifts for diplomacy or traded commodities for profit, Natives and newcomers alike used the exchange of goods such as cloth, deerskin, muskets, and sometimes people as a way of securing their influence. Gifts and trade enabled early colonies to survive and later colonies to prosper. Conversely, they upset the social balance of chiefdoms like Zamumo's and promoted the rise of new and powerful Indian confederacies like the Creeks and the Choctaws. Drawing on archaeological studies, colonial documents from three empires, and Native oral histories, Joseph M. Hall, Jr., offers fresh insights into broad segments of southeastern colonial history, including the success of Florida's Franciscan missionaries before 1640 and the impact of the Indian slave trade on French Louisiana after 1699. He also shows how gifts and trade shaped the Yamasee War, which pitted a number of southeastern tribes against English South Carolina in 1715-17. The exchanges at the heart of Zamumo's Gifts highlight how the history of Europeans and Native Americans cannot be understood without each other. His Life, His Adventures, His Women Zane Grey was a disappointed aspirant to major league baseball and an unhappy dentist when he belatedly decided to take up writing at the age of thirty. He went on to become the most successful American author of the 1920s, a significant figure in the early development of the film industry, and central to the early popularity of the Western. Grey's personal life was as colorful as his best novels. Two backcountry trips into the Grand Canyon inspired his first Westerns, and he returned to Arizona annually for many years. His matching passion for sport fishing carried him to Mexico, Nova Scotia, the Galapagos Islands, New Zealand, Tahiti, and Australia. These trips were a canvas for the striking contradictions in Grey's life. Though he celebrated chastity and romantic love in his novels and his marriage was crucial to his success, these ideals were sorely tested by his long separations, deep depressions, and multiple involvements with women. Likewise his popularization of hunting, fishing, and the latest equipment threatened the wilderness that he revered and campaigned to protect. Thomas H. Pauly's work is the first full-length biography of Zane Grey to appear in over thirty years. Using a hitherto unknown trove of letters and journals, including never-before-seen photographs of his adventures--both natural and amorous--Zane Grey will greatly enlarge and radically alter the current understanding of the superstar author, whose fifty-seven novels and one hundred and thirty movies heavily influenced the world's perception of the Old West. Farming and Food in the Northern Sierra of Oaxaca In this book, Roberto González convincingly argues that Zapotec agricultural and dietary theories and practices constitute a valid local science, which has had a reciprocally beneficial relationship with European and United States farming and food systems since the sixteenth century. 
The Obizzi Saga A prominent writer, a master painter, and a treasure of art that for centuries had been largely neglected are brought brilliantly to life in this first important study of one of the great legacies of Renaissance art. The immense castle at Cataio, about thirty-five miles from Venice, was built between 1570 and 1573. An extraordinary series of frescoes, painted in 1573, covers the walls of six of its palatial halls. Programmed by Giuseppe Betussi, the forty frescoes depict momentous events in the history of the Obizzi family from 1004 to 1422. Executed by Giambattista Zelotti and assistants, the frescoes, plus ceiling decorations, are painted in a Mannerist, highly illusionist style with such skill that the walls seem to be windows through which one views battle scenes, weddings, political negotiations, and other episodes in the dramatic history of the Obizzi family.Now one of the most distinguished scholars of Italian art takes readers room by room, fresco by fresco, on the first guided tour of this Betussi-Zelotti masterpiece. Writing with characteristic clarity, Irma Jaffe combines art history, iconography, formal analysis, Italian history, and the story of the Obizzi family in a richly detailed esthetic, social and historical introduction to the entire series.Describing and explaining with spirit and authority the composition and meaning of each fresco-each illustrated with full color plates-Jaffe also illuminates the fascinating decorations on the ceilings and overdoors of the great rooms. In figures that personify virtues and vices, to comment on the events painted on the walls beneath them, the values of sixteenth century Italy are reflected with uncommon clarity in both the fresco saga and the decorations above.A full understanding of Mannerism and sixteenth century painting must now include the contribution of Battista Zelotti. In the scenes at Cataio he reveals the possibilities available to Mannerist style in his countless poses of the human figure and of horses, in his variety of settings---indoor and outdoor, land and sea---and in the range of preeminent sixteenth century values such as family rank and pride, personal courage, and religion that are expressed in his Saga of the Obizzi family. Zelotti's masterpiece carries the artificiality inherent in Mannerism to a new level of theatrical drama. Viewing the scenes of fierce battles, magnificent weddings, assassinations, and triumph after triumph, suggests to modern viewers something of the splendor of grand opera.For Renaissance scholars and students, for art historians, for travelers and art lovers interested in the heritage of the Renaissance in Italy and in the glorious estates of the Veneto, Zelotti's Epic Frescoes at Cataio: The Obizzi Saga will be an indispensable introduction and guide to a treasure hidden in plain sight for many years. This is the definitive work on the first and greatest of Japan's twentieth-century philosophers, Nishida Kitaro (1870-1945). Interspersed throughout the narrative of Nishida's life and thought is a generous selection of the philosopher's own essays, letters, and short presentations, newly translated into English. The Quest for Cosmopolitan Modernity Widely perceived as an overwhelmingly Catholic nation, Brazil has experienced in recent years a growth in the popularity of Buddhism among the urban, cosmopolitan upper classes. 
In the 1990s Buddhism in general and Zen in particular were adopted by national elites, the media, and popular culture as a set of humanistic values to counter the rampant violence and crime in Brazilian society. Despite national media attention, the rapidly expanding Brazilian market for Buddhist books and events, and general interest in the globalization of Buddhism, the Brazilian case has received little scholarly attention. Cristina Rocha addresses that shortcoming in Zen in Brazil. Drawing on fieldwork in Japan and Brazil, she examines Brazilian history, culture, and literature to uncover the mainly Catholic, Spiritist, and Afro-Brazilian religious matrices responsible for this particular indigenization of Buddhism. In her analysis of Japanese immigration and the adoption and creolization of the Sôtôshû school of Zen Buddhism in Brazil, she offers the fascinating insight that the latter is part of a process of "cannibalizing" the modern other to become modern oneself. She shows, moreover, that in practicing Zen, the Brazilian intellectual elites from the 1950s onward have been driven by a desire to acquire and accumulate cultural capital both locally and overseas. Their consumption of Zen, Rocha contends, has been an expression of their desire to distinguish themselves from popular taste at home while at the same time associating themselves with overseas cultural elites. Japan's Tokeiji Convent Since 1285 Zen Sanctuary of Purple Robes examines the affairs of Rinzai Zen's Tōkeiji Convent, founded in 1285 by nun Kakusan Shidō after the death of her husband, Hōjō Tokimune. It traces the convent's history through seven centuries, including the early nuns' Zen practice; Abbess Yōdō's imperial lineage with nuns in purple robes; Hideyori's seven-year-old daughter—later to become the convent's twentieth abbess, Tenshū—spared by Tokugawa Ieyasu at the Battle for Osaka Castle; Tōkeiji as "divorce temple" during the mid-Edo period and a favorite topic of senryū satirical verse; the convent's gradual decline as a functioning nunnery but its continued survival during the early Meiji persecution of Buddhism; and its current prosperity. The work includes translations, charts, illustrations, bibliographies, and indices. Beyond such historical details, the authors emphasize the convent's "inclusivist" Rinzai Zen practice in tandem with the nearby Engakuji Temple. The rationale for this "inclusivism" is the continuing acceptance of the doctrine of "Skillful Means" (hōben) as expressed in the Lotus Sutra—a notion repudiated or radically reinterpreted by most of the Kamakura reformers. In support of this contention, the authors include a complete translation of the Mirror for Women by Kakusan's contemporary, Mujū Ichien. The Book of Capping Phrases for Koan Practice Zen Sand is a classic collection of verses aimed at aiding practitioners of kôan meditation to negotiate the difficult relationship between insight and language. As such it represents a major contribution to both Western Zen practice and English-language Zen scholarship. In Japan the traditional Rinzai Zen kôan curriculum includes the use of jakugo, or "capping phrases." Once a monk has successfully replied to a kôan, the Zen master orders the search for a classical verse to express the monk's insight into the kôan. Special collections of these jakugo were compiled as handbooks to aid in that search. Until now, Zen students in the West, lacking this important resource, have been severely limited in carrying out this practice. 
Zen Sand combines and translates two standard jakugo handbooks and opens the way for incorporating this important tradition fully into Western Zen practice. For the scholar, Zen Sand provides a detailed description of the jakugo practice and its place in the overall kôan curriculum, as well as a brief history of the Zen phrase book. This volume also contributes to the understanding of East Asian culture in a broader sense. Everyday Peace in a Karachi Apartment Building Ethnic violence is a widespread concern, but we know very little about the micro-mechanics of coexistence in the neighborhoods around the world where inter-group peace is maintained amidst civic strife. In this ethnographic study of a multi-ethnic, middle-class high-rise apartment building in Karachi, Pakistan, Laura A. Ring argues that peace is the product of a relentless daily labor, much of it carried out in the zenana, or women's space. Everyday rhythms of life in the building are shaped by gender, ethnic and rural/urban tensions, national culture, and competing interpretations of Islam. Women's exchanges between households -- visiting, borrowing, helping -- and management of male anger are forms of creative labor that regulate and make sense of ethnic differences. Linking psychological senses of "tension" with anthropological views of the social significance of exchange, Ring argues that social-cultural tension is not so much resolved as borne and sustained by women's practices. Framed by a vivid and highly personal narrative of the author's interactions with her neighbors, her Pakistani in-laws, and other residents of the city, Zenana provides a rare glimpse into contemporary urban life in a Muslim society.
<urn:uuid:57dce66f-fed9-4f9d-be52-1b36fe86b9a4>
CC-MAIN-2013-20
http://muse.jhu.edu/browse/titles/z?browse_view_type=default&items_per_page=10
2013-05-22T15:00:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.941241
2,531
Motor-speech disorders are speech disorders resulting from neurological damage that affects the motor control of speech muscles or the motor programming of speech movements. The most common motor-speech disorders are dysarthria and apraxia of speech.
Dysarthrias are oral communication problems due to weakness, incoordination, or paralysis of the speech musculature.
Who can this affect? People who have suffered neurological damage affecting the motor control of speech muscles or the motor programming of speech movements can be affected by motor-speech disorders. Treatment may include:
- relearning movement patterns for speech
- learning specific compensatory techniques
- exercise programs
- counseling for the person and his/her family
<urn:uuid:9b02b244-5186-4f38-9776-09b7fc0cabf7>
CC-MAIN-2013-20
http://nau.edu/CHHS/CSD/Clinic/Motor-Speech-Disorders/
2013-05-22T15:36:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.859595
141
There is a debate currently going on as to whether microfilm is acceptable to scholars and historians. Scholars say "no," while archivists say "yes." Archives lean more towards the genealogist community, which is more focused on textual information, such as names, dates, etc., while scholars look beyond text, to pencil notations, stampings and markings, different colored inks, etc. Microfilm, the technology of the 1940s-1990s, was the best way to preserve documents in the twentieth century. Unfortunately, microfilm does not provide an accurate representation of the document: it leaves out color as well as most markings, and it is often under- or over-exposed or out of focus, making it unreadable. Microfilm requires a machine to view the document, and printing is expensive due to the toner and special paper required in many of the units. Digitizing documents began in the mid 1990s, with products appearing on CD-ROM, laser disc, and DVD. Now that the Internet has become the new format, documents are available as PDF, in Zoomify, and in other formats for easy viewing. Search engines make research quick, and database interfaces such as Greenstone organize document collections, providing powerful search capabilities, once impossible, in seconds. Our historical documents are in danger. Not only paper documents, but sound recordings, motion pictures, photographs, etc. At the National Archives, as we search through the records in scope with the Lincolnarchives digital project, we are finding records that are deteriorating, text that is fading. In the motion picture division, the equipment used to show films of WWII, Korea and Vietnam is broken, with no avenues available to repair or replace these machines. Information saved on older equipment, or on other technology such as floppy discs, tape drives, CDs, etc., is in danger of being lost because current technology will not read this data. Agencies responsible for preserving yet providing access are falling back on old practices of the 1950s, still declaring that microfilm is preserving these documents, while libraries, universities, and such are removing their microfilm machines because they are expensive to maintain. Agencies are choosing to take the "cheap and dirty" route of providing access to records, by simply scanning bad microfilm instead of digitizing the originals at a high resolution to create a true preservation copy of the original. After 9/11, Katrina, last year's fire in Georgetown, and the fire at the National Archives facility in St. Louis which destroyed thousands of military records from WWI and WWII, agencies still are not preserving valuable documents in case originals are lost. In the case of the National Archives, they have no inventory of what they do have. Many documents in these facilities are being withheld from the public, declared "invaluable," yet no digital copy is being made available, leaving scholars and researchers without access. Conservationists claim that the records are being withheld in hopes that future technology will be able to provide a better solution to access. They claim that these documents are being preserved for future generations, yet no attempts are being made to provide the current generations with records from the 17th, 18th, and 19th centuries. The educational community needs to become more involved with their state and local historical societies, encouraging those agencies to use their funds wisely, not seeking a cheap alternative which does not address preservation or access. 
By scanning microfilm, these agencies are not addressing the necessity of preserving the originals in a digital format, not only for access, but to protect these documents before they degrade and lose valuable historical data. History happened in color, for a reason, and to preserve these documents in anything less puts future generations at risk of losing part of their history.
<urn:uuid:f86fac68-0a31-40f5-8a42-b34ba9cd99b3>
CC-MAIN-2013-20
http://ncssnetwork.ning.com/profiles/blogs/microfilm-vs-color-digital
2013-05-22T15:22:29Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.947768
743
Starting with magnetic phenomena on Earth (e.g., Fig. 1), a brief summary of our observational knowledge of the magnetic fields has been presented for the interplanetary medium (Fig. 5), for objects therein (Figs. 2, 3), and their association with angular momentum (Fig. 4). The methodology of signal detection from Earth has been discussed (Figs. 7, 8). Remanent magnetism (random shape) or dynamo magnetism (dipolar shape) can explain the maintenance of magnetism in most solar system objects. In a few cases, a dead or dying dynamo may be involved. The interplanetary medium supports an Archimedean spiral-shaped magnetic field, usually with 4 magnetic sectors (e.g., Fig. 6). Outside the solar system, the magnetism of normal stars can be mostly explained by a large scale dipolar dynamo, with the possible addition of small scale loop-type magnetism localized on the stellar surfaces. In a few cases, a quadrupolar shape field is necessary. For compact degenerate stars, a frozen dipolar magnetic field is found. Over recent years, direct measurements of the magnetic field strength and direction in dusty protostellar disks and dusty molecular cloudlets have shown that hydromagnetic processes are essential to understanding their internal physics and evolution. The ultimate goal of obtaining a full understanding of the magnetic field in the formation and evolution of molecular disks and cloudlets is a challenging task. Is the magnetic field a strong player/guide or else a weak player/tracer ? A preliminary answer favours a moderately strong (but not dominant) magnetic field. Observational studies have been mainly limited by numerous practical difficulties of measuring magnetic fields in astronomical objects, via dust emission (weighted by grain properties), or via Faraday rotation (weighted by electron density) or via Zeeman effect (with instrumental sensitivity effects) (e.g., Davies 1994). A succinct review and a short classification of various magnetic field models for protostellar disks has been made (Figs. 10 to 14), and a time evolution discussed (Fig. 17). Although the geometry of the magnetic field could be partially preserved (e.g., poloidal) when clouds contract from the interstellar medium (Fig. 21; e.g., Jones et al. 1992), after a while the increasing differential rotation of the protostellar disk may completely change the B lines to become spiral (toroidal). Cool protostellar disks and cloudlets are only detected in emission at Extreme IR and Far IR wavelengths, and their low polarized flux density values present a technical challenge for such telescopes (e.g., Fig. 9). Magnetic protostellar models involve "twist", "circulation", "cloud collapse", "disk-wind", "magnetic pinch", "hourglass", as well as "dynamo". A list of observational data for magnetism in cloudlets/protostellar disks has been presented (Table 1). Possible correlations have been discussed: polarization percentage correlating with wavelength, with beam size, with source age, and polarization position angle correlating with companion presence (Fig. 19), with viewing angle, with beam size, and with source age. For disks/cloudlets, the main predictions (Fig. 15, 16) and the effect of telescope beams (Fig. 18) have been noted and used in comparing selected observational maps (Figs. 20, 22, 23). 
Future trend: polarimetry at Extreme IR and Far IR is the most effective way to determine unambiguously the direction (position angle) of the magnetic field in protostellar disks and in molecular cloudlets, since it measures the emission of polarized radiation. Scattered light is not a problem at long wavelengths. Extreme-IR (submillimeter), Far IR, and mid-IR wavelength polarization observations are not affected by dust scattering since light scattering varies as (wavelength)^-4. Clearly this research area at Extreme-infrared wavelengths is in its infancy, much in need of more polarimetric data, and has the potential for rapid growth in our physical understanding. The time is ripe for a major observational advance, with the recent improvements in polarimetric technology in the Extreme IR and Far IR. At these wavelengths, polarimetry is a powerful method to study circumstellar, protostellar, and interstellar magnetic fields, but it is limited by the Earth's atmosphere, the observing time required, and the lack of polarimetric instruments at many telescopes. A good polarimetric map in the Extreme IR and Far IR will in itself say a good deal, and it may inspire new theoretical work. Magnetic field orientations are crucial in most models of formation and evolution of disks and cloudlets and in star formation. Sensitive array receivers could open up a rich and exciting field of study in polarimetry. A reasonable 10× improvement in sensitivity would result in a decrease in observing time by a factor of 100, when aiming for the same signal-to-noise ratio. I thank Ms. Lyne Séguin (NRCC-Ottawa) for creative help in drawing Figures 10 to 14 and 16, Mr. David Duncan (NRCC-Victoria) for drawing Figures 1, 2, 3, 5, 7, 8, 17, while I used the PGPLOT software for Figures 4 and 19. I thank a referee (anonymous) for thoughtful and valuable advice, and Dr. D. C. Morton (NRCC-Victoria) for a reading of an early version.
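A small arithmetic sketch of that last sensitivity claim (my own illustration, not from the paper): if the signal-to-noise ratio grows as sensitivity times the square root of integration time, which is the usual assumption for background-limited observations, then reaching a fixed signal-to-noise ratio takes time proportional to one over the sensitivity squared.

# Illustrative scaling only: assumes S/N is proportional to
# sensitivity * sqrt(integration time); the text states the 10x -> 100x
# result without spelling out this assumption.
def time_reduction(sensitivity_gain):
    """Factor by which observing time shrinks for the same target S/N."""
    return sensitivity_gain ** 2

for gain in (2, 5, 10):
    print(f"{gain:2d}x more sensitive receiver -> "
          f"{time_reduction(gain):3d}x less observing time at fixed S/N")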
<urn:uuid:b171dec0-a081-46f1-b2d7-3ddbd186ba06>
CC-MAIN-2013-20
http://ned.ipac.caltech.edu/level5/March03/Vallee2/Vallee9.html
2013-05-22T15:21:16Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.897971
1,131
Research carried out with the participation of the University of Navarra has shown how a particular molecule helps an important pathogen, Brucella abortus, escape destruction within the cells charged with eliminating infectious agents (macrophages). This research has been published in the scientific journal Nature Immunology. Brucella is a model of an intracellular parasite, a category that includes other important bacteria, such as those of tuberculosis or legionellosis. Brucella penetrates the macrophages within membranous vesicles that are not fused with lysosomes (structures containing cellular products necessary to destroy bacteria), as occurs in other micro-organisms; instead, they reach certain compartments within the macrophage. Here the bacteria multiply and establish a chain of events that determines the illness. Brucellosis, the illness caused by these bacteria, is of great importance worldwide, with millions of human beings and domestic animals affected. This discovery not only offers useful new ideas to other researchers, but also enhances our knowledge of a very important pathogen. From this knowledge useful products, such as new vaccines, can be derived.
<urn:uuid:7d362a38-0058-4f12-af7e-bfdc70b0b7dc>
CC-MAIN-2013-20
http://news.bio-medicine.org/biology-news-3/A-molecule-impedes-the-destruction-of-the-Brucella-bacteria-11361-1/
2013-05-22T15:21:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.917457
234
|New York Architecture Images - notes| |Fascist architecture influences on Philip Johnson| A XXth century New Rome In 1936 the Italian government made a successful application for hosting in Rome the next World Exhibition, which was due in 1941. The Exhibition was soon postponed to 1942 to celebrate the XXth anniversary of the Fascist regime. The Fascist regime emphasized the links between the expansion of the Roman Empire and its own aggressive policies and it poured money into redesigning in a spectacular way many areas of the city, mainly to the detriment of medieval or Baroque monuments; for sure the regime had something in common with the ancient Romans: a passion for erecting large buildings. Nevertheless the inscription on Palazzo della Civiltà Italiana (renamed della Civiltà del Lavoro) does not include costruttori (builders) in the list of the attributes of the Italians. VN POPOLO DI POETI DI ARTISTI DI EROI DI SANTI DI PENSATORI (philosophers) DI SCIENZIATI DI NAVIGATORI DI TRASMIGRATORI (the meaning of this word today is rather obscure, but in the 1930s it was most likely a reference to the first intercontinental flights). The inscription as well as the building were the subject of many jokes: less commendable attributes were added to the list and the building was soon called Colosseo Quadrato (square) and even worse Palazzo del Groviera, after the Swiss cheese gruyère. While the arches of the building are a reminder of the Colosseo arches, the four statues at its corners have many points in common with those in Piazza del Quirinale. The first building to be completed was aimed at hosting the offices for the Exhibition, and it included a large hall announced from the outside by a high portico (on its top an inscription celebrates the expansion of Rome towards the sea). While the building had a very neat and modern design, the mosaics and the reliefs which embellished it were evocative of Ancient Rome. The black and white mosaics replicated a pattern typical of Caracalla's Baths and the reliefs portrayed ancient monuments (in the image above: the Arch of Titus, Trajan's Column and the Pantheon). Mussolini himself was portrayed as if he were a direct descendant of the Roman consuls and emperors. He had a peculiar way of speaking with his fists pointed against his hips, as shown by the position of his left arm; the right arm is raised in the so-called saluto fascista which had replaced the traditional shaking of hands. What at the time must have looked very impressive today appears a flattering depiction of Mussolini's ability to ride a horse without holding the reins. Michelangelo Antonioni is an Italian filmmaker who became famous in the early 1960s for a series of movies which depicted the difficulty of living, not because of material conditions or negative events, but because of existential anxieties. He shot several scenes of The Eclipse (1962) among the EUR buildings. The empty porches, the isolated statues, the unusually shaped buildings provided him with a background not scaled to human beings, which highlighted the feeling his characters had of living in an alien world. The gigantic stela dedicated to Guglielmo Marconi is a clear reference to the obelisks of Rome, but it does not have the grace of the originals. The Exhibition never took place because of WWII, and the few buildings which had been completed were occupied by families who had lost their homes because of war events. 
In 1951, when the post-war emergency was gradually receding, the Italian government decided to complete the quarter by relocating public offices and by inviting companies to build their headquarters in the new quarter. The quarter was renamed Quartiere Europa, retaining to some extent its original name, and the streets and buildings were in some cases renamed too in order to cancel references to the past regime. The assignment to Rome of the 1960 Olympic Games gave a new impulse to the completion of the monumental parts of EUR, including the stela to Marconi. The EUR hosts several museums which are scarcely visited by tourists. One of them (Museo Pigorini) includes most of the collection of African, Chinese and American handicrafts gathered in Collegio Romano by the Jesuit Athanasius Kircher in the XVIIth century. Another interesting museum (Museo della Civiltà Romana) includes a reconstruction (scale 1:250) of the City of Rome in the IVth century. Also the materials used in the EUR buildings remind visitors of Ancient Rome: for the columns of the building shown in the picture the architects used a green stone resembling cipollino, a marble very much in fashion in the IInd century (see the cipollino columns of the Temple of Annia Faustina - S. Lorenzo in Miranda). The low dome of Palazzo dei Congressi is evocative of the Pantheon and the reference is more evident when the building is seen from the other side (in the image used as a background for this page). The photo shown above was taken on a summer Sunday, when this part of EUR is almost deserted. This palace is decorated with vaguely Renaissance reliefs portraying allegories of the Italian Maritime Republics (the winged lion of Venice and St. George, protector of a Genoese maritime company). Its design can be associated with that of Trajan's Markets. The church of EUR clearly descends from Michelangelo's plan for St. Peter's, which was based on a Greek cross shape. Special thanks to http://www.romeartlover.it
<urn:uuid:f7729e04-d977-437d-8ced-6e3e5350ab9b>
CC-MAIN-2013-20
http://nyc-architecture.com/ARCH/Notes-Fascist.htm
2013-05-22T15:27:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.965543
1,223
An old confusion about the electrical properties of water's surface has ended, thanks to scientists at Pacific Northwest and Lawrence Livermore National Laboratories. The conflict arose because two types of measurements gave two radically different interpretations of what was happening at the surface of water. The team showed, through careful analysis, that the measurements weren't wrong, but rather the behavior of water's electrons influenced one measurement more than the other. The team's results provided a consistent interpretation of the different measurements and grace The Journal of Physical Chemistry B cover. "This could change how we think about water," said Dr. Gregory Schenter, a chemical physicist at PNNL who worked on the study. While most people may not think of water as having electrical properties, when the behavior and movement of the electrons in this ubiquitous liquid comes into play in designing alternatives to today's fossil fuels, water is often part of the conversation. The electrical forces that exist in water, a simple V-shaped molecule made from two hydrogen atoms and an oxygen atom, are vital to understanding and controlling how molecules, ions, and other chemical components move and behave. For example, understanding water is necessary to convert agricultural waste into bio-fuels. Further, water's behavior impacts work on storing energy from solar cells, wind turbines, and other renewable sources, allowing more flexibility in designing energy strategies. In chemistry, the simple approach that assigns a positive charge to a hydrogen atom and a negative charge to an oxygen atom can be very powerful. This simple model can work when it comes to understanding the forces that move molecules around in water. In other cases, it doesn't work. When taking measurements with certain instruments, the simple model matches experimental results quite well. But, when using other techniques, the model differs wildly from what's measured. Theoretical chemists at PNNL and LLNL were the first to figure out what was happening. They found that to measure electrical properties occurring at the molecular scale, where the length scales are measured in billionths of a meter, the models need to consider that the protons are in the nucleus and the electrons are everywhere else. "It comes down to understanding where in the molecule you are making the measurements," said Dr. Shawn Kathmann, a chemical physicist at PNNL who worked on the study. Complex descriptions of matter, referred to as ab initio electronic structure calculations, that focus on identifying electrons location and electron holography experiments showed that the conflict was caused by where you were measuring the surface potential in the molecules. If you determined the surface potential right next to the protons, you got one answer. If you determined the potential in the void between molecules, you get a different answer. And, finally, if you took measurements close to the electrons, you got still another answer. "When you treat the electrons and protons appropriately, you get more accurate results," said Kathmann. This research is part of ongoing work at PNNL to fundamentally understand the forces inside water and other molecules. The goal is to push past the existing knowledge frontiers regarding ions and interfaces. The team is working on developing models that more accurately and appropriately represent electrons. 
They are also striving to isolate the effects of electrons in driving matter at interfaces as well as the electrical stresses inside aqueous electrolytes. More information: Kathmann SM, et al. 2011. "Understanding the Surface Potential of Water." The Journal of Physical Chemistry B 115, 4369-4377. DOI: 10.1021/jp1116036
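To get a feel for why the probe position matters even in the crudest picture, here is a small illustrative sketch (not from the study) that evaluates the electrostatic potential of a single point-charge water model at a few probe positions. The SPC-like charges and geometry are assumptions chosen for illustration; real ab initio calculations of the kind described above replace these point charges with full electron densities.

```python
# Illustrative only: where you probe a water molecule changes the potential you see.
# The point charges and geometry below are assumed (SPC-like), not taken from the study.
import numpy as np

K = 14.3996  # Coulomb constant in volt-angstroms per elementary charge

# Rough SPC-style geometry: oxygen at the origin, hydrogens 1.0 A away, 109.47 deg apart
half_angle = np.deg2rad(109.47 / 2.0)
charges = [
    (-0.82, np.array([0.0, 0.0, 0.0])),                                 # oxygen
    (+0.41, np.array([np.sin(half_angle), 0.0, np.cos(half_angle)])),   # hydrogen 1
    (+0.41, np.array([-np.sin(half_angle), 0.0, np.cos(half_angle)])),  # hydrogen 2
]

def potential(probe):
    """Electrostatic potential (volts) of the point-charge model at a probe point."""
    return sum(K * q / np.linalg.norm(probe - pos) for q, pos in charges)

probes = {
    "0.5 A beyond one hydrogen": np.array([1.5 * np.sin(half_angle), 0.0, 1.5 * np.cos(half_angle)]),
    "3 A away, hydrogen side": np.array([0.0, 0.0, 3.0]),
    "3 A away, oxygen side": np.array([0.0, 0.0, -3.0]),
}
for label, point in probes.items():
    print(f"{label}: {potential(point):+.2f} V")
```

Running it gives values that differ in size and even in sign depending on where the probe sits, which is the qualitative point Kathmann makes: the answer depends on where in, or between, the molecules the potential is evaluated.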
<urn:uuid:7b1d5aaa-41e2-472c-be18-6cac5e3b6433>
CC-MAIN-2013-20
http://phys.org/news/2011-05-electrical-properties.html
2013-05-22T15:27:45Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.945871
729
Walk, trot, gallop A dressage horse and rider performing the extended trot. Image: Chefsna. The abilities of equestrian athletes are amazing, but the skills of their horses are even more impressive. Through training, the horses can perform complicated movements with speed, elegance, and precision. But the training builds on innate features of horses, which are remarkably versatile, even in the wild. In Charles Dickens's Nicholas Nickleby, Wackford Squeers, the disreputable teacher who runs Dotheboys Hall, shows off his erudition: "A horse is a quadruped, and quadruped's Latin for beast, as everybody that's gone through grammar knows." Squeers is wrong about the Latin, but right about horses being quadrupeds: animals with four feet. With four feet, you can make a lot of movements that are impossible with two. Horses move using several different patterns, and those patterns have striking mathematical features. At slow speeds, horses walk. Here the legs hit the ground at regular intervals in the order back left, front left, back right, front right. To move faster, they trot. Now diagonally opposite pairs of legs move together, so that the left back and right front legs hit the ground at the same time, and then the other two legs hit the ground together. For really rapid movement, the horse gallops. Here the timing is more complicated. The two back legs hit the ground almost together, but one of them (say the right) is a split second behind the other one. Then the same thing happens with the front two legs, and again the left one hits the ground first. Many horses can also canter, an even more complicated pattern used at speeds between trot and gallop. Sometimes this comes naturally, sometimes it is the result of training. Other horses can pace: now the two left legs move together, then the two right ones. Four different gaits: the numbers in the feet of this creature show when the particular foot hits the ground, as a fraction of a complete cycle. The branch of science that studies animal movements is called gait analysis. A gait is a pattern of leg movements, and five of them have just been described: walk, trot, gallop, canter, pace. Strictly speaking, the gallop of the horse is a transverse gallop. There is another kind, the rotary gallop, in which the legs at the front hit the ground in the opposite order to those at the back. This is how cheetahs move when they are chasing prey. Gait analysis applies to all animals with legs, including insects, with six; spiders, with eight; and humans, with two. It seeks to understand the general principles of legged locomotion in nature. Those principles also apply to creatures that use wings, or fins, or wriggle—like snakes. Even snails have their own characteristic gaits. Mathematics is used in gait analysis in several different ways. At the simplest level, it describes the patterns. This description provides clues about the networks of nerve cells that control the gaits, known as central pattern generators. In many animals it is virtually impossible to observe these networks directly, but the timing patterns provide clues that suggest what symmetries the central pattern generator should have. Analysing the symmetries of the walk shows that the central pattern generator should have the same symmetries as two loops of four identical components, linked left-to-right. 
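To make the footfall fractions in the figure above concrete, here is a minimal sketch (in Python, not from the article) that tabulates the phase of the gait cycle at which each foot strikes the ground for the four gaits just described. The walk, trot and pace entries follow the patterns given in the text; the exact gallop offsets are illustrative assumptions.

```python
# Footfall phases as fractions of one gait cycle, per the patterns described above.
# BL/FL/BR/FR = back left, front left, back right, front right.
GAIT_PHASES = {
    "walk":   {"BL": 0.00, "FL": 0.25, "BR": 0.50, "FR": 0.75},  # evenly spaced
    "trot":   {"BL": 0.00, "FR": 0.00, "BR": 0.50, "FL": 0.50},  # diagonal pairs together
    "pace":   {"BL": 0.00, "FL": 0.00, "BR": 0.50, "FR": 0.50},  # lateral pairs together
    "gallop": {"BL": 0.00, "BR": 0.10, "FL": 0.60, "FR": 0.70},  # transverse gallop; offsets assumed
}

def footfall_times(gait, cycle_seconds=1.0, n_cycles=2):
    """List (time, foot) events for a few cycles of the chosen gait."""
    events = []
    for cycle in range(n_cycles):
        for foot, phase in GAIT_PHASES[gait].items():
            events.append((cycle_seconds * (cycle + phase), foot))
    return sorted(events)

for time, foot in footfall_times("walk"):
    print(f"{time:4.2f} s  {foot}")
```

Printed for the walk, the events land a quarter of a cycle apart in the order back left, front left, back right, front right, which is exactly the quarter-period phase shift discussed next.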
For example in the walk, each leg hits the ground one quarter of the gait cycle after the previous leg, so the central pattern generator must have a symmetry that naturally leads to these quarter period phase shifts. There is also a left-right symmetry. Putting all of the information together, and making some reasonable assumptions, it turns out that the central pattern generator should have the same symmetries as two loops of four identical components, linked left-to-right. It need not have this form in a literal sense, because other arrangements can have the same symmetries, but even so, we can make some predictions about the motion that have been verified experimentally. The mechanics of the movement is also important—how the forces exerted by the muscles act on the bones and joints of the horse's skeleton. In simple models the horse is treated like a table whose legs are hinged, so that they can move, and are supported by springs. Better models correspond more closely to the actual arrangement of muscles and bones. So why do horses use different gaits at different speeds? It's a bit like the gears on a car. Patterns of movement that work fine at slow speeds become inefficient, or mechanically unworkable, at higher ones. To convince yourself of this, try walking faster and faster. At some point you will find that however hard you try, you don't speed up. But if you switch to a run, a different gait, going faster is suddenly easy. Experiments have revealed a close connection between a horse's gait, speed, and oxygen consumption. At low speeds, the walk uses least oxygen. At higher speeds, the trot takes over. At higher speeds still, the gallop makes the best use of the available oxygen. However, the walk can be sustained for much longer than the trot, and the trot can be sustained for longer than the gallop—just as we can walk for longer periods than we can run. Mathematical models help to explain these observations. Gait analysis has important applications to medicine (disorders that affect movement, especially in young children), robotics (robots with legs can move in difficult terrain), and sport. Which brings us back to the Olympics. When you watch the equestrian events, keep an eye out for regular patterns in how the horses are moving. For that matter, do the same for the human athletes. The same mathematical principles govern how a horse jumps a gate, and how a human runs a hundred-metre sprint. - You can read more about central pattern generators in the Plus articles Chaos in the brain and Controlling cockroach chaos; - And about modelling gaits in the Plus article Modelling, step by step; - And about the mechanics of movement and medical engineering in the Plus article Shaping our bones. About the author Ian Stewart is Professor of Mathematics at the University of Warwick, popular mathematics and science fiction writer and recipient of the Christopher Zeeman Medal. Some of his books have been reviewed on Plus:
<urn:uuid:870d6e0c-5092-43f9-aec3-b1a5c0fe90a6>
CC-MAIN-2013-20
http://plus.maths.org/content/walk-trot-gallop
2013-05-22T15:14:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.938285
1,348
Mary Magdalene, c. 1858-60. Frederick Sandys. Seven young men calling themselves the Pre-Raphaelite Brotherhood (P.R.B.) gathered together in London in 1848, united by a shared distrust of the Royal Academy, the sanctioned art institution of the day. Instead, they turned for inspiration to the art of the Middle Ages--the time "before Raphael." Their subjects were drawn primarily from literature, including the Bible, Shakespeare, and the poets of their own age, such as Alfred Tennyson and John Keats. As the Pre-Raphaelite Brotherhood gradually dispersed, new inspiration appeared when William Morris and Dante Gabriel Rossetti became close friends. In 1861, Morris founded the firm that would become Morris and Company, designing hand-crafted household objects and signaling the beginning of the Arts and Crafts Movement. By the late 1860s, new artists, including Edward Burne-Jones, Simeon Solomon, and Albert Moore, were introduced into the Pre-Raphaelite coterie, bringing fresh influences and issues to the table. This influx of new individuals led to the subtle merging of Pre-Raphaelitism with what is now referred to as the "Aesthetic Movement," prevalent in the 1870s through the 1890s. This style reflected a desire to move away from the sentimental narratives of the early Victorian period and to focus instead on images of "beauty" (often women) in which color harmony, the beauty of form, and compositional balance took precedence over narrative. The Delaware Art Museum's Samuel and Mary R. Bancroft Collection of Pre-Raphaelite Art consists of over 150 works, including paintings, drawings, photographs, decorative arts, and illustrated books. It is the largest collection of Pre-Raphaelite art outside of England.
<urn:uuid:e863bbee-3731-45b4-a489-7b68929c8f3d>
CC-MAIN-2013-20
http://preraph.org/
2013-05-22T15:02:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.939823
491
Early life and thought

Sartre was born in Paris to parents Jean-Baptiste Sartre, an officer of the French Navy, and Anne-Marie Schweitzer, cousin of Albert Schweitzer. When he was 15 months old, his father died of a fever and Anne-Marie raised him with help from her father, Charles Schweitzer, who taught Sartre mathematics and introduced him to classical literature at an early age. As a teenager in the 1920s, Sartre became attracted to philosophy upon reading Henri Bergson's Essay on the Immediate Data of Consciousness. He studied in Paris at the elite École Normale Supérieure, an institution of higher education which has served as the alma mater for multiple prominent French thinkers and intellectuals. Sartre was influenced by many aspects of Western philosophy, absorbing ideas from Immanuel Kant, Georg Wilhelm Friedrich Hegel and Martin Heidegger. In 1929 at the École Normale, he met fellow student Simone de Beauvoir, later to become a noted thinker, writer, and feminist. The two, it is documented, became inseparable and lifelong companions, initiating a romantic relationship, though one that was not monogamous. Together, Sartre and Beauvoir challenged the cultural and social assumptions and expectations of their upbringings, which they considered bourgeois, in both lifestyle and thought. The conflict between oppressive, spiritually destructive conformity (mauvaise foi, literally, "bad faith") and an "authentic" state of "being" became the dominant theme of Sartre's work, a theme embodied in his principal philosophical work L'Être et le Néant (Being and Nothingness) (1943). Sartre's most well-known introduction to his philosophy is his work Existentialism is a Humanism (1946). In this work, he defends existentialism against its detractors, which ultimately results in a somewhat incomplete description of his ideas. The work has been considered a popular, if over-simplifying, point of entry for those seeking to learn more about Sartre's ideas but lacking the background in philosophy necessary to fully absorb his longer work Being and Nothingness. One should not take the expression of his ideas contained here as authoritative; in 1965, Sartre told Francis Jeanson that its publication had been "une erreur" (a mistake). In 1935 Sartre tried the psychedelic drug mescaline, which is found naturally in the peyote cactus of North America. By all accounts he had a bad experience. It is widely reported that the chapter "Six o'clock in the evening" from Nausea is essentially a description of a bad mescaline trip. The sudden revelation of the independent existence of objects (rather than merely the formulation of ideas in the observer's mind), the loss or irrelevance of names for those objects, and the indwelling horror of "naked existence" are all very common elements of a negative visionary experience (see Heaven and Hell by Aldous Huxley for an instructive comparison).

La Nausée and Existentialism

As a junior lecturer at the Lycée du Havre in 1938, Sartre wrote the novel La Nausée (Nausea), which serves in some ways as a manifesto of existentialism and remains one of his most famous books.
Taking a page from the German phenomenological movement, he believed that our ideas are the product of experiences of real-life situations, and that novels and plays describing such fundamental experiences have as much value as do discursive essays for the elaboration of philosophical theories. With this mandate, the novel concerns a dejected researcher (Roquentin) in a town similar to Le Havre who becomes starkly conscious of the fact that inanimate objects and situations remain absolutely indifferent to his existence. As such, they show themselves to be resistant to whatever significance human consciousness might perceive in them. This indifference of "things in themselves" (closely linked with the later notion of "being-in-itself" in his Being and Nothingness) has the effect of highlighting all the more the freedom Roquentin has to perceive and act in the world; everywhere he looks, he finds situations imbued with meanings which bear the stamp of his existence. Hence the "nausea" referred to in the title of the book; all that he encounters in his everyday life is suffused with a pervasive, even horrible, taste -- specifically, his freedom. No matter how much he longs for something other or something different, he cannot get away from this harrowing evidence of his engagement with the world. The stories in Le Mur (The Wall) emphasize the arbitrary aspects of the situations people find themselves in and the absurdity of their attempts to deal rationally with them. A whole school of absurd literature subsequently developed.

Sartre and World War II

In 1939 Sartre was drafted into the French army, where he served as a meteorologist. German troops captured him in 1940 in Padoux, and he spent nine months as a prisoner of war, first in Nancy and finally in Stalag 12D, Treves, where he wrote his first theater piece: Barionà, fils du tonnerre, a drama concerning Christmas. Due to poor health (he claimed that his poor eyesight affected his balance) Sartre was released in April 1941. Given civilian status, he recovered his position as a teacher at the Lycée Pasteur near Paris, settled at the Hotel Mistral near Montparnasse in Paris, and was given a new position at the Lycée Condorcet, replacing a Jewish teacher who had been forbidden to teach by Vichy law. After coming back to Paris in May 1941, he participated in the founding of the underground group Socialisme et Liberté with other writers Simone de Beauvoir, Merleau-Ponty, Jean-Toussaint and Dominique Desanti, Jean Kanapa, and École Normale students. In August, Sartre and Beauvoir went to the French Riviera seeking the support of André Gide and André Malraux. However, both Gide and Malraux were undecided, and this might be the cause of Sartre's disappointment and discouragement. Socialisme et Liberté soon disappeared and Sartre decided to write instead of being involved in active resistance. He then wrote Being and Nothingness, The Flies and No Exit, none of which were censored by the Germans. He also contributed to both legal and illegal literary magazines. After August 1944 and the Liberation of Paris, he was a very active contributor to Combat, a newspaper created clandestinely during the Occupation by Albert Camus, a philosopher and author who held similar beliefs. Sartre and Beauvoir remained friends with him until Camus turned away from communism, a schism that eventually divided them in 1951, after the publication of Camus' book The Rebel.
Later, while Sartre was labelled by some authors as a member of the Resistance, the French philosopher and résistant Vladimir Jankélévitch criticized Sartre's lack of political commitment during the German Occupation, and interpreted his further struggles for liberty as an attempt to redeem himself. When the war ended Sartre established Les Temps Modernes (Modern Times), a monthly literary and political review, and started writing full-time as well as continuing his political activism. He would draw on his war experiences for his great trilogy of novels, Les Chemins de la Liberté (The Roads to Freedom) (1945–1949). Jean-Paul Sartre was the head of the Organization to Defend Iranian Political Prisoners from 1964 until the victory of the Islamic Revolution.

Sartre and Communism

The first period of Sartre's career, defined by Being and Nothingness (1943), gave way to a second period as a politically engaged activist and intellectual. His 1948 work Les Mains Sales (Dirty Hands) in particular explored the problem of being an intellectual while at the same time becoming "engaged" politically. He embraced communism, though he never officially joined the Communist party, and took a prominent role in the struggle against French colonialism in Algeria. He became perhaps the most eminent supporter of the Algerian war of liberation. He had an Algerian mistress, Arlette Elkaïm, who became his adopted daughter in 1965. He opposed the Vietnam War and, along with Bertrand Russell and other luminaries, he organized a tribunal intended to expose U.S. war crimes, which became known as the Russell Tribunal. As a fellow-traveller, Sartre spent much of the rest of his life attempting to reconcile his existentialist ideas about self-determination with communist principles, which taught that socio-economic forces beyond our immediate, individual control play a critical role in shaping our lives. His major defining work of this period, the Critique de la raison dialectique (Critique of Dialectical Reason), appeared in 1960. Sartre's emphasis on the humanist values in the early works of Marx led to a dispute with the leading Communist intellectual in France in the 1960s, Louis Althusser, who claimed that the ideas of the young Marx were decisively superseded by the "scientific" system of the later Marx.

Sartre and literature

During the 1940s and 1950s Sartre's ideas remained ambiguous, and existentialism became a favoured philosophy of the beatnik generation. Sartre's views were counterposed to those of Albert Camus in the popular imagination. In 1948, the Catholic Church placed his complete works on the Index of prohibited books. Most of his plays are richly symbolic and serve as a means of conveying his philosophy. The best-known, Huis-clos (No Exit), contains the famous line: "L'enfer, c'est les autres", usually translated as "Hell is other people". Besides the obvious impact of Nausea, Sartre's major contribution to literature was the Roads to Freedom trilogy, which charts the progression of how World War II affected Sartre's ideas. In this way, Roads to Freedom presents a less theoretical and more practical approach to existentialism. The first book in the trilogy, L'âge de raison (The Age of Reason) (1945), could easily be said to be the Sartre work with the broadest appeal.

Sartre after literature

In 1964, Sartre renounced literature in a witty and sardonic account of the early years of his life, Les mots (Words).
The book is an ironic counterblast to Marcel Proust, whose reputation had unexpectedly eclipsed that of André Gide (who had provided the model of literature engagée for Sartre's generation). Literature, Sartre concluded, functioned as a bourgeois substitute for real commitment in the world. In the same year he was awarded the Nobel Prize for Literature, but he resoundingly declined it, stating that he had always refused official honors and didn't wish to align himself with institutions. Though he was now world-famous and a household name (as was "existentialism" during the tumultuous 1960s), Sartre remained a simple man with few possessions, actively committed to causes until the end of his life, such as the student revolt and strikes in Paris during the summer of 1968. In 1975, when asked how he would like to be remembered, Sartre replied: "I would like [people] to remember Nausea, [my plays] No Exit and The Devil and the Good Lord, and then my two philosophical works, more particularly the second one, Critique of Dialectical Reason. Then my essay on Genet, Saint Genet... If these are remembered, that would be quite an achievement, and I don't ask for more. As a man, if a certain Jean-Paul Sartre is remembered, I would like people to remember the milieu or historical situation in which I lived,... how I lived in it, in terms of all the aspirations which I tried to gather up within myself." Sartre's physical condition deteriorated, partially due to the merciless pace of work he put himself through during the writing of the Critique and the last project of his life, a massive analytical biography of Gustave Flaubert (The Family Idiot), both of which remain unfinished. He died on April 15, 1980 in Paris from a pulmonary edema. Sartre was an atheist for most of his adult life, atheism being foundational for his style of existentialist philosophy. However, in March 1980, about a month before Sartre's death, he was interviewed by an assistant of his, Benny Lévy, and within these interviews he claimed that he had converted to Messianic Judaism. The validity of these interviews was disputed; Sartre's supporters were understandably reluctant to believe that he had so abruptly renounced a crucial part of his philosophy. However, shortly before his death, Sartre confirmed that the interviews were authentic.

(major philosophical works in bold)
- L'imagination (Imagination: A Psychological Critique), 1936
- La transcendance de l'égo (The Transcendence of the Ego), 1937
- La nausée (Nausea), 1938
- Le mur (The Wall), 1939
- Esquisse d'une théorie des émotions (Sketch for a Theory of the Emotions), 1939
- L'imaginaire (The Imaginary), 1940
- Les mouches (The Flies), 1943 - a modern version of the Oresteia
- L'être et le néant (Being and Nothingness), 1943
- Réflexions sur la question juive (Anti-Semite and Jew; literally, Reflections on the Jewish Question), 1943
- Huis-clos (No Exit), 1944
- Les Chemins de la liberté (The Roads to Freedom) trilogy, comprising L'âge de raison (The Age of Reason), 1945; Le sursis (The Reprieve), 1945; and La mort dans l'âme (Troubled Sleep), 1949
- Morts sans sépulture (The Victors, literally, Deaths without Burial), 1946
- L'Existentialisme est un humanisme (Existentialism and Humanism), 1946
- La putain respectueuse (The Respectful Prostitute), 1946
- Qu'est-ce que la littérature? (What is Literature?), 1947
- Baudelaire, 1947
- Situations, 1947–1965
- Les mains sales (Dirty Hands), 1948
- "Orphée Noir" (Black Orpheus), introduction to Anthologie de la nouvelle poésie nègre et malgache, edited by Léopold Sédar Senghor, 1948
- Le diable et le bon dieu (The Devil and the Good Lord), 1951
- Les jeux sont faits (The Game is Up), 1952
- Saint Genet, Actor and Martyr, 1952
- Existentialism and Human Emotions, 1957
- Les séquestrés d'Altona (The Condemned of Altona), 1959
- Critique de la raison dialectique (Critique of Dialectical Reason), 1960
- Search for a Method (English translation of the preface to the Critique, Vol. I), 1962
- Les mots (The Words), 1964 - autobiographical
- "Preface" to Frantz Fanon's The Wretched of the Earth
- L'idiot de la famille (The Family Idiot), 1971–1972 - on Gustave Flaubert
- Cahiers pour une morale (Notebooks for an Ethics), 1983, 1947-48 notes on ethics
- Les carnets de la drôle de guerre: Novembre 1939 - Mars 1940 (War Diaries: Notebooks from a Phoney War 1939-1940), 1984, notebooks from Sartre's time in the Phony War of 1939-1940

"The For-itself, in fact, is nothing but the pure nihilation of the In-itself; it is like a hole of being at the heart of Being." (Being and Nothingness, p. 617)
"Quand les riches se font la guerre, ce sont les pauvres qui meurent." (When the rich make war, it's the poor that die.) - Jean-Paul Sartre

- Americans and Their Myths, Sartre's essay in The Nation (October 18, 1947 issue)
- Sartre Internet Archive on Marxists.org
- Audiobook (mp3): incipit of The Words (1964), read aloud in French by IncipitBlog
- Sartre's Critique of Dialectical Reason, essay by Andy Blunden
- Stanford Encyclopedia of Philosophy: Sartre
- Jean-Paul Sartre (1905-1980): Existentialism, Internet Encyclopedia of Philosophy
- Sartre.org: articles, archives, and forum
- "The Second Coming Of Sartre", John Lichfield, The Independent, 17 June 2005
- The World According to Sartre, essay by Roger Kimball
- Reclaiming Sartre, a review of Ian Birchall, Sartre Against Stalinism
- Biography and quotes of Sartre
- Short biography
- Discussion of Sartre's "Kean"
- Sartre and Vietnam
- Sartre at NNDB
- 1987 audio interview of Annie Cohen-Solal, author of Sartre: A Life. Interview by Don Swaim of CBS Radio (RealAudio)

This page uses Creative Commons Licensed content from Wikipedia (view authors).
<urn:uuid:236f886a-7278-4d87-a77b-154cceb4fee4>
CC-MAIN-2013-20
http://psychology.wikia.com/wiki/Jean_Paul_Sartre?direction=prev&oldid=155258
2013-05-22T15:11:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.927606
4,026
Smooth newt (Lissotriton vulgaris)

A newt is an amphibian of the Salamandridae family, although not all aquatic salamanders are considered newts. Newts are classified in the subfamily Pleurodelinae of the family Salamandridae, and are found in North America, Europe and Asia. Newts metamorphose through three distinct developmental life stages: aquatic larva, terrestrial juvenile (called an eft), and adult. Adult newts have lizard-like bodies and may be either fully aquatic, living permanently in the water, or semi-aquatic, living terrestrially but returning to the water each year to breed. Like all members of the order Caudata, newts are characterised by a lizard-like body with four equal-sized limbs and a distinct tail. Aquatic larvae have true teeth on both upper and lower jaws, and external gills. They have the ability to regenerate limbs, eyes, spinal cords, hearts, intestines, and upper and lower jaws. The cells at the site of the injury have the ability to de-differentiate, reproduce rapidly, and differentiate again to create a new limb or organ. One theory is that the de-differentiated cells are related to tumour cells, since chemicals which produce tumours in other animals will produce additional limbs in newts. The main breeding season for newts is between the months of June and July. After courtship rituals of varying complexity, which take place in ponds or slow-moving streams, the male newt transfers a spermatophore which is taken up by the female. Fertilised eggs are laid singly and are usually attached to aquatic plants. This distinguishes them from the free-floating eggs of frogs or toads, which are laid in clumps or in strings. Plant leaves are usually folded over and adhered to the eggs to protect them. The tadpoles, which resemble fish fry but are distinguished by their feathery external gills, hatch out in about three weeks. After hatching they eat algae, small invertebrates or other tadpoles. During the next few months the tadpoles undergo metamorphosis, during which they develop legs, and the gills are absorbed and replaced by air-breathing lungs. Some species, such as the North American newts, also become more brightly coloured during this phase. Once fully metamorphosed they leave the water and live a terrestrial life, when they are known as "efts". Only when the eft reaches adulthood will the North American species return to live in water, rarely venturing back onto the land. Conversely, most European species live their adult lives on land and only visit water to breed. Many newts produce toxins in their skin secretions as a defence mechanism against predators. Taricha newts of western North America are particularly toxic. The rough-skinned newt Taricha granulosa of the Pacific Northwest produces more than enough tetrodotoxin to kill an adult human, and some Native Americans of the Pacific Northwest used the toxin to poison their enemies. More recently, a 29-year-old man in Coos Bay, Oregon, who had been drinking heavily, swallowed a rough-skinned newt for a dare; he died later that day despite hospital treatment. Most newts can be safely handled, provided that the toxins they produce are not ingested or allowed to come in contact with mucous membranes or breaks in the skin. After handling, proper hand-washing techniques should be followed due to the risk from the toxins they produce and bacteria they carry, such as salmonella.
It is, however, illegal to handle or disturb Great Crested Newts in the UK without a licence. About two thirds of all species of the family Salamandridae are commonly called "newts", comprising the following genera:
- Calotriton, Spanish brook newts
- Cynops, firebelly newts
- Echinotriton, spiny newts
- Lissotriton, small-bodied newts
- Mesotriton, Alpine newts
- Neurergus, spotted newts
- Notophthalmus, Eastern newts
- Ommatotriton, banded newts
- Pachytriton, paddle-tail newts
- Paramesotriton, warty newts
- Pleurodeles, ribbed newts
- Taricha, Pacific newts
- Triturus, crested newts
- Tylototriton, crocodile newts

The term "newt" has traditionally been seen as an exclusively functional term for salamanders living in water, and not a systematic unit. The relationship between the genera has been uncertain, although it has been suggested that they constitute a natural systematic unit, and newer molecular analyses tend to support this position. Newts only appear in one subfamily of salamanders, the Pleurodelinae (of the family Salamandridae); however, Salamandrina and Euproctus, which are sometimes listed as Pleurodelinae, are not newts. Whether these are basal to the subfamily (and thus the sister group of the newt group) or derived, making the newts an evolutionary grade (an "incomplete" systematic unit, where not all branches of the family tree belong to the group), is currently not known. The three common European genera are the crested newts (Triturus spp.), the smooth and palmate newts (Lissotriton spp.) and the banded newts (Ommatotriton spp.). Other species present in Europe are the Iberian ribbed newt (Pleurodeles waltl), which is the largest of the European newts, the Pyrenean brook newt (Calotriton sp.), the European brook newt (Euproctus sp.) and the Alpine newt (Mesotriton alpestris). In North America, there are the Eastern newts (Notophthalmus spp.), of which the red-spotted newt (Notophthalmus viridescens) is the most abundant species, but it is limited to the area east of the Rocky Mountains. The three species of coastal or Western newts are the red-bellied newt, the California newt, and the rough-skinned newt, all of which belong to the genus Taricha, which is confined to the area west of the Rockies. In Southeast Asia and Japan, species commonly encountered in the pet trade include the fire belly newts (Cynops spp.), the paddletail newts (Pachytriton spp.), the crocodile newts (Tylototriton spp.), and the warty newts (Paramesotriton spp.). In the Middle East there are the spotted newts (Neurergus spp.). Newt populations have fallen across the world because of pollution or destruction of their breeding sites and terrestrial habitats, and countries such as the USA and the UK have taken steps to halt their decline. In the UK, they are protected under the Wildlife and Countryside Act 1981 and the Habitat Regulations Act 1994. It is illegal to catch, possess or handle Great Crested Newts without a licence, and it is also illegal to cause them harm or death, or to disturb their habitat in any way. The IUCN Red List categorises the species as "lower risk". Although the other UK species, the smooth newt and palmate newt, are not listed, the sale of either species is prohibited under the Wildlife and Countryside Act, 1981.
In Europe, nine newts are listed as "strictly protected fauna species" under appendix II of the Convention on the Conservation of European Wildlife and Natural Habitats:
- Euproctus asper
- Euproctus montanus
- Euproctus platycephalus
- Triturus carnifex
- Triturus cristatus
- Triturus dobrogicus
- Triturus italicus
- Triturus karelinii
- Triturus montandoni

The remaining European species are listed as "protected fauna species" under appendix III. The etymology of the word "newt" has gone through a complex twist of old Middle English variations. The oldest form of the name is eft, which is still used for newly metamorphosed specimens, but according to the Oxford English Dictionary it changed for unknown reasons first to euft and then to ewt. For some time it remained as an ewt, but the "n" from the indefinite article an shifted to form a newt. The sexually mature stage was also called an ewte, with similar etymological roots linking an ewte, newt, euft, and eft: "small lizard-like animal".
- ↑ Brockes, J. & A. Kumar. 2005. Newts. Current Biology 15(2): R42-R44.
- ↑ Heying, H. 2003. "Caudata" (On-line), Animal Diversity Web. Accessed 2007-12-05
- ↑ www.bioscience.utah.edu; Odelberg, S. Accessed 2007-01-24
- ↑ www.scienceclarified.com Accessed 2007-12-01
- ↑ http://lnr.cambridge.gov.uk/news/article.asp?ItemID=285 Accessed 2008-03-06
- ↑ bbc.co.uk Factfile 478 Accessed 2007-11-30
- ↑ 7.0 7.1 7.2 see caudata.org Accessed 2007-11-28
- ↑ Salmonellosis - Reptiles and Amphibians Accessed 2007-11-28
- ↑ CDC MMWR: Reptile-Associated Salmonellosis: Selected States, 1998-2002 Accessed 2007-11-28
- ↑ 10.0 10.1 bbc.co.uk Factfile 479 Accessed 2007-11-28
- ↑ Titus, T. A. & A. Larson (1995): A molecular phylogenetic perspective on the evolutionary radiation of the salamander family Salamandridae. Systematic Biology 44, pp 125-151.
- ↑ 12.0 12.1 Steinfartz, S., S. Vicario, J. W. Arntzen, & A. Caccone (2006): A Bayesian approach on molecules and behavior: reconsidering phylogenetic and evolutionary patterns of the Salamandridae with emphasis on Triturus newts. Journal of Experimental Zoology Part B: Molecular and Developmental Evolution
- ↑ 13.0 13.1 Weisrock, D. W., Papenfuss, T. J., Macey, J. R., Litvinchuk, S. N., Polymeni, R., Ugurtas, I. H., Zhao, E., Jowkar, H., & A. Larson (2006): A molecular assessment of phylogenetic relationships and lineage accumulation rates within the family Salamandridae (Amphibia, Caudata). Molecular Phylogenetics and Evolution 41, pp 368-383.
- ↑ Larson, A., Wake, D., & Devitt, T. (2007): Salamandridae, Newts and "True Salamanders". Tree of Life on-line project
- ↑ Montori, A. and P. Herrero (2004): Caudata. In Amphibia, Lissamphibia. García-París, M., Montori, A., and P. Herrero. Fauna Ibérica, vol. 24. Ramos M. A. et al. (eds.). Museo Nacional de Ciencias Naturales. CSIC. Madrid: pp 43-275
- ↑ www.calacademy.org; California Academy of Sciences Accessed 2007-12-05
- ↑ Carranza, S. & Amat, F. (2005): Taxonomy, biogeography and evolution of Euproctus (Amphibia: Salamandridae), with the resurrection of the genus Calotriton and the description of a new endemic species from the Iberian Peninsula. Zoological Journal of the Linnean Society 145 (4), 555–582.
- ↑ livingunderworld.org; Amphibian Order: Caudata; Accessed 2007-02-05
- ↑ USGS Amphibian Research Monitoring Initiative (Pacific Northwest Region) Accessed 2007-11-30
- ↑ UK Biodiversity Action Plan Accessed 2007-11-30
- ↑ bbc.co.uk Factfile 478 Accessed 2007-11-28
- ↑ arkive.org Accessed 2007-11-30
- ↑ Annexe II: Strictly protected fauna species Retrieved on 15 September 2008
- ↑ Annexe III: Protected fauna species Retrieved on 15 September 2008
- Caudata.nl - The Dutch Newt & Salamander Site
- Caudata Culture
- Eastern Newt - Notophthalmus viridescens Species account from the Iowa Reptile and Amphibian Field Guide

This page uses Creative Commons Licensed content from Wikipedia (view authors).
<urn:uuid:affb6922-3128-43e5-ae28-cb5b34cc95d3>
CC-MAIN-2013-20
http://psychology.wikia.com/wiki/Newt?direction=prev&oldid=131688
2013-05-22T15:25:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.851322
2,835
Nobody in particular can be attributed as the inventor of MUSIC. Music was there since the beginning... in the form of chirping of the birds, water flowing down a waterfall, swaying of branches in breeze, baying of animals. Man learned about music from nature. Music has been around since before humans came into existence. Music was never invented; birds sing, water drips, leaves rustle, fire crackles, and so on. It might be better to ask, "Who first realized that the world is so filled with music?" No one knows. Cavemen used to make music. Pope Gregory the First wrote music around 600 AD. Music wasn't "invented", only discovered. Everyone hums, taps, beats, etc. Early Indians created music. Music goes as far back as human existence. I don't believe one single person can be credited with "inventing" music. Music has existed since prehistoric times and predates the written word. Music in terms of sound created by people for purposes of enjoyment has been around as long as people have. Music theory (notated music, and rules for forming it) didn't start showing up until later, though. The earliest evidence of notated music is from about 2000 BCE, and it was very simple. More developed forms showed up later.
<urn:uuid:7003dd96-45a0-4228-b97d-4f4aa5cf76a6>
CC-MAIN-2013-20
http://qna.rediff.com/questions-and-answers/what-is-the-evolution-of-money/14303068/answers/6763066
2013-05-22T15:15:16Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.978435
291
Retinitis pigmentosa (RP) is the name for a group of congenital eye diseases that lead to a destruction of the retina, i.e. of the tissue at the posterior end of the eye that is capable of transmitting visual impressions. Worldwide about 3 million persons (in Germany approx. 30 to 40 thousand) suffer from one of the different forms of RP. This still-incurable disease is one of the most frequent causes of loss of sight in adult life. RP is a congenital disease that can be transmitted to one's offspring. It has been estimated that every 80th person carries an "unfavourably modified" RP gene, i.e. genetic information that can start the development of this retinal disease in RP-gene carriers or in their offspring. Diseases of this type are characterized by the onset of night blindness in adolescence or in middle age; the visual field constricts, and contrast vision, colour vision and eventually visual acuity deteriorate, so that the ability to see gradually declines, often until blindness is reached. The entire process is gradual, occurs in phases and extends over decades. This development leads to professional and private disadvantages and thus to heavy psychological stress. At the bottom of these symptoms lies a gradual destruction of the retinal photoreceptors, usually beginning with the rods responsible for night and twilight vision, later continuing with the cones located at the center of the retina, which are important for reading and colour vision. Even though the disease was first described in the middle of the 19th century, with only very few exceptions there still exists no possibility, whether surgical, pharmacological or dietary, to slow down or to halt the process of photoreceptor cell death. Ever since the disease became known, a whole spectrum of therapies has been tried out, from scientifically founded attempts to sheer quackery, all of which turned out to be ineffective.

In order to explain the process of seeing, a comparison with photography is useful. The light by which we are able to see falls into the eye through the pupil. The pupil corresponds to the aperture and widens or narrows according to the light intensity. The lens of the eye has the same function as the lens of the camera, i.e. to render an image sharp at a certain distance. The retina corresponds to the film. The retina constitutes the innermost of three layers of tissue that line the back part of the eyeball, which is not visible from outside. Directly beneath the retina lies the choroid, which supplies the retina with nutrients, and the outermost layer is made up of the protective sclera. Whereas film has the same light sensitivity everywhere on its surface, the retina has different sensitivity to light in different areas. The retina is made up of rods and cones. The rods enable us to see at night and during twilight; they cover the larger part of the retinal surface. The cones enable us to see colour and allow us to see images sharply. They are found mainly in the centre of the retina, in the so-called "yellow spot" or macula. From the rim of the macula outwards, the proportion of rods increases and that of cones decreases. The macula has a diameter of only 2 mm. Our visual field extends to about 180°, but we are not able to see everywhere with the same acuity. The centre of the visual field, i.e. the area around our line of vision, possesses the greatest visual acuity. We rely on this area for reading and for recognizing fine details. Towards the periphery, our visual acuity decreases.
Already 5° laterally to the line of vision, visual acuity decreases to 30% (see Figure below). The peripheral zone of the retina allows spatial orientation. The retina contains millions of photoreceptors; these are stimulated by incident light, and the light is transformed into electric impulses that are transmitted via the optic nerve to the brain, where the image that we see is finally composed.
Source: VISUS 1/95, Durch das Auge

Degenerative retinal diseases

The concept "degenerative retinal diseases" covers generally all those eye diseases that are based on the death of various retinal cells. Two main types are distinguished: diffuse retinal degenerations (e.g. retinitis pigmentosa, Usher syndrome, choroideremia) and localized retinal degenerations (e.g. age-related macula degeneration). A series of congenital and non-congenital degenerative retinal diseases affect primarily the area of sharpest vision, the macula. The outer visual field and the capacity for orientation remain untouched. As a rule, night blindness does not occur, since the rods lying outside the central retina keep functioning normally. A few symptoms resulting from the damage to the macula correspond to those of advanced RP: decrease of visual acuity, of the ability to read, of contrast sensitivity and of colour vision, and an increase in sensitivity to glare. The affected person can no longer fixate an object. Thus it is possible to see a clock but not to recognize the time, to see a person without recognizing his or her features. The age of onset and the severity of the symptoms vary and depend upon the type of the disease. In Germany 1-2 million persons suffer from one type of macula degeneration, with the majority suffering from age-related macular degeneration (AMD). The name "age-related" or "senile" macula degeneration (AMD) derives from the fact that the first symptoms appear only from the age of 45 to 50 years and the probability of being affected by AMD increases with age. The "juvenile" type of macula degeneration can occur already in the 10th or 20th year of life, and it subsumes various diseases with similar characteristics. All of them can lead to a progressive decrease of vision in the center of the retina.

Age-related macula degeneration (AMD)

Age-related macula degeneration can take two different courses. The more frequent form (ca. 85%) is the "dry" type, where central retinal cells slowly degenerate. This cell death leads slowly to a decrease in the ability to see. Occasionally, for a longer period of time, the process of degeneration comes to a standstill, so that some patients are able to read up into their old age by means of optic or electronic reading aids. Effective drugs or other therapeutical measures do not exist. In the far more rare cases of "wet" macula degeneration, liquid collects beneath the macula, stemming usually from ingrown choroid vessels. The leakage of fluid from the blood vessels damages the light-sensitive cells of the macula. The image on the retina is distorted, so that the first symptom of this disease is the apparent contortion of straight lines into curved ones, later followed by spots in the center of the visual field.

Cone-rod dystrophy is a disease which from its onset goes hand in hand with a heavy loss of visual acuity. It progresses with varying speed. In Germany alone, roughly 2000 persons are affected by this disease, which in rare cases is connected with other organ diseases. Usually the disease manifests itself by age 20; sometimes, however, it appears later in life.
Choroideremia constitutes a rare retinal-choroidal dystrophy; according to its course it is a rod-cone dystrophy. In Germany about 1000 persons, usually men, are affected.

Photoreceptors: light-sensitive retinal cells. There exist rods and three types of cones containing different photopigments for distinguishing various hues of colour.

"Visual acuity" is the eye's capacity for recognizing fine details; it depends above all on the resolution of the retina and in the area of sharp vision lies around 1-2'. In low brightness and at the periphery of the visual field, visual acuity decreases markedly, because the concentration of photoreceptors is lower. The optometrist or the ophthalmologist examines visual acuity with test images on which individual letters, numbers or pictures must be recognized at a certain distance. A visual acuity of 100% is regarded as a good average value in normal-sighted patients. Beneath 20% is classified as "strong visual handicap" and beneath 3% as "blind", even though some remaining vision may still be present, e.g. the ability to distinguish darkness and light.

Cochlear implant: a hearing implant, placed into the inner ear when the inner ear no longer functions but the acoustic nerve is still intact. In contrast to hearing aids, cochlear implants stimulate the acoustic nerve directly.
<urn:uuid:73448f30-7dd2-4568-87c9-00a9db1a4604>
CC-MAIN-2013-20
http://retina-implant.de/en/glossary/default.aspx
2013-05-22T15:06:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.926648
1,840
Students at the Graduate School of Design at Harvard University, under the guidance of Professor Ingeborg M. Rocker of Rocker-Lange Architects, have built a wall structure out of chipboard bricks. The research seminar On the Bri(n)ck II: Architectural Envelope traces the historical development of a debate about the architectural envelope that began at the end of the 19th century. It was a critical period in the industrial revolution when new materials and technologies became available and started to inform architectural design and debate. Architects began to question the role that mass-production should play in architecture, and also questioned the influence that new notation and construction techniques had on the architects' work. Today these and similar questions are resurfacing as the digital medium literally informs the conceptualization and production of architecture. In the beginning of the 20th century brick became the dominant local material, embodying the socially and politically motivated expansions of rapidly growing European cities. Brick was particularly favored in the urban centers of the Netherlands, Germany and Austria. Today the role of brick has evolved: though solid and capable of bearing great loads, it is now mostly used as cladding. On the Bri(n)ck II consequently focused on the changing role and materiality of brick today. The project engaged several teams to develop architectural envelopes that were constituted from either mass-produced or mass-customized load-bearing brick units, or alternatively mass-produced or mass-customized non-structural brick cladding. In addition to the research on different discretization techniques and structural properties of surfaces, the research seminar also sought to identify alternative brick materials that were widely available, sustainable, light and inexpensive. The On the Bri(n)ck II (1:1) project employed several hundred cardboard brick units to form the geometry of a Limaçon surface. This is a continuous geometry that inscribes an interior space with a single surface. The openings of the brick units, along with the units themselves, adapt in size, geometry and width to the surface's geometry. At the same time the overall surface geometry is challenged through the discretization techniques generating the bricks. Using a 2-dimensional material to create a 3-dimensional brick unit was challenging. The research had to overcome obstacles such as the geometric construction of the unit, its ability to unfold, and resourceful use of the material. Working with chipboard also required a very precise study of the units' geometry in relation to their structural stability. Much attention was paid to the units, their seams and the ease with which one was able to assemble and disassemble them. A chipboard rib further stabilized the unit connections. The project was designed and built using the CAD/CAM facilities at the GSD. Overall the design and building process brought up questions regarding mass-production and mass-customization. The project explored the limits of a mass-customization process, examining how the same procedure can lead to an array of possible results.
Ingeborg M. Rocker, Ph.D.
Hiroshi Jacobs (MDES)
Core Team: Mais Al Azab, William Choi, Hernan Garcia, Casey Hughes, John Jakubiec, Lesley McTague, Marta Nowak, and Mark Pomarico
Team: Harvard GSD Students
Drawings: Hiroshi Jacobs + Casey Hughes
Renderings: Will Choi
Funding: Junior Faculty Grant from the Department of Architecture, Harvard University, GSD
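As a purely illustrative aside (not the studio's actual workflow), the kind of curve the project names can be sampled and discretized in a few lines. The sketch below assumes a planar limaçon r = b + a·cos(θ) and an arbitrary brick count; it only shows how unit widths naturally adapt to the curve's geometry, echoing the project's mass-customization theme.

```python
# Hypothetical sketch: sample a limacon curve and place one "brick" per segment,
# letting each unit's width follow the local spacing of the curve.
import math

A, B = 1.0, 1.5     # assumed parameters; a < b < 2a gives a dimpled, non-self-intersecting loop
N_BRICKS = 60       # assumed number of brick columns around the curve

def limacon_point(theta):
    """Cartesian point on the limacon r = B + A*cos(theta)."""
    r = B + A * math.cos(theta)
    return r * math.cos(theta), r * math.sin(theta)

bricks = []
for i in range(N_BRICKS):
    t0 = 2 * math.pi * i / N_BRICKS
    t1 = 2 * math.pi * (i + 1) / N_BRICKS
    x0, y0 = limacon_point(t0)
    x1, y1 = limacon_point(t1)
    width = math.hypot(x1 - x0, y1 - y0)          # chord length = adapted unit width
    bricks.append(((0.5 * (x0 + x1), 0.5 * (y0 + y1)), width))

widths = [w for _, w in bricks]
print(f"{len(bricks)} bricks, widths from {min(widths):.3f} to {max(widths):.3f}")
```

Even this toy version shows that a single generative procedure yields units of varying size, which is the mass-customization point the seminar explores.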
<urn:uuid:d51f66e4-38ad-40b3-9068-020e85828b3c>
CC-MAIN-2013-20
http://rocker-lange.com/blog/?cat=8
2013-05-22T15:07:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.940853
712
We recently heard about the two young ladies in Minnesota who hanged themselves after being bullied for not fitting in and being overweight. It is unacceptable that our children are driven to suicide by another person's bias and hatred. A hate crime is any criminal act motivated by bigotry and bias. That bias can be based on racial, religious, ethnic, handicap, gender or sexual orientation prejudice. Through harassment, intimidation, or violence, hate crimes are an assault on our constitutional rights and our physical wellbeing. And they cannot be tolerated. Crimes motivated by hatred and bias are among the most serious challenges to public safety our Commonwealth faces. The Patrick-Murray administration is committed to preventing and responding to hate crimes, and to supporting their victims, in every community. Since 1991, Massachusetts has been aggregating data from local police departments on the number and nature of hate crimes reported in every one of the Commonwealth's communities. For more information on these reports visit www.mass.gov/eops. Ending hate crimes in our communities requires each of us to do our part. Raising awareness about hate crimes and the promotion of non-violent, tolerant attitudes are the most important tools we have to improve the safety of all Massachusetts citizens.
<urn:uuid:110d6c77-a9f2-42f8-a591-516d521abef9>
CC-MAIN-2013-20
http://safety.blog.state.ma.us/blog/2011/05/index.html
2013-05-22T15:20:53Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.938476
256
You probably visit the restroom several times a day. While you're there, do you ever consider the technology at work around you? And do you ever think about how much water is used to make it all work? The world, in fact, flushes up to 20 percent of its drinking water down various drains [source: Waterless]. That's a lot of water going to waste. In addition to the obvious -- water conservation -- there are many other answers to the question, "Why waterless toilets?" In developing countries, waterless toilets can provide sanitation with little infrastructure and are doubly helpful in regions prone to droughts. Homeowners in Death Valley might like the idea of not flushing drinking water down the toilet. New Yorkers might like the idea of saving money that would otherwise pay for expanding congested sewer systems and wastewater treatment plants. If you're moving to the backwoods and don't want to buy a septic system, a waterless toilet could work. What a waterless toilet will mean for you, the toilet owner, is that your toilet won't flush with water. In most cases, except for today's waterless urinals, the toilet doesn't connect to a city's water grid. The waste doesn't go to a water treatment plant. Instead, you take care of the waste. Does it sound disgusting? Suppress your memories of smelly camp latrines; modern waterless toilets aren't like that. As you'll soon learn, instead of the waste becoming a rank mess, in these toilets, the waste can become harmless or even able to do work for you. Do you want to find out how? Read on.
<urn:uuid:bd914289-fc0b-4927-9526-bc8083e7a39d>
CC-MAIN-2013-20
http://science.howstuffworks.com/environmental/green-tech/sustainable/waterless-toilet.htm
2013-05-22T14:59:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.961332
341
A video from the University of Washington explains how condensation heats up frosty cans more quickly. Droplets of condensation may make a cold can of beer look more appealing on a hot day, but they're also making that frosty brew warm up faster. So here's some news you can use: If it's hot and humid, put a cover over your can of cold beverage. And if you want to warm up a frozen can quickly, don't bake it. Steam it. That's exactly what University of Washington researchers did in a series of experiments to show how the warming power of condensation applies to issues ranging from colder beer to hotter climates. The beer-can study, published in the April issue of Physics Today, began a couple of years ago when UW atmospheric scientist Dale Durran was looking for a way to explain how condensation produced heat as the flip side of evaporative cooling. The cooling effect is well-known — we feel it when sweat evaporates to cool us off in the summer time, or when we turn on a mist cooler. But the flip side of the effect is less widely understood. Durran figured out that the condensation on a cold aluminum can might serve as a handy illustration. He did a quick back-of-the-napkin calculation, and found that the heat released by water just 100 microns (four thousandths of an inch) thick should heat its contents by 9 degrees Fahrenheit (5 degrees Celsius). "I was surprised to think that such a tiny film of water would cause that much warming," Durran said in a UW news release. He recruited a fellow atmospheric scientist at UW, Dargan Frierson, to conduct the initial experiment ... in Frierson's basement bathroom. First, they set a can of beverage on the toilet tank and warmed it up with a space heater. Then they took another can, turned on the shower and let the bathroom get nice and steamy. Each time they ran the experiment, the researchers stuck a thermometer through the can's pop-top opening and watched the temperature rise over the course of 15 minutes. Droplets of condensation on a chilly can are a signal that the temperature inside is rising. Frierson said conditions got a little sticky in the steamed-up bathroom. "I think that's the most uncomfortable my research has ever made me — but it's all for science," he told NBC News. Even though the air temperature was the same in both cases, the liquid in the steamed-up can warmed up twice as fast. The researchers followed up on the basement-bathroom findings with more rigorous lab experiments. Every time, the cans warmed up more quickly in more humid conditions. The researchers even charted how quickly 12-ounce aluminum cans of chilled liquid should warm up, depending on different levels of temperature and humidity. For example, in five minutes, the can should get 6 degrees F (3 degrees C) warmer due to condensation amid New Orleans' typical summer conditions. The equivalent warm-up factor would be 3.5 degrees F (2 degrees C) in New York, and 2 degrees F (1 degree C) in Seattle. But in Dhahran, a Saudi city that ranks among the hottest, stickiest places in the world, the can would get about 14 degrees F (8 degrees C) warmer in five minutes. That's why covering a cold can is a such a good idea on a steamy-hot summer day. "Probably the most important thing a beer koozie does is not simply insulate the can, but keep condensation from forming on the outside of it," Durran said. 
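Durran's back-of-the-napkin estimate is easy to reproduce. The sketch below is not from the study; the can surface area, beverage mass and property values are assumptions for illustration (a 12-ounce can treated as water, roughly 300 square centimetres of wetted surface, and textbook values for water's latent heat of condensation and specific heat).

```python
# Rough check of the "100 micron film warms the drink by about 9 F" estimate.
# All numbers below are assumptions for illustration, not values from the study.

LATENT_HEAT_J_PER_G = 2450.0    # heat released when 1 g of water vapor condenses (~20 C)
SPECIFIC_HEAT_J_PER_G_K = 4.18  # specific heat of a water-like beverage
CAN_AREA_CM2 = 300.0            # assumed wetted surface of a 12-ounce can
FILM_THICKNESS_CM = 0.01        # 100 microns
BEVERAGE_MASS_G = 355.0         # 12 fluid ounces, treated as water

film_mass_g = CAN_AREA_CM2 * FILM_THICKNESS_CM * 1.0   # water density ~1 g/cm^3
heat_released_j = film_mass_g * LATENT_HEAT_J_PER_G    # energy dumped into the can
delta_t_c = heat_released_j / (BEVERAGE_MASS_G * SPECIFIC_HEAT_J_PER_G_K)

print(f"Condensed film: {film_mass_g:.1f} g")
print(f"Warming: {delta_t_c:.1f} C ({delta_t_c * 9 / 5:.1f} F)")
```

With these assumptions the film comes to about 3 grams and the warming to roughly 5 degrees Celsius, or about 9 degrees Fahrenheit, in line with the figure quoted above.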
The effects of condensation and evaporation are well-known to climatologists, but Durran and Frierson say the beer-can experiments can give the general public a better understanding of atmospheric dynamics. "Condensation as a heat source is just tremendously important," Frierson said. "It's really like the gasoline that powers hurricanes, thunderstorms and tornadoes."
Some climate models suggest that there could be 25 percent more humidity in the atmosphere by the end of the 21st century, and that could lead to more bouts of extreme weather in the decades to come. "We want people to appreciate how powerful this effect is," Durran told NBC News. "A very thin film around the can makes a big difference in the temperature of its contents, and that just makes you appreciate the importance of that same heating effect in our atmosphere."
Here's how to run the experiment described in the YouTube video from the University of Washington Department of Atmospheric Sciences Outreach:
- Freeze two cans of your favorite beverage. This should take roughly seven hours, depending on your freezer.
- Fifteen minutes before taking out the cans, preheat the oven to 250 degrees F and start boiling water in a pot. Place a cookie rack on top of the pot.
- Take the cans out of the freezer. Place one in the preheated oven and one over the boiling pot.
- Start a timer for 10 minutes.
- After 10 minutes, carefully remove the cans from the oven and pot.
- Crack open both cans and pour into separate glasses.
- Take a photo/video of the two cans and glasses, go to the UW YouTube page, and post a video response.
More beer-can science:
- Tiny sip of beer can produce burst of pleasure
- Study explains the science of a beer buzz
- Scientists study how beer goes bad
Update for 9:30 p.m. ET April 26: Would wiping off the drops of condensation keep your drink cooler? Sorry, says UW spokeswoman Hannah Hickey. "That will only make your drink even warmer," she writes in a Twitter update.
Update for 2:25 p.m. ET April 27: Some commenters are wondering why there's so much fuss over a relatively simple concept. The point of the exercise wasn't really to break new ground in atmospheric physics (or in summertime beverage consumption), but "to improve our intuition about the power of condensational heating" — which is a huge factor in climate dynamics. Durran explained further in a comment below, and I'm providing an extended version of his comments here to give them a little more visibility:
"In my class, students definitely need to know how condensation causes heating. Here's how. There are bonds that link water molecules together into a crystal lattice to form ice. It takes heat (energy) to break a few of those bonds and turn ice to liquid water. To evaporate the liquid water, the rest of the bonds between molecules need to be broken, which takes a lot more heat. Once all the bonds are broken, the liquid is converted to water vapor, an invisible gas.
"This process reverses when water vapor is cooled enough to condense as liquid water. Bonds between molecules re-form, and the heat it took to originally break them is released into the surroundings.
"The reason we make a big deal about the power of condensational heating is that it does amazing things in the atmosphere, such as powering the updrafts in thunderstorms. The rising cloud-filled updrafts in the video linked below ascend like hot-air balloons because they are warmed, not by burning a fuel like propane, but by the heat released as water vapor condenses.
"Here's the video link: http://www.youtube.com/watch?v=GVIwDoogncQ "Such a visualization might help people understand some of the applications. (Only the last half of the Physics Today article was about the beer can heating.)" Durran and Frierson are the authors of "Condensation, Atmospheric Motion, and Cold Beer" in Physics Today. Supplemental experiments are described in "An Experiment Uses Cold Beverages to Demonstrate the Warming Power of Latent Heat." Lab experiments were performed by Stella Choi and Steven Brey. Galen Richards and Jaycyl Golding, high school students serving as Pacific Science Center Discovery Corps interns, worked on earlier versions of the experiments. Instrument makers Allen Hart and Steven Domonkos built experimental apparatuses. Funding was provided by National Science Foundation grants AGS-0846641 and AGS-1138977. Alan Boyle is NBCNews.com's science editor. Connect with the Cosmic Log community by "liking" the log's Facebook page, following @b0yle on Twitter and adding the Cosmic Log page to your Google+ presence. To keep up with Cosmic Log as well as NBCNews.com's other stories about science and space, sign up for the Tech & Science newsletter, delivered to your email in-box every weekday. You can also check out "The Case for Pluto," my book about the controversial dwarf planet and the search for new worlds.
The Belted Kingfisher can often be spotted on a perch overlooking the water, with its head down as it scans the water below for suitable prey. They are plunge divers, diving headfirst to grab prey (usually small fish) near the water's surface. The female (see bottom photo) is actually more colorful than the male (see photo to the right), with a chestnut-colored band across the belly that the male lacks.
Habitat: Can be found near practically any aquatic habitat, especially in the winter and the fall. They are more particular in the summer, when they require aquatic habitats with nearby dirt banks for their nesting burrows.
Diet: Primarily feeds on small fish. They will also eat other aquatic animals including frogs, tadpoles, crayfish, and aquatic insects. They occasionally feed away from water and will take mice and other small rodents, lizards, small snakes, and small birds.
Behavior: Observes from a perch over water, plunging headfirst into the water after prey when spotted. Also will hover over water in search of prey.
Nesting: May through July
Migration: Summers throughout the U.S. and Canada. Birds in the northern part of its range usually migrate southward in the fall, but they can be found in winter in the north as long as open water is available. They are generally a rare occurrence in South Dakota in the winter, however.
Similar Species: Similar to other Kingfishers, but the Belted Kingfisher is the only one found in South Dakota.
Conservation Status: Numbers appear to be stable, with slight decreases in parts of its range.
Cornell University's "All About Birds - Belted Kingfisher"
eNature.com: Belted Kingfisher
Photo Information: Top Photo (Male): February 9th, 2003 -- Beaver pond on Sergeant Creek in Newton Hills State Park -- Terry Sohl
Bottom Photo (Female): April 5th, 2003 -- Pierre, SD --
Additional Photos: Click on the image chips or text links below for additional, higher-resolution Belted Kingfisher photos.
How Do I Teach the Process of Science?
This module was authored by Anne E. Egger, Stanford University, as part of a collaboration between Visionlearning and the SERC Pedagogic Service, and includes the products of a July 2009 workshop on Teaching the Process of Science. Whether you are thinking about making big or small changes in your courses, the same principles apply in emphasizing the process of science:
Be explicit (McGinn & Roth, 1999). For example, in teaching about evolution, you might focus on the data that supports the concept of change in organisms rather than on the conclusions that Darwin and others made. In the process, you could emphasize how evolutionary theory is supported by multiple lines of evidence from many lines of research up to the present. Or in a laboratory exercise where students graph a large amount of data, you might ask them to first draw conclusions about their data before graphing it, and again after graphing it, in the process emphasizing the importance that visual representation brings to data analysis. Most of our textbooks don't explicitly address the process of science. You can read more about Integrating the Process into Readings, and Browse Text Resources that do incorporate the process of science.
Tell stories (Klassen, 2008). In addition, telling science stories can help address some of the conflicts that students may feel between science and religion (Bickmore et al., 2009). For more information, read Addressing Science and Religion, contributed by participants in the July 2009 workshop.
Use real data. Perhaps one of the most significant things you can do in your classroom is to give students the opportunity to work with real data (Manduca & Mogk, 2002). Nothing can compare with the insight gained by students collecting and working with data, where they have a chance to experience for themselves the challenges and successes that are a part of every scientific endeavor. For more information about how to use data in your classroom, visit the module Teaching with Data.
Assessing student understanding of the process of science can be very challenging, especially when we are far more used to assessing their content knowledge. Several participants in the July 2009 workshop had developed and tested their own instruments, including:
- Nancy Ruggieri, Department of Curriculum & Instruction, University of Wisconsin-Madison - On Assessing Student Understanding of the Nature of Scientific Knowledge (PowerPoint 583kB Jul16 09)
- Kaatje Kraft, Department of Physical Science, Mesa Community College - Using Metacurriculum to Enhance Student Understanding of the Nature of Science (PowerPoint 16.8MB Jul16 09)
- Karen Viskupic, Department of Geosciences, Boise State University - Measuring Geoscience Students' Perceptions of the Nature of Science (PowerPoint 123kB Jul16 09)
- Anthony Carpi, Science Department, John Jay College-CUNY - Incorporating Process-Oriented Instruction Assessment into a Non-majors Science Class (PowerPoint 829kB Jul16 09)
In addition, a number of assessment instruments have been developed for addressing different aspects of the nature of science, the process of science, and student attitudes towards science. These include:
- Views on the Nature of Science Questionnaire (VNOS) is an open-ended questionnaire developed by Norm Lederman, Fouad Abd-El-Khalick, Randy Bell, and Renee Schwartz to test student understanding of several aspects of nature of science concepts. For a complete reference, see Lederman et al., 2002.
- The Genomics Education Partnership has developed a pre- and post-survey for their biology courses that incorporate student research. The instrument assesses student attitudes towards science and understanding of the nature and process of science. For a complete reference, see Lopatto et al., 2008.
For general information about designing effective assessment tools, visit the SERC module on Assessment.
Shall I compare thee to a summer's day?
Thou art more lovely and more temperate:
Rough winds do shake the darling buds of May,
And summer's lease hath all too short a date:
Sometime too hot the eye of heaven shines,
And often is his gold complexion dimmed,
And every fair from fair sometime declines,
By chance, or nature's changing course untrimmed:
But thy eternal summer shall not fade,
Nor lose possession of that fair thou ow'st,
Nor shall death brag thou wander'st in his shade,
When in eternal lines to time thou grow'st,
So long as men can breathe, or eyes can see,
So long lives this, and this gives life to thee.
This is one of the most famous of all the sonnets, justifiably so. But it would be a mistake to take it entirely in isolation, for it links in with so many of the other sonnets through the themes of the descriptive power of verse; the ability of the poet to depict the fair youth adequately, or not; and the immortality conveyed through being hymned in these 'eternal lines'. It is noticeable that here the poet is full of confidence that his verse will live as long as there are people drawing breath upon the earth, whereas later he apologises for his poor wit and his humble lines which are inadequate to encompass all the youth's excellence. Now, perhaps in the early days of his love, there is no such self-doubt and the eternal summer of the youth is preserved forever in the poet's lines.
The poem also works at a rather curious level of achieving its objective through dispraise. The summer's day is found to be lacking in so many respects (too short, too hot, too rough, sometimes too dingy), but curiously enough one is left with the abiding impression that 'the lovely boy' is in fact like a summer's day at its best, fair, warm, sunny, temperate, one of the darling buds of May, and that all his beauty has been wonderfully highlighted by the comparison.
The 1609 Quarto Version
SHall I compare thee to a Summers day?
Thou art more louely and more temperate:
Rough windes do ſhake the darling buds of Maie,
And Sommers leaſe hath all too ſhorte a date:
Sometime too hot the eye of heauen ſhines,
And often is his gold complexion dimm'd,
And euery faire from faire ſome-time declines,
By chance,or natures changing courſe vntrim'd:
But thy eternall Sommer ſhall not fade,
Nor looſe poſſeſſion of that faire thou ow'ſt,
Nor ſhall death brag thou wandr'ſt in his ſhade,
When in eternall lines to time thou grow'ſt,
So long as men can breathe or eyes can ſee,
So long liues this,and this giues life to thee,
The Dynamic Crab Pulsar
Look carefully at this animated image. What you see is the Crab pulsar, a rapidly rotating neutron star at the heart of the Crab Nebula, propelling matter and antimatter outward at near the speed of light, seen in 24 sequential images acquired over several months by the Hubble Space Telescope.
The Crab pulsar is a tiny, dense remnant of a star that exploded in a supernova, observed here on Earth in the year 1054. It is very small - only about 25 km (15 miles) across - has a mass about 1.5 times that of our own Sun, and rotates at an amazing rate of 30 times per second.
Bright wisps of energetic particles can be seen moving outward from the pulsar at half the speed of light to form an expanding ring. These wisps appear to originate from a shock wave that shows up as an inner X-ray ring. Also, a turbulent jet appears to be spewing material to the left, looking much like steam from a high-pressure boiler - except it's a stream of matter and antimatter electrons (that is, electrons and positrons) moving at half the speed of light.
This little neutron star, some 6,500 light-years away, has been doing this energetic pirouette for the past thousand years, and, undisturbed, will likely continue to do so for billions of years more.
Absolutely ridiculously mind-blowing. I don't even have words to articulate how awesome this is.
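As a quick back-of-the-envelope aside (mine, not part of the original post), the quoted size and spin rate imply that a point on the pulsar's equator is moving at a few thousand kilometers per second. A minimal Python sketch:

import math

# Figures quoted in the post above.
diameter_km = 25.0       # approximate diameter of the neutron star
spins_per_second = 30.0  # rotation rate

# Derived equatorial speed: circumference times rotation rate.
equatorial_speed_km_s = math.pi * diameter_km * spins_per_second

speed_of_light_km_s = 299_792.458
print(f"Equatorial speed: about {equatorial_speed_km_s:,.0f} km/s, "
      f"roughly {equatorial_speed_km_s / speed_of_light_km_s:.1%} of the speed of light.")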
Discrimination is prohibited by six of the core international human rights documents. The vast majority of the world's states have constitutional or statutory provisions outlawing discrimination. (Osin and Porat 2005) And most philosophical, political, and legal discussions of discrimination proceed on the premise that discrimination is morally wrong and, in a wide range of cases, ought to be legally prohibited. However, co-existing with this impressive global consensus are many contested questions, suggesting that there is less agreement about discrimination than initially meets the eye. What is discrimination? Is it a conceptual truth that discrimination is wrong, or is it a substantive moral judgment? What is the relation of discrimination to oppression and exploitation? What are the categories on which acts of discrimination can be based, aside from such paradigmatic classifications as race, religion, and sex? These are some of the contested issues.
- 1. The Concept of Discrimination
- 2. Types of Discrimination (in its Moralized Sense)
- 3. Challenging the Concept of Indirect Discrimination
- 4. Why Is Discrimination Wrong?
- 5. Which Groups Count?
- 6. What Good is the Concept of Discrimination?
- 7. Intersectionality
- 8. Conclusion
- Academic Tools
- Other Internet Resources
- Related Entries
What is discrimination? More specifically, what does it mean to discriminate against some person or group of persons? It is best to approach this question in stages, beginning with an answer that is a first approximation and then introducing additions, qualifications, and refinements as further questions come into view. In his review of the international treaties that outlaw discrimination, Wouter Vandenhole finds that “[t]here is no universally accepted definition of discrimination” (2005: 33). In fact, the core human rights documents fail to define discrimination at all, simply providing non-exhaustive lists of the grounds on which discrimination is to be prohibited. Thus, the International Covenant on Civil and Political Rights declares that “the law shall prohibit any discrimination and guarantee to all persons equal and effective protection against discrimination on any ground such as race, color, sex, language, religion, political or other opinion, national or social origin, property, birth or other status” (Article 26). And the European Convention for the Protection of Human Rights declares, “The enjoyment of the rights and freedoms set forth in this Convention shall be secured without discrimination on any ground such as sex, race, color, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status” (Article 14). Left unaddressed is the question of what discrimination itself is.
Any viable account of what discrimination is will regard it as consisting of actions, practices, or policies that are, in some appropriate sense, based on the (perceived) social group to which those discriminated against belong. Moreover, the relevant groups must be “socially salient,” as Kasper Lippert-Rasmussen puts it, i.e., they must be groups that are “important to the structure of social interactions across a wide range of social contexts” (2006: 169). Thus, groups based on race, religion and gender qualify as potential grounds of discrimination in any modern society, but groups based on the musical or culinary tastes of persons would typically not so qualify.
Discrimination against persons, then, is necessarily oriented toward them based on their membership in a certain type of social group. But it is also necessary that the discriminatory conduct impose some kind of disadvantage or harm on the persons at whom it is directed. In this connection, consider the landmark opinion of the U.S. Supreme Court in Brown v. Board of Education, holding that de jure racial segregation in public schools is unconstitutional. The court writes, “Segregation with the sanction of law . . . has a tendency to [retard] the educational and mental development of negro children and to deprive them of some of the benefits they would receive in a racial[ly] integrated school system” (1954: 495). Thus, the court rules that segregation amounts to illegal discrimination against black children because it imposes on them educational and psychological disadvantages. Additionally, as Brown makes clear, the disadvantage imposed by discrimination is to be determined relative to some appropriate comparison social group. This essential reference to a comparison group explains why duties of non-discrimination are “duties to treat people in certain ways defined by reference to the way that others are treated” (Gardner 1998: 355). Typically, the relevant comparison group is part of the same society as the disadvantaged group, or at least it is governed by the same overarching political structure. In Brown, the relevant comparison group consisted of white citizens. Accordingly, it would be mistaken to think that the black citizens of Kansas who brought the lawsuit were not discriminated against because they were treated no worse than blacks in South Africa were being treated under apartheid. Blacks in South Africa were not the proper comparison class. The appropriate comparison class is determined by normative principles. American states are obligated to provide their black citizens an education that is no worse than what they provide to their white citizens; any comparison with the citizens or subjects of other countries is beside the point. It should also be noted that, whether or not American states have an obligation to provide an education to any of their citizens, if such states provide an education to their white citizens, then it is discriminatory for the states to fail to provide an equally good education to their black citizens. And if states do have an obligation to provide an education to all their citizens, then giving an education to whites but not blacks would constitute a double-wrong against blacks: the wrong of discrimination, which depends on how blacks are treated in comparison to whites, and the wrong of denying blacks an education, which does not depend on how whites are treated. Discrimination is inherently comparative, and the Brown case seems to suggest that what counts in the comparison is not how well or poorly a person (or group) is treated on some absolute scale, but rather how well she is treated relative to some other person. But an important element of the court's reasoning in Brown suggests that the essence of discrimination does not lie in treating some persons more favorably than others. Thus, the court famously writes, “Separate educational facilities are inherently unequal” (1954: 495). In other words, the harm of discrimination lies in the very act itself of racially separating black and white children, quite apart from the educational or psychological impact of the separation. 
On this understanding, treating blacks differently from whites amounts to discrimination, even if they are treated as well as whites. However, there is a critical problem with the view that the essence of discrimination is differential treatment rather than disadvantageous treatment. If this view were correct, then, under Jim Crow segregation, not only blacks but also whites would be victims of discrimination. Differential treatment is symmetrical: if blacks are treated differently from whites, then whites must be treated differently from blacks. But it is implausible to hold that the South's system of racial segregation discriminated against whites. The system arguably held back economic progress for everyone in the South, but that point is quite different from the implausible claim that everyone was a victim of discrimination. Accordingly, it is better to think of discrimination in terms of disadvantageous treatment rather than simply differential treatment. Discrimination imposes a disadvantage on certain persons relative to others, and those who are treated more favorably are not to be seen as victims of discrimination. An act can both be discriminatory and, simultaneously, confer an absolute benefit on those discriminated against, because the conferral of the benefit might be combined with conferring a greater benefit on the members of the appropriate comparison group. In such a case, the advantage of receiving an absolute benefit is, at the same time, a relative disadvantage or deprivation. For example, consider the admissions policy of Harvard University in the early twentieth century, when the university had a quota on the number of Jewish students. Harvard was guilty of discriminating against all Jewish applicants on account of their religion. Yet, the university still offered the applicants something of substantial value, viz., the opportunity to compete successfully for admission. What made the university's offer of this opportunity discriminatory was that the quota placed (potential and actual) Jewish applicants at a disadvantage, due to their religion, relative to Christian ones. One might think that it downplays the harm done by discrimination to say that the disadvantage it imposes only need be a relative disadvantage. However, the Brown case shows how the imposition of even a “merely” relative disadvantage can have extremely bad and unjust consequences for persons, especially when the relevant comparison class consists of one's fellow citizens. Disadvantages relative to fellow citizens, when those disadvantages are severe and concern important goods such as education, can make persons vulnerable to domination and oppression at the hands of their fellow citizens. (Anderson 1999) The domination and oppression of American blacks by their fellow citizens under Jim Crow was made easier by the relative disadvantage imposed on blacks when it came to education. Norwegians might have had an even better education than southern whites, but Norwegians posed little threat of domination to southern whites or blacks, because they lived under an entirely separate political structure, having minimal relations to American citizens. Matters are different in today's globalized world, where an individual's disadvantage in access to education relative to persons who live in other countries could pose a threat of oppression. 
Accordingly, one must seriously consider the possibility that children from poor countries are being discriminated against when they are unable to obtain the education routinely available to children in affluent societies. The relative nature of the disadvantage that discrimination imposes explains the close connection between discrimination and inequality. A relative disadvantage necessarily involves an inequality with respect to persons in the comparison class. Accordingly, antidiscrimination norms prohibit certain sorts of inequalities between persons in the relevant comparison classes. (Shin 2009) For example, the U.S. Civil Rights Act of 1866 requires that all citizens “shall have the same right, in every State and Territory in the United States, to make and enforce contracts, to sue, be parties, and give evidence, to inherit, purchase, lease, sell, hold, and convey real and personal property, and to full and equal benefit of all laws and proceedings for the security of person and property, as is enjoyed by white citizens” (Civil Rights Act 1866). And the international convention targeting discrimination against women condemns “any distinction, exclusion or restriction made on the basis of sex which has the effect or purpose of impairing or nullifying the recognition, enjoyment or exercise by women … on a basis of equality of men and women, of human rights and fundamental freedoms” (CEDAW, Article 1). To review: as a reasonable first approximation, we can say that discrimination consists of acts, practices, or policies that impose a relative disadvantage on persons based on their membership in a salient social group. But notice that this account does not make discrimination morally wrong as a conceptual matter. The imposition of a relative disadvantage might, or might not, be wrongful. In the next section, we will see how the idea of moral wrongfulness can be introduced to form a moralized concept of discrimination. The concept of discrimination is inherently normative to the extent that the idea of disadvantage is a normative one. But it does not follow from this point that discrimination is, by definition, morally wrong. At the same time, many—or even most—uses of the term ‘discrimination’ in contemporary political and legal discussions do employ the term in a moralized sense. David Wasserman is using this moralized sense, when he writes that “[t]o claim that someone discriminates is … to challenge her for justification; to call discrimination ‘wrongful’ is merely to add emphasis to a morally-laden term” (1998: 805). We can, in fact, distinguish a moralized from a non-moralized concept of discrimination. The moralized concept picks out acts, practices or policies insofar as they wrongfully impose a relative disadvantage on persons based on their membership in a salient social group of a suitable sort. The non-moralized concept simply dispenses with the adverb ‘wrongfully’. Accordingly, the sentence ‘Discrimination is wrong’ can be either a tautology (if ‘discrimination’ is used in its moralized sense) or a substantive moral judgment (if ‘discrimination’ is used in its non- moralized sense). And if one wanted to condemn as wrong a certain act or practice, then one could call it ‘discrimination’ (in the moralized sense) and leave it at that, or one could call it ‘discrimination’ (in the non-moralized sense) and then add that it was wrongful. 
In contexts where the justifiability of an act or practice is under discussion and disagreement, the moralized concept of discrimination is typically the key one used, and the disagreement is over whether the concept applies to the act. Because of its role in such discussion and disagreement, the remainder of this article will be concerned with the moralized concept of discrimination, unless it is explicitly indicated otherwise. There is an additional point that needs to be made in connection with the wrongfulness of discrimination in its moralized sense. It is not simply that such discrimination is wrongful as a conceptual matter. The wrongfulness of the discrimination is tied to the fact that the discriminatory act is based on the victim's membership in a salient social group. An act that imposes a relative disadvantage or deprivation might be wrong for a variety of reasons; for example, the act might violate a promise that the agent has made. The act counts as discrimination, though, only insofar as its wrongfulness derives from a connection of the act to the membership in a certain group(s) of the person detrimentally affected by the act. Accordingly, we can refine the first-approximation account of discrimination and say that the moralized concept of discrimination is properly applied to acts, practices or policies that meet two conditions: a) they wrongfully impose a relative disadvantage or deprivation on persons based on their membership in some salient social group, and b) the wrongfulness rests (in part) on the fact that the imposition of the disadvantage is on account of the group membership of the victims. Legal thinkers and legal systems have distinguished among a bewildering array of types of discrimination: direct and indirect, disparate treatment and disparate impact, intentional and institutional, individual and structural. It is not easy to make sense of the morass of categories and distinctions. The best place to start is with direct discrimination. Consider the following, clear instance of direct discrimination. In 2002, several men of Roma (“gypsy”) descent entered a bar in a Romanian town and were refused service. The bar employee explained his conduct by pointing out to them a sign saying, “We do not serve Roma.” The Romanian tribunal deciding the case ruled that the Roma men had been the victims of unlawful direct discrimination. (Schiek, Waddington, and Bell 2007: 185) The bar's policy, as formulated in its sign, explicitly and intentionally picked out the Roma qua Roma for disadvantageous treatment. It is those two features—explicitness and intention—that characterize direct discrimination. More precisely, acts of direct discrimination are ones which the agent performs with the aim of imposing a disadvantage on persons for being members of some salient social group. In the Roma case, the bartender and bar owner aimed to exclude Roma for being Roma, and so both the owner's policy and the bartender's maxim of action explicitly referred to the exclusion of Roma. It is clear that the policy of the bar was wrong, but the question of what makes the policy and other instances direct discrimination wrongful will be put on hold until section 4.1 below. In some cases, a discriminator will adopt a policy that, on its face, makes no explicit reference to the group that he or she aims to disadvantage. Instead, the policy employs some facially-neutral surrogate that, when applied, accomplishes the discriminator's hidden aim. 
For example, during the Jim Crow era, southern states used literacy tests for the purpose of excluding African-Americans from the franchise. Because African-Americans were denied adequate educational opportunities and because the tests were applied in a racially-biased manner, virtually all of the persons disqualified by the tests were African-Americans, and, in any given jurisdiction, the vast majority of African-American adults seeking to vote were disqualified. The point of the literacy tests was precisely such racial exclusion, even though the testing policy made no explicit reference to race. Notwithstanding the absence of an explicit reference to race in the literacy tests themselves, their use was a case of direct discrimination. The reason is that the persons who formulated, voted for, and implemented the tests acted on maxims that did make explicit reference to race. Their maxim was something along the lines of: ‘In order to exclude African-Americans from the franchise and do so in a way that appears consistent with the U.S. Constitution, I will favor a legal policy that is racially-neutral on its face but in practice excludes most African-Americans and leaves whites unaffected.’ As with the Roma case, there are agents whose aim is to disadvantage persons for belonging to a certain social group. Accordingly, direct discrimination is intentional discrimination. Without the intent to disadvantage persons based on their race, sex, religion and so on, there is no direct discrimination; with such an intent to disadvantage, there is direct discrimination (in the moralized sense), as long as imposing the disadvantage on the basis in question is wrong. In American legal doctrine, the more opaque term, ‘disparate treatment’, is used to refer to this type of discrimination, although courts will often simply talk of intentional discrimination. It might be thought that one big problem with the concept of direct discrimination is that it makes no room for unconscious discrimination. It is plausible to think that in many societies, unconscious prejudice is a factor in a significant range of discriminatory behavior, and a viable understanding of the concept of discrimination must be able to accommodate the possibility. In fact, there is growing evidence that unconscious discrimination exists. (Dasgupta 2004) But it is a mistake to assume that direct discrimination must be conscious discrimination. As Amy Wax (2008: 983) has argued, the mistake rests on a conflation of the intentional-unintentional distinction with the quite distinct conscious-unconscious distinction. Intentions and aims can be unconscious. Thus, an agent might engage in direct discrimination without any consciousness of the fact that she is aiming to disadvantage persons on account of their group membership. In contrast to the Roma and the literacy cases are those in which acts or policies are not aimed—explicitly or surreptitiously, consciously or unconsciously- at persons for being members of a certain social group. Yet, the acts or policies have the effect of disproportionately disadvantaging the members of a particular group. According to many thinkers and legal systems, such acts can, in some circumstances, count as a form of discrimination, viz., “indirect discrimination” or, in the language of American doctrine, “disparate impact” discrimination. 
Thus, the European Court of Human Rights (ECHR) has held that “[w]hen a general policy or measure has disproportionately prejudicial effects on a particular group, it is not excluded that this may be considered as discriminatory notwithstanding that it is not specifically aimed or directed at that group.” (Shanaghan v. U.K. 2001: para. 129) Indirect discrimination is different from the direct form in that the relevant agents do not aim to disadvantage persons for being members of a certain group. It is important to note that the ECHR says that policies with disproportionate effects may be discriminatory even if that is not the aim of the policies. So what criterion determines when a policy with disproportionately worse effects on a certain group actually counts as indirect discrimination? There is no agreed upon answer. The ECHR has laid down the following criterion: a policy with disproportionate effects counts as indirect discrimination “if it does not pursue a legitimate aim or if there is not a reasonable relation of proportionality between means and aim” (Abdulaziz et al. v. U.K., 1985: para. 721). Swedish law contains a different criterion: a policy with disproportionate effects is not discriminatory if and only if it “can be motivated by a legitimate aim and the means are appropriate and necessary to achieve the aim” (Osin and Porat 2005: 864). The Human Rights Committee of the United Nations has judged that a policy with disproportionate effects is discriminatory “if it is not based on objective and reasonable criteria” (Moucheboeuf 2006: 100). Under the British Race Relations Act, such a policy is discriminatory if the policymaker “cannot show [the policy] to be justifiable irrespective of the … race … of the person to whom it is applied” (Osin and Porat 2005: 900). And in its interpretation of the Civil Rights Act of 1964, the U.S. Supreme Court has held that, in judging whether the employment policies of private businesses are (indirectly) discriminatory, “[i]f an employment practice which operates to exclude Negroes cannot be shown to be related to job performance, the practice is prohibited” (Griggs v. Duke Power 1971: 431). Despite the differences, these criteria have a common thought behind them: a disproportionately disadvantageous impact on the members of certain salient social groups must not be written off as morally or legally irrelevant or dismissed as mere accident, but rather stands in need of justification. In other words, the impact must not be treated as wholly inconsequential, as if it were equivalent, for example, to a disproportionate impact on persons with long toe nails. Toe-nail group impact would require no justification, because it would simply be an accidental and morally inconsequential feature of the act, at least in all actual societies. In contrast, the thought behind the idea of indirect discrimination is that, if an act has a disproportionately disadvantageous impact on persons belonging to certain types of salient social groups, then the act is morally wrong and legally prohibited if it cannot meet some suitable standard of justification. To illustrate the idea of indirect discrimination, we can turn to the U.S. Supreme Court case, Griggs v. Duke Power (1971). A company in North Carolina used a written test to determine promotions. The use of the test had the result that almost all black employees failed to qualify for the promotions. 
The company was not accused of intentional (direct) discrimination, i.e., there was no claim that race was a consideration that the company took into account in deciding to use the written test. But the court found that the test did not measure skills essential for the jobs in question and that the state of North Carolina had a long history of deliberately discriminating against blacks by, among other things, providing grossly inferior education to them. The state had only very recently begun to rectify that situation. In ruling for the black plaintiffs, the court reasoned that the policy of using the test was racially discriminatory, because of the test's disproportionate racial impact combined with the fact that it was not necessary to use the test to determine who was best qualified for promotion. In many cases, acts of discrimination are attributed to collective agents, rather than to natural persons acting in their individual capacities. Accordingly, corporations, universities, government agencies, religious bodies, and other collective agents can act in discriminatory ways. This kind of discrimination can be called “organizational,” and it cuts across the direct-indirect distinction. Confusion sometimes arises when it is mistakenly believed that organizations cannot have intentions and that only indirect discrimination is possible for them. As collective agents, organizations do have intentions, and those intentions are a function of who the officially authorized agents of the institution are and what they are trying to do when they act as their official powers enable them. Suppose that the Board of Trustees of a university votes to adopt an admissions policy that (implicitly or explicitly) excludes Jews, and the trustees vote that way precisely because they believe that Jews are inherently more dishonest and greedy than other people. In such a cases, the university is deliberately excluding Jews and is guilty of direct discrimination. Individual trustees acting in their private capacity might engage in other forms of discriminatory conduct; for example, they might refuse to join clubs that have Jewish members. Such a refusal would not count as organizational discrimination. But any discriminatory acts attributable to individual board members in virtue of some official power that they hold would count as organizational discrimination. Structural discrimination—sometimes called “institutional” (Ture and Hamilton 1992/1967: 4)—should be distinguished from organizational: the structural form concerns the rules that constitute and regulate the major sectors of life such as family relations, property ownership and exchange, political powers and responsibilities, and so on. (Pogge 2008: 37) It is true that when such rules are discriminatory, they are often—though not always—the deliberate product of some collective or individual agent, such as a legislative body or executive official. In such cases, the agents are guilty of direct discrimination. But the idea of structural discrimination is an effort to capture a wrong distinct from direct discrimination. Thus, Fred Pincus writes that “[t]he key element in structural discrimination is not the intent but the effect of keeping minority groups in a subordinate position” (1994: 84). What Pincus and others have in mind can be explained in the following way. 
When the rules of a society's major institutions consistently produce disproportionately disadvantageous outcomes for the members of certain salient social groups and the production of such outcomes is unjust, then there is structural discrimination against the members of the groups in question, apart from any direct discrimination in which the collective or individual agents of the society might engage. This account does not mean that, empirically speaking, structural discrimination stands free of direct discrimination. It is highly unlikely that the consistent production of unjust and disproportionately disadvantageous effects would be a chance occurrence. Rather, it is (almost) always the case that, at some point(s) in the history of a society in which there is structural discrimination, important collective agents, such as governmental ones, intentionally created rules with the aim of disadvantaging the members of the groups in question. It is also likely that some collective and individual agents continue to engage in direct discrimination in such a society. But by invoking the idea of structural discrimination and attributing the discrimination to the rules of a society's major institutions, we are pointing to a form of discrimination that is conceptually distinct from the direct discrimination engaged in by collective or individual agents. Thus understood, structural discrimination is, as a conceptual matter, necessarily indirect, although, as empirical matter, direct discrimination is (almost) always part of the story of how structural discrimination came to be and continues to exist. Also note that the idea of structural discrimination does not presuppose that, whenever the rules of society's major institutions consistently produce disproportionately disadvantageous results for a salient group such as women or racial minorities, structural discrimination thereby exists. Because our concern is with the moralized concept of discrimination, one might think that disproportionate outcomes, by themselves, entail that an injustice has been done to the members of the salient group in question and that structural discrimination thereby exists against the group. However, on a moralized concept of structural discrimination, the injustice condition is distinct from the disproportionate outcome condition. Whether a disproportionate outcome is sufficient for concluding that there is an injustice against the members of the group is a substantive moral question. Some thinkers might claim that the answer is affirmative, and such a claim is consistent with the moralized concept of structural discrimination. However, the claim is not presupposed by the moralized concept, which incorporates only the conceptual thesis that a pattern of disproportionate disadvantage falling on the members of certain salient groups does not count as structural discrimination unless the pattern violates sound principles of distributive justice. The distinction between direct and indirect discrimination plays a central role in contemporary thinking about discrimination. However, some thinkers hold that talking about indirect discrimination is confused and misguided. For these thinkers, direct discrimination is the only genuine form of discrimination. Examining their challenge to the very concept of indirect discrimination is crucial in developing a philosophical account of what discrimination is. 
Iris Young argues that the concept of discrimination should be limited to “intentional and explicitly formulated policies of exclusion or preference.” She holds that conceiving of discrimination in terms of the consequences or impact of an act, rather than in terms of its intent, “confuses issues” by conflating discrimination with oppression. Discrimination is a matter of the intentional conduct of particular agents. Oppression is a matter of the outcomes routinely generated by “the structural and institutional framework” of society. (1990: 196) Matt Cavanagh holds a position similar to Young's, writing that persons “who are concerned primarily with how things like race and sex show up in the overall distributions [of jobs] have no business saying that their position has anything to do with discrimination. It is not discrimination they object to, but its effects; and these effects can equally be brought about by other causes” (2002: 199). For example, the disproportionate exclusion of certain ethnic groups from the ranks of professional violinist could be the result of discrimination against those groups, but it also might be an effect of the fact that there is a lower proportion of persons from those groups who have perfect pitch than the proportion found in other ethnic groups. The arguments of Cavanagh and Young raise a question that is not easy to answer, viz,, why can indirect and direct discrimination be legitimately considered as two subcategories of one and the same concept? In other words, what do the two supposed forms of discrimination really have in common that make them forms of the same type of moral wrong? Direct discrimination is essentially a matter of the reasons and aims that guide the act or policy of a particular agent, while indirect discrimination is not about such reasons and aims. Even conceding that acts or policies of each type can be wrong, it is unclear that the two types are each species of one and the same kind of moral wrong, i.e., the wrong of discrimination. And if cases of direct discrimination are paradigmatic examples of discrimination, then a serious question arises as to whether the concept of discrimination properly applies to the policies, rules, and acts that are characterized as “indirect” discrimination. Moreover, there is a crucial ambiguity in how discrimination is understood that lends itself to conflating direct discrimination with the phenomena picked out by ‘indirect discrimination’. Direct discrimination involves the imposition of disadvantages “based on” or “on account of” or “because of” membership in some salient social group. Yet, these phrases can refer either to a) the reasons that guide the acts of agents or to b) factors that do not guide agents but do help explain why the disadvantageous outcomes of certain acts and policies fall disproportionately on certain salient groups. (Cf. Shin, 2010.) In the Roma case, the disadvantage was “because of” ethnicity in the former sense: the ethnicity of the Roma was a consideration that guided the acts of the bar owner and bartender. In the Griggs case, the disadvantage was “because of race” in the latter sense: race did not guide the acts of the company but neither was it an accident that the disadvantages of the written test fell disproportionately on blacks. Rather, race, in conjunction with the historical facts about North Carolina's educational policies, explained why the disadvantage fell disproportionately on black employees. 
The thought that the policy of the company in Griggs is a kind of discrimination, viz., indirect discrimination, seems to trade on the ambiguity in the meanings of the locutions ‘based on’, ‘because of’, ‘on account of’, and so on. The state of North Carolina's policy of racial segregation in education imposed disadvantages based on/because of/on account of race, in one sense of those terms. The company's policy of using a written test imposed disadvantages based on/because of/on account of race, in a different sense. Even conceding that both the state and the company wronged blacks on the basis of their race, it appears that the two cases present two different kinds of wrong. However, one can reasonably argue, to the contrary, that the two wrongs are not different in kind, because they are both instances of wrongs done to persons in which membership in some salient social group explains why the wrongful disadvantages fell on the individuals in question. In order to understand why the persons in the Roma case were thrown out of the bar, it is necessary to understand that their identity as Roma was the key practical reason guiding the actions of the bartender. In order to understand why the employees in the Griggs case were unable to get promotions, it is necessary to understand that their identity as blacks was a key factor explaining why they failed to achieve high enough scores on the company's written test. The types of explanation operate at different levels. In the Roma case, the explanation works at the retail level of particular agents, explaining why individual Roma were harmed by appealing to the practical reasons of the bartender. By contrast, in the Griggs case, the explanation involves considerations at the wholesale level of general social facts, explaining racially disproportionate exam outcomes by appealing to factors about a population's access to educational opportunities, factors that did not play a role in the practical reasoning of the company. But in each case, the appropriate explanation for why certain individuals were disadvantaged concerned their group membership. Additionally, for both forms of discrimination, the wrongfulness of the discriminatory act or policy derived from the connection between the disadvantages and the salient-group membership of the persons who were disadvantaged. Still, one might argue that the concept of indirect discrimination is problematic because its use mistakenly presupposes that the wrongfulness of discrimination can lie ultimately in its effects on social groups. Certainly, bad effects can be brought about by discriminatory processes, but, the argument claims, the wrongfulness lies in what brings about the effects, i.e., in the unfairness or injustice of those acts or policies that generate the effects, and does not lie in the effects themselves. Cavanagh seems to have this argument in mind when he writes that people who characterize an act or policy as discriminatory on the basis of its effects are really objecting to the effects and that “these effects can equally be brought about by other causes” (2002: 199). Addressing this argument requires a closer examination of why discrimination is wrong, the topic of section 4. Before turning to that section, it would be helpful to address a suspicion that might arise in the course of pondering whether indirect discrimination really is a form of discrimination.
One might suspect that any disagreement over whether indirect discrimination is really a form of discrimination is only a terminological one, devoid of any philosophical substance and capable of being adequately settled simply by the speaker stipulating how she is using term ‘discrimination’. (Cavanagh 2002: 199) One side in the disagreement could, then, stipulate that, as it is using the term, ‘discrimination’ applies only to direct discrimination, and the other side could stipulate that ‘discrimination’, as it is using the term, applies to direct and indirect discrimination alike. However, the choice of terminology is not always philosophically innocent or unproblematic. A poor choice of terminology can lead to conceptual confusions and fallacious inferences. Cavanagh argues that precisely these sorts of infelicities are fostered when ‘discrimination’ is used to refer to a wrong that essentially depends on certain effects being visited upon the members of a social group. (2002: 199) Moreover, the critics and the defenders of the term ‘indirect discrimination’ presumably agree with one another that the concept of discrimination possesses a determinate meaning that either admits, or does not admit, of an indirect form of discrimination. So it seems that the disagreement over indirect discrimination has philosophical significance. The possibility should be acknowledged that the concept of discrimination is insufficiently determinate to dictate an answer to the question of whether there can be an indirect form of discrimination. In that case, there would be no answer, and any disagreement over the possibility of such discrimination would be devoid of philosophical substance and should be settled by speaker stipulation. However, it would be hasty to arrive at the conclusion that there is no answer before a thorough examination of the concept of discrimination is completed and some judgment is made about what the best account is of the concept. And a thorough examination must take up the question of why discrimination is wrong. As we have seen, discrimination, in its moralized sense, is necessarily wrongful, and the wrong is connected in some way to its group-based character. But these points do not yet tell us why discrimination is wrongful. Let us begin with direct discrimination and then turn to the indirect form, in order to shed some light on whether the wrongs involved in the two forms are sufficiently analogous to consider them as two types of one and the same kind of wrong, viz., the wrong of discrimination in general. Specifying why direct discrimination is wrongful has proved to be a surprisingly controversial and difficult task. There is general agreement that the wrong concerns the kind of reason that guides the action of the agent of discrimination: the agent is acting on a reason that is in some way illegitimate or morally tainted. But there are more than a half-dozen distinct views about what the best principle is for drawing the distinction between acts of direct discrimination (in the moralized sense) and those acts that are not wrongful even though the agent takes account of another's social group membership. One popular view is that direct discrimination is wrong because the discriminator treats persons on the basis of traits that are immutable and not under the control of the individual possessing them. Thus, Richard Kahlenberg asserts that racial discrimination is unjust because race is such an immutable trait. (1996: 54–55). 
And discrimination based on many forms of disability would seem to fit this view. But Bernard Boxill rejects the view, arguing that there are instances in which it is justifiable to treat persons based on features that are beyond their control. (1992: 12–17) Denying blind people a driver's license or persons with little athletic ability a place on the basketball team is not an injustice to such individuals. Moreover, Boxill notes that, if scientists developed a drug that could change a person's skin color, it would still be unjust to discriminate against people because of their color. (1992: 16) Additionally, a paradigmatic ground of discrimination, a person's religion, is not an immutable trait, nor are some forms of disability. Thus, there are serious problems with the popular view that direct discrimination is wrong due to the immutable nature of the traits on the basis of which the discriminator treats the persons whom he wrongs. A second view holds that direct discrimination is wrong because it treats persons on the basis of inaccurate stereotypes. When the state of Virginia defended the male-only admissions policy of the Virginia Military Institute (VMI), it introduced expert testimony that there was a strong correlation between sex and the capacity to benefit from the highly disciplined and competitive educational atmosphere of the school: those who benefited from such an atmosphere were, for the most part, men, while women had a strong tendency to thrive in a quite different, cooperative educational environment. This defense involved the premise that the school's admissions policy was not discriminatory because the policy relied on accurate generalizations about men and women. And in its ruling against VMI, the Supreme Court held that a public policy “must not rely on overbroad generalizations about the different talents, capacities, or preferences of males and females” (U.S. v. Virginia 1996: 533). But the court went on to argue that “generalizations about ‘the way women are’, estimates of what is appropriate for most women, no longer justify denying opportunity to women whose talent and capacity place them outside the average description” (U.S. v. Virginia 1996: 550, italics in original). This reasoning implies that, even if gender were a very good predictor of the qualities needed to benefit from and be successful at the school, VMI's admissions policy would still be discriminatory. (Schauer 2003: 137–141) The Court's reasoning here seems sound, because it is possible that there was something discriminatory about the way in which the school had defined success. For example, the school might have focused on those forms of discipline and competition that generally favored men and might have ignored those forms that favored women. In such a case, the discrimination would not rest on an inaccurate stereotype, but, perhaps, on the undue devaluation of qualities accurately associated with women. Whether or not VMI's policy involved such a consideration, an account of why direct discrimination is wrong should be consistent with the possibility of discrimination that is based on accurate gender generalizations. A third view is that direct discrimination is wrong because it is an arbitrary or irrational way to treat persons. In other words, direct discrimination imposes a disadvantage on a person for a reason that is not a good one, viz., that the person is a member of a certain salient social group. 
Accordingly, Anne-Marie Cotter argues that such discrimination treats people unequally “without rational justification” (2006: 10). John Kekes expresses a similar view in condemning race-based affirmative action as “arbitrary” (1995: 200), and, in the same vein, Anthony Flew argues that racism is unjust because it treats persons on the basis of traits that “are strictly superficial and properly irrelevant to all, or almost all, questions of social status and employability” (1990: 63–64). However, many thinkers reject this third view of the wrongness of direct discrimination. John Gardner argues that there is no “across-the-board-duty to be rational, so our irrationality as such wrongs no one.” Additionally, Gardner contends that “there patently can be reasons, under some conditions, to discriminate on grounds of race or sex,” even though the conduct in question is wrongful. (1998: 168) For example, a restaurant owner might rationally refuse to serve blacks if most of his customers are white racists who would stop patronizing the establishment if blacks were served. (1998: 168 and 182) The owner's actions would be wrong and would amount to a rational form of discrimination. Additionally, Richard Wasserstrom argues that the principle that persons ought not to be treated on the basis of morally arbitrary features cannot grasp the fundamental wrong of direct racial discrimination, because the principle is “too contextually isolated” from the actual features of a society in which many people have racist attitudes. (1995: 161) For Wasserstrom, the wrong of racial discrimination cannot be separated from the fact that such discrimination manifests an attitude that the members of certain races are intellectually and morally inferior to the rest of the population. A fourth view is that direct discrimination is wrong because it fails to treat individuals based on their merits. Thus, Sidney Hook argues that hiring decisions based on race, sex, religion and other social categories are wrong because such decisions should be based on who “is best qualified for the post” (1995: 146). In a similar vein, Alan Goldman argues that discriminatory practices are wrong because “the most competent individuals have prima facie rights to positions” (1979: 34). Opponents of this merit-based view note that it is often highly contestable who the “best qualified” really is, because the criteria determining qualifications are typically vague and do not come with weights attached to them. (Wasserman, 1998: 807) Another line of criticism claims that merit does not entitle a person to a position. For example, the most meritorious worker might be an obnoxious person whose fellow workers would dislike working with him. It would seem that an employer has the right to hire a less meritorious but more likable person. Even if the company's profits fell a bit as a result, no wrong is done to the meritorious but obnoxious applicant. Thus, Cavanagh suggests that “hiring on merit has more to do with efficiency than fairness” (2002: 20). Cavanagh also notes that a merit principle cannot explain what is distinctively wrong about an employer who discriminates against blacks because the employer thinks that they are morally or intellectually inferior. The merit approach “makes [the employer's] behavior look the same as any other way of treating people … non-meritocratically” (2002: 24–25). A fifth view, developed by Sophia Moreau, regards direct discrimination as wrong because it violates the equal entitlement each person has to freedom. 
In particular, she contends that “the interest that is injured by discrimination is our interest in … deliberative freedoms: that is, freedoms to have our decisions about how to live insulated from the effects of normatively extraneous features of us, such as our skin color or gender” (2010: 147). Normatively extraneous features are “traits that we believe persons should not have to factor into their deliberations …as costs.” For example, “people should not be constrained by the social costs of being one race rather than another when they deliberate about such questions as what job to take or where to live” (2010: 149). Yet, it is unclear that Moreau's account gets to the bottom of what is wrong with discrimination. One might object, following the criticisms leveled by Wasserstrom and Cavanagh at the arbitrariness and merit accounts, respectively, that the idea of a normatively extraneous feature is too abstract to capture what makes racial discrimination a paradigmatic form of direct discrimination. There are reasons that justify our belief “that persons should not have to factor [race] into their deliberations …as costs,” and those reasons seem to be connected to the idea that racial discrimination treats persons of a certain race as having a diminished or degraded moral status as compared to individuals belonging to other races. The wrong of racial and other forms of discrimination seems better illuminated by understanding it in terms of such degraded status than in terms of the idea of normatively extraneous features. A sixth view, developed by Deborah Hellman, holds that direct discrimination is wrong because it demeans and denigrates those against whom it is directed, thereby treating such persons as morally inferior. Thus, she contends that “the act of demeaning is the wrong of wrongful discrimination” (2008: 172). For example, it is demeaning, she argues, for an employer to require female employees to wear cosmetics because such a requirement “conveys the idea that a woman's body is for adornment and the enjoyment by others” (2008: 42). Patrick Shin proposes a similar account in his discussion of equal protection, arguing that “to characterize an action as unequal treatment is to register a certain objection as to what, in view of its rationale, the action expresses” (2009: 170). Offending actions are ones that treat an individual “as though that individual belonged to some class of individuals that was less entitled to right treatment than anyone else” (2009:169). Closely related to Hellman's account is a seventh view, holding that direct discrimination is wrong on account of its connection to prejudice, where prejudice is understood as an attitude that regards the members of a salient group, qua members, as not entitled to as much respect or concern as the members of other salient groups. Prejudice can involve feelings of hostility, antipathy, or indifference, as well as belief in the inferior morals, intellect, or skills of the targeted group. Returning to the case of the Roma who were excluded by the policy of a bar, we could say that the policy was discriminatory because it was the expression of prejudice against the Roma, whereas a bar's policy of excluding men from the women's restroom would fail to be discriminatory because it would not be an expression of prejudice. John Hart Ely defends a version of this seventh view, holding that discriminatory acts are those that are motivated by prejudice. 
(1980: 153–159) Ronald Dworkin has formulated an alternative version, arguing that discriminatory acts are those that could be justified only if some prejudiced belief were correct. The absence of a “prejudice-free justification” thus makes a law or policy discriminatory. (1985: 66) The seventh view, along with the accounts of Hellman and Shin, rests on the intuitively attractive idea that the wrongfulness of direct discrimination is tied to its denial of the equal moral status of persons. This idea is also at the heart of Wasserstrom's complaint that understanding discrimination in terms of arbitrariness is too abstract to capture the wrongfulness of racial discrimination as it has actually been practiced, and of Cavanagh's related objection to the merit view that what is wrong with the discriminatory acts of a racist cannot be adequately grasped strictly in terms of the denial of merit. However the details are to be worked out, the essential wrong of paradigmatic acts of direct discrimination seems to be that they violate the equal moral status of persons by treating the victims in ways that would be appropriate only for individuals having a diminished or degraded moral status. The most egregious forms of indirect discrimination are typically structural, due to the pervasive impact of a society's basic institutions on the life-prospects of its members. (Rawls 1971: 7) Indirect discrimination is structural when the rules and norms of society consistently produce disproportionately disadvantageous outcomes for the members of a certain group relative to the other groups in society, when the outcomes are unjust to the members of the disadvantaged group, and when the production of the outcomes is to be explained by the group membership of those individuals. Cass Sunstein nicely captures the wrong of this form of indirect discrimination in the course of explaining his antidiscrimination principle, which he calls the “anticaste principle.” He writes, “The motivating idea [for the anticaste principle] is that without good reason, social and legal structures should not turn differences that are both highly visible and irrelevant from the moral point of view into systematic social disadvantages. A systematic disadvantage is one that operates along standard and predictable lines in multiple and important spheres of life” (1994: 2429). In a similar vein, Catharine MacKinnon finds structural discrimination against women to be intolerable because it consists of “the systematic relegation of an entire group of people to a condition of inferiority” (1987: 41). Two related wrongs belonging to structural discrimination can be distinguished. First is the wrong that consists of society's major institutions imposing, without adequate justification, relative disadvantages on persons belonging to certain salient social groups. Accordingly, it is wrong for society's basic rules to deny to women or to racial or religious minorities opportunities for personal freedom, development, and flourishing equal to those that men or racial and religious majorities enjoy. Second is the wrong of placing the members of a salient social group in a position of vulnerability to exploitation and domination as a result of the denial of equal opportunities and the imposition of other kinds of relative disadvantage. Accordingly, it is wrong for a society to make women vulnerable to sexual exploitation and domination at the hands of men by the imposition of various economic and social disadvantages relative to men. 
In contrast, the wrongs of non-structural forms of indirect discrimination seem to be dependent on structural (or direct) discrimination. Consider the Griggs case. The company's promotion policy was not part of the wrong involved in society's basic institutions imposing relative disadvantages on blacks. But the policy did have some connection to structural racial discrimination and to the widespread direct discrimination against blacks that existed prior to and contemporaneous with the policy. The policy helped to perpetuate the unjust disadvantages that were due to such structural and direct discrimination, even though it was not needed to serve any legitimate business purpose; that is why the policy was wrong. Or at least that is what the proponents of the idea of indirect discrimination appear to have in mind when they talk about non-structural forms of indirect discrimination. Are the wrongs of indirect discrimination sufficiently similar to the wrongs of direct discrimination that it is reasonable to say that they are, in fact, two different types of one and the same wrong? We have seen that the accounts of the wrong of direct discrimination are many and various. But abstracting from those differences, critics of talk of “indirect discrimination” might argue that discrimination is essentially a process-based wrong, rather than an outcome-based one, and that only direct discrimination is process-based. In other words, only with direct discrimination is there a defect in how some outcome is brought about, rather than in what the outcome itself is. On this view, discriminating against people is similar to having an incompetent person judge an ice-skating competition: just as the incompetent judging taints the process by which places are awarded in the competition, discrimination taints the process by which opportunities and other social goods get distributed among the members of society. However, one can understand indirect discrimination as involving process-based wrongs, although the wrongs do not necessarily occur at the retail level of the practical reasoning of specific agents. Consider the structural form of indirect discrimination. Disproportionately disadvantageous outcomes do not, by themselves, amount to structural discrimination, even when those outcomes fall on the shoulders of the members of a salient social group such as women or racial or religious minorities. There must also be a linkage between membership in the group and the disadvantageous outcomes: group membership must help explain why the disproportionately disadvantageous outcomes fall where they do. This explanation will proceed at the wholesale level of macro-social facts about the population and the various groups that constitute it. But the requirement of a linkage shows that how the disproportionate outcomes are brought about is essential to the existence of structural discrimination. There must be social processes at work that, as Sunstein puts it, “turn differences that are both highly visible and irrelevant from the moral point of view into systematic social disadvantages” (1994: 2429). It is true that the differences do not need to be literally visible; they need only be socially salient. But the main point is that there is something morally wrong with social processes that consistently but avoidably turn such differences into relative disadvantages for the members of salient groups, such as women or racial or religious groups. 
A parallel is thereby established with direct discrimination, in which there is something morally wrong with a practical-reasoning process that treats sex, race, or religion as grounds for treating persons as having a degraded or diminished moral status. With the non-structural form of indirect discrimination, the parallel to the wrong of direct discrimination is even stronger, because the morally flawed process does occur at the retail level. Consider the Griggs case. The company's decision to use certain exams to determine promotions contributed to the unjust disadvantages suffered by blacks from structural and direct discrimination. Yet, the use of the exams was apparently not necessary to determine who could best perform the jobs in question or to meet any other legitimate purpose of the business. It is plausible to say, then, that the company's decision process wrongly counted for nothing the promotion policy's contribution to the perpetuation and even exacerbation of unjust disadvantages from which blacks already suffered. This process-based wrong is at the level of a specific agent, albeit a collective agent. The difference with direct discrimination is that it is a moral failure of omission, i.e., failing to count for something the impact of the promotion policy on blacks, rather than a failure of commission, such as deliberately excluding blacks from better-paying positions. In either case, though, an agent has engaged in a morally flawed process of practical reasoning in which the flaw concerns the role that considerations of salient group membership play. There is a case to be made, then, that the wrongs of indirect discrimination, structural and non-structural, are importantly parallel to those of direct discrimination. For that reason, it can be argued that direct and indirect discrimination are usefully conceived as different versions of the same moral wrong, viz., discrimination in general, and that the term ‘indirect discrimination’ is a valuable part of our moral vocabulary. Discrimination wrongfully imposes relative disadvantages or deprivations on persons based on their membership in some salient social group. But which salient groups count for the purpose of determining whether an act is an act of discrimination? This question is at the heart of many heated political and legal disputes, such as the controversy over gay marriage. The question is also central to a question that is less politically prominent than such disputes but which has important political and philosophical implications. The question is whether the members of socially dominant groups can, in principle, be victims of discrimination. It is sometimes said that, in the United States and other Western countries, whites cannot really be discriminated against on account of their race, because whites are the socially dominant racial group whose members are systematically advantaged by their being white. Thus, in his account of racial discrimination, Thomas Scanlon acknowledges that his view entails that, in the U.S., at least, whites can discriminate against blacks but not vice-versa. 
He holds that discrimination is “unidirectional, [applying] only to actions that disadvantage members of a group that has been subject to widespread denigrations and exclusion.” This implication derives from his claim that it is “crucial to racial discrimination … that the prejudicial judgments it involves are not just the idiosyncratic attitudes of a particular agent but are widely shared in the society in question and commonly expressed and acted on in ways that have serious consequences” (2008: 73–74). The idea that discrimination is unidirectional is also implied by Owen Fiss's understanding of discrimination in terms of “the perpetual subordination” of “specially disadvantaged groups …[whose] political power is severely circumscribed” (1976: 154–155). Although it is undeniable that the members of socially dominant groups typically enjoy a host of unfair advantages, it seems mistaken to conclude from this fact that such persons cannot be victims of discrimination. Even if any disadvantages that might be imposed on them based on their group membership are too small to outweigh the sum total of the unfair advantages that they enjoy, it still does not follow that the members of socially dominant groups cannot be discriminated against by others in society. And even though the members of dominant groups enjoy many unfair advantages, it is possible, for example, for them to be wronged by some agent deliberately imposing on them disadvantages because of their race, religion, or some similar consideration. Thus, direct discrimination against whites because they are white is possible in a white-dominated society: non-whites can wrongfully deny them opportunities such as a job or a place of residence, based on their being white, even when almost all of the direct racial discrimination in the society is perpetrated by whites against non-whites. The same is true with respect to indirect discrimination: even if whites in a certain society constitute a dominant group, individual whites might confront indirect discrimination in the form of policies that unjustly, though unintentionally, disadvantage them on account of their race. The concept of discrimination itself places no substantive restrictions on which salient social groups could, in principle, count for purposes of determining whether an act is an act of discrimination. Thus, suppose there were some society and historical context such that a) the length of one's thumb determined membership in some salient social group, b) it was wrong to impose a disadvantage on a person based on membership in a certain thumb-length group, and c) the wrongness was due to the fact that the imposition was based on membership in the thumb-length group. In such a scenario, thumb-length groups would count in determining which acts were acts of discrimination. Put another way, the fact that, in our society and its historical context, thumb length does not count, but race and religion do count, is not because the concept of discrimination includes race and religion, while excluding thumb length. Rather, it is because the formal elements of the concept of discrimination are properly specified—or made more concrete—in terms of race and religion (among other categories), given our social and historical context, while those elements are not properly specified in terms of thumb length, given that same context. 
Perhaps the most heated of contemporary debates over the question of which social groups count for purposes of determining whether an act is an act of discrimination are the debates concerning sexual orientation. Many persons hold the view that it is discrimination whenever gays and lesbians are denied the same set of legal rights and powers that heterosexual persons have, but others reject such a view. Philosophers and political theorists can be found on both sides of this divide, although the predominant view among such thinkers is that it is discriminatory to deny gays and lesbians the same legal rights and powers as heterosexuals. (Macedo, 1996; Corvino, 2005; and, dissenting, Finnis, 1997) The debate is ultimately one of moral principle, resting on the question of whether government wrongs gays and lesbians if it denies them any such rights or powers. The concept of discrimination cannot, by itself, settle the question, because the concept only tells us that it is properly applied to the imposition of wrongful disadvantages on account of salient group membership. The concept does not specify whether it is wrongful to impose disadvantages on persons on account of their sexual orientation. Substantive moral reasoning is needed to address the question concerning wrongfulness. The concept of discrimination picks out a kind of moral wrong that is a function of the salient social group membership of the person wronged: persons are treated as though they had diminished or degraded moral status on account of their group membership, or they are, because of their group membership and the relative disadvantages that they suffer due to that membership, made vulnerable to domination and oppression. But why have such a concept? Why not simply have the concepts of domination, oppression, and degrading treatment, abstracting from whether or not the reasons for such wrongs involve group membership? Until the middle of the 19th century, critical moral reflection and discussion proceeded largely without the concept of discrimination. But over the course of the first half of the 20th century, moral reflection became increasingly sensitive to the fact that many, even most, of the large-scale injustices in history had a group-based structure: certain members of society were identified by others as belonging to a particular salient group; the group members were consistently denigrated and demeaned by the rest of society and by its official organs; and many serious relative disadvantages connected to this denigration and demeaning, such as material deprivation and extreme restrictions on liberty, were imposed on the members of the denigrated group. It is this historical reality, apparently deeply rooted in human social life, that gives the concept of discrimination its point and its usefulness. The concept highlights the group-structure, and the relative deprivations built around this structure, that are exhibited by many of the worst systematic wrongs that humans inflict on one another. At the same time, the group structure of these injustices does not mean that the group as such is the party that is wronged; rather, the wrongs are ultimately wrongs to the individual persons making up the group. Accordingly, the concept of discrimination has become a useful tool for representing many serious wrongs, while avoiding the implication that these wrongs are ultimately done to the groups as such. 
However, this understanding of the significance of the concept of discrimination is challenged by Young, who claims that the concept is inadequate for capturing group-based wrongs. She argues that the concept “tends to present the injustices groups suffer as aberrant, the exception rather than the rule.” Accordingly, she contends that “[i]f one focuses on discrimination as the primary wrong that groups suffer, then the more profound wrongs of exploitation, marginalization, powerlessness, cultural imperialism, and violence that we still suffer go undiscussed and unaddressed.” (1990: 196–97) Nonetheless, Young's understanding of discrimination seems to rest on some misconceptions. First, the concept of discrimination does not, strictly speaking, present injustices as ones that groups suffer. The injustices are suffered by the members of the group and not by the group as such. This point might seem to play into Young's hand, as one might infer from it that the idea of discrimination cannot capture injustices that are systemic rather than aberrant, the rule rather than the exception. But such an inference would be mistaken, and that mistake leads to a second misconception in Young's account. Discrimination against the members of a group can be, and often is, systemic. The reason is that wrongs against individuals on account of their group membership typically are not aberrant but form broad social patterns. Accordingly, the idea of discrimination can capture the profound systemic wrongs to which Young refers, while preserving the key moral thought that the wrongs are done to individuals. At the same time, Young is right insofar as she is claiming that exploitation, powerlessness, and her other profound wrongs do not necessarily have a component involving direct discrimination. The claim is important, because the failure to appreciate it would incline one to think mistakenly that, to the extent that direct discrimination recedes, so must exploitation, powerlessness and so on. Accordingly, if direct discrimination recedes, the profound injustices referred to by Young could persist with their present force or even grow worse. Kimberlé Crenshaw (1998) introduced the idea of intersectionality in her account of the distinctive form of discrimination faced by black women. Intersectionality refers to the fact that one and the same person can belong to several distinct groups, each of whose members are victimized by widespread discrimination. This overlapping membership can generate experiences of discrimination that are very different from those of persons who belong to just one, or the other, of the groups. Thus, Crenshaw argues that “any analysis that does not take intersectionality into account cannot sufficiently address the particular manner in which Black women are subordinated” (1998: 315). Crenshaw's idea of intersectionality applies beyond race and gender to cover any social groups against which discrimination is directed: discrimination is inflected in different ways depending on the particular combination of social groups to which those persons discriminated against belong. And one implication of intersectionality is that the disadvantages suffered by some persons who are discriminated against on account of belonging to a certain group might be offset, partially or fully, by advantages those same persons gain by being discriminated in favor of due to their belonging to other groups. 
As Crenshaw notes, women who are wealthy and white are “race- and class-privileged,” even as they are disadvantaged by their gender. (1998: 314) The idea of intersectionality threatens to destabilize the concept of discrimination. The idea highlights what is problematic about any account of discrimination that abstracts from how different salient identities converge to shape the experiences of persons. But, taken to the hilt, the idea of intersectionality might appear to undermine any feasible account of discrimination. Reflection on Crenshaw's own intersectional account illustrates the point: she examines the intersection of race and gender but abstracts from other salient social identities, such as disability status, sexual orientation, and religion. Any of those additional identities can and do converge with race and gender to form distinctive experiences of discrimination, and so abstracting from those identities seems problematic from the perspective that the idea of intersectionality opens to us. Yet, no feasible treatment can take into account all of those identities and the many more socially salient identities that persons have in contemporary societies. Nonetheless, judgments about discrimination can and do reveal genuine wrongs that persons suffer due to their salient group membership and expose actual patterns of disadvantage and deprivation that amount to systemic injustices against the members of certain salient groups. It is not necessary to take account of everything relevant to a phenomenon in order to understand and represent important aspects of it. Thus, notwithstanding the complications introduced by intersectionality, judgments about direct and indirect discrimination can tell us something important about who is wrongfully disfavored, and who wrongfully favored, by the actions of individual and collective agents and by the rules of society's major institutions. The concept of discrimination provides an explicit way of thinking about a certain kind of wrong that can be found in virtually every society and era. The wrong involves a group-based structure that works in combination with relative deprivations built around the structure. The deprivations are wrongful because they treat persons as having a degraded moral status, but also because the deprivations tend to make members of the group in question vulnerable to domination and oppression at the hands of those who occupy positions of relative advantage. It is true that there has been confusion attending the concept of discrimination, and there will long be debates about the best way to understand and apply it. However, the concept of discrimination has proved to be a useful one, at the national and international levels, for representing in thought and combating in action a kind of wrong that is deeply entrenched in human social relations.
- Abdulaziz et al. v. U.K. European Court of Human Rights, App. No. 9214/80; decided 28 May 1985. [Available online]
- Brown v. Board of Education 347 U.S. 483 (1954). [Available online]
- Civil Rights Act of 1866. [Available online]
- Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW). [Available online]
- Convention on the Elimination of All Forms of Racial Discrimination (CERD). [Available online]
- European Convention for the Protection of Human Rights. [Available online]
- Griggs v. Duke Power 401 U.S. 424 (1971). [Available online]
- Shanaghan v. U.K. European Court of Human Rights, App. No. 37715/97; decided 4 May 2001. [Available online]
- United States v. Virginia 518 U.S. 515 (1996). [Available online]
- Anderson, Elizabeth. 1999. “What is the Point of Equality,” Ethics, 109: 283–337.
- Boxill, Bernard. 1992. Blacks and Social Justice, revised ed. Lanham, MD: Rowman and Littlefield.
- Cavanagh, Matt. 2002. Against Equality of Opportunity, Oxford: Oxford University Press.
- Corvino, John. 2005. “Homosexuality and the PIB Argument,” Ethics, 115: 501–34.
- Cotter, Anne-Marie Mooney. 2006. Race Matters: An International Legal Analysis of Race Discrimination, Burlington, VT: Ashgate.
- Crenshaw, Kimberlé. 1998/1989. “Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics,” rpt. in Anne Phillips, ed., Feminism and Politics, New York: Oxford University Press, pp. 314–343.
- Dasgupta, Nilanjana. 2004. “Implicit Ingroup Favoritism, Outgroup Favoritism, and Their Behavioral Manifestations,” Social Justice Research, 17: 143–169.
- Dworkin, Ronald. 1985. A Matter of Principle, Cambridge, MA: Harvard University Press.
- Ely, John Hart. 1980. Democracy and Distrust, Cambridge, MA: Harvard University Press.
- Finnis, John. 1997. “The Good of Marriage and the Morality of Sexual Relations,” American Journal of Jurisprudence, 42: 97–134.
- Fiss, Owen. 1976. “Groups and the Equal Protection Clause,” Philosophy and Public Affairs, 5: 107–177.
- Flew, Anthony. 1990. “Three Concepts of Racism,” Encounter, 75: 63–66.
- Gardner, John. 1998. “On the Ground of Her Sex(uality),” Oxford Journal of Legal Studies, 18: 167–187.
- Goldman, Alan. 1979. Justice and Reverse Discrimination, Princeton: Princeton University Press.
- Hellman, Deborah. 2008. Why is Discrimination Wrong?, Cambridge, MA: Harvard University Press.
- Hook, Sidney. 1995. “Reverse Discrimination,” in Steven Cahn, ed., The Affirmative Action Debate, New York: Routledge, pp. 145–152.
- Kahlenberg, Richard. 1996. The Remedy, New York: Basic Books.
- Kekes, John. 1995. “The Injustice of Affirmative Action Involving Preferential Treatment,” in Steven Cahn, ed., The Affirmative Action Debate, New York: Routledge, pp. 293–204.
- Lippert-Rasmussen, Kasper. 2006. “The Badness of Discrimination,” Ethical Theory and Moral Practice, 9: 167–185.
- Macedo, Stephen. 1996. “Sexual Morality and the New Natural Law,” in R.P. George, ed., Natural Law, Liberalism, and Morality, Oxford, UK: Oxford University Press.
- MacKinnon, Catharine. 1979. Sexual Harassment of Working Women, New Haven: Yale University Press.
- –––. 1987. Feminism Unmodified, Cambridge, MA: Harvard University Press.
- Moreau, Sophia. 2010. “What is Discrimination?” Philosophy and Public Affairs, 38: 143–179.
- Moucheboeuf, Alcidia. 2006. Minority Rights Jurisprudence, Strasbourg: Council of Europe.
- Osin, Nina and Dina Porat, eds. 2005. Legislating Against Discrimination: An International Survey of Anti-Discrimination Norms, Leiden: Martinus Nijhoff.
- Pincus, Fred L. 1994. “From Individual to Structural Discrimination,” in Fred L. Pincus and Howard J. Ehrlich, eds., Race and Ethnic Conflict, Boulder, CO: Westview, pp. 82–87.
- Pogge, Thomas. 2008. World Poverty and Human Rights, second edition. Malden, MA: Polity Press.
- Rawls, John. 1971. A Theory of Justice, Cambridge, MA: Harvard University Press.
- Scanlon, Thomas. 2008. Moral Dimensions, Cambridge, MA: Harvard University Press.
- Schauer, Frederick. 2003. Profiles, Probabilities, and Stereotypes, Cambridge, MA: Harvard University Press.
- Schiek, Dagmar, Lisa Waddington, and Mark Bell, eds. 2007. Non-Discrimination Law, Oxford, UK: Hart Publishing.
- Shin, Patrick. 2009. “The Substantive Principle of Equal Treatment,” Legal Theory, 15: 149–172.
- –––. 2010. “Liability for Unconscious Discrimination? A Thought Experiment in the Theory of Employment Discrimination Law,” Hastings Law Journal, 62: 67–102.
- Sunstein, Cass. 1994. “The Anticaste Principle,” Michigan Law Review, 92: 2410–2455.
- Ture, Kwame and Charles V. Hamilton. 1992/1967. Black Power, New York: Vintage Books.
- Vandenhole, Wouter. 2005. Non-Discrimination and Equality in the View of the UN Human Rights Treaty Bodies, Oxford, UK: Intersentia.
- Wasserman, David. 1998. “Discrimination, Concept of,” in Ruth Chadwick, ed., Encyclopedia of Applied Ethics, San Diego, CA: Academic Press, pp. 805–814.
- Wasserstrom, Richard. 1995. “Preferential Treatment, Color-Blindness, and the Evils of Racism,” in Steven Cahn, ed., The Affirmative Action Debate, New York: Routledge, pp. 153–168.
- Wax, Amy. 2008. “The Discriminating Mind: Define It, Prove It,” Connecticut Law Review, 40: 979–1022.
- Young, Iris. 1990. Justice and the Politics of Difference, Princeton: Princeton University Press.
- Anti-discrimination Laws in the European Union
- Disability Discrimination Act (UK)
- Employment Discrimination Laws in the United States
- International Covenant on Civil and Political Rights
- International Covenant on Economic, Social and Cultural Rights
A lie which will not die

Our ancestors did not fight for

Jefferson Davis (1808-1889), president of the Confederacy, made a statement that may have colored the views of his men in the field. On Dec. 23, 1862, he had words for Gen. Benjamin Franklin Butler (1818-1893), who was in command of the Union troops in New Orleans and had armed his black soldiers. Said Davis: "All commissioned officers in the command of Benjamin F. Butler are declared not entitled to be considered as soldiers engaged in honorable warfare, but as robbers, and criminals, deserving death; and that they, each of them, be whenever captured, reserved for execution." He further stated that private soldiers and noncommissioned officers were to be considered regular prisoners of war. He said that all captured slaves were to be delivered to the authority of the state in which they lived.

Often that did not happen. Some commanders in the field did things their own way. Col. James S. Brisbin reported that, when he and his troops left Bean Station, Tenn., in the winter of 1864 to destroy the salt works in Virginia, Confederate troops followed them, captured lagging soldiers and butchered them. Said he: "For the last two days a force of Confederate cavalry, under Witcher, had been following our command and picking up stragglers and worn out horses in the rear. Part of our troops were Negroes and those Confederates killed them as fast as they caught them, laying the dead bodies by the roadside with pieces of paper pinned to their clothing 'this is the way we treat all (n - - - - -) soldiers who fight against the south.' We did not know what had been going on in our rear until we turned about to go back to Wytheville." Many captured black Union soldiers were hanged, buried alive, put before a firing squad, nailed by their hands to posts or locked in barns and burned alive.

Indeed, a few black Rebels carried weapons, but they were not issued by their government. In fact, the Confederate government began debating whether to arm black soldiers in December 1864 to reinforce its depleted ranks. Even Gen. Robert E. Lee (1807-1870), commander of Confederate forces, favored arming black Rebels. Many members of the Confederate Congress denounced Lee for his views and on Feb. 9, 1865, voted down a resolution that would have freed 200,000 slaves and put them in the army. The Confederate Senate continued to postpone or defeat the proposal until it finally approved it on March 9, 1865. It was too late. Lee surrendered April 9, 1865, exactly one month later.

I don't know why the Civil War was fought, and I won't offer any guesses. I don't know the real reasons for fighting in Iraq. I do know that we go to fight where our leaders send us. Some of us even believe the attending propaganda. Some slaves did, too. Yes, there were a few free blacks who fought with the South. Maybe they were told that the Yankees had weapons of mass destruction that would disintegrate their shacks, poison their food crops and ruin their cotton fields. Rather than risk being homeless, hungry and naked, they went off to war. Their greatest insult is that Black History Month ignores them?

The Union Army enlisted 179,000 blacks. The Confederacy had ONE company of black troops at the end of the Civil War. The idea that the Confederate forces were integrated in any way, shape or form beggars the mind. This was a war of racial subjugation. It would be like looking for the Jewish units who fought for Hitler. They don't exist, nor do organized black units which fought for the Confederacy. 
The reason this lie has to exist is that without it, all the Confederate kitsch and ancestor worship would be revealed for the racist tribute it really is. I watched this girl who wanted to attend her prom in a Confederate dress make this very claim. No one wanted to call her a liar who had racial motives. Racists use this claim to hide their agenda: if blacks fought to stay slaves, then slavery wasn't all that bad. Of course, reality was quite different. Blacks enlisted in massive numbers as soon as the Union would take them. Anything else is fiction.

posted by Steve @ 12:36:00 AM
I am just posting this for posterity. The Pew Folks published the Future Of Higher Education research in 2011. Q: In 2020 the brains of multitasking teens and young adults are “wired” differently from those over age 35 and overall it yields helpful results. They do not suffer notable cognitive shortcomings as they multitask and cycle quickly through personal- and work-related tasks. Rather, they are learning more and they are more adept at finding answers to deep questions, in part because they can search effectively and access collective intelligence via the Internet. In sum, the changes in learning behavior and cognition among the young generally produce positive outcomes. - or - In 2020, the brains of multitasking teens and young adults are “wired” differently from those over age 35 and overall it yields baleful results. They do not retain information; they spend most of their energy sharing short social messages, being entertained, and being distracted away from deep engagement with people and knowledge. They lack deep-thinking capabilities; they lack face-to-face social skills; they depend in unhealthy ways on the Internet and mobile devices to function. In sum, the changes in behavior and cognition among the young are generally negative outcomes. A: There is recent evidence (Watson and Strayer) that suggests that some people are natural ‘supertaskers’ capable of performing two difficult tasks at once, without loss of ability on the individual tasks. This explodes the conventional wisdom that ‘no one can really multitask’, and by extension, the premise that we shouldn’t even try. The human mind is plastic. The area of the brain that is associated with controlling the left hand, for example, is much larger in professional violinists. Likewise, trained musicians listen to music differently, using more centers of the brain, than found in non-musicians. To some extent this is obvious: we expect that mastery in physical and mental domains will change those master’s perceptions and skills. But cultural criticism seems to want to sequester certain questionable activities — like video gaming, social networking, multitasking, and others — into a no-man’s-land where the plasticity of the human mind is negative. None of these critics wring their hands about the dangerous impacts of learning to read, or the intellectual damage of learning a foreign language. But once kids get on a skateboard, or start instant messaging, it’s the fall of western civilization. Perhaps most important, the sociality of web use frightens many detractors. But we have learned a great deal about social cognition in recent years thanks to advances in cognitive science, and we have learned that people are innately more social than was ever realized. The reason that kids are adapting so quickly to social tools online is because they align directly with human social connection, much of which takes place below our awareness. Social tools are being adopted because they match the shape of our minds, but, yes, they also stretch our minds based on use and mastery, just like martial arts, playing the piano, and badminton. My friend Jamais Cascio wisely said that ‘technology is everything that was invented after you became thirteen’. Our society’s concern with the supposed negative impacts of the Internet will seem very old-fashioned in a decade, like Socrates bemoaning the downside of written language, or the 1950’s fears about Elvis Presley’s rock-and-roll gyrations. 
Q: In 2020, higher education will not be much different from the way it is today. While people will be accessing more resources in classrooms through the use of large screens, teleconferencing, and personal wireless smart devices, most universities will mostly require in-person, on-campus attendance of students most of the time at courses featuring a lot of traditional lectures. Most universities’ assessment of learning and their requirements for graduation will be about the same as they are now. - or - By 2020, higher education will be quite different from the way it is today. There will be mass adoption of teleconferencing and distance learning to leverage expert resources. Significant numbers of learning activities will move to individualized, just-in-time learning approaches. There will be a transition to “hybrid” classes that combine online learning components with less-frequent on-campus, in-person class meetings. Most universities’ assessment of learning will take into account more individually-oriented outcomes and capacities that are relevant to subject mastery. Requirements for graduation will be significantly shifted to customized outcomes. A: The institutions that control education are far too conservative to make radical changes at the core of their world view in the decade between 2011 and 2020. Given a longer time line, say 25 years, I would agree, but the people that will be attending colleges in 2020 are alive today, and are attending extremely conventional elementary schools, for the most part. For a change of the sort sketched in the question, we would have to see a fragmenting of the consensus about higher education, and a paradigm-based battle between revolutionaries and conservatives of the form that Thomas Kuhn outlined in The Structure Of Scientific Revolutions. Once we start to see some significant number of established universities actually rejecting conventional education and adopting an alternative approach, then we’ll have a decade or so before it displaces the old model. Q: By 2020, most people will have embraced and fully adopted the use of smart-device swiping for purchases they make, nearly eliminating the need for cash or credit cards. People will come to trust and rely on personal hardware and software for handling monetary transactions over the Internet and in stores. Cash and credit cards will have mostly disappeared from many of the transactions that occur in advanced countries. - or - People will not trust the use of near-field communications devices and there will not be major conversion of money to an all-digital-all-the-time format. By 2020, payments through the use of mobile devices will not have gained a lot of traction as a method for transactions. The security implications raise too many concerns among consumers about the safety of their money. And people are resistant to letting technology companies learn even more about their personal purchasing habits. Cash and credit cards will still be the dominant method of carrying out transactions in advanced countries. A: I think that credit and debit cards will almost be dead by 2020, because of the convenience and lower costs of directing payments through mobile devices, either by swiping, near-field techniques, or other services offered by cell carriers or platform companies (like Apple). 
However, cash is here to stay because there are a wide range of use cases where anonymity is necessary, like illegal transactions (drugs, sex, bribes), the gray economy (paying undocumented immigrants), or other sorts of secret activities (a gift for a mistress). It’s conceivable that an anonymous form of digital money could serve, like the design premises behind BitCoin, but that remains to be seen. Q: In 2020, most people will prefer to use specific applications (apps) accessible by Internet connection to accomplish most online work, play, communication, and content creation. The ease of use and perceived security and quality-assurance characteristics of apps will be seen as superior when compared with the open Web. Most industry innovation and activity will be devoted to apps development and updates, and use of apps will occupy the majority of technology-users’ time. There will be a widespread belief that the World Wide Web is less important and useful than in the past and apps are the dominant factor in people’s lives. - or - In 2020, the World Wide Web is stronger than ever in users’ lives. The open Web continues to thrive and grow as a vibrant place where most people do most of their work, play, communication, and content creation. Apps accessed through iPads, Kindles, Nooks, smartphones, Droid devices, and their progeny - the online tools GigaOM referred to as “the anti-Internet” - will be useful as specialized options for a finite number of information and entertainment functions. There will be a widespread belief that, compared to apps, the Web is more important and useful and is the dominant factor in people’s lives. A: We are quickly moving away from the so-called ‘open web’ — which means one based on browser-based access — to an app-based model of web access. This is not really being driven by security issues, as suggested in your question, but rather by a combination of other factors. First, Apple and other platform companies can retain greater control of the user experience, and guarantee a uniformly better user experience in the app model, based on a controlled distribution of apps through platform-based app stores. This also creates enormous economic incentives for app and platform companies, since blocking low-cost low-quality apps raises the average price for accepted apps. Much more important: the ‘open web’ is based on relatively old principles and tired metaphors, like disconnected computers, http, and the desktop operating system of folders, files, and executables. Platform companies — especially Apple and Google — are moving to new meta-architecture principles, such as tablets, touch and gestural interfaces, ubiquitous connectivity, and social networking. These are being baked into the core platforms so that app developers will be able to take advantage of them, natively, without having to reinvent those wheels over and over again. Note that this provides a second and enormously large source of economic leverage for app developers, and by extension, for users. Put another way, the platform companies will push a great deal into their infrastructure, and app developers will be able to push much higher into ultrastructure, providing a much richer user experience via post-browser-web apps. In the very near-term, like 5-7 years, the browser will drop from the most used tool to the least used, because of this change. Just look at how people use their iPhones, already. 
The browser will be something like the terminal program on the Mac: a tool for programmers and throwbacks, only occasionally used by regular folks. A few years ago, I worked on a project for the Mozilla foundation, on the future of the browser. I was the first to raise my hand and say that in ten years the browser would be dead. The Mozilla guys laughed it off, but I am standing by my original prediction. Q: Influence of Big Data, Internet of Things in 2020 Thanks to many changes, including the building of “the Internet of Things,” human and machine analysis of large data sets will improve social, political, and economic intelligence by 2020. The rise of what is known as “Big Data” will facilitate things like ”nowcasting” (real-time “forecasting” of events); the development of “inferential software” that assesses data patterns to project outcomes; and the creation of algorithms for advanced correlations that enable new understanding of the world. Overall, the rise of Big Data is a huge positive for society in nearly all respects. - or - Thanks to many changes, including the building of “the Internet of Things,” human and machine analysis of Big Data will cause more problems than it solves by 2020. The existence of huge data sets for analysis will engender false confidence in our predictive powers and will lead many to make significant and hurtful mistakes. Moreover, analysis of Big Data will be misused by powerful people and institutions with selfish agendas who manipulate findings to make the case for what they want. And the advent of Big Data has a harmful impact because it serves the majority (at times inaccurately) while diminishing the minority and ignoring important outliers. Overall, the rise of Big Data is a big negative for society in nearly all respects. PLEASE ELABORATE: What impact will Big Data have in 2020? What are the positives, negatives, and shades of grey in the likely future you anticipate? How will use of Big Data change analysis of the world, change the way business decisions are made, change the way that people are understood? A: Overall, the growth of the ‘internet of things’ and ‘big data’ will feed the development of new capabilities in sensing, understanding, and manipulating the world. However, the underlying analytic machinery — like Bruce Sterling’s ‘engines of meaning’ — will still require human cognition and curation to connect dots and see the big picture. And there will be dark episodes, too, since the brightest light casts the darkest shadow. There are opportunities for terrible applications, like the growth of the surveillance society — where the authorities watch everything and analyze our actions, behavior and movements looking for patterns of illegality, something like a real-time Minority Report. On the other side, access to more large data can also be a blessing, so social advocacy groups may be able to amass information at a low- or zero-cost that would be unaffordable today. For example, consider the bottom-up creation of an alternative food system, outside the control of multinational agribusiness, and connecting local and regional food producers and consumers. Such a system, what I and others call Food Tech, might come together based on open data about people’s consumption, farmers’ production plans, and regional, cooperative logistics tools. So it will be a mixed bag, like most human technological advances. Q: Getting into the gamification? 
By 2020, gamification (the use of game mechanics, feedback loops, and rewards to spur interaction and boost engagement, loyalty, fun and/or learning) will not be implemented in most everyday digital activities for most people. While game use and game-like structures will remain an important segment of the communications scene and will have been adopted in new ways, the gamification of other aspects of communications will not really have advanced much beyond being an interesting development implemented occasionally by some segments of the population in some circumstances.

- or -

By 2020, there will have been significant advances in the adoption and use of gamification. It will be making waves on the communications scene and will have been implemented in many new ways for education, health, work, and other aspects of human connection, and it will play a role in the everyday activities of many of the people who are actively using communications networks in their daily lives.

PLEASE ELABORATE: Explain your choice and share your view of gamification and implications for the future. What new approaches to information sharing do you anticipate will be finding their footing by 2020? What are the positives, negatives, and shades of grey in the likely future you anticipate?

A: Gamification is a passing fad, currently of interest to a small segment of the social tools developer community. In some segments it will have a long-term impact, but only in circumstances where it is integral, and not as a gloss or veneer. Much of what gamification seeks to do — to increase involvement, and foster certain collective behaviors in groups of people — actually runs counter to the fragmentation of user experience online. The rise of apps means that users are spreading their time out over a larger number of more specialized tools, and tool developers try to counter that through inducements to stay, or return frequently, and to align activities with others: a forced viralization. A much more profitable set of ideas: as people are made more autonomous they naturally move away from collaboration — where users share the same aims and reward systems — toward cooperation — where users do not necessarily share long-term goals or values. Gamification has little use in cooperation, and that is the area of social software that is least realized at this time, and which I predict will be the highest growth area in the future.

Q: [missing question must have been about the Home Of The Future.]

A: I think the home of 2020 will be a lot like the average house of today, although some major changes will take place, especially related to things that can be affordably brought online, like entertainment, and things that can be dropped, like landlines. By 2020, nearly all entertainment media will be delivered via web, with the corresponding crash of cable companies, who become low-margin utilities. I predict that most municipalities will take back cable- and phoneline-based internet infrastructure by eminent domain or State legislation, and provide low-cost or zero-cost connectivity to the home and business, probably supported by US government subsidies, arising from election 2012 infrastructure initiatives advanced by President Obama.
Appliance manufacturers will build wifi capabilities into printers, TVs, refrigerators, hot water heaters, air conditioners, washing machines, and clothes dryers, subsidized by energy tax credits, so that people can minimize their energy use, and schedule machines to take advantage of lower-cost energy at night. Next generation solar heating systems will also be wifi-connected, relying on web-based computing to maximize energy capture. But these will all be based on today's houses, which are not particularly well-insulated. The real breakthrough in housing will take a long time to roll out: so-called passive homes, or ultra low energy buildings, based on new materials and very different construction techniques. Maybe by 2040.

Q: Corporate responsibility: Which road will be taken? In 2020, technology firms with their headquarters in democratic countries will be expected to abide by a set of norms - for instance, the "Responsibility to Protect" (R2P) citizens being attacked or challenged by their governments. In this world, for instance, a Western telecommunications firm would not be able to selectively monitor or block the Internet activity of protestors at the behest of an authoritarian government without significant penalties in other markets.

- or -

In 2020, technology firms headquartered in democratic countries will have taken steps to minimize their usefulness as tools for political organizing by dissidents. They will reason that too much association with sensitive activities will put them in disfavor with autocratic governments. Indeed, in this world, commercial firms derive significant income from filtering and editing their services on behalf of the world's authoritarian regimes.

PLEASE ELABORATE: When it comes to the behavior and practices of global tech firms and political, social, and economic movements, how will firms respond? Explain your choice and share your view of this tension pair's implications for the future. What are the positives, negatives, and shades of grey in the likely future you anticipate?

A: Tech firms based in Western democratic countries will continue to support the compromises of political free speech and personal privacy that are, more or less, encoded in law and policy today. The wild card in the next decade is the degree to which civil unrest is limited to countries outside that circle. If disaffected youth, workers, students, or minorities begin to burn the blighted centers of Western cities, all bets are off, because the forces of law and order may rise and demand control of the web. And of course, as China and other countries with large populations, like India, Malaysia, and Brazil, begin to create their own software communities, who knows what forms will evolve, or what norms will prevail. But they are unlikely to be what we see in the West. So we can expect a fragmented web, where different regions are governed by very different principles and principals.
National Science Foundation grants will bring together what's known about how species are related A new initiative aims to build a comprehensive tree of life that brings together everything scientists know about how all species are related, from the tiniest bacteria to the tallest tree. Researchers are working to provide the infrastructure and computational tools to enable automatic updating of the tree of life, as well as develop the analytical and visualization tools to study it. Scientists have been building evolutionary trees for more than 150 years, since Charles Darwin drew the first sketches in his notebook. Darwin's theory of evolution explained that millions of species are related and gave biologists and paleontologists the enormous challenge of discovering the branching pattern of the tree of life. But despite significant progress in fleshing out the major branches of the tree of life, today there is still no central place where researchers can go to visualize and analyze the entire tree. Now, thanks to grants totaling $13 million from the National Science Foundation's (NSF) Assembling, Visualizing, and Analyzing the Tree of Life (AVAToL) program, three teams of scientists plan to make that a reality. "The AVAToL awards are an exciting new direction for an area that's a foundation of much of biology," says Alan Townsend, director of NSF's Division of Environmental Biology. "That's critical to understanding a changing relationship between human society and Earth's biodiversity." Figuring out how the millions of species on Earth are related to one another isn't just important for pinpointing an antelope's closest kin, or determining if tuna are more closely related to starfish or hagfish. Information about evolutionary relationships is fundamental to comparative biology research. It helps scientists identify promising new medicines; develop hardier, higher-yielding crops; and fight infectious diseases such as HIV, anthrax and influenza. If evolutionary trees are so widely used, why has assembling them across all life been so hard to achieve? It's not for lack of research, or data. Advances in DNA sequencing and evolutionary analysis, discovery of pivotal early fossils, and novel methods and tools have enabled thousands of new evolutionary trees to be published in scientific journals each year. However, most of these focus on specific, disconnected branches of the tree of life. Part of the difficulty lies in the sheer enormity of the task. The largest evolutionary trees to date contain roughly 100,000 groups of organisms. Assembling the branches for all species of animals, plants, fungi and microbes--and the countless more still being named or discovered--will require new computational tools for analyzing large data sets, for combining diverse kinds of data, and for connecting vast numbers of published trees into a synthetic whole. Another difficulty lies in how scientists typically disseminate their results. A tiny fraction of all evolutionary trees have been published. Researchers estimate a mere four percent end up in a database in a digital form. Most of the knowledge is locked up in figures in static journal articles in file formats that may be difficult for other researchers to download, reanalyze or merge with new information. AVAToL aims to change that. 
What makes this program different from previous efforts, scientists say, is its scope: its focus on creating an open, dynamic, evolutionary framework that can be continually refined as new biodiversity data is collected, and its development of computational and visualization tools to scale up tree-based evolutionary analyses. Researchers will be able to go online and compare their trees to others that have already been published, or download trees for further study. They'll also be able to expand the tree, filling in the missing branches and placing newly named or discovered species among their relatives. The goal is to incorporate new trees automatically, so the complete tree can be continuously updated. In addition to the creation of an updatable tree of life, AVAToL scientists will create new tools for the kinds of research that rely on evolutionary trees and for the collection and analysis of important evolutionary data, including from fossils critical to the placement of many branches in the tree of life. The three NSF-funded AVAToL projects are: Automated and Community-Driven Synthesis of the Tree of Life Principal Investigator: Karen Cranston, Duke University and the National Evolutionary Synthesis Center This project will produce the first online, comprehensive first-draft tree of all 1.8 million named species, accessible to both the public and scientists. Assembly of the tree will incorporate previously published results and efforts to develop, test and improve methods of data synthesis. This initial tree of life, called the Open Tree of Life, will not be static. Scientists will develop tools for researchers to update and revise the tree as new data come in. Arbor: Comparative Analysis Workflows for the Tree of Life Principal Investigator: Luke Harmon, University of Idaho Scientists deal with daunting volumes of data. One of the most basic challenges facing researchers is how to organize that information into a usable format that can inspire new scientific insights. This project team is working to develop a way to visually portray evolutionary data so scientists can see, at a glance, how organisms are related. The team will create software tools that will enable researchers to visualize and analyze data across the tree of life, enabling research in all areas of comparative biology at multiple evolutionary, space and time scales. The results have the potential to transform the way biologists test evolutionary and ecological hypotheses, enabling new research in fields from medicine to public health, from agriculture to ecology to genetics. Next Generation Phenomics for the Tree of Life Principal Investigator: Maureen O'Leary, SUNY-Stony Brook This team of biologists, computer scientists and paleontologists will extend and adapt methods from computer vision, machine learning and natural language processing to enable rapid and automated study of species' phenotypes on a vast scale across the tree of life. The team's goal is to develop large phenomic datasets using new methods, and to provide the scientific community and the public with tools for future such work. Phenomics is an area of biology that measures the physical and biochemical traits of organisms as they change in response to genetic mutations and environmental influences. Enormous phenomic datasets, many with images, will foster public interest in biodiversity and the fossil record. 
Phenotypic data allow scientists to reconstruct the evolutionary history of fossil species, in turn crucial for an understanding of the history of life. This project will leverage recent advances in image analysis and natural language processing to develop novel approaches to rapidly advance the collection and analysis of phenotypic data for the tree of life.
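To make the data-synthesis problem concrete, here is a minimal sketch, not the actual Open Tree of Life software, of the kind of operation this infrastructure has to support: holding a tree in memory, grafting a newly named species onto a named branch, and listing the tips below any node. The clade and species names are placeholders.

```python
# Hypothetical sketch of grafting a new taxon into a draft tree of life.
# The taxon names and structure below are illustrative placeholders, not
# the Open Tree of Life's real data model or API.

tree = {
    "Life": ["Bacteria", "Archaea", "Eukaryota"],
    "Eukaryota": ["Fungi", "Plants", "Animals"],
    "Animals": ["Sponges", "Vertebrates"],
}

def graft(tree, parent, new_taxon):
    """Attach new_taxon as a child of parent, creating the entry if needed."""
    tree.setdefault(parent, []).append(new_taxon)

def tips(tree, node="Life"):
    """Recursively collect the leaf taxa at or below a node."""
    children = tree.get(node, [])
    if not children:
        return [node]
    leaves = []
    for child in children:
        leaves.extend(tips(tree, child))
    return leaves

graft(tree, "Vertebrates", "Newly_described_fish_sp.")
print(tips(tree))  # the leaf list now includes the newly placed species
```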
A professor at a Swiss university on Tuesday unveiled a robot that can be controlled by the brainwaves of a paraplegic person wearing an electrode-fitted cap, news agency ATS reported. A paralysed man at a hospital in the town of Sion demonstrated the device, sending a mental command to a computer in his room, which transmitted it to another computer that moved a small robot 60 kilometres away in Lausanne. The system was developed by Jose Millan, a professor at the Federal Polytechnic School of Lausanne who specialises in non-invasive interfaces between machines and the brain. The same technology can be used to drive a wheelchair, Millan said. "Once the movement has begun, the brain can relax, otherwise the person would soon be exhausted," he said. But the technology has its limits, he added. The brain signals can be scrambled if too many people are gathered around a wheelchair, for example. Besides making paraplegics mobile, neuroprosthetics could be used to help patients recover lost senses, researchers said. Professor Stephanie Lacour and her team are working on an "electric skin" for amputees, a glove fitted with tiny sensors that would send information directly to the user's nervous system. Eventually, researchers say they hope to create mechanised prosthetics that are as mobile and sensitive as a natural hand, Lacour said. Other researchers at Lausanne are working on enabling paraplegics to walk again with electrodes implanted in their spinal cords. "The goal is that after a year of training with a robotic aide, the patient will be able to walk without a robot. The electrodes would stay implanted for life," said Professor Gregoire Courtine. He said he is currently setting up clinical trials and hopes to run tests at Zurich's university hospital within a year.
July 31, 2008

Study: Geothermal Could be Cost-Competitive for a Fraction of Oil and Coal's R&D Investments

By James Burgess, Breakthrough Fellow

A recent study at NYU's Stern School of Business analyzes the returns on government energy R&D investments and comes to the conclusion that geothermal and wind power could, for a relatively low price, become cheaper than fossil fuel electricity in a matter of years.

The study used a well-known method of analyzing technology cycles that predicts learning curves for emerging technologies. This "S-curve" heuristic posits that the performance of new technologies, plotted against effort (i.e., total money invested), is shaped like an S. Early in the life of the technology, improvements are gradual as the basic properties are worked out and an effective design is formed. Next comes a period of rapid growth as the now-stable technology captures "process innovations" and economies of scale. Finally, the rate of improvement slows as the technology becomes mature and improvements become hampered by the dominant structure of the technology and its industry - until the potential emergence of a new competing technology with its own S-curve. Although such an analysis makes some major simplifications, these S-curve cycles are well-documented throughout history in technologies as diverse as disk drives, steam engines, semiconductors, and automobiles (to name a few).

With the S-curve model in hand, the authors of the report sought to determine the curves of some major alternative energy technologies in order to project how much investment is necessary to reduce their marginal costs. Their results show that the total sums are surprisingly small - in the context of energy R&D investments. Just over $3 billion would be necessary to make advanced geothermal technologies cost-competitive with fossil fuels, the authors conclude. (N.B.: that's "hot rock" or "enhanced" geothermal technology, which can be used essentially everywhere, not just at hot springs locations.) This is because geothermal's S-curve is currently going steeply up - each additional investment causes a huge reduction in the cost of the technology. That's $3 billion, total - not annually. A landmark 2007 MIT study, The Future of Geothermal Energy, similarly concluded that for investments totaling less than half a billion dollars annually, advanced geothermal energy technologies could provide cost-competitive, carbon-free, baseload energy to rival coal. Compare this number with the $38 billion spent by the US government (dwarfed by the industry's own spending) on fossil fuel R&D between 1974 and 2005.

The authors of this paper also plot fossil fuel technologies onto S-curves. The data show overwhelmingly that fossil fuel technologies have reached the top of their curves. That is, having reached the limit of achievable cost savings, the marginal price on fossil fuels is almost entirely driven by market fluctuations - not innovation - making further R&D investments a far less effective use of funding than investments in less mature but potentially breakthrough technologies like advanced geothermal.

Given the need for non-emitting energy that is fast, clean, and cheap, and the poor return that we're getting on our annual fossil fuel investments, isn't it time to move our government's money to technologies with more promise?
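To see the shape of the argument, the sketch below models the S-curve heuristic as a logistic function of cumulative investment. The parameter values are invented for illustration and are not taken from the NYU Stern or MIT studies; the point is simply that the marginal return on the next dollar is largest on the steep middle of the curve and nearly zero once a technology has matured.

```python
# Illustrative S-curve: performance as a logistic function of cumulative
# R&D investment. All parameter values are assumptions for the sake of the
# sketch, not figures from the studies discussed above.

import math

def s_curve(investment, ceiling=1.0, midpoint=1.5e9, steepness=3e-9):
    """Performance (arbitrary units) given cumulative investment in dollars."""
    return ceiling / (1.0 + math.exp(-steepness * (investment - midpoint)))

for spent in (0.5e9, 1.5e9, 3.0e9, 10.0e9):
    # Return on the next $100 million at this point on the curve.
    gain = s_curve(spent + 1e8) - s_curve(spent)
    print(f"${spent/1e9:4.1f}B already invested: next $100M buys {gain:.3f} units")
```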
Yesterday's post mentioned that Britain and France sometimes entrusted the development of Canada to private enterprise rather than doing it themselves. An example of such enterprise was the company called "The Merchant Adventurers of England trading into Hudson's Bay," formed in 1670. The Hudson's Bay Company concluded a big deal on January 13, 1849, when it leased Vancouver Island from the British Government for seven shillings a year — in those days, seven shillings was worth about $1.75! The reason the Hudson's Bay Company made such a bargain was that Britain was in the throes of a depression, including a famine in Ireland. In addition, the United States had bought California from Mexico the year before and now controlled the Pacific coast up to the 49th parallel. Britain needed a naval base on the Pacific coast, and Vancouver Island was the logical place for it. The Hudson's Bay Company had exclusive trading rights on the island, but in return agreed to pay for the cost of defence and to bring in settlers. The agreement was supposed to last until 1859, but was kept in effect until 1866, when Vancouver Island was united with the mainland and the whole area became British Columbia. Some of the earlier settlers were quickly disenchanted. The first governor, Richard Blanchard, was sent out by the British Government. He agreed to serve without pay because he hoped the post would be the first step in a diplomatic career. However, he also expected that he would have a mansion and an estate of extensive lawns as in England. Not finding them, he lasted only a few months before asking to be recalled. Other settlers arrived with coaches and horses, only to find that there were no roads. Some brought equipment for playing cricket, but, alas, it takes a long time to convert a forest into a cricket pitch! Still, they were no more badly informed than tourists over one hundred years later, who often arrive in Canada in July bringing skis and winter clothing.
This is the last of these expository posts in the series. Next week, I'll put up my final evaluation of the various energy sources that I've considered. However, this week, I wanted to take a look at a possible energy source that has been talked about, it seems to me, for years, yet has never really gone anywhere. I'm referring to nuclear fusion. Fusion reactors produce energy by fusing two light atomic nuclei into a heavier one. The idea is that bringing the two nuclei together will allow the strong nuclear force in the nuclei to pull them together into a larger atom; as this new atom has slightly less mass than the sum of the original two masses, the difference is released as energy according to good ol' E = mc2. However, if the input atoms are heavier than iron, then the output atom will be heavier than the total mass of the inputs; in this case, then, the reaction will actually consume energy rather than release it. The trick, though, is that input atoms also have an electrostatic force -- the net positive charge of the nuclei. In order to overcome it so that the atoms can combine, energy needs to be introduced into the process. The easiest way to do this, according to Wikipedia at any rate, is to heat the atoms, usually to the point at which they become plasma. The temperature that must be reached is a function of the total charge, thus hydrogen reacts at the lowest temperature; since helium has a very low mass per nucleon (i.e., nuclear particle, a proton or neutron), it tends to be the product. Perhaps the easiest way to harness this for electricity generation is as part of a thermal power plant, of which I have discussed several types already. The process is, again, to use heat to heat/boil water (or some other substance), then drive a turbine, which drives a generator, which produces electricity. There are a few technical challenges facing fusion as a source of commercial electricity. The first is related to the choice of fuel. One model (D-T) takes deuterium and tritium as inputs to produce helium-4 and a neutron. Since finding tritium is quite tricky (although deuterium is not), it must be bred from lithium and a neutron. So, the cycle here is obvious: D and T produce He-4 and n; n and either Li-6 or Li-7 produce T and He-4 (and, in the case of Li-7, another n). Due to the prevalence of neutrons in these reactions, though, D-T fusion results in induced radioactivity (i.e., the absorption of neutrons by the reactor structure, creating radioactive materials). (It should be noted, though, that it might be possible to convert this radiation directly into electricity, rather than trying to transport the reactor's power by some other means.) Along the same lines, the use of tritium can be a problem, as tritium is hard to contain; thus some radiation would leak into the environment. Furthermore, lithium supplies are limited, so this form of fusion would not last forever. Finally, only about 20% of the energy output is in the form of charged particles, which basically forces the reactor to be used as part of a thermal power plant. (The relative lack of charged particles means, as I understand it, that little energy can be harvested directly from the reaction.) Another (D-D) model combines two deuterium atoms to produce, with equal probability, tritium and a proton or helium-3 and a neutron. 
This model, then, has a similar problem with tritium to the D-T model; and, if the tritium is burned before leaving the reactor, it will produce more neutrons, resulting in the problem of induced radioactivity again. Furthermore, the energy confinement must be significantly better, and less power is produced by the reactor. The basic advantage, though, is that the reactor doesn't require tritium breeding or the use of lithium. The second is related to confinement. Creating uncontrolled fusion reactions isn't that hard -- they're called hydrogen bombs. Suffice it to say, an H-bomb isn't really useful for commercial electricity production; there has to be some way to control the fusion reaction. One is magnetic confinement, in the tokamak -- a transliterated Russian word derived from the Russian for "toroidal chamber with magnetic coils". Frankly, the physics of them is a bit beyond me. My sense is that a magnetic field is used to rapidly heat (and maintain the heat of) the plasma in a fusion reactor. Alternatives include the Z-pinch system (again, the physics surpasses me) and laser inertial systems. Confinement has to maintain the plasma in the fusion reactor in a dense and hot enough state that it will undergo fusion and produce energy, and to keep the plasma in this state such that it can continue to undergo fusion. The third is the choice of materials. As said, fusion reactions can induce radioactivity in the materials used in the reactor structure. Furthermore, the temperatures in a fusion reactor are extremely high. Very few materials would be able to withstand the thermal and mechanical pressures inherent in a commercial fusion power plant. Unfortunately, at this point in research, it's by no means clear that commercial electricity generation from fusion reactors is even possible, at least in a way that would be economically viable. The promise is of fairly high power-generation without interruption, and without significant environmental effects. But whether this promise can be fulfilled is extremely unclear.
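As a back-of-the-envelope check on the E = mc2 argument above, the snippet below computes the energy released by a single D-T fusion from standard atomic masses; the result is the well-known figure of roughly 17.6 MeV.

```python
# Energy released per D + T -> He-4 + n fusion, from the mass defect.
# Atomic masses (in unified atomic mass units) are standard textbook values.

U_TO_MEV = 931.494          # energy equivalent of 1 u, in MeV
masses = {
    "D": 2.014102,
    "T": 3.016049,
    "He4": 4.002602,
    "n": 1.008665,
}

mass_defect = (masses["D"] + masses["T"]) - (masses["He4"] + masses["n"])
energy_mev = mass_defect * U_TO_MEV
print(f"Mass defect: {mass_defect:.6f} u  ->  {energy_mev:.1f} MeV per fusion")
# About 17.6 MeV, roughly 80% of which is carried away by the neutron --
# which is why only ~20% of the D-T output appears as charged particles.
```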
Environmentalists fight for Guatemala's endangered macaws (2:16) April 2 - A coalition of conservation groups in Guatemala is reporting tentative signs of success in their efforts to save the spectacular scarlet macaw from extinction. The species is critically endangered due to habitat loss and poaching, so the groups are closely monitoring nests throughout the protected Maya Biosphere reserve to ensure as many newly-hatched birds as possible reach adulthood. Rob Muir reports. ( Transcript ) Hard-edged reporting, insight and analysis, Reuters TV breaks ground creating informative news and financial videos. Showcasing Reuters’ 3000 award-winning journalists, Reuters TV delivers high-energy investigative journalism with concise explanations. Check it out and let us know what you think.
Main Subject Area: Science
Duration of Lesson: 45 minutes
Students will create their own battery.

Materials:
- A mild soap to clean your coins
- Salt water solution
- 4 paper napkins
- Voltmeter (available at hardware stores; only one is needed for the whole class)

Coins Used in Lesson: pennies and dimes
Grade Level(s): 3-5, 6-8

Procedure:
1. To create the battery, students should first clean their coins with the soap. Note: do not perform this experiment with any coins that you may be saving for a collection, since washing is not recommended for coins intended for a coin collection.
2. Next, soak the napkins in the saltwater solution.
3. Now have your students create a "sandwich" with the coins and the napkins. Fold the napkins until they are just a little bit bigger than the coins. Alternating a penny, napkin, and then a dime, create a sandwich or stack. Make sure the ends of the stack are different coins. Connect a voltmeter to the ends of the stack.

Why does this work? The saltwater solution is an electrolyte. It reacts with the metals, which are electrodes. Since there are two kinds of metals (in the two different coins), one metal reacts more strongly than the other, which leaves an electrical potential difference (voltage) between the two types of metals.
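If students want to predict what the voltmeter should read, cells wired in series simply add their voltages. The per-cell figure below is only an assumed ballpark for copper and nickel coins in salt water, not a value measured in this lesson; comparing the prediction with the actual meter reading can be part of the exercise.

```python
# Rough series-stack estimate for the coin battery. VOLTS_PER_CELL is an
# assumed ballpark figure, not a measurement -- students should compare the
# prediction with what their voltmeter actually shows.

VOLTS_PER_CELL = 0.05  # assumed: ~50 mV per penny/napkin/dime cell

def stack_voltage(n_cells, volts_per_cell=VOLTS_PER_CELL):
    """Ideal open-circuit voltage of n identical cells wired in series."""
    return n_cells * volts_per_cell

for n in (1, 4, 8):
    print(f"{n} cell(s) -> about {stack_voltage(n):.2f} V")
```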
Khan Academy Presents: More choices as to when you get your money.

Now, I'll give you a slightly more complicated choice between two payment options. Both of them are good because in either case you're getting money. So choice one: today, I will give you $100.00, so today, you'll get $100.00. Choice two is that I pay you not in one year but in two years. So, let's say this is your year one and now this is year two. Actually, I want to give you three choices that will really hopefully hit things home. So actually, let me scoot this choice two over to the left. So choice two, I am willing to give you, let's say, $110.00 in two years. So, not in one year; in two years, I am going to give you $110.00. And so, I'll circle in magenta when you actually get your payment. And then choice three is going to be fascinating. I am making this up on the fly as I go. Choice three, I am going to pay you $20.00 today. I am going to pay you $50.00 in one year. So, let's see. That's 70. Let me make this so it's close. And then I am going to pay you $35.00 in year two.

So, all of these are payments. I want to differentiate between the actual dollar payments and the present values. And just for the sake of simplicity, let's assume that I am guaranteed. I am the safest person available, if the world exists. If the sun does not supernova, I will be paying you this amount of money. So, I am as risk-free as the federal government, and there were posts on the previous present value video asking whether the federal government is really that safe. And this is the point. When the federal government borrows from you, let's say it borrows $100.00 and promises to pay it back in a year, it will give you that $100.00. The risk is: what is that $100.00 worth? Because they may inflate the currency to death. Anyway, I won't go into that right now. Let's just go back to this present value problem. And actually, sometimes governments do default on that, but the U.S. government has never defaulted. It inflated its currency, so that's kind of a roundabout way of defaulting, but it has never actually said, "I will not pay you," because if that happened, our entire financial system would blow up and we would all be living off the land again.

Anyway, back to this problem; enough commentary from Sal. So, let's just compare choice one and choice two, and once again, let's say that risk-free I could lend the money to the federal government at 5%. And it does not matter what the term is: the risk-free rate is 5%. And for the sake of simplicity (in the next video, I will make that assumption less simple), the government will pay you 5% whether you give them the money for one year, whether you give them the money for two years, or whether you give them the money for three years. So, if I had $100.00, what would that be worth in one year? We figured that out already. It is 100 × 1.05, so that's $105.00. And if you got another 5% (the government is giving you 5% per year), it would be 105 × 1.05. And what is that? So, I have 105 × 1.05 = $110.25. So, that is the value in two years. So immediately, without even doing any present value, we see that you'll actually be better off in two years if you were to take the money now and just lend it to the government, because the government, risk-free, will give you $110.25 in two years while I'm only willing to give you $110.00.
So, that’s all fair and good but the whole topic is what we’re to solve is present values so let’s take everything in today’s money and to take this $110.00 and say, what is that worth today? We can just discount it backwards by the same method. So, $110.00 in two years; what is its one year value? Well, you take a $110.00 and you divide it by 1.05. You’re just doing the reverse. And then you get some number here. Well, that number you get is 110 ÷ 1.05. And then, to get its present value, its value today, you divide that by 1.05 again. If I were to divide by 1.05 again, what do I get? I divide it by 1.05 and then I divide it by 1.05 again, I am dividing by 1.052. And what is that equal? And I am writing this in purpose because I want to get used to this notation because this is what all of our present values and our discounted cash flow; this type of dividing by one plus the discount rate to the power of however many years out. This is what all of that’s based on. And that’s all we’re doing though. We’re just dividing by 1.05 twice because we’re two years out. So, let’s do that. Well, let’s just do that. 110/1.052 = $99.77. So, it equals 99.77. So once again, we have verified by taking the present value of a $110.00 in two years to today. If we assume of 5% discount rate and this discount rate is where all of the fudge factors occurs in finance; you can tweak that discount rate and make a few assumptions in discount rate and pretty much assume anything. But right now, for a simplification, we’re assuming a risk-free discount rate. But a present value based on that, you get $99.77 and you say “Wow!” Yeah, this really isn’t as good as this. I would rather have a $100.00 today than $99.77 today. Now, this is interesting. Choice number three, how do we look at this? Well, what we do is we present value each of the payments, right? So, the present value of $20.00 today; well, that is just $20.00. What’s the present value of $50.00 in one year? So, +$50 ÷ 1.05, that is the present value of the $50.00 because one year out. And then I want the present value of the $35.00, so that’s +$35 divided by what? It is two years out, so you have to discount it twice; divide it by 1.052 just like we did here. So, let’s figure out what that present value is because notice, I am just adding up the present values of each of those payments. So, the present value of the $20.00 payment is $20.00 plus the present value of $50.00 payment. Well, that is just 50 ÷ 1.05 plus the present value of our $35.00 payment. And it is two years out, so we discount it by our discount rate twice. So, it’s divided by 1.052 and then, that is equal to $99.37. So now, we can make a very good comparison between the three options. This might have been confusing before. You have this guy coming up to you and this guy is usually in the form of some type of retirement plan or insurance company where they say, “Hey! You pay me this for the years A, B and C and I’ll pay you that near as B, C, D.” And you’re like, “Well, how do I compare if that’s really a good value?” Well, this is how you compare it. You present value of the payments and you say, “Well, what is that worth to me today?” And here we did that and we said, “Well, actually choice number one is the best deal.” And it just depended on how the mathematics worked out. If I’d lower the discount rate, if this discount rate is lower, it might have changed the outcomes and maybe I’ll actually do that in the next video just to show you how important the discount rate is. 
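For readers following along, the whole comparison can be written as a few lines of code. The payment schedules and the 5% discount rate are taken straight from the discussion above, and the printed figures match the ones worked out by hand.

```python
# Present value of each payment stream at a 5% discount rate.
# Payments are (amount, years_from_today) pairs, as described in the video.

def present_value(payments, rate=0.05):
    """Sum of amount / (1 + rate)**years over all payments."""
    return sum(amount / (1 + rate) ** years for amount, years in payments)

choices = {
    "Choice 1": [(100, 0)],
    "Choice 2": [(110, 2)],
    "Choice 3": [(20, 0), (50, 1), (35, 2)],
}

for name, payments in choices.items():
    print(f"{name}: ${present_value(payments):.2f}")
# Choice 1: $100.00, Choice 2: $99.77, Choice 3: $99.37
```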
Anyway, I am out of time and I’ll see you in the next video.
An Icelandic volcano, dormant for 200 years, has erupted, ripping a 1km-long fissure in a field of ice. The volcano near Eyjafjallajoekull glacier began to erupt just after midnight, sending lava a hundred metres high. Icelandic airspace has been closed, flights diverted and roads closed. The eruption was about 120km (75 miles) east of the capital, Reykjavik. What volcanic scientists fear is the fact that this eruption could trigger an eruption of Katla, one of the most dangerous volcanic systems in the world. Eruptive events in Eyjafjallajökull are often followed by a Katla eruption. The Laki craters and the Eldgjá are part of the same volcanic system. Insta-melt could occur: At the peak of the 1755 Katla eruption the flood discharge has been estimated between 200,000–400,000 m³/s; for comparison the combined average discharge of the Amazon, Mississippi, Nile, and Yangtze rivers is about 290,000 m³/s. More here: http://en.wikipedia.org/wiki/Katla Video of the eruption: Volcano Eruption in Eyjafjallajökull Iceland 20 Mars 2010. The volcano near the Eyjafjallajoekull glacier began to erupt shortly after midnight, leading to road closures in the area. No one was in immediate danger, but 500 people were being moved from the area. It is almost 200 years since a volcano near Eyjafjallajokull, 120km (75 miles) east of Reykjavik, last erupted. The last volcanic eruption in the area occurred in 1821. Taken from C-FQWY / TF-SIF DHC-8-314Q Dash 8
This section of the Celebrating Alberta's Italian Community website provides contextual information - Canadian, Albertan and local. It includes historical, demographic, cultural, economic and social information. It is a primary resource on an area of Alberta's history that has not been previously documented - either in the form of print-based history or on the World Wide Web. In Italian, la storia is both history and story. The website includes both and makes use of information obtained through oral histories, a primary tool when documenting community history, as well as a range of documentary sources. It puts the Italian community on the map and documents its contribution to every aspect of Canadian and Albertan life. In this section, we read and listen to the stories that contributed to the making of Alberta. We examine the historical circumstances that led to Italian emigration, and the resultant immigration patterns that developed in Canada, more specifically in the Province of Alberta, and particularly in the cities of Edmonton and Calgary. We outline the political and economic factors in Italy and Canada that led to successive influxes of emigration from Italy. World events also influenced the attitude toward and treatment of ethnic immigrants, and the site explores these issues and challenges.
Portal Home | Public Services | Wisconsin Facts | Health and Safety | Wisconsin Capital Tour Rising between the picturesque waters of Lake Monona and Lake Mendota, the majestic granite structure of Wisconsin's Capitol building glows like a beacon, accenting the Madison skyline. On October 25,1836, the first Wisconsin Legislature convened in a rented building located in old Belmont (now Leslie, Lafayette County). A long struggle ensued regarding a permanent location for state government. Eventually, Madison was chosen to be the site. Built in 1838, the first Madison Capitol stood for 25 years until it was replaced by a larger building in 1863. After a devastating fire left the second Madison Capitol badly damaged, George B. Post & Sons designed the current Capitol, which was built between 1906 and 1917 at a cost of $7.25 million. The Madison Capitol is distinguished as being the only State Capitol ever built on an isthmus. Reaching to a height of over 200 feet, the Capitol dome is topped by Daniel Chester French's elegant gilded bronze statue, "Wisconsin." Edwin Blashfield's mural "Resources of Wisconsin" lavishly decorates the ceiling of the rotunda, which is the only granite dome in the United States. Inside, visitors are treated to the unique textures of 43 varieties of stone from around the world, hand-carved furniture and exquisite glass mosaics. The state's diverse ethnic heritage is reflected in the architecture, art and furnishings throughout the Capitol. Styled after the council chambers of the Doge's Palace in Venice, the walls and ceilings of the Governor's conference room are adorned with 26 historical and allegorical paintings by Hugo Ballin. The room also boasts French walnut furniture and a Wisconsin hardwood floor. The heritage theme is echoed in the chambers of Wisconsin's highest court and its bicameral legislature. The State Supreme Court room is decorated in German and Italian styles and features extensive use of marble, as well as four murals by Albert Herter. The Senate Chamber is decorated with French and Italian marble, and is highlighted by a colorful skylight and a Kenyon Cox mural depicting the opening of the Panama Canal. Down the hall, the Assembly Chamber features New York and Italian marble, Wisconsin oak furniture, a thirty-foot skylight and an Edwin Blashfield mural symbolizing Wisconsin's past, present and future. The best way to experience the beauty and grandeur of Wisconsin's Capitol building (located at 2 East Main Street, Madison, WI 53702) is to see it for yourself. It is open to the public weekdays from 8:00 a.m. to 6:00 p.m. and weekends and holidays from 8:00 a.m. to 4:00 p.m. Free tours are offered daily, year round except on the following holidays: New Year's Day, Easter, Thanksgiving, Christmas Eve and Christmas. Tours start at the information desk Monday through Saturday at 9:00, 10:00, 11:00 a.m. and 1:00, 2:00, 3:00 p.m.; and Sundays at 1:00, 2:00, 3:00 p.m. A 4:00 p.m. tour is offered weekdays (Monday - Friday), excluding holidays, during Memorial Day through Labor Day. The sixth floor museum and observation deck are open during the summer months. Groups of ten or more can make an on-line reservation for a tour of the State Capitol or call (608)266-0382. Wondering where to park in downtown Madison? Here's the latest information on Madison's parking ramps and lots. If coming by bus, passengers may be dropped off from the right-hand bus lane of the Capitol Square. Parking for buses is available at Olin Turville Park. 
Directions from the Capitol to Olin Turville Park are as follows: Turn right onto East Washington Avenue, then turn right on Blair Street which leads into John Nolen Drive. Follow John Nolen Drive around the lake and make a left turn on Olin Turville Court. Graphic Version | Site Map | Agency Index | Legal Notices | Privacy Notice | Acceptable Use Policy | Accessibility
Tuning Carbon Nanotubes Schematic diagram of an experiment for resonantly exciting multi-walled nanotubes (MWNT). The nanotubes are attached to a nickel tip and are set in motion by an oscillating voltage applied to two anodes. Electrons spewed from nanotube ends provide a record of nanotube motions, which in turn indicate the resonant mechanical frequencies of the nanotubes. Tip-anode distance: 2 mm. Tip-screen distance: 3 cm.
This is the first book to approach Stonehenge without any theoretical position. It describes what is known and believed about the monument's ... Show synopsis This is the first book to approach Stonehenge without any theoretical position. It describes what is known and believed about the monument's construction from c. 3000 BCE onwards. The Middle Ages were content with the story of it having been brought by Merlin from Ireland. The post Reformation antiquaries gave us the conception of Stonehenge as a historical monument. It played a significant role in the imagination of writers and artists. Then the Victorians invented prehistory and Darwin himself came to measure it. In 1918, it passed into public ownership and 1926 saw the first forced entry by Druids. The Earth Mysteries Movement now sees the stones as part of a greater web of ley lines and other phenomena. Archaeologists, united in their disdain for that, remain divided on many other points. And perhaps the most fraught issue now is conservation as the henge stands between two thundering main roads. This rich and provocative book explores all this in presenting a monument whose history is as fascinating as its secret.
Science Fair Project Encyclopedia Sonic the Hedgehog - For the video game, see Sonic the Hedgehog (Genesis). Sonic the Hedgehog is the flagship character and mascot for the video and arcade game company Sega, which has released a series of video games in which he either stars or plays a role. Sonic was competing head-to-head with Nintendo's mascot, Mario, for over a decade until Sega left the console market. His games are now on various other consoles. Sonic replaced Alex Kidd, who was Sega's mascot prior to 1990. Naoto Oshima designed the character while Yuji Naka (who would later become head of the Sonic Team division) was the main programmer. The "game planner" was Hirokazu Yasuhara. The music of the first two Sonic the Hedgehog games on the Megadrive and Genesis was composed by Masato Nakamura of the Japanese band Dreams Come True. Sega promoted the game's use of "Blast Processing", supposedly a feature of the Sega Genesis which allowed it to draw sprites faster, but which in reality simply referred to the console's fast CPU clock rate. Sonic was an early example of the "obscure anthropomorphic animal starring in a platform game" character archetype that was later seen in characters such as Crash Bandicoot, Spyro the Dragon, Blinx , and Sly Cooper. Sonic is a blue, 15 year old hedgehog who lives on the Pacific Ocean (some American and European cartoons featuring the character are instead set on the planet Mobius). He has the ability to run at supersonic speeds, hence his name. American sources often claim that Sonic's favourite food are chili dogs. His blue pigmentation was explained in an issue of gaming magazine GamePro as being the result of getting caught in an explosion involving cobalt, but this is probably not canonical. Stay Sonic, a book about the character written by Mike Pattenden and published only in the UK, provided an alternative explanation, which later became the basic origin for all subsequent UK publications. This origin is covered in detail below. Sonic has numerous abilities, including the Homing Attack (also known as the "Kaiten Jump"), in which he hits enemies with his spines while jumping, and the Light Dash, which allows him to run along a path of rings, even in the air. He is a poor swimmer, however, and will drown in water after a short amount of time, even as Super Sonic. (However, as Hyper Sonic in Sonic 3 & Knuckles, he can stay alive in water.) In the video games, Amy Rose believes she is Sonic's girlfriend. He is quite repelled by Amy's constant advances. However, in the anime series Sonic X, there is a mutual vibe between the two characters. In the SatAM cartoon, Sonic's love interest is Princess Sally Acorn. The Adventures of Sonic the hedgehog cartoon features a girlfriend named Breezie Hedgehog, while the Archie comics series featuring Sonic includes both Sally Acorn and Mina Mongoose. Sonic is also incredibly popular with the fangaming community, with possibly more fanmade games than any other video game star. In all games after Sonic Adventure, Sonic is voiced by either Jun'ichi Kanemaru or Ryan Drummond. In the TV shows, he is voiced by five different actors (specific to each show): Jaleel White, Masami Kikuchi, Martin Burke , Jun'ichi Kanemaru and Jason Griffith . Due to the many differences between universes, Sonic's history and world varies greatly. These are some of the backstories. We know very little about Sonic's past; he was supposedly born on Christmas Island and that he has frequently visited South Island, and that he and Dr. 
Eggman have a fierce rivalry. Beyond that, though, his past is a complete mystery. Sonic is something of a nomad; he travels from area to area of the Earth searching for new things to see and do, rarely stopping for anything or anyone unless he's needed, oftentimes getting himself involved in Eggman's schemes to take over the world.

Former US/UK version

The origin of Sonic's blue colouration and super speed was first featured in a promotional comic strip in the US Disney Adventures comic and later described in more detail in Mike Pattenden's Stay Sonic book. It was used in most subsequent UK publications (including Sonic the Comic and the "Martin Adams" series of Sonic novels published by Virgin). Although Stay Sonic was an official Sega book, its origin story should not be taken as canon for anything else; neither the video games themselves nor their translated manuals make any mention of it, and the Japanese backstories were adopted for the Sonic Adventure games.

Sonic was originally an ordinary brown hedgehog with few remarkable qualities. But one day, he accidentally burrowed his way into the secret underground lab of Doctor Ovi Kintobor, a kindly scientist who wanted to make the world a true paradise by removing all evil from it using his Retro-Orbital Chaos Compressor machine. Of course, Sonic found that a laudable goal, and helped Kintobor by searching Mobius for the seventh and final emerald that he required to contain all the negative energy that he had gathered using the ROCC. Kintobor also helped Sonic to increase his speed using a treadmill he designed himself. Sonic eventually ran so fast that he broke the sound barrier, the resultant shockwave fusing his quills together and turning his body cobalt blue. Sonic failed to find the seventh emerald, but Kintobor apparently deduced a way to complete the transfer of the chaotic energy to the six emeralds without it. Before initiating the process, the pair planned to eat - but opening the fridge, they found it to contain only one rotten egg. Holding it in his hand, distracted by it, Kintobor walked back over to the ROCC, only to trip on a cable and fall, his hand slamming into the ROCC control panel. The machine overloaded and exploded, bathing Kintobor - and the egg - in chaos energy, and scattering the golden rings that comprised it across the planet. Doctor Ovi Kintobor had been transformed into the evil Doctor Ivo Robotnik.

Sonic the Comic version

Sonic the Comic's version is identical to the former US/UK version, but it also later featured a story involving time travel that revealed that Sonic himself was responsible for Kintobor's accident. His foes, the Brotherhood of Metallix, had travelled back and removed the rotten egg from the fridge, preventing Robotnik from being created and leaving them free to dominate the planet. In order to prevent this future, Sonic had to replace the egg, and pulled the cable that tripped Kintobor - thereby making himself responsible for the creation of his greatest enemy.

Archie Comics version

The Archie comic series offers another angle on the origins of the person who would become the dreaded Dr. Robotnik. On the planet Mobius, humans (known as "Overlanders") existed for a time in a state of hostilities with the anthropomorphic animal beings Sonic and his friends represented. Julian Ivo fled from Overlander civilization after some transgression, and was subsequently taken in by King Acorn (Princess Sally Acorn's father) of Mobotropolis.
Ivo became an important advisor to the King, but ultimately staged a coup (with the help of his nephew, Snively) in which he seized power and renamed both himself (to Ivo Robotnik) and the city he had come to rule (to Robotropolis). The premise of the games revolves around Doctor Eggman (Doctor Ivo Robotnik in the earlier releases in North America and Europe) trying to take over the world by turning the animals into robots (often called Badniks, though this is an US/EUR term and hasn't been used since Sonic Adventure). Sonic is charged with saving them. In later games he is joined by Tails (Miles "Tails" Prower), Amy Rose, Knuckles the Echidna, Cream the Rabbit and a host of other characters. Sonic must collect rings to protect himself from the robots, and as long as he has at least one, he is invulnerable save for drowning or being crushed. He ultimately must collect the Chaos Emeralds from the Special Stages in order to become his most powerful form, Super Sonic. However, Sonic's quest does not necessitate collecting the Emeralds himself; he must only prevent Eggman from collecting them and dooming the world with their power, as well as deal with numerous other foes, such as Metal Sonic (Mecha Sonic), Fang the Sniper (formerly Nack the Weasel in the West, still Nack in the comic books), Shadow the Hedgehog, and Rouge the Bat. - Adventures of Sonic the Hedgehog (AoStH, US) - Sonic voiced by Jaleel White, Robotnik voiced by Long John Baldry - Sonic the Hedgehog (SatAM, US) - Sonic voiced by Jaleel White), Robotnik voiced by Jim Cummings - Sonic Underground (US, France) - Sonic voiced by Jaleel White - Sonic the Hedgehog (Anime, Japan) - Sonic voiced by Masami Kikuchi and Martin Burke - Sonic X (Anime, Japan) - Sonic voiced by Jun'ichi Kanemaru and Jason Griffith - Sonic the Hedgehog (Shogakukan, Japan) - Sonic the Hedgehog (Archie Comics, US) - Sonic the Comic (Fleetway, UK) Sonic fan-made dōjinshi have also been released in Japan. A series of six Sonic Adventures gamebooks were published in the UK by Puffin: - Book 1 - Metal City Mayhem, James Wallis - Book 2 - Zone Rangers, James Wallis - Book 3 - Sonic v Zonik, Nigel Gross and Jon Sutherland - Book 4 - The Zone Zapper, Nigel Gross and Jon Sutherland - Book 5 - Theme Park Panic, Marc Gascoigne and Jonathan Green - Book 6 - Stormin' Sonic, Marc Gascoigne and Jonathan Green - Stay Sonic, Mike Pattenden. Developed the "Kintobor origin" (first introduced in the Disney Adventures comic) in more detail. This background was used as the basis of most subsequent UK Sonic stories. James Wallis, Marc Gascoigne and Carl Sargent (under the pseudonym of Martin Adams) wrote four Sonic the Hedgehog novels based on the origin established in Stay Sonic. They were published in the UK by Virgin. - Book 1 - Sonic the Hedgehog in Robotnik's Laboratory - Book 2 - Sonic the Hedgehog in the Fourth Dimension - Book 3 - Sonic the Hedgehog and the Silicon Warriors - Book 4 - Sonic the Hedgehog in Castle Robotnik Michael Teitelbaum also wrote a series of Sonic novels: - Sonic the Hedgehog - Sonic the Hedgehog: Robotnik's Revenge - Sonic the Hedgehog: Fortress of Fear - Sonic the Hedgehog: Friend or Foe? - Sonic & Knuckles - Sonic X-Treme - Where's Sonic? - Where's Sonic Now? - Sonic Central - Official website. - The Green Hill Zone - A website dedicated to chronicling every Sonic Team game designed to date, including all of the Sonic games. - Shadow of a Hedgehog - General fansite - Sonic the Hedgehog Information Treasury - A Sonic community-driven wiki. 
- Sonic CulT - A Sonic game research site. - Sonic Stadium - A popular fansite. - Sonic Fan Games HQ - A site containing downloadable Sonic the Hedgehog fangames. - Sonic Team Speaks - Interviews with Sonic Team and Sega staff over the years. - The Sonic Art Archive - High-resolution artwork. - Fans United for SatAM - Fansite dedicated to the Sonic the Hedgehog television series. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:cd19653b-6774-4a93-a344-0dfab00f95a2>
CC-MAIN-2013-20
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Sonic_the_Hedgehog
2013-05-22T15:34:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.950704
2,515
Technological convergence is the modern presence of a vast array of different types of technology that perform very similar tasks. For example, in today's society one can communicate with a friend via mail, online chatting, cellphones, e-mail, and many other forms of modern technology. Though the forms of technology are all very different, they all essentially provide the same basic service: person-to-person communication. The notion of a one-to-many form of communication has since been diluted. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:f6953872-aa49-4e45-bff0-dca9aa2a9df2>
CC-MAIN-2013-20
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Technological_convergence
2013-05-22T14:59:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.903811
133
More than 400 rock paintings adorn the Canadian Shield from Quebec, across Ontario and as far west as Saskatchewan. The pictographs are the legacy of the Algonkian-speaking Cree and Ojibway, whose roots may extend to the beginnings of human occupancy in the region almost 10,000 years ago. Archaeologist Grace Rajnovich spent fourteen years of field research uncovering a multitude of clues as to the meanings of the paintings. She has written a text which is unique in its ability to "see" the paintings from a traditional native viewpoint. Skilfully weaving the imagery, metaphors and traditions of the Cree and Ojibway, the author has recaptured the poetry and wisdom of an ancient culture. Chief Willie Wilson of the Rainy River Band considers Grace's work "innovative and original."
<urn:uuid:eac60882-d0f0-48b8-b5f4-a2b846933050>
CC-MAIN-2013-20
http://www.amazon.ca/Reading-Rock-Art-Interpreting-Paintings/dp/155488473X
2013-05-22T15:37:48Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.956607
168
POPULAR NEWSPAPERS DURING WORLD WAR II Part 2: 1940 (The Daily Express, the Daily Mirror, the News of the World, The People & the Sunday Express) Popular newspapers were one of the primary means through which ordinary people in Britain received their news about the war. They now provide an excellent and immediate source for students and scholars alike, describing the progress of the war and its impact on the home front. The papers provide: - Hundreds of thousands of photographs and maps - making the war more intelligible and reducing it to a human scale. The Daily Mirror printed wonderful pictures of the Queen walking amongst the devastation of the London Blitz and The People's 'Cavalcade of the Blitz' brings together pictures of heroes with accounts of the deeds of individual firemen, ARP wardens and police officers. - Detailed accounts of the latest developments whether on the home front or the battlefield. How was Dunkirk transformed from a humiliating retreat into a morale-boosting episode? - Insightful articles by leading writers and politicians, such as Somerset Maugham describing his visit to Strasbourg in the News of the World and Herbert Morrison writing in The Daily Mirror and advising Prime Minister Chamberlain to "GET OUT!" - Masses of material for the study of popular culture, ranging from film and theatre reviews to regular football and cricket reports from writers such as Alex James, Henry Cotton and Freddie Fox. Part 2 covers 1940, when the war fanned out across Europe and civilian populations in Britain were subjected to mass bombing attacks. It was the year that Japan joined the war and Britain launched its offensive in North Africa.
<urn:uuid:77e53fef-1078-4182-8df7-bf753a546351>
CC-MAIN-2013-20
http://www.ampltd.co.uk/collections_az/PopNewsII-2/highlights.aspx
2013-05-22T15:07:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.935284
336
Golden Spiral or Fibonacci Spiral The golden ratio was often used in the design of Greek and Roman architecture. Sacred geometry is geometry that is sacred to the observer or discoverer. This meaning is sometimes described as being the language of the God of the religion of the people who discovered or used it. Sacred geometry can be described as attributing a religious or cultural value to the graphical representation of mathematical relationships and to the design of the man-made objects that symbolize or represent those relationships. To draw your first Fibonacci spiral: First get some graph paper and, with a pencil, draw two parallel lines 13 squares long, one on top of the other; these two lines are the top and bottom of your rectangle. Next join the two lines' ends to make a rectangle. Then draw a line at the eighth square to make a square 8 boxes long and 8 boxes high. Next, in the upper right-hand corner, draw another square 5 boxes long by 5 boxes high. Then draw another square 3 boxes by 3 boxes, then a square 2 boxes by 2 boxes, then a square 1 box by 1 box, and finally another 1 by 1. Then you draw your spiral, a quarter-circle arc through each square from the largest to the smallest.
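As a quick check on the construction above, here is a small illustrative Python sketch (not part of the original article). It lists the square sides that tile the 13 x 8 rectangle and shows how ratios of successive Fibonacci numbers approach the golden ratio of roughly 1.618.

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers: 1, 1, 2, 3, 5, 8, 13, ..."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

fibs = fibonacci(7)                              # [1, 1, 2, 3, 5, 8, 13]

# The 13 x 8 rectangle in the drawing is tiled by squares of these sides:
print("Squares, largest first:", fibs[-2::-1])   # [8, 5, 3, 2, 1, 1]

# Ratios of successive Fibonacci numbers approach the golden ratio.
for a, b in zip(fibs, fibs[1:]):
    print(f"{b}/{a} = {b / a:.4f}")
print("Golden ratio:", (1 + 5 ** 0.5) / 2)
```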
<urn:uuid:aa3c92d1-ac8e-42b5-904d-c5b4f701703c>
CC-MAIN-2013-20
http://www.ancient-symbols.com/golden_spiral.html
2013-05-22T15:29:19Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.905405
309
Press Release - Salt and dust help unravel past climate change Issue date: 23 Mar 2006 Tiny amounts of salt and dust trapped in the Antarctic ice sheet for the last 740,000 years shed new light on changes to the Earth’s climate. The results, published this week in the journal Nature, come from the team who extracted a 3 km long ice core from Dome C, high on East Antarctica’s plateau - the oldest continuous climate record obtained from ice cores so far. Since reporting in 2004 that the Earth experienced eight climate cycles (each consisting of an ice age and warm period) the team have been analysing the chemical impurities in the cores to unravel how different parts of Earth’s climate varied over the last 740,000 years. This work is vital for understanding future climate change. By measuring the varying amount of salt in the cores the team can estimate how far the sea ice around Antarctica extended every time Antarctica got colder. The salt appears to come from brine expelled to the top of newly formed sea ice (frozen sea water). The white sea ice replaces the dark ocean, making the Earth reflect more sunlight. Small dust particles are blown by the wind from surrounding continents. Many more of them are found in ice from cold times, and the team conclude that the nearest continent, southern South America, was much drier or windier. The extra dust may have provided nutrients to the ocean, helping microbes to take up CO2 from the atmosphere. From the different responses of salt and dust, the authors propose that each time the Earth warmed, emerging from an ice age, there was an order of events, with South America responding early, and sea ice extent responding late. Lead author Dr Eric Wolff from British Antarctic Survey said, “Our research shows that, throughout the last 740,000 years, every time cold conditions gave way to mild ones, similar changes occurred in the same sequence. We conclude that the Earth follows rules when climate changes and if we can understand those rules we can improve climate models and make better predictions for the future.“ The Dome C drilling is part of the ‘European Project for Ice Coring in Antarctica’ (EPICA). The team at Dome C endured summer temperatures as low as minus 40ºC at the remote drilling site over a thousand kilometres from the nearest research station. The consortium completed the drilling in December 2004 after penetrating 3260 m of ice. Issued by the British Antarctic Survey on behalf of the EPICA chemistry consortium The paper ‘Southern Ocean sea-ice extent, productivity and iron flux over the past eight glacial cycles’, is published in Nature on 23 March. For more information, contact: Eric Wolff +44 1223 221491, [email protected], or British Antarctic Survey Press Office: Linda Capper – tel: (01223) 221448, mob: 07714 233744, email: Becky Allen – tel: (01223) 221414, mob: 07736 921693, email: [email protected] For more information in other countries of co-authors on the paper, contact: Denmark : Jorgen Peder Steffensen : + 45 35 32 05 57, [email protected] France: Martine de Angelis: +33 (0)4 76 82 42 33, [email protected] Germany: Hubertus Fischer: +49 471 48311174, [email protected] Sweden: Margareta Hansson: +46 86747865, [email protected] Switzerland: Thomas Stocker: +41 31 631 44 64, [email protected] NOTES TO EDITORS: EPICA (European Ice Core Project in Antarctica) is a consortium of 10 European countries (Belgium, Denmark, France, Germany, Italy, Netherlands, Norway, Sweden, Switzerland, UK). 
EPICA is coordinated by the European Science Foundation (ESF), and funded by the participating countries and by the European Union. The EPICA research team is using the unique climate record from ice cores to investigate the relationship between the chemistry of the atmosphere and climate changes over the past 800,000 years, especially the effects of carbon dioxide, methane and other components of the atmosphere. The results will be used to test and enhance computer models used to predict future climate. EPICA’s aim was to drill two ice cores to the base of the Antarctic ice sheet, one at Dome C, the other in Dronning Maud Land. Both drillings have now reached the base of the ice sheet, and further analyses are underway. The ice cores are cylinders of ice 10 cm in diameter that are brought to the surface in lengths of about 3 metres at a time. Snowflakes collect particles from the atmosphere, and pockets of air become trapped between snow crystals as ice is formed. Analysis of the chemical composition and physical properties of the snow and the trapped air, including atmospheric gases such as CO2 and methane, shows how the Earth’s climate has changed over time. The Antarctic fieldwork is challenging both scientifically and environmentally. Dome C (75° 06’S, 123° 21’E) is one of the most hostile places on the planet, and average annual temperatures are below –54 degrees Celsius. Researchers and their equipment have to be transported 1000 km from coastal stations. British Antarctic Survey is a world leader in research into global issues in an Antarctic context. It is the UK’s national operator and is a component of the Natural Environment Research Council. It has an annual budget of around £40 million, runs nine research programmes and operates five research stations, two Royal Research Ships and five aircraft in and around Antarctica. More information about the work of the Survey can be found on our website: www.antarctica.ac.uk
<urn:uuid:da309b06-4bc5-4669-9ad0-29307743fa43>
CC-MAIN-2013-20
http://www.antarctica.ac.uk/press/press_releases/press_release.php?id=71
2013-05-22T15:21:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.892522
1,260
St. Zenobia of Aegae, and her brother Heiromartyr Zenobius Commemorated on October 30 The Hieromartyr Zenobius, Bishop of Aegea, and his sister, Zenobia, suffered martyrs’ deaths in the year 285 in Cilicia. From childhood, they were raised in a Christian Faith by their parents, and they led pious and chaste lives. In adulthood, shunning the love of money, they distributed their inherited wealth to the poor. For his beneficence and holy life, the Lord rewarded Zenobius with the gift of healing various maladies. He was also chosen bishop of a Christian community in Cilicia. As bishop, St. Zenobius zealously spread the Christian Faith among the pagans. When Emperor Diocletian (284-305) began a persecution against the Christians, Bishop Zenobius was the first one arrested and brought to trial before Governor Licius. “I shall only speak briefly with you,” said Licius to the saint, “for I propose to grant you life if you worship our gods, or death, if you do not.” Zenobius answered, “This present life without Christ is death. It is better that I prepare to endure the present torment for my Creator, and then with Him live eternally, than to renounce Him for the sake of the present life, and then be tormented eternally in Hades.” By order of Licius, they nailed him to a cross and tortured him. St. Zenobia, his sister, saw his suffering, and bravely confessed her own faith in Christ before the governor. She was also tortured. By the power of the Lord, they remained alive after being placed on a red-hot iron bed, and then in a boiling kettle. The saints were ultimately beheaded. The priest Hermogenes secretly buried their bodies in a single grave. Sts. Zenobius and Zenobia are invoked by those suffering from breast cancer. Troparion (Tone 4) – As brother and sister united in godliness together you struggled in contest, Zenóbius and Zenobía. You received incorruptible crowns and unending glory and shine forth with the grace of healing upon those in the world. Kontakion (Tone 8) – Let us honor with inspired hymns the two martyrs for truth: the preachers of true devotion, Zenóbius and Zenobía; as brother and sister they lived and suffered together and through martyrdom received their incorruptible crowns. By permission of the Orthodox Church in America (www.oca.org)
<urn:uuid:f57fcd0d-6865-42af-abf2-f8795f94a8f0>
CC-MAIN-2013-20
http://www.antiochian.org/node/16782
2013-05-22T15:01:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966369
569
Learning How to Ask Questions Knowing how to ask students the right question at the right time for the right reason is an important skill that preservice teachers can learn, one that will serve them for a lifetime of teaching. Classroom questions can be divided into three types, which are linked to each other: - Information questions that deal with facts. - Processing questions that help students analyze. - Imaginative questions that encourage students to propose possibilities (What if … ?) or think outside the box. Teachers or teaching teams first develop good questions to guide a lesson during the lesson planning process, when they consider what students ought to learn. The video clip focuses on how teachers can ask informational questions so that all students in a class have a similar starting point based on the same information—in this case, story problems. Initial information questions (e.g., Who? What? When? Where?) can help students recall and share facts and ideas from their reading or previous learning or experiences. Repeating or rephrasing questions and allowing sufficient wait time gives students space to think. You can then use such shared information for discussion at the next level, through processing questions that ask students to connect information, analyze the facts, or draw conclusions based on the connections they make. After students think more deeply about the material through processing questions, they are primed to make the content their own by applying it to new situations or creating something new with it. These are experiences that teachers can encourage with imaginative questions. Source: From The How To Collection: Helping New Teachers, [DVD], 2006, Alexandria, VA: ASCD. ASCD Express, Vol. 7, No. 10. Copyright 2012 by ASCD. All rights reserved. Visit www.ascd.org/ascdexpress.
<urn:uuid:b276ca23-6262-4d51-8a0f-99090d5812de>
CC-MAIN-2013-20
http://www.ascd.org/ascd-express/vol7/710-video.aspx
2013-05-22T15:35:32Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.932966
366
Peacemaking needs women Women play a special role in peacekeeping and conflict prevention; that’s why the United Nations Security Council has long been calling for them to be more involved in peace processes. On 19 December the Federal Government approved a national action plan intended to improve the implementation of Security Council resolution 1325, which was adopted in 2000. This resolution on “Women, Peace and Security” calls for women to be involved in peace processes in a variety of ways. From the start Germany has actively campaigned for the goals of this resolution, which calls for the participation of women in crisis prevention, conflict management and post conflict peacebuilding and for women to be protected from gender-based violence and in particular sexual abuse in situations of armed conflict. Sharper focus to activities The action plan to implement resolution 1325 gives a sharper focus to Germany’s various activities in this area. It incorporates suggestions put forward by the Bundestag as well as by non governmental organizations and the research community. In drawing up the action plan, the German Government has responded to a recommendation of the UN Secretary-General. The German Government’s commitment to strengthening the role of women in peacekeeping and conflict prevention has taken various forms to date, including training for German civilian and military personnel serving with UN led or UN mandated peace missions. It has also supported measures to involve women in efforts to resolve particular conflicts. In addition, the German Government supports UN schemes designed to promote the participation of women in peace processes and their protection in situations of armed conflict. Last updated 19.12.2012
<urn:uuid:95cf62ec-f862-4f5b-8c2c-90fbe9d18404>
CC-MAIN-2013-20
http://www.auswaertiges-amt.de/sid_6B119F58B818284B3F0F30B1941BDE1E/EN/Aussenpolitik/Menschenrechte/Aktuell/121219_Aktionsplan_Res1325.html
2013-05-22T15:10:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.94679
324
Pine disease prevents species from thriving in Kansas Today we start a two-part series about one of the most popular trees in Kansas, the pine tree, and some of the disease problems pines face. Pines are a favorite part of most landscaping projects because of their evergreen qualities, but unfortunately, some pines around here are not very green anymore. It seems pines all too often fall victim to the many diseases and environmental hardships our midwestern climate poses. An interesting piece of information about pines is that they are not native to Kansas; in fact, Kansas is the only state among the 50 states in the union that does not have a single native pine species. The reasons for this may become more obvious as I highlight the two major pine diseases — tip blight and pine wilt — we see here in the Midwest, as well as several environmental problems pines face. I will dedicate today’s article to discussion of the pine disease tip blight. Tip blight affects Austrian, Ponderosa, Scots and Mugo pines. Usually it’s most severe on trees more than 20 years old, and can be lethal to pine trees over time. The symptoms first occur in late May or early June. Tip blight keeps the new shoots (candles) from growing and elongating and causes them to eventually turn brown. The needles at the end of the branches will be brown and have a stubby appearance. The disease can also spread to a tree’s old wood, where it will form a canker (a physical wound of the trunk or limb). Fungal cankers are known to kill entire branches and sections of trees. Besides looking at the tree’s new growth, or lack thereof, you can also see symptoms of tip blight by looking for black specks on the bottom side of 2-year old pine cones. These black specks are the pycnidia, or the spore producing stages of the disease, and become visible during the late summer months. Removal of infected branches is one way to control the disease but once a tree has the infection, you cannot truly get rid of it. If a tree has tip blight, the best way to prolong the health of the tree is to maintain its vigor. The best way to do this is to adequately supply water and nutrients to the tree. Prevention is the best measure against this disease, and the use of fungicides is the best way to prevent infection. These fungicides need to be applied to the new growth as it is emerging in order to protect it from fungal infection. Most copper fungicides will work for this purpose and are available to homeowners. Every year you will need to spray around the third week of April. Spray again 10-14 days after the initial treatment, and then again 10-14 days after the second treatment for maximum protection of new growth. There are several other fungicides that work well in preventing tip blight, but they are mostly restricted-use chemicals that need to be applied by a professional tree-care service. A good reason to consider a professional would be that they will have the equipment necessary to adequately reach all parts of large, established trees (remember that tip blight normally affects trees more than 20 years old). Keep in mind that next week’s article will focus on the other major pine disease, pine wilt, while also mentioning a few of the environmental stresses pines must endure. If you have questions about pine trees or pine diseases, you can contact me at the Leavenworth County Extension Office on the corner of Hughes and Eisenhower roads in Leavenworth, or call (913) 250-2300. I can also be reached via email at [email protected].
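The spray timing above comes down to simple date arithmetic, and a short Python sketch can make the schedule concrete. The start date and the 12-day interval below are illustrative assumptions only (the article gives "the third week of April" and a 10-14 day window); always adjust for your own season and the product label.

```python
# Hypothetical reminder calculator for the tip-blight spray schedule:
# first application around the third week of April, then two follow-ups
# 10-14 days apart. Dates below are examples, not recommendations.
from datetime import date, timedelta

first_spray = date(2013, 4, 17)      # assumed "third week of April"
interval = timedelta(days=12)        # midpoint of the 10-14 day window

second_spray = first_spray + interval
third_spray = second_spray + interval

for label, when in (("First spray", first_spray),
                    ("Second spray", second_spray),
                    ("Third spray", third_spray)):
    print(f"{label}: {when:%B %d}")
```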
<urn:uuid:6fa9b3b8-b13b-444b-9fd7-d3ff9e242a05>
CC-MAIN-2013-20
http://www.basehorinfo.com/news/2009/jan/15/pine-disease-prevents-species-thriving-kansas/
2013-05-22T15:14:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.958258
767
Lesson 6: Babel In chapter 11 of Genesis we are told that there was only one language in the earth during the time of Noah and his relatives. By this we know that everyone could understand each other, no matter where they lived. Today we are not able to understand people who live in France, Germany, Italy or any other foreign country because they speak a different language from us. In our lesson today, we are going to learn why this is so. Our story begins in the land of Shinar where many of the people were living at that time. They decided among themselves to build a city and a tower which would reach unto heaven. By this they would become well-known throughout the earth. To build this tower they needed bricks and then slime to hold the bricks in place. However, in all their plans there was one important thing that they forgot, and that was to ask God's advice. We know that the Bible teaches that we must look to God for help and must never leave Him out of our plans. He has also commanded us to do certain things if we want to be a part of His Kingdom. We should pray to God always and ask Him for His help in everything that we do.(Luke 21:36) Soon God saw what they were doing and He gathered the angels, and said, "Let us go down and confound the language." To confound the language means to mix it up. God made it so that these people could not understand each other when they were to talk. God also scattered them upon the face of the whole earth. The name of this tower was Babel which means confusion. Wherever these people went to live after God scattered them, their language became the language of the country. This is the reason we have so many different languages today. 1. What was the name of the tower that the people built? 2. What did God do to these people? 3. What did the people forget to do when they started to build the tower?
<urn:uuid:f4ba4ae5-cadf-4694-b8aa-663b4ef7c9f0>
CC-MAIN-2013-20
http://www.bereans.org/lenny/index.php?f=06
2013-05-22T15:21:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.991228
409
By: Charleen Barr Colorado Master Gardener in Larimer County No, it is not a spelling test! Gardening terms are important to know. They help us understand the workings of a garden. Sometimes we can be confused, even overwhelmed, by the many gardening specific words and terms used by those who are regularly engaged in fooling around with earth and its bounty. Perhaps a brief introduction to a few often used terms found in magazines, brochures, and at garden nurseries will help in becoming familiar with garden vocabulary. Leggy – What constitutes leggy? What do leggy seedlings look like? Seedlings become leggy as they are reaching for the sun, usually as they are getting a hint of their second set of leaves but they may be over four inches tall. They look tall, thin and awkward; they almost make us wonder if the stems will support the leaves. Soil amendments has nothing to do with the U.S. Constitution, but refers to what one does to correct soil deficiencies and increase the health and productivity of the soil. What you add to the soil is, of course, dependent upon the soil’s present condition and what is, or will be, growing there. This calls for a soil test. Call your local CSU Extension Office for the form and protocol. Hardening off describes the process of acclimating plants to outdoor conditions after growing them from seed indoors, or simply keeping them indoors for a period of time. Hardening off plants means gradually exposing them to outdoor temperatures during the day rather than immediately planting them into the garden. Plants should also be protected from full sun while they are being hardened off. This also applies to plants purchased from greenhouses. Double digging should not be confused with “double-dipping” that refers to certain income tax or food practices. It means tilling or turning the soil twice, creating a trench with the first dig, piling that soil to the side, and then going deeper for the second dig to provide more soft soil depth within an area. Plants, such as asparagus needing to be placed deeper into the soil, may require double digging. Native plants or trees have definitions that are as numerous as the number of such species found in any given geographic area. “Native” refers to plants, shrubs, and trees that were present in a defined area prior to European settlement. A defined area may be a site, state, region, or ecological classification system in the U.S. or North America. Front Range native species are identified in Colorado Flora, Eastern Slope, 3rd ed. by William A. Weber and Ronald C. Wittmann. Determinate and Indeterminate are terms used to describe the growth patterns and productivity periods of plants. For example, determinate species of tomatoes tend to reach a certain mature size, stop growing, produce fruit over a limited period of time, and then decline. Indeterminate plants continue growing until frost arrives and produce fruit throughout their lifetimes. An invasive plant sounds very threatening. These plants tend to spread quickly by roots, seeds, shoots, or all three. Left unchecked, they can literally take over an area, choking out other desirable plantings. The definition of invasive may depend on the individual gardener’s idea of what they like or dislike. Loam is really the texture of the soil between fine-particle clay and coarse-textured sand. It remains pliable and well drained but holds moisture, and plants thrive in it. 
Loose-textured clay is described as “clay loam.” Loam with many large particles in it is categorized as “sandy loam.” Over many years, loam has come to be called topsoil and vice versa, but it is particle gradation, not a description of fertility. Cultivar is simply an artificially contrived species not found naturally in nature. The volumes of varieties of roses available are examples, as are lilies and daisies. pH factor is not the past history of our gardens, but a measure of the acidity or alkalinity of our soil where 7.0 represents neutrality and lower numbers indicate increasing acidity and higher numbers increasing alkalinity. Front Range soils generally range in pH from 7.5-8.5. See Planttalk #1606 Soil Tests at www.planttalk.org. Integrated Pest Management (IPM) refers to the attempt to use a variety of strategies to keep garden pests under control, while at the same time attempts to minimize damage to the environment. A biological, rather than chemical control (releasing ladybugs to control certain insects) is an example. Till, spade, hoe, seed and roots are terms we know, but to become successful gardeners, be sure to ask questions about gardening terms that are unfamiliar. Gardening season is upon us and Colorado State University Extension is available to answer your questions. Visit www.ext.colostate.edu or call the Larimer County Extension Office at 970-498-6000. Master Gardener volunteers are available to answer gardening questions during the week from 9 a.m.-1 p.m. The author has received training through Colorado State University Extension’s Master Gardener program and is a Master Gardener volunteer for Larimer County. Larimer County is a county-based outreach of Colorado State University Extension providing information you can trust to deal with current issues in agriculture, horticulture, nutrition and food safety, 4-H, small acreage, money management and parenting. For more information about CSU Extension, Larimer County, telephone (970) 498-6000 or visit www.larimer.org/ext Visit PlantTalk Colorado ™ for fast answers to your gardening questions! www.planttalk.org PlantTalk is a cooperation between Colorado State University Extension, GreenCo and Denver Botanic Gardens.Print This Post
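For readers who like to see a rule written out, the pH scale described under "pH factor" above can be summarised in a few lines of Python. This is an illustrative sketch only; an actual CSU Extension soil test report is the authoritative guide for your garden.

```python
# Minimal sketch of the pH scale: 7.0 is neutral, lower readings are
# acidic, higher readings are alkaline. The 7.5-8.5 band quoted for
# Front Range soils is used only as an example annotation.

def describe_ph(ph):
    if ph < 7.0:
        return "acidic"
    if ph > 7.0:
        return "alkaline"
    return "neutral"

for reading in (5.5, 7.0, 8.1):
    note = " (typical Front Range range)" if 7.5 <= reading <= 8.5 else ""
    print(f"pH {reading}: {describe_ph(reading)}{note}")
```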
<urn:uuid:0ed9450c-f69e-4576-ac23-658410d18138>
CC-MAIN-2013-20
http://www.berthoudrecorder.com/2010/04/05/grow-cab%E2%80%99-u-lar-y-and-introduction-to-gardening-terms/
2013-05-22T15:23:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.944143
1,228
For several months last spring, the Vanderbilt greenhouse held more members of a rare species of native sunflower than are known to exist in the wild. This unusual bounty was the result of research being conducted by Jennifer Ellis, a doctoral student in the biological sciences department, working under the supervision of Professor David E. McCauley. The species is called the giant whorled sunflower, Helianthus verticillatus. It was discovered in 1892 in Tennessee but was thought to be extinct until 1994, when it was rediscovered in Georgia. Today, it is known to exist in only four locations in West Tennessee, Alabama and Georgia. It has been a candidate for listing as a federal endangered species since 1999. In the last four years Ellis has conducted a series of genetic studies of the whorled sunflower that significantly increase the odds that this gangly plant will make the endangered species list. Once a species is listed, the federal government is empowered to take a number of steps to protect it. "Her study came at a perfect time and gave us answers that we really needed," says Cary Norquist, assistant field supervisor and botanist in the Ecological Services Field Office of the U.S. Fish and Wildlife Service in Jackson, Miss., who has recommended upgrading the sunflower's application for listing from a low to a high level as a result of the new information. One of the questions that Ellis' research answered was whether the whorled sunflower was a distinct species or a hybrid of two common varieties. If it were a hybrid, it would not qualify as an endangered species. "Her work definitely confirmed that it is a distinct species," says Norquist. The other answer that Ellis has provided is a more accurate count of the number of genetic individuals that exist in the wild. According to previous estimates, there were several thousand whorled sunflowers growing in Coosa Valley Prairie in Alabama and Georgia, about 7,000 … Contact: David F. Salisbury
<urn:uuid:07ae1fbf-edbf-45c9-ab6d-6c7f7c9e9460>
CC-MAIN-2013-20
http://www.bio-medicine.org/biology-news-1/Student-study-bolsters-case-for-adding-a-rare-sunflower-to-the-endangered-species-list-567-1/
2013-05-22T15:16:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.963507
405
For thousands of years, Native Americans and historic peoples were drawn to this isolated area of black mesas along what was the Gila River. On the mesas, among rocks blazed black by the desert sun, these peoples left their marks here at Sears Point, depicting life as it once was. Sears Point is a very special area that lies at a crossroad of historical events and cultures. It embraces a wide array of archaeological sites, including rock alignments, cleared areas, intaglios, petroglyphs, and aboriginal foot trails. This fragile evidence of human history spans thousands of years with some dating to the Archaic Period. The prehistoric cultures which are believed to have utilized this archaeological district between 10,000 BCE and 1450 CE include the Desert Archaic, Patayan, and Hohokam cultures. The Desert Archaic Period, known as the Amargosa in western Arizona, is characterized by nomadic lifestyles. The people living at Sears Point at this time were well adapted to living in desert conditions. They migrated seasonally based on the ripening of certain plant products and hunting conditions. Sears Point was a more lush area at this time and the Gila River was an important part of survival. The Patayan and Hohokam peoples lived during what is known as the Ceramic Period. These people experimented with early agriculture and ceramics became important as a way to store food. Changes in population densities and rainfall may have played a role in this shift from a hunting and gathering emphasis to a more sedentary life closer to major streams and rivers. A new cultural era is obvious by the presence of more recent petroglyphs of a new style known as the Sears Point Patayan. Often the new style of petroglyphs is superimposed over the top of the older Archaic period petroglyphs. The Sears Point area contains evidence that suggests an unusual association between Hohokam and Patayan features, which cannot be seen elsewhere. Sears Point is hypothesized to have been a boundary area between these cultures where the two groups maintained contact with each other. Very little of the prehistory at Sears Point is well understood. Petroglyphs are difficult to date, and often the archaeological evidence is very subtle and fragile. Sears Point is a unique area and it holds an enormous amount of information about past lifestyles. However, we cannot learn from it if it is not kept in good condition. Please help us in our research by being gentle as you appreciate what Sears Point has to offer.
<urn:uuid:f50efe40-71dc-41a7-a8d9-8aa6a84a5467>
CC-MAIN-2013-20
http://www.blm.gov/az/st/en/prog/cultural/sears.print.html
2013-05-22T15:22:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.9716
519
BPMInstitute.org defines Business Process Management (BPM) as the deliberative, collaborative, and increasingly technology-aided definition, improvement and management of a firm’s end-to-end enterprise business processes in order to achieve three outcomes crucial to a performance-based, customer-driven firm: 1) clarity on strategic direction, 2) alignment of the firm’s resources, and 3) increased discipline in daily operations. BPM 101 is the first course of the BPM curriculum. It provides an overview of BPM as both a management discipline and as a set of enabling technologies, and establishes the foundation for the courses that follow. The course teaches the student the key concepts, terms, methodologies, techniques, and technologies in BPM. It describes what a process is, what process modeling, analysis and design are, and what process management is. It provides an overview of the tools and technologies used to support the BPM discipline including process modeling tools and a BPM platform known as a Business Process Management Suite. Students will learn about the practices and the technologies that are making “process thinking” a new approach to solving business problems and continuously improving organizational performance.
<urn:uuid:ff5fe0c6-3763-4e54-9297-5c00c8241cc4>
CC-MAIN-2013-20
http://www.bpminstitute.org/training/on-demand/bpm-101
2013-05-22T15:07:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.909033
287
Henry III, (born Oct. 28, 1017—died Oct. 5, 1056, Pfalz Bodfeld, near Goslar, Saxony [Germany]), duke of Bavaria (as Henry VI, 1027–41), duke of Swabia (as Henry I, 1038–45), German king (from 1039), and Holy Roman emperor (1046–56), a member of the Salian dynasty. The last emperor able to dominate the papacy, he was a powerful advocate of the Cluniac reform movement that sought to purify the Western church. Youth and marriage Henry was the son of the emperor Conrad II and Gisela of Swabia. He was more thoroughly trained for his office than almost any other crown prince before or after. With the emperor’s approval, Gisela had taken charge of his upbringing, and she saw to it that he was educated by a number of tutors and acquired an interest in literature. In 1036 Henry married Gunhilda (Kunigunde), the young daughter of King Canute of England, Denmark, and Sweden. Because her father had died shortly before, the union with this frail and ailing girl brought with it no political advantages. She died in 1038, and the emperor Conrad died the following year. His 22-year-old successor as German king resembled him in appearance. From his mother Henry inherited much, especially her strong inclination to piety and church services. His accession to the throne, unlike that of his two predecessors, did not lead to civic unrest, but his reign was burdensome from the beginning. Probably over questions of principle, the self-willed emperor quarrelled with the aging Gisela during her last years. He devoted his energies above all to the contemporary movement to bring an end to war among Christian princes, although his own policies were not always pacific. In possession of the duchies of Franconia, Bavaria, Swabia, and Carinthia, he had attempted to carry on his father’s policy of supremacy in the east and, in fact, attained sovereignty over Bohemia and Moravia. It may have been at this time that Henry, prematurely believing he had reached the zenith of his power, displayed openly, as if it were a matter of governmental policy, his leanings toward the clerical-reform party. Intending to re-create a theocratic age like that of Charlemagne, he failed to realize that this could be done only as long as the papacy was powerless. Still a childless widower, he married Agnes, the daughter of William V of Aquitaine and Poitou, in 1043. The match must have been intended primarily to cement peace in the west and to assure imperial sovereignty over Burgundy and Italy, and Agnes’ total devotion to the church reform advocated by the Cluniac monasteries probably confirmed Henry in his decision to take her for his wife. In November 1050 she bore him a son, who later became the emperor Henry IV. There followed another boy, Conrad, and three daughters. What Henry still lacked was the highest honour—his coronation as emperor at the hands of the pope.
<urn:uuid:8ea4875c-c4f3-4cde-8e39-7831a7cc2960>
CC-MAIN-2013-20
http://www.britannica.com/EBchecked/topic/261624/Henry-III
2013-05-22T15:16:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.987183
698
Have you had a bit too much sun for your own good? Decades of boating, fishing, hiking, golfing and just plain drowsing on the deck contribute to your lifetime exposure and risk of developing skin cancer. But there are simple steps you can take now to reduce your risk and catch worrisome skin blemishes before they turn into a threat – particularly malignant melanoma, the most dangerous form of skin cancer. “The biggest misconception is that there is no turning back once you have accumulated sun damage and sunburn,” says Dr. Daniela Kroshinsky, an assistant professor at Harvard Medical School and medical dermatologist at Massachusetts General Hospital. “But adopting good sun habits prevents accumulation of additional damage that could contribute to the overall risk for melanoma and even more so for non-melanoma skin cancers and pre-cancers.” Add to that a reasonable level of surveillance for suspicious skin blemishes, and you can drastically reduce the chance of getting into the danger zone. Sunlight primarily consists of three wavelengths: UVA, UVB and UVC. Earth’s ozone layer filters the most damaging UVC rays. UVA represents 95 percent of the sun’s energy that reaches Earth’s surface, and penetrates deepest into the skin. UVB carries just 5 percent of the sun’s energy but can still burn the skin’s outermost layer. Men at risk Nearly twice as many men die of melanoma as women. “Older men are the population at greatest at risk for a bad outcome because they do not access care and they don’t look at their skin as much as other groups do,” Dr. Kroshinsky explains. That’s unfortunate, because if it’s caught before it spreads, melanoma is highly treatable. According to the American Cancer Society, the five-year survival rate for people whose melanoma is diagnosed and treated before it spreads is 98 percent. After the cancer spreads, survival plummets to 16 percent. It is fortunate, then, that the skin is the only organ entirely available to inspection. “That’s the beauty of dermatology – the skin is an accessible organ,” Kroshinsky notes. “This isn’t like your heart or lungs, where you have to wait until you have chest pain or develop a cough that won’t go away. It’s an organ that you can look at every day.” Skin’s accessibility also means it is vulnerable to damage from exposure to UV radiation in sunlight. That is why it’s so important, especially after accruing a lifetime of UV exposure, to adopt good sun-protection habits and keep an eye out for suspicious skin blemishes. Types of cancer According to the American Academy of Dermatology, one in five people in the United States will develop skin cancer at some time in their lives. There are three types: Squamous cell cancer begins in the middle layer of the epidermis, affects only its surroundings, but eventually forms a raised patch with a rough surface. Basal cell cancer is associated with the lowermost epidermal layer. The cells invade surrounding tissues, forming a painless bump that later becomes an open ulcer with a hard edge. Malignant melanoma, which accounts for 75 percent of skin cancer deaths, occurs in the pigment-producing cells (melanocytes) in the basal layer of the epidermis or in moles. The cells reproduce uncontrollably and invade distant body sites. Assess your risk Your risk of developing a skin cancer depends primarily on two factors: genetics and sun exposure history. Genes: Do you have red hair, fair skin and blue eyes? Then you are at higher risk than someone with darker skin. 
Do you have a first-degree relative (parent, sibling) who has been diagnosed with melanoma? Do you or others in your family tend to develop a lot of moles and skin blemishes, some of which have turned out to be “atypical” or abnormal in growth, shape, size, or color? These are all risk factors for skin cancer. Exposure: The more sun exposure in your life, the higher your overall risk for skin cancer. In particular, repeated sunburns or blistering sunburns boost your lifetime risk. If your sun-exposure or family history suggests elevated risk, Kroshinsky recommends that you discuss it with your primary care provider. “Talk to your doctor,” she advises. “Ask if, based on moles and amount of sun damage, you should be looked at by a dermatologist.” The risk assessment should include a quick check of your medications. Some can leave you more sensitive to the sun. These include the commonly prescribed fluoroquinolone antibiotics like Cipro and some blood pressure medications. Check your skin The doctor’s role: Once your baseline risk is known, either your primary doctor or a dermatologist can perform the needed skin exam at an appropriate frequency. For people at normal risk, every one to two years might be sufficient. Being at greater risk may warrant more frequent screening. Kroshinsky examines people with atypical moles every six months and people with a history of non-melanoma pre-cancers every three months. Your role: Learn how to identify worrisome skin blemishes. Certain features indicate that a mole should be examined by a doctor. “Look for anything that is new, that looks different from other things on your body, or anything that’s changing, growing or bleeding,” Kroshinsky says. “Also, anything that doesn’t heal in a week or two.” A simple rubric, the ABCDE of malignant melanoma, should guide your self-exams. Make sure to include areas hidden from your view, with the help of a spouse, intimate partner, or friend. These areas include the back, buttocks, and rear thighs; the neck and top of the head; and the soles of the feet and between the toes. Skin checks can save your life. “Most of these cancers, if you catch them early and remove them, they’re cured,” Kroshinsky says. Develop safe sun habits Do not spend extended periods in the sun with your skin exposed and unprotected. The longer you’re exposed, the higher your risk. “We counsel people to avoid prolonged sun exposure between 9 a.m. and 4 p.m.,” Kroshinsky says. “But that’s the peak time when people like to do things.” If you are out, take these steps: 1. Wear a hat with a brim that covers the ears and shades the nose. 2. Always use sunscreen. 3. If it is comfortable, wear long sleeves and pants. Boaters and fisherman beware: Reflected light off the water surface can increase exposure. Use sunscreen properly When you use sunscreen, use it properly. Many people use sunscreen but burn anyway because they did not apply enough, did not replenish it often enough, or applied it after actually becoming exposed to the sun. Here are Kroshinsky’s sunscreen recommendations: When: Apply the sunscreen at least 20 to 30 minutes before you go out in the open sun. What: Use a broad-spectrum sunscreen that protects against UVA as well as UVB rays. Use a sunscreen with an SPF rating of at least 45. How much: Adults should apply 6 teaspoons (1 ounce) on your body and face, or about the volume of a shot glass. Be sure to coat the ears, back of the neck and exposed skin on the head. 
How often: Reapply every hour if you are in the water or sweating heavily. Reapply every two to three hours if not in the water or not sweating. Reapply the sunscreen frequently even if the product you buy is formulated to be sweat- or water-resistant. The ABCDE of melanoma If you spot a mole or skin blemish with one of these characteristics, have it examined by a doctor: Asymmetry: One half does not match the other half. Border irregularity: The edges are ragged, notched, or blurred. Color: The pigmentation is not uniform. Different shades of tan, brown, or black are often present. Dashes of red, white and blue can add to the mottle appearance. Diameter: Melanomas usually are greater than 1/4 inch (6 mm) in diameter, or about the width of a pencil eraser, when diagnosed, but they can be smaller. Evolving: A mole or skin lesion looks different from the rest or is changing in size, shape, or color.
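The ABCDE rubric above is essentially a checklist, and the following Python sketch shows one way to write it down. It is a teaching illustration, not a diagnostic tool, and the example observations are hypothetical.

```python
# Illustrative checklist built from the ABCDE rubric; any mole showing one
# or more of these warning signs should be examined by a doctor.

ABCDE = {
    "Asymmetry": "one half does not match the other",
    "Border irregularity": "edges are ragged, notched, or blurred",
    "Color": "pigmentation is not uniform",
    "Diameter": "greater than about 6 mm (pencil-eraser width)",
    "Evolving": "changing in size, shape, or color",
}

def warning_signs(observations):
    """observations maps each ABCDE key to True (sign present) or False."""
    return [sign for sign, present in observations.items() if present]

# Hypothetical example: a 7 mm mole that has recently changed shape.
example = {sign: False for sign in ABCDE}
example["Diameter"] = True
example["Evolving"] = True

flags = warning_signs(example)
if flags:
    print("Have a doctor examine this mole:", ", ".join(flags))
else:
    print("No ABCDE warning signs noted; continue routine skin checks.")
```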
<urn:uuid:189d734b-af6c-4f05-9139-fd9fdbf61b43>
CC-MAIN-2013-20
http://www.buffalonews.com/apps/pbcs.dll/article?AID=/20120830/LIFE/120839599/1063
2013-05-22T15:01:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.934193
1,860
It may first be helpful to briefly review how a HID gas lamp works. Let’s take a xenon short arc lamp as an example. In a xenon short arc lamp, a ballast will supply the initial electrical current or pulse and ionize the lamp and vaporize the solid material inside, provided there is any. Electrical current will travel through a conductor and eventually form an electrical field inside the bulb’s quartz envelope via the cathode. A xenon short arc lamp, and all HIDs for that matter, does not reach full brightness instantaneously because the gas must become sufficiently excited by the electrical current and release sufficient electrons to produce large amounts of light. The light forms in a small arc between the cathode and the anode at the center of the quartz envelope, and hence ‘short arc’. Here we have the answer to the question, ‘where is the filament?’ The answer is there is none in any conventional sense because a HID bulb does not produce light in the same way an incandescent bulb does. Sometimes customers are also concerned by the state in which their HID lamp arrives in. Before we tackle this set of issues, you should remember two things about gas discharge lamps. First, gas discharge lamps are filled with a type of gas depending on what type of lamp it is (xenon, argon, neon, or krypton) and often additional materials such as sodium (i.e. low/high pressure sodium bulbs), mercury, or metal halides. Second, as stated before, gas discharge bulbs take a few minutes to reach full luminosity. Now, with these two considerations in mind we will move on to customer concerns. One common concern among customers is that when a customer receives his or her gas discharge lamp, particularly high pressure or low pressure sodium lamps, there is a loose solid metal substance rolling around in the glass envelope. This is not a defect of the lamp, and in fact it is how they should come. Before the sodium in a sodium vapor lamp is vaporized, it is in a solid metallic state, this is what you are seeing. Another common concern, particularly pertaining to xenon short arc bulbs, is that upon their arrival the quartz envelope is blackened, leading customers to understandably think that the bulb arrived burned out! This is not the case, what has happened is that when the bulb was tested it was not left in operation long enough to reach full brightness, the blackening results from this. There is also a chance that the blackening resulted from a faulty ballast or related issue, however the blackening does not indicate the bulb has burned out like it does for an incandescent bulb. A xenon short arc bulb that has reached the end of its life will have its envelope bulged out in one direction and will actually be all white inside. As the adage goes, ‘know your enemy,’ and although light bulbs should never be your enemy, sometimes it can feel like they are. So before you receive your HID lamp, make sure you read up on it and know what to expect when you take it out of the box. As always, comment on the blog or call BulbAmerica at 1888-505-2111 for any questions concerning your light bulb or lighting needs.
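Because several of the concerns above come down to the fact that HID lamps brighten gradually, a toy model may help. The exponential warm-up curve and the one-minute time constant in this Python sketch are assumptions chosen for illustration, not manufacturer data.

```python
# Toy warm-up model: the lamp is assumed to approach full brightness
# exponentially after the arc strikes. Real lamps vary by type, fill
# gas and ballast, so treat these numbers as purely illustrative.
import math

TIME_CONSTANT_S = 60.0   # assumed warm-up time constant, in seconds

def relative_brightness(seconds_after_strike):
    return 1.0 - math.exp(-seconds_after_strike / TIME_CONSTANT_S)

for t in (0, 30, 60, 120, 300):
    print(f"{t:3d} s after strike: {relative_brightness(t):6.1%} of full output")
```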
<urn:uuid:a9b2d660-e0fb-43a7-a21f-44cf3f08e6fe>
CC-MAIN-2013-20
http://www.bulbamerica.com/kbase/High_Low_Pressure_Sodium_Lamps_and_Xenon_Short_Arc_Lamps_A_Guide_for_the_Perplexed.html
2013-05-22T15:28:20Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.95709
676
It was not the British who named Myanmar Burma. The once British colony has always been called Burma in English and bama or myanma in Burmese. The best explanation of the difference between bama and myanma is to be found in the old Hobson Jobson Dictionary, which despite its rather unorthodox name remains a very useful source of information: "The name (Burma) is taken from Mranma, the national name of the Burmese people, which they themselves generally pronounce Bamma, unless speaking formally and empathically." Both names have been used interchangeably throughout history, with Burma being the more colloquial name and Myanmar a more formal designation, somewhat similar to Muang Thai and Prathet Thai in Thai. If Burma meant only the central plains and Myanmar the Burmese and all the other nationalities, how could there be, according to the Myanmar Language Commission, a "Myanmar language"? I have at home their latest Myanmar English Dictionary (1993), which also mentions a "Myanmar alphabet". Clearly, Burma and Myanmar (and Burmese and Myanmar) mean exactly the same thing, and it cannot be argued that the term "Myanmar" includes any more people within the present union than the name "Burma" does. But the confusion is an old one and when the Burmese independence movement was established in the 1930s, there was a debate among the young nationalists as to what name should be used for the country: bama or myanma. The nationalists decided to call their movement the Dohbama Asiayone ("Our Burma Association") instead of the Dohmyanma Asiayone. The reason, they said, was that "since the dohbama was set up, the nationalists always paid attention to the unity of all the nationalities of the country ... and the thakins (Burmese nationalists) noted that myanma meant only the part of the country where the myanma people lived. This was the name given by the Burmese kings to their country. Bamanaingngan is not the country where only the myanma people live. Many different nationalities live in this country, such as the Kachins, Karens, Kayahs, Chins, Pa-Os, Palaungs, Mons, Myamars, Rakhines and Shans. Therefore, the nationalists did not use the term myanmanaingngan but bama naingngan. That would be the correct term. All nationalities who live in Bamanaingngan are called bama." Thus, the movement became the Dohbama Asiayone and not the Dohmyanma Asiayone ("A Brief History of the Dohbama Asiayone", an official government publication published in Burmese in Rangoon in 1976). The Burmese edition of the Guardian monthly, another official publication, concluded in February 1971: "The word myanma signifies only the myanmars whereas bama embraces all indigenous nationalities." In 1989, however, the present government decided that the opposite was true, and it is that view which many foreigners keep on repeating. The sad truth is that there is no term in Burmese or in any other language which covers both the bama/myanma and the ethnic minorities since no such entity existed before the arrival of the British. Burma with its present boundaries is a creation of the British, and successive governments of independent Burma have inherited a chaotic entity which is still struggling to find a common identity. But insisting that myanma means the whole country and in some way is a more indigenous term than bama is nonsense. Rangoon or Yangon is another reflection of the same kind of misunderstanding. Rangoon begins with the consonant "ra gaut", or "r", not "ya palait" or "y". 
In English texts, Rangoon is therefore a more correct spelling. The problem is that the old "r" sound has died out in most Burmese dialects (although not in Arakanese and Tavoyan, which both have a very distinct r sound: Rrrangoon, almost) and softened to a "y" sound in the same way as "r" often becomes "l" in Thai. The usage of "Yangon" is as childish as if the Thais insisted that Ratchaburi had to be spelt "Latbuli" in English, or Buriram "Bulilam". Further, there is another dimension to the recent "name changes" in Burma. It was not only the names of the country and the capital which were "changed". In the minority areas names also changed, and here it was a real change. A few examples from Shan State: Hsipaw became Thibaw, Hsenwi became Theinli or Thinli, Kengtung became Kyaingtong, Mong Hsube became Maing Shu, Laihka became Laycha, Pangtara became Pindaya and so on. The problem here is that the original names all have a meaning in the Shan language; the "new" names are just Burmanised versions of the same names, with no meaning in any language. This undermines the argument that the changes were made in order to be "more indigenous" rather than merely reflecting the majority Burmans. published with the kind permission of the author May 26: At the 40th State LORC Press Conference, the Information Committee spokesman said: -- "Measures are being taken for the correct use of Burmese expressions. For example, our country is officially called 'Pyidaung-su Myanma Naing-Ngan' and is expressed in English as 'Union of Burma'. 'Burma' sounds like mentioning 'Bama'. In fact, it does not mean the Bama (Burmese nationals), one of the national racial groups of the Union only. It means 'Myanma', all the national racial groups who are resident in the Union such as Kachin, Kayah, Karen, Chin, Mon, Rakhine Bama, and Shan nationals. Therefore, to use 'Burma' is incorrect and 'Myanma' should be used instead. Accordingly, 'Union of Myanma' will be used in the future. Furthermore, measures are being taken for using words such as 'Yangon', 'Pyi', 'Sittwe', 'Mawlamyaing' and 'Pathein' in place of 'Rangoon', 'Prome', 'Akyab', 'Moulmein' and 'Bassein' respectively. These have been told to you, journalists in advance to have first hand knowledge." BURMA PRESS SUMMARY (from the WORKING PEOPLE'S DAILY) Vol. III, No. 5, May 1989, available in electronic form at www.ibiblio.org ("SLORC" was the predecessor of the "SPDC".) The Burma Center Prague is a non-profit and non-governmental organization based in the Czech Republic. Our goal is to restore peace, justice, democracy and human rights in Burma and to support and empower Burmese refugees.
This led researchers on a search for cheaper ways to sequence DNA, with the goal of finding a way to sequence an entire human genome for under $1,000. Multiple companies have been striving toward this goal.

Some of these groups think the way to the $1,000 genome is through nanopores. These structures are tiny protein-based holes, built into a membrane, through which a DNA strand is threaded. As the strand moves through the pore one letter at a time, the electrical conductivity across the pore is read. Because each base of the DNA has a different size and shape, each changes the conductivity of the pore differently, and a sensor on the other side of the membrane can read those changes and identify the DNA sequence.

A company called Oxford Nanopore Technologies is about to release a set of technologies based on this idea. They are called the GridION and MinION systems, and could usher in a next generation of DNA sequencing on the cheap. "The GridION platform is an electronic analysis system that can be tailored for the analysis of DNA, RNA, protein and other analytes. This novel technology has applications across personalized healthcare," the Oxford Nanopore website says.

Being able to quickly and cheaply determine someone's genetic code could be incredibly useful not just in research settings, but in hospital settings as well. A person's genome sequence could be used to identify the causes of rare diseases, especially those which haven't been identified before. However, this will only be useful in a limited number of patients, and even if the genetic culprit of a disease is identified, there's no promise that this information will help find a cure or a treatment for the disease.

Another way that doctors can use this information is to determine one's likelihood of contracting one disease or another. Some companies, for example 23andMe, are currently doing this by scanning a person's genome for telltale markers of disease, but those markers have to be identified first. A full genome scan could potentially shed a lot more light on a person's genetic risk for certain diseases. It could also encourage someone to make lifestyle changes if they are found to carry genes that confer an increased risk of disease when combined with environmental factors.

The genetic information from a full genome could also be useful if you learn that you are a carrier for a genetic disease. If your partner is a carrier too, this could mean you should watch out for the disease in your children, screen embryos for the defect, or get your children tested and treated for the disease early.

Knowledge of certain genetic characteristics will also help doctors tailor medicine to individual patients. We know that certain liver proteins act differently in different people. For example, one person with the more common liver protein variants may have no side effects from a drug, while someone with a less prevalent liver protein profile may react badly to it.
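To make the read-out idea concrete, here is a purely illustrative sketch of translating a conductivity trace into a base sequence. It is not Oxford Nanopore's actual base-calling algorithm: the current levels and the one-sample-per-base assumption are invented for clarity, and real devices deal with noisy, overlapping signals using statistical models.

```python
# Toy illustration of nanopore-style base calling: map each measured current
# sample to the base whose typical level is nearest to it.
# The current values below are hypothetical, chosen only for this example.

TYPICAL_CURRENT_PA = {"A": 50.0, "C": 42.0, "G": 35.0, "T": 58.0}

def call_bases(current_trace_pa):
    """Assign each current sample to the base with the nearest typical level."""
    calls = []
    for sample in current_trace_pa:
        base = min(TYPICAL_CURRENT_PA, key=lambda b: abs(TYPICAL_CURRENT_PA[b] - sample))
        calls.append(base)
    return "".join(calls)

if __name__ == "__main__":
    # A noisy, made-up trace: one sample per base for simplicity.
    trace = [49.2, 57.1, 35.8, 41.0, 50.9]
    print(call_bases(trace))  # -> "ATGCA"
```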
ATSC (Advanced Television Systems Committee)

What is ATSC? ATSC is a set of standards developed by the Advanced Television Systems Committee for digital television transmission over terrestrial, cable and satellite networks.

The ATSC digital television standard is composed of four separate layers with clear interfaces between them. The top layer is the image layer, which defines the form of the images, including the pixel array, aspect ratio and frame rate. Below it is the image compression layer, which uses the MPEG-2 compression standard. Next is the system multiplexing layer, in which the compressed data are packed into separate packets, again following MPEG-2. Finally, the transport layer determines the modulation and channel-coding scheme used to transmit the data.

For terrestrial broadcasting, the 8-VSB transmission mode developed by Zenith achieves a transfer rate of 19.3 Mb/s in a 6 MHz terrestrial broadcast channel. The standard also includes a higher-data-rate 16-VSB mode suitable for cable television systems, which achieves 38.6 Mb/s in a 6 MHz cable channel. Both modes share the same general data transport; the top two layers then determine how that transport is configured for a specific service, such as HDTV or SDTV.

The ATSC standard also specifies the image formats it supports: 18 in total (6 HDTV and 12 SDTV), of which 14 use progressive scanning. Among the 6 HDTV formats, the 1920 × 1080 format does not fit within a 6 MHz channel at 60 frames per second with progressive scanning, so interlaced scanning is used instead at that rate. The SDTV 640 × 480 format matches the computer VGA format, to ensure compatibility with computers. Of the 12 SDTV formats, 9 use progressive scanning, leaving 3 interlaced formats to accommodate existing interlaced video systems.

In addition, ATSC has developed and adopted variants of the standard for countries that use a 50 Hz frame rate. The HDTV pixel arrays are the same, but the frame rates are 25 Hz and 50 Hz; for SDTV the vertical resolution is 576 lines while the horizontal resolution varies, and a 352 × 288 format is also included to suit display in a reduced-size window.
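The quoted transfer rates can be reproduced approximately from the transport-layer parameters. The following back-of-the-envelope sketch assumes the commonly cited 8-VSB figures (10.762237 Msymbols/s, 832-symbol data segments, 313-segment fields with one field-sync segment, one 188-byte MPEG-2 transport packet per data segment); treat it as an illustration of how the numbers fit together rather than a normative statement of the A/53 standard, and note that it lands on the often-quoted 19.39 and 38.78 Mb/s, close to the rounded 19.3 and 38.6 Mb/s above.

```python
# Back-of-the-envelope payload rate for ATSC 8-VSB terrestrial transmission.
# Assumed parameters (commonly cited; verify against the standard before
# relying on them):
#   - symbol rate of 10.762237 Msymbols/s in a 6 MHz channel
#   - 832 symbols per data segment (4 of them segment sync)
#   - 313 segments per field, one of them field sync
#   - each data segment carries one 188-byte MPEG-2 transport packet
#     (187 bytes on air plus the reinserted sync byte; the rest of the
#     segment capacity is Reed-Solomon parity and trellis overhead)

SYMBOL_RATE = 10.762237e6      # symbols per second
SYMBOLS_PER_SEGMENT = 832
SEGMENTS_PER_FIELD = 313
DATA_SEGMENTS_PER_FIELD = 312  # one segment per field is field sync
TS_PACKET_BYTES = 188

segments_per_second = SYMBOL_RATE / SYMBOLS_PER_SEGMENT
data_segments_per_second = segments_per_second * DATA_SEGMENTS_PER_FIELD / SEGMENTS_PER_FIELD
payload_8vsb = data_segments_per_second * TS_PACKET_BYTES * 8  # bits per second

# 16-VSB (cable mode) carries twice the data per symbol and no trellis
# coding, so each segment holds two transport packets: roughly double.
payload_16vsb = 2 * payload_8vsb

print(f"8-VSB  payload ~ {payload_8vsb / 1e6:.2f} Mb/s")   # ~19.39 Mb/s
print(f"16-VSB payload ~ {payload_16vsb / 1e6:.2f} Mb/s")  # ~38.78 Mb/s
```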
Popular belief has it that obesity only affects wealthier societies where food is plentiful: the curse of the developed world epitomized by hulking Americans who struggle to order their king-size Big Mac, French fries and Coke without breaking sweat.

Obesity is no longer exclusive to the developed world

The reality is very different. Obesity and its associated diseases - diabetes, hypertension and kidney diseases - respect neither wealth nor class and strike instead into the heart of every society where there is easy access to convenience food, low physical activity and ubiquitous advertisements for sugar-, fat- and salt-rich food. Heart disease, stroke, cancer and other chronic diseases associated with poor diet and low exercise have now made serious inroads into the lives of people in poor and middle-income nations. In total, these accounted for 80% (28 million) of deaths from chronic illness in 2005, according to the World Health Organisation (WHO), which fears that a further 388 million people will die from such illnesses over the next ten years.

Photo by Malias

Across South East Asia, chronic diseases also take a heavy toll, accounting for 54% of all deaths during 2005. The situation in Thailand is particularly serious, says the WHO, which estimates that the proportion of obese 5-to-12-year-olds increased from 12.2% to 15.6% in just two years. Obesity is generally associated with older age groups, but has yet to permeate poorer areas, where the price of the convenience food associated with the epidemic is prohibitive.

China, too, has an emerging epidemic with one or two pockets of high incidence. Overall, obesity levels range from under 5% to almost 20% in some areas, according to regional surveys conducted during 2003. Most concerning, however, is the high prevalence among the young. In Wuhan, 8.9% of 10-to-12-year-olds were classified as obese by the study. Some areas, such as Beijing, also suggest that there is a gender dimension to the epidemic. In the capital more than 10% of 10-to-12-year-old boys were obese - more than three times the rate for girls in the same study.

Responsibilities are divided

The existence of a genetic predisposition to obesity would provide a straightforward explanation for the world's growing stock of rotund individuals, but the precise causes of obesity are multiple. Changing diets have clearly contributed to the development of the pandemic, driven by the move towards food processing that relies heavily on high injections of sugar and salt. Recent research by The Thai Health Promotion Foundation, for example, found that more than 90% of its sample of 700 pre-packed foods contained excessive levels of sugar, fat and salt - a cocktail that can lead to diabetes and hypertension as well as obesity.

Choice, of course, enables informed individuals to avoid (or moderate their consumption of) foods that are known to have damaging health effects, but poor labeling, the study suggests, does not help in the decision-making process. Just one third of the sample in Thailand, for example, managed to provide adequate nutritional information on the packaging or list ingredients. Where available, say researchers, labels also tended to use small fonts and present information in a way that is difficult to understand. At least part of the blame, therefore, lies with the food industry itself.

Photo by Malingering

Children are most at risk

For now, young Thais have refrained from overindulgence in burgers and chips on account of taste.
But tastes are changing and so is the food industry. Pizza Hut (aka Pizza Company in Thailand) has already rewritten its menu to include a Tum Yum Kung (spicy prawn soup) variety. Western convenience food, which contains three or four times more fat, sugar and salt than healthier local Thai snacks, is now thought to pose one of the greatest dangers to a country of "snackers."

Catering to oriental taste in order to boost market share is only one dimension of the corporate weaponry. Intensive marketing activity mostly targets children, and changing cultural values mean that a visit to see Ronald McDonald has become a symbol of growing affluence and status. The price of a Big Mac in Bangkok (the equivalent of USD 1.5 or 60 baht) may cover the food costs of one meal for a family of four, but younger Thais are prepared to splash out on junk food if it means impressing friends - especially girlfriends. Similar trends are noted throughout many of China's larger central and eastern metropolises. Shopping malls in Cambodia also house fashionable western eateries that only the privileged can afford.

Obesity ought not to be a problem affecting children, but cases as young as three are not exceptional. And for those who then become obese adults, the risks (particularly in developing countries) have alarming potential: an increasing susceptibility to illness coupled with reliance on fragile health care systems that may not be able to offer or afford treatment. In China, there is only a very basic social safety net and hospitals are run like profit-making concerns: only those who can afford treatment receive it.

Child obesity is expected to soar worldwide, according to the International Journal of Pediatric Obesity, and could start to erode health gains in many countries. Both morbidity and cases of premature death are expected to rise over the next decade, costing the economies of China, India and Russia billions of dollars, according to the WHO. China alone will lose $558 billion of its national income over the next 10 years due to heart disease, stroke and diabetes. And other important Asian economies - Thailand, Malaysia, Indonesia and others - are fast reaching western levels of development and consumption.

Photo by Robad0b

An incomplete response

Political will and increased public awareness will decide whether obesity is here to stay or go, according to Prof. Philip James, the chair of the London-based International Obesity Task Force (IOTF). "It is noticeable," he says, "that the public and Ministers readily accept the problem of obesity in adults ... then often and very conveniently blame the individual for their predicament rather than questioning whether their obesity reflects the impact of deliberate policy and industrial developments over the last few decades."

While the political elite ponder their next move, a coalition of five international non-governmental organisations (NGOs) - known as the Global Prevention Alliance - has already pledged new action worldwide to combat obesity-driven chronic diseases. Obesity, the alliance says, ranks alongside HIV/AIDS in terms of importance and impact. "Cutting death rates alone will not be enough," according to Prof. James, adding that "No health system or economy can afford the cost of spiraling cases of chronic disease.
The only way to address this is to recognize the need to revolutionise our approach to delivering healthier diets and reducing consumption of the foods high in fats, sugar and salt."

Obesity is a new challenge for countries like China, which suffered a major famine in 1961, endured routine food shortages until the mid-70s and received food aid from the World Food Programme until 2005. But a solution is not out of reach. As many as 80% of cases of premature heart disease, stroke and type-2 diabetes could be prevented by a healthy diet, according to the WHO. All that is missing is the political will to legislate, educate and take on the powerful food industry.

Homepage photo by Afdn

The Author: Roger Tatoud holds a Ph.D. in Cell and Molecular Biology. He has worked in North Africa as a teacher and in Europe as a scientist, as a volunteer fundraiser for an HIV/AIDS organisation and, most recently, as programme coordinator for a project that tackles insulin resistance and obesity.
But the Francophone militia in Lower Canada still existed on paper and continued to meet. In 1828, at the request of the governor-in-chief, some men were even able to obtain uniforms. Even more striking, in Dorchester County, Beauce, a Francophone horse company dressed in grey with black collar and cuffs, armed by the government, pursued deserters as a police force would. But none of this activity could hide a deep malaise. In reality, the French Canadians were seriously questioning the values of the militia. Control over this institution, which in the past had been so central to its interests and so dear to its heart, was being lost. In the end, French Canadians turned away from an organization that no longer represented them. Because they were being assimilated and humiliated, they would isolate themselves socially in order to keep their identity and to truly belong only to the institutions they could control: their Church and their political parties. The militia, and more generally the very idea of military service, became a matter "for others" from then on, their only concern being to defend their immediate territory. In 1830, the French-Canadian militia organization, although it continued to subsist, was virtually wiped out. This situation, aggravated by a political landscape resembling a minefield, encouraged the rebellions of 1837 and 1838.
Herpes simplex virus (HSV) infections are caused by 2 types of HSV: type 1 (HSV-1) and type 2 (HSV-2). Most cases of HSV infection are caused by HSV-2. Most persons with HSV-1 or HSV-2 infection have no or only minimal signs or symptoms. When signs do occur, they usually appear as one or more small blisters or sores on or around the mouth, lips, nose, face, genitals, and buttocks. HSV infections are very contagious and are spread by direct contact with the skin lesions. Herpes labialis is a common disease caused by infection of the mouth area with HSV-1. Most persons in the United States are infected with HSV-1 by age 20 years. The blisters or sores associated with HSV infection may take 2 to 4 weeks to heal completely the first time they occur. Typically, another outbreak can appear weeks or months later at the same site, but it almost always is less severe and of shorter duration than the initial outbreak. Although HSV can remain in the body indefinitely, the number of recurrent HSV infections tends to decrease over a period of years. Herpes simplex virus type 1 Although lesions caused by HSV-1 infection typically appear on the mouth, lips, nose, and face, they can develop anywhere on the skin. For example, a 25-year-old woman sought evaluation of an itchy lesion on her neck, shown in Figure 1. Almost a year earlier, a similar lesion had occurred at the identical site after she had worked in the yard. Her physician had suspected rhus dermatitis at the time and had prescribed a topical corticosteroid; the lesion had resolved. Figure 1 – An itchy, vesicular lesion was caused by an unusual case of herpes simplex virus type 1 infection. (Photo courtesy of David L. Kaplan, MD. Overview adapted from Dermclinic in Consultant. 2008;48:673-680.) A diagnosis of HSV infection was made on the basis of the vesicular lesion depicted here; the lesion recurred at the same site, and the physician initiated antiviral therapy. A culture was positive for HSV-1. Herpes simplex virus type 2 Persons with HSV-2 infection are often unaware of their illness for several reasons. They may have become infected with HSV-2 during sexual contact with a person with a genital HSV-2 infection who does not have visible sores and may not know that he or she is infected. In addition, they may never have signs and symptoms or may have very mild signs that either they do not notice or they mistake for such skin conditions as insect bites. However, if signs and symptoms occur during the initial outbreak, they may be pronounced and may include sores, fever, and swollen glands. Figure 2 shows a pruritic eruption that was confined for several days to the buttocks of a 73-year-old woman. She was otherwise healthy. She was widowed and had had no sexual contacts for many years. Antiviral therapy was started because the woman’s physician suspected HSV infection. A culture confirmed the diagnosis of HSV-2 infection. Additional information from the history revealed that the woman had been exposed to HSV-2 infection many years ago. Figure 2 – This pruritic eruption on the buttocks of an elderly woman is an example of herpes simplex type 2 infection. (Photo courtesy of David L. Kaplan, MD. Overview adapted from Dermclinic in Consultant. 2008;48:673-680.) Figure 3 depicts a tender eruption that was present on a 28-year-old woman’s posterior right thigh for 3 days. She had no history of a similar eruption. She was otherwise healthy but had had seasonal allergies as a child. 
Recently, she had started to use new brands of both soap and shaving cream. Figure 3 – A tender blister on the posterior thigh of a 28-year-old woman is characteristic of herpes simplex virus type 2 infection. (Photo courtesy of David L. Kaplan, MD. Overview adapted from Dermclinic in Consultant. 2008;48:833-840.) A culture identified HSV-2. Treatment with oral antivirals was started. The patient received counseling on being evaluated for other sexually transmitted infections as well as on the ramifications of genital HSV-2 infections. Although initial herpes labialis may not cause symptoms or mouth ulcers, the virus remains in the nerve tissue of the face and may reactivate, producing recurrent cold sores, usually at the same site. Symptoms of primary herpes labialis may include a prodrome of fever, followed by a sore throat and mouth and submandibular or cervical lymphadenopathy. In children, gingivostomatitis and odynophagia are also observed. Painful vesicles develop on the lips, gingiva, palate, or tongue and are often associated with erythema and edema. The lesions ulcerate and heal within 2 to 3 weeks. A classic manifestation of herpes labialis is shown in Figure 4. After returning from a skiing trip to Colorado, a 24-year-old woman sought medical attention for an eruption of sudden onset on her lip. She also had a sore throat and low-grade fever. She was otherwise healthy, and her only medication was an oral contraceptive. Figure 4 – These painful vesicles that erupted suddenly and were accompanied by low-grade fever and a sore throat are characteristic of herpes labialis. (Photo courtesy of David L. Kaplan, MD. Overview adapted from Dermclinic in Consultant. 2008;48:1022-1028.)
History of horticulture of Australian native plants (from a 1956 booklet, transferred to the web via optical character recognition (OCR). Nindethana Nursery was a significant step in the history of Australian native plants in Australia's horticultural industry. The Nindethana dream eventually became Burrendong Arboretum.)

In 1931 the whole countryside of the Central Western Slopes of N.S.W. was oozing with moisture from the abnormal rains. It seemed the ideal time to pick the site of our future home, for we fondly hoped that a place that was reasonably dry in 1931 would always remain so. Little did we dream that the freakish vagaries of the weather gods would bring the 65 inches of rain in 1950 and the 12 inches in seven days in February, 1955. During such prolonged downpours there are no dry spots. The solid clay underlying most of this country at such times turns to a slurry and will often bubble up through the tight crust of earth above it overnight. Even the bitumen roads do not escape and in many cases where the clay is deepest the pressure beneath makes the road bulge alarmingly. Then, as soon as the skin, so to speak, of the road is broken by traffic, whole patches of roadway collapse.

Our rainfall is 23 inches annually, spread fairly evenly over the twelve months with some heavy storms during the summer months and steady soaking winter rain in a normal year. We have, however, periodic drought years when the rainfall drops to well below the 23-inch average. The driest year on record was 1944, with 9.65 inches of rain, and the wettest (until 1950) was 34 inches in 1916. During the summer months the temperature frequently rises above 100 deg. Fahr., though seldom above 105 deg. Fahr. In 1939 we had one reading of 114 deg. Fahr. During the winter there are frequent frosts when the temperature falls below 32 deg. Fahr. Most winters we have heavy frosts, but snow is rare. In 56 years we have had only three major falls, when the snow persisted in drifts and sheltered places for some days. Our coldest days in 1955 were in July, when ground readings of 19 and 20 degrees were registered. The lowest reading we have had at Nindethana was 18 degrees.

You will note from the great variation in temperature that plants must be tough to survive. The bulk of Australian plants are just that. Some may be finicky as to soil requirements, but it is remarkable the number of species which will thrive in any one area, though they may originally have been native to such widely varying habitats as Kosciusko or North Queensland. An amazing number of species from coastal areas and semi-tropical climates will grow and thrive in this climate provided we have the foresight in the first place to establish a wind-tight break of trees and tall shrubs foliaged to ground level. It is not so much the heavy frost that damages plants as the uninterrupted cold air flow. We have all remarked at various times on the cold bands of air that are met with in winter at various places in a short journey of, say, two miles. These are usually not wide and, side by side, are warmer air bands where the frostage will not be nearly so severe. Now if we plant a two or three row break of trees across the path of the air drift—and this usually is from south, south-west and west—we can make ALL the land in a garden as warm as the warmest part was before the break was planted. This is often quite 5 degrees warmer and means the difference between success and failure with certain borderline plants.
We have found that using this method it is quite possible to grow the W.A. flowering gum (Euc. ficifolia), Jacaranda, Macadamia and many other frost-tender plants quite successfully, whereas before we had a decent wind break these species succumbed to frost damage year after year. Even when they over-wintered in a mild winter there would inevitably come that frost with the little extra "bite" which meant finish to many plants—even quite large flowering gums up to 4ft. tall.

Whilst on the subject of frost damage, here is a little hint for those who have no breakwind (or for that matter where there is one but the frosts are particularly severe): Where a plant has a stem of from a foot or more high, take newspaper that has been dampened and wrap it tightly around the stem, tying firmly at the top and base. Spread the bottom portion out several inches from the base, placing earth on this so as to prevent cold air from percolating upwards. At least eight thicknesses of paper will be necessary. No cover will be needed on the top portion. If the sap stream can be kept warm the flow of sap will not be interrupted and damage to the top will be either obviated altogether or else be very slight.

I have known gardeners who year after year go on trying to grow lovely trees such as our native Fire Wheel Tree (Stenocarpus sinuatus) and Jacaranda, only to have them reduced to a blackened stump from which the spring will bring with new hope fresh shoots, which will often make a growth of four or five feet during the summer, only to meet a like fate when winter comes.

But we overstep the mark, and the story of Nindethana is only just beginning. It is the year 1931 and, after much deliberation, we finally chose the site of our home on a gentle slope near the head of the valley of Burrill, looking north to where "out yonder in the distance the Blue Range curves and sweeps along the gleaming skyline to where the river sleeps". There the massive ramparts of the Dickerton Range and Black Mountain rise up steeply from the usually placid Macquarie River—rise up a thousand feet from the river level into a series of tree-clad peaks, with here and there sheer cliff faces which are home and haven for plants of many genera.

Looking northward in winter from Nindethana over the intervening miles, quite often may be seen a sea of dazzling white. No matter how often seen, it always has the effect of plucking at the heart strings: Nindethana, set like a gem stone on a coronet above the white sea of the fog which blankets the lower levels and, in the far distance, the blue tops of the cloud-islanded peaks beyond the river. To the east about a mile away rise the Sugar Loaf and the Gigmullarie Hills, heavily mineralised country where much gold has been won. To the west rise the rugged andesite hills, chief of which are the Cobler, covered top to bottom with White Pine (Callitris glauca), and the Lapstone. When the sun is very low and the shadows of evening descending, the hills to the east are laved with the last rays of sunlight, but it is always to "the deep sea of the pine" that the roving gaze returns. For the pine hills speak of peace and beauty as no others can. They bring back memories of the long, long days of youth when we rode at racing pace along the narrow, winding paths on the steep, rock-strewn hillsides. Flashing kaleidoscopes of youth and the ways of youth, of its dreams and plans and, above all, the tangy scent of the pine wood.
Or, after a sudden short summer downpour, the sun striking through the pines caught and held the thousand jewels of the clinging raindrops. Each one in turn became a sapphire or a ruby, now flashing with opalescent hues, until a sudden wind gust tumbled the jewelled treasury to earth and oblivion. Southward, a gently rising slope culminates in the mass of Bald Hill, where the lava flow from Canobolas volcano stopped in the dawn years of our continent.

This, then, was the site we picked, the house to be built on a narrow piece of deep light soil with the typical yellowish gritty clay underlay. On the eastern side conglomerate limestone cropped up and on the western side ironstone and slate. The whole 25 acres surrounding the homesite was bare and windswept and no more than six trees of Euc. albens (White Box) were standing on the area. Of these two had to come down to make way for the necessary buildings, but the remainder live on and have acquired hundreds of neighbours culled from the length and breadth of this fair land.

IN WHICH A NAME IS GIVEN AND AN IDEA IS BORN.

In a district renowned for its lovely liquid sounding native names it seemed natural for us to pick a native name. So Nindethana—meaning "ours"—was chosen to reign amongst such as Burrendong, Wuuluman and Morungulan. From 1932 some native plants, such as Hakea laurina and Acacia podaliriaefolia, were planted. They throve exceedingly in the genial climate and, as the years marched slowly by on silent feet, more and more native plants were given the right to air their graces.

It was during these formative years that the idea slowly began to take shape in my mind of a great national garden to house all of our hardy native species, wherein would grow not just one or two of each species, but great clumps or groves of 20 or 25 of each. America has its gigantic Arnold Arboretum, the home now of most of the hardy plants of the Northern Hemisphere, but none had attempted in Australia so far, on a national scale, the bringing together in one great garden of all our own hardy plants.

The Australian flora is unique and in the main it is very beautiful. As befits the world's oldest land mass, we find here some plants which reach back to the beginnings of plant life. We have plants which have no living relatives in any other country of the world. There are many hundreds of species of plants which are very near to extinction and some, alas, which have gone forever. In the future they will be known only from meagre dried specimens in the herbaria of Australian capital cities and overseas. What a sad reflection on our trusteeship of this great land. In the past it has been a case of apathy in governmental and semi-governmental places and also in 90 per cent of cases of the general public.

Something had to be done about this state of affairs and, although at the beginning it was a one-man plan with but one shoulder to the wheel, as the years passed more and more helpers and workers have been found. Although it was not intended that Nindethana should become the site of the National Arboretum, it was intended—and has become so in fact—that it should be a proving ground or a pilot plantation for the larger project. The stocking of Nindethana Arboretum has meant many thousands of miles of travel, a deal of correspondence and the gathering together in loose liaison of the grandest body of people in the world—the native plant enthusiasts of Australia, N.Z., U.S.A. and elsewhere.
It has not been a lonely searching and probing, but has been invariably enriched by the comradeship of my wife and my brother. To them no trail was too arduous, no day too long if the end held the promise of a new flower—and frequently the long day's end brought but the deadly tiredness induced by miles of heat and the choking dust or, alternatively, the clinging mud of the inland trails.

If I might paraphrase something that Louis Bromfield once wrote in "Pleasant Valley" and apply it to Australia and Australians: "Those first pioneers and their descendants had passed over the surface of the land like a plague of locusts, mining and destroying the land as they went." That is not to say that all the actions of the pioneers were harmful. Much that these (in many cases) splendid people did was right and necessary. There was, however, uppermost in their minds the urge to clear the land, to destroy the accretion, the life blood of a thousand thousand years of building. That has been accomplished and, in some cases, is still under way with no thought of the future and of the "Sword of Damocles" that is poised ever above their unheeding heads.

The process of rapine and destruction has gone steadily on for a century and a half, but it will not be so very long before Australia will find herself like U.S.A., with no more broad acres to pilfer of the natural treasures; no more rivers to silt up; no more horizons to surmount. Science has done much over the last decade to rectify some of the mistakes of the past, but even to-day, on the other hand, we find that science is aiding and abetting the work of destruction. By the simple process of adding this trace element or that to the virgin mallee or sand plain and waste lands, it has been found that valuable grazing land results. The point to be stressed is that vital ground cover is being destroyed holus-bolus with no thought to the treasures that have been built up within those plants from time immemorial. If only the authorities would learn from past mistakes and set aside strips of the mallee and sand plain where the shrub life could hold undisputed sway, we would have a reserve or nucleus of this unique flora which could be used for future expansion if necessary. The Soil Conservation Service is doing yeoman service on country already ravaged, but their voice and the voices of us who understand to some slight extent the complexities of the situation are but tiny sounds in the wilderness.

TRIALS AND TRIBULATIONS.

When one has known the glory of our inland country and has seen and reverenced the scintillating beauty of the flowers, the stateliness of the trees and the utilitarian value of plants without number, the desire to preserve some at least of this heritage burns as a steady flame within. So many people think that Sydney is N.S.W. and the narrow fringe of well-watered coast is Australia. They are an integral part, but what a tiny part only time will show. To me it seems that any great National Garden must be situated on the Western side of the Dividing Range, for no other part of Australia can grow so many genera of our plants and grow them so well. Whilst Nindethana has some of the best attributes needed for a pilot plantation, it is not altogether ideal, lacking as it does an area of sand. However, beggars cannot be choosers and so, bit by bit, the Arboretum has been built up. This has been made possible by the great band of splendid helpers in all States, N.Z. and even in overseas countries.
Verily this is a snowball that is ever growing. Often great things spring from tiny beginnings, and so it is with the finding and propagating of many a rare and imperfectly known native plant. In many cases the chain of events begins like this: a more observant person in some isolated spot may see an unusual flower and so one day there is a small package in the post addressed to Nindethana. Sometimes it is a plant rare in those parts but common in others. If that is so, no harm is done and curiosity is satisfied. On the other hand the specimen may be something very rare indeed. Then identification is followed by a request for seed, cuttings or plants. Then from being merely a thoughtful observer our friend becomes in some cases a very enthusiastic addict to the lore of the native plants. None of these people to my mind are ever thanked adequately, but we can only hope that the joy of seeing a rare plant going into many gardens and to sanctuaries may be a greater reward than mere words.

Do not run away with the idea that all help has been forthcoming from the country areas. There are the hundreds of plant lovers in the cities who have made available the floral treasures they have amassed during a lifetime of plant love. All of these and many another have contributed to the varied collection of plants in Nindethana Arboretum. In particular I want to pay tribute to my wife and my brother Peter. Without their zeal and help little could have been accomplished, and often, when on the verge of giving up, their steadfast help acted as a spur to fresh endeavour. We have passed through many ordeals of drought, flood and fire and each has set us back a little, setbacks that acted as a whip urging all to greater effort. There have been many journeyings into the wastelands to North, South, East and West and when some gem has been snatched from the brink of oblivion, the glow of happiness is as a tonic and an urge to tired limbs and weary hearts.

In 1944, with but 965 points of rain in the twelve months, many plants in the infant arboretum were lost, as well as many in the nursery from the use of mineral-impregnated water. On 1st January, 1950, we had a record hail storm which practically destroyed the nursery. Heavy and continuous rain continued all that year so that at the end we had amassed 65 inches—nearly three times our annual average rain. The continuous rain defeated all efforts to propagate plants successfully. For the next two years little was done to re-build the losses owing to my own illness. Down into the valley of the shadow and the slow emergence occupied a lot of valuable time. Hundreds of good friends have assisted in adding to the numbers of plants in the Arboretum, so that even with the floods of February-March, 1955, and February-March, 1956, we have over 2,000 species in various stages of growth.

THE VISION SPLENDID.

This is a great land — a land of limitless possibilities of which we have merely scratched the surface. The future of Australia is linked with the development of the inland; and when I say development, I mean the wise use of the land and all that grows thereon. Water is the key to the development of this country, and none will deny that, though the bulk of the inland in normal times is poorly watered, with the storage of even a portion of that liquid gold which yearly runs to waste we will have sufficient for most needs. Even were this not so, with the great scientific advances of the past few years, what is to prevent us bringing sea water inland?
The extraction of salt, chemicals and minerals and the use of the residue for irrigation would bring vast population and prosperity to the dry centre. A sane monetary system which will make that which is physically possible financially possible, coupled with the peaceful use of atomic energy, will make these dreams come true.

Much of our dry country flora has almost disappeared under the locust-like advance of civilisation. It is up to us of the present generation to see that ALL of the living species of the inland plants are preserved for posterity. A modest beginning has been made at Nindethana, at the new National Arboretum at Dubbo, and on a more grandiose scale at Glen Morgan in Queensland. In the years to come we may well gaze over the wide expanse of these Arboreta and see much of our wonderful flora growing in happy unison. The glory of the Western Australian sand plain flora and that of her meagre upland country was not made to vanish under the wrecking hand of what we are pleased to term progress.

It is not only the W.A. wild flower areas which are clothed in floral beauty. South Australia, the dry Centralian uplands, the old inland sea fringe of Queensland, New South Wales and Victoria, the Victorian Grampians, the high alpine areas, the coastal sandstone as well as the tropical northland each add their quota of grace, quaintness and scintillating loveliness.

Whilst ever flowers are grown we shall each sing the praises of some floral gem, but to me one picture is stamped indelibly on the mind. It is that of the Eremophila species of Western Queensland. These flowers of the inland are seen there in all their bewildering variety. Seven or eight species grow to perfection in the 100 miles between Cunnamulla and Thargomindah. Firstly there are miles of E. maculata and E. glabra, and wherever this combination occurred, fantastic hybrids were the rule. The influence of E. glabra had the effect in this area of breaking up the usual crimson flowers of E. maculata into a myriad colours. There were clear yellows progressing to deep orange and onward through a bewildering array of colours, from pale pink to the deepest of reds. These are plants mainly of the sometimes flooded hollows between the ridges. On the hard ridges great drifts of E. latrobei, with its dazzling scarlet or cerise pendant flowers, made splashes of glorious colour. On the sand hills, gravel ridges and deep sandy loam three species were associated: E. bowmani, often tall, with powder-white, densely hirsute leaves and stems and large bells from palest blue to deepest purple; and E. gilesi and E. goodwini, lower growing but massed with blue flowers, which often covered the ground for miles. Near here also we saw large areas of E. duttoni with fine large orange-scarlet flowers and with floral bracts flared out like dancers' skirts.

To bring these and a thousand others together in loose liaison is a vision to grip and hold the imagination. In all the to-morrows, in Nindethana, in Glen Morgan and the Dubbo National Arboretum, those who come after us will go to study or just to feast on the peerless beauty and the everlasting variety of our flora. We may confidently hope that many thousands of species will grow in these Arboreta in happy unison in great groves and clumps. From all the far-flung lands of the earth, students will come to study our plants, just as they do those of the old world in the Arnold Arboretum in U.S.A. What Nindethana began, it may well be that Dubbo and perhaps Glen Morgan will bring to triumphant conclusion.
June 14, 1956. G. W. Althofer
From The Collaborative International Dictionary of English v.0.48:

Vireo \Vir"e*o\, n. [L., a species of bird.] (Zool.) Any one of numerous species of American singing birds belonging to Vireo and allied genera of the family Vireonidae. In many of the species the back is greenish, or olive-colored. Called also greenlet. [1913 Webster]

Note: In the Eastern United States the most common species are the white-eyed vireo (Vireo Noveboracensis), the red-eyed vireo (Vireo olivaceus), the blue-headed, or solitary, vireo (Vireo solitarius), the warbling vireo (Vireo gilvus), and the yellow-throated vireo (Vireo flavifrons). All these are noted for the sweetness of their songs. [1913 Webster]