Dataset columns:
text: stringlengths (150 to 542k)
id: stringlengths (47 to 47)
dump: stringclasses (1 value)
url: stringlengths (15 to 499)
file_path: stringlengths (138 to 138)
language: stringclasses (1 value)
language_score: float64 (0.65 to 1)
token_count: int64 (39 to 159k)
score: float64 (2.52 to 5.03)
int_score: int64 (3 to 5)
Young goats learn new and distinctive bleating "accents" once they begin to socialise with other kids. The discovery is a surprise because the sounds most mammals make were thought to be too primitive to allow subtle variations to emerge or be learned. The only known exceptions are humans, bats and cetaceans – although many birds, including songbirds, parrots and hummingbirds, have legendary song-learning or mimicry abilities. Now, goats have joined the club. "It's the first ungulate to show evidence of this," says Alan McElligott of Queen Mary, University of London.

McElligott and his colleague Elodie Briefer made the discovery using 23 newborn kids. To reduce the effect of genetics, all were born to the same father but to several mothers, so the kids were a mixture of full siblings and their half-brothers and sisters. The researchers allowed the kids to stay close to their mothers and recorded their bleats at the age of 1 week. Then the 23 kids were split randomly into four separate "gangs" of five to seven animals. When all the kids reached 5 weeks, their bleats were recorded again. "We had about 10 to 15 calls per kid to analyse," says McElligott.

Some of the calls sound clearly different to the human ear, but the full analysis picked out more subtle variations, based on 23 acoustic parameters. What emerged was that each kid gang had developed its own distinctive patois. "It probably helps with group cohesion," says McElligott. "People presumed this didn't exist in most mammals, but hopefully now they'll check it out in others. It wouldn't surprise me if it's found in other ungulates and mammals."

Erich Jarvis of Duke University Medical Center in Durham, North Carolina, says the results fit with an idea he has developed with colleague Gustavo Arriaga, arguing that vocal learning is a feature of many species. "I would call this an example of limited vocal learning," says Jarvis.
"It involves small modifications to innately specified learning, as opposed to complex vocal learning which would involve imitation of entirely novel sounds." Journal reference: Animal Behaviour, DOI: 10.1016/j.anbehav.2012.01.020 If you would like to reuse any content from New Scientist, either in print or online, please contact the syndication department first for permission. New Scientist does not own rights to photos, but there are a variety of licensing options available for use of articles and graphics we own the copyright to. Have your say Only subscribers may leave comments on this article. Please log in. Only personal subscribers may leave comments on this article
<urn:uuid:072e317e-2d2a-4c8e-97c1-335b8f03bdb2>
CC-MAIN-2013-20
http://www.newscientist.com/article/dn21481-young-goats-can-develop-distinct-accents.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.972495
569
3.578125
4
Evolution can fall well short of perfection. Claire Ainsworth and Michael Le Page assess where life has gone spectacularly wrong.

THE ascent of Mount Everest's 8848 metres without bottled oxygen in 1978 suggests that human lungs are pretty impressive organs. But that achievement pales in comparison with the feat of the griffon vulture that set the record for the highest known bird flight in 1975, when it was sucked into the engine of a plane flying at 11,264 metres. Birds can fly so high partly because of the way their lungs work. Air flows through bird lungs in one direction only, pumped through by interlinked air sacs on either side. This gives them numerous advantages over lungs like our own. In mammals' two-way lungs, not as much fresh air reaches the deepest parts of the lungs, and incoming air is diluted by the oxygen-poor air that remains after ...
<urn:uuid:ad635de7-8a5e-4c98-be53-8c463594f176>
CC-MAIN-2013-20
http://www.newscientist.com/article/mg19526161.800-evolutions-greatest-mistakes.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.948871
207
3.28125
3
Walter Bagehot (February 3, 1826 – March 24, 1877) was a British journalist, political analyst and economist, famous for his analysis of the British Parliament and the money market. Under his leadership The Economist became one of the world's leading business and political journals. Bagehot recognized that economics is not just a matter of the external, material aspects of financial transactions, but also involves the internal aspects of people's desires, motivations, and personality. Thus, he always emphasized social issues in his writings, and endeavored to make the workings of government transparent to the public. Bagehot had an original and insightful mind, recognizing that the character of leaders was often more important than their political affiliation or beliefs. His work has continued to inform and inspire debate, contributing to our understanding of the functioning of human society and its improvement. Walter Bagehot was born on February 3, 1826, in Langport, Somerset, England, the son of a local banker. He attended University College London, where he earned a master's degree in mathematics in 1848. He studied law and was called to the Bar, but decided not to practice, instead joining his father in the banking business at Stuckey & Co. in the west of England. While still working as a banker, Bagehot started to write, first for various periodicals and then for The National Review, of which he soon became editor. In 1857, he met James Wilson, founder and editor of The Economist, a political and financial weekly newsmagazine. Bagehot married Wilson's daughter in 1858. In 1860, Bagehot succeeded his father-in-law as editor of The Economist. After taking over, he expanded the publication's reporting on the United States and on politics, and is considered to have increased its influence among policymakers.
Bagehot became influential in both politics and economics; his friends included the statesmen George Cornewall Lewis and Grant Duff, Lord Carnarvon, Prime Minister William Ewart Gladstone, and the governor and directors of the Bank of England. Bagehot made several attempts to be elected as a Member of Parliament, but without success. He remained at the head of The Economist for the rest of his life. He died suddenly on March 24, 1877, at his home in Langport, Somerset, England, at the age of 51. Bagehot was a person with a wide variety of interests, writing on economics, politics, law, literature, and more. He remains most famous, however, for his three books: The English Constitution (1867), Physics and Politics (1872), and Lombard Street (1873). In addition to these volumes, he commanded substantial influence through his editorship of The Economist. The English Constitution In 1867, Bagehot wrote The English Constitution, which explored the constitution of the United Kingdom, specifically the functioning of the British Parliament and the British monarchy, and the contrasts between British and American government. Bagehot revealed how Parliament operated as if "behind a curtain," hidden from public knowledge. He divided the constitution into two components: - The Dignified: the symbolic side of the constitution, and - The Efficient: the real face of the constitution, the way things actually work and get done. Instead of describing the constitution from the point of view of the law, as a lawyer would, Bagehot focused on its practical implications, as experienced by the common man. The book soon became widely popular, bringing Bagehot worldwide fame. He criticized the American presidential system, claiming that it lacked flexibility and accountability. While in the English Parliament real debates took place, after which changes could follow, debates in the American Congress had no power, since the President made the final decision.
In Bagehot's view, a parliamentary system educates the public, while a presidential system corrupts it (The English Constitution, 1867). He also criticized the way American presidents are chosen, saying: Under a presidential constitution the preliminary caucuses that choose the president need not care as to the ultimate fitness of the man they choose. They are solely concerned with his attractiveness as a candidate. (The English Constitution, 1867) Physics and Politics Bagehot wrote Physics and Politics in 1872, in which he tried to apply the principles of evolution to human societies. The subtitle of the book reads: Thoughts on the Application of the Principles of "Natural Selection" and "Inheritance" to Political Society. The book represented a pioneering effort to relate the natural and the social sciences. Bagehot explained the functioning of the market, and how it affects people's behavior. For example, he believed that people tend to invest money when the mood of the market is positive, and refrain from doing so when it turns negative. In this book Bagehot also reflected on the psychology of politics, especially the personality of a leader. He stressed two things as essential for leadership: the personality of a leader and his motivation. Bagehot believed that motivation played a key role in good leadership, and that the personality of a leader often counted for more than the policy he endorsed: "It is the life of teachers which is catching, not their tenets." (Physics and Politics, 1872) Bagehot claimed that the personal example of the leader sets the tone for the whole governance. That is why "character issues" are so important for any government, and they still play an important role in judging candidates for leadership positions in today's world. Bagehot coined the expression "the cake of custom," denoting the set of customs in which any society is rooted.
Bagehot believed that customs develop and evolve throughout human history, with the best-organized groups overthrowing the poorly organized ones. In this sense Bagehot's views are a clear example of cultural selection, closer to Lamarckian than Darwinian evolution. The central problem in his book was to understand why Europeans could break away from tradition and "the cake of custom" and instead focus on progress and novelty. He saw tradition as important in keeping societies cohesive, but also believed that diversity was essential for progress: The great difficulty which history records is not that of the first step, but that of the second step. What is most evident is not the difficulty of getting a fixed law, but getting out of a fixed law; not of cementing (as upon a former occasion I phrased it) a cake of custom, but of breaking the cake of custom; not of making the first preservative habit, but of breaking through it, and reaching something better. (Physics and Politics, 1872) Lombard Street In his famous Lombard Street (1873), Bagehot explained the theory behind the banking system, using insights from the English money market. As with his analysis of the English constitution six years earlier, Bagehot described the English banking system through the eyes of an ordinary person, as experienced in everyday life. Bagehot showed that the English money system relied solely on the central bank, the Bank of England, and warned that the whole reserve was held in the central bank, under no effectual penalty of failure. He proposed several ideas on how to improve that system. Bagehot's work can be closely associated with the English historicist tradition. He did not directly oppose Classical economics, but advocated its reorganization. He claimed that economics needed to incorporate more factors in its theory, such as cultural and social ones, in order to theorize more accurately about economic processes.
Bagehot was one of the first to study the relationship between the physical and social sciences from a sociological perspective. In his contributions to sociological theory through historical studies, Bagehot may be compared to his contemporary Henry Maine. He also developed a distinct theory of central banking, many points of which continue to be valued. With his analysis of the English and United States political systems in The English Constitution, Bagehot influenced Woodrow Wilson to write his Congressional Government. In honor of his achievements and his work as its editor, The Economist named its weekly column on British politics after him. Every year the British Political Studies Association awards the Walter Bagehot Prize for the best dissertation in the field of government and public administration. - Bagehot, Walter. 1848. Review of Mill's Principles of Political Economy. Prospective Review, 4(16), 460-502. - Bagehot, Walter. 1858. Estimates of Some Englishmen and Scotchmen. London: Chapman and Hall. - Bagehot, Walter. 1875. A New Standard of Value. The Economist, November 20. - Bagehot, Walter. 1879. Literary Studies. London: Longmans, Green and Co. - Bagehot, Walter. 1998. (original 1880). Economic Studies. Augustus M Kelley Pubs. ISBN 0678008523 - Bagehot, Walter. 2001. (original 1867). The English Constitution. Oxford University Press. ISBN 0192839756 - Bagehot, Walter. 2001. (original 1873). Lombard Street: A description of the money market. Adamant Media Corporation. ISBN 140210006X - Bagehot, Walter. 2001. (original 1877). Some Articles on the Depreciation of Silver and on Topics Connected with It. Adamant Media Corporation. ISBN 140216288X - Bagehot, Walter. 2001. (original 1889). The Works of Walter Bagehot. Adamant Media Corporation. ISBN 1421254530 - Bagehot, Walter. 2006. (original 1881). Biographical Studies. Kessinger Publishing. ISBN 1428608400 - Bagehot, Walter. 2006. (original 1872). Physics and Politics. Dodo Press. 
ISBN 1406504408 - Bagehot, Walter. 2006. (original 1885). The Postulates of English Political Economy. Cosimo. ISBN 1596053771 - Barrington, Russell. 1914. Life of Walter Bagehot. Longmans, Green and Co. - Buchan, Alastair. 1960. The Spare Chancellor: The Life of Walter Bagehot. Michigan State University Press. ISBN 087013051X - Cousin, John William. 1910. A Short Biographical Dictionary of English Literature. New York: E.P. Dutton. - Morgan, Forrest. 1995. The Works of Walter Bagehot. Routledge. ISBN 0415131545 - Orel, Harold. 1984. Victorian Literary Critics: George Henry Lewes, Walter Bagehot, Richard Holt Hutton, Leslie Stephen, Andrew Lang, George Saintsbury, and Edmund Gosse. Palgrave Macmillan. ISBN 0312843046 - Sisson, C. H. 1972. The Case of Walter Bagehot. Faber and Faber Ltd. ISBN 0571095011 - Stevas, Norman. 1959. Walter Bagehot: A Study of His Life and Thought Together with a Selection from His Political Writings. Indiana University Press. - Sullivan, Harry R. 1975. Walter Bagehot. Twayne Publishers. ISBN 0805710183 All links retrieved December 6, 2012. - Bagehot and the Age of Discussion – Commentary on Bagehot's Physics and Politics - Major Works – Some full-text works of Walter Bagehot - Quotations from Walter Bagehot - Walter Bagehot – Biography - Works by Walter Bagehot. Project Gutenberg New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), and may be used and disseminated with proper attribution, crediting both the New World Encyclopedia contributors and the volunteer contributors of the Wikimedia Foundation. 
<urn:uuid:cc871262-6e2e-4e4f-a909-b3fd012a101a>
CC-MAIN-2013-20
http://www.newworldencyclopedia.org/entry/Walter_Bagehot
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.928199
2,559
2.921875
3
Collins Field Guide to the Birds of South America: Non-Passerines: From rheas to woodpeckers This superbly illustrated guide is the only field guide to illustrate and describe every non-passerine species of bird in South America (divers to woodpeckers). All plumages for each species are illustrated, including males, females and juveniles. Featuring 1,273 species, the text gives information on key identification features, habitat, and songs and calls. The 156 colour plates appear opposite their relevant text for quick and easy reference and include all field-identifiable species, including subspecies and colour morphs. Distribution maps show where each species can be found and how common it is, to further aid identification.
<urn:uuid:a872a921-c1ed-43ba-a4ce-88984cfb94e0>
CC-MAIN-2013-20
http://www.nhbs.com/collins_field_guide_to_the_birds_of_south_tefno_131101.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.861126
193
3.046875
3
The best gifts are handmade. Make this craft together and give it as a gift to a parent, grandparent, or your child's classmates. This craft is best suited for parents to make on their own or with minimal help from kids. You'll need to allow extra time for glue or paint to dry. Create with us skills involve self-expression, experimentation, and imagination through visual arts (like painting and sculpting), dramatic play, cooking, and dance. Read with us skills focus on early literacy and include: listening, comprehension, speech, reading, writing, vocabulary, letters and their sounds, and spelling.
<urn:uuid:6b5734e7-31f7-4aea-89ae-21a28cd854fc>
CC-MAIN-2013-20
http://www.nickjr.com/crafts/bubble-guppies-halloween-cards.jhtml?path=/crafts/all-shows/seasonal/all-ages/index.jhtml
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.947864
128
3.1875
3
In January 1968, Nixon decided to once again seek the nomination of the Republican Party for president. Portraying himself as a figure of stability in a time of national upheaval, Nixon promised a return to traditional values and "law and order." He fended off challenges from other candidates such as California Governor Ronald Reagan, New York Governor Nelson Rockefeller, and Michigan Governor George Romney to secure the nomination at the Republican convention in Miami. Nixon unexpectedly chose Governor Spiro Agnew of Maryland as his running mate. Nixon's campaign was helped by the tumult within the Democratic Party in 1968. Consumed by the war in Vietnam, President Lyndon B. Johnson announced on March 31 that he would not seek re-election. On June 5, immediately after winning the California primaries, former attorney general and then-U.S. Senator Robert F. Kennedy (brother of the late president John F. Kennedy) was assassinated in Los Angeles. The campaign of Vice President Hubert Humphrey, the Democratic nominee for president, went into a tailspin after the Democratic national convention in Chicago was marred by mass protests and violence. By contrast, Nixon appeared to represent a calmer society, and his campaign promised peace at home and abroad. Despite a late surge by Humphrey, Nixon won by nearly 500,000 popular votes. Third-party candidate George Wallace, the once and future governor of Alabama, won nearly ten million popular votes and 46 electoral votes, principally in the Deep South. Once in office, Nixon and his staff faced the problem of how to end the Vietnam War, which had broken his predecessor's administration and threatened to cause major unrest at home. 
As protesters in America's cities called for an immediate withdrawal from Southeast Asia, Nixon made a nationally televised address on November 3, 1969, calling on the "silent majority" of Americans to renew their confidence in the American government and back his policy of seeking a negotiated peace in Vietnam. Earlier that year, Nixon and his Defense Secretary Melvin Laird had unveiled the policy of "Vietnamization," which entailed reducing American troop levels in Vietnam and transferring the burden of fighting to South Vietnam; accordingly, U.S. troop strength in Vietnam fell from 543,000 in April 1969 to zero on March 29, 1973. Nevertheless, the Nixon administration was harshly criticized for its use of American military force in Cambodia and its stepped-up bombing raids during the later years of the first term. Nixon's foreign policy aimed to reduce international tensions by forging new links with old rivals. In February 1972, Nixon traveled to Beijing, Hangzhou, and Shanghai in China for talks with Chinese leaders Chairman Mao Zedong and Premier Zhou Enlai. Nixon's trip was the first high-level contact between the United States and the People's Republic of China in more than twenty years, and it ushered in a new era of relations between Washington and Beijing. Several weeks later, in May 1972, Nixon visited Moscow for a summit meeting with Leonid Brezhnev, general secretary of the Communist Party of the Soviet Union, and other Soviet leaders. Their talks led to the signing of the Strategic Arms Limitation Treaty, the first comprehensive and detailed nuclear weapons limitation pact between the two superpowers. Foreign policy initiatives represented only one aspect of Nixon's presidency during his first term. In August 1969, Nixon proposed the Family Assistance Plan, a welfare reform that would have guaranteed an income to all Americans. The plan, however, did not receive congressional approval. 
In August 1971, spurred by high inflation rates, Nixon imposed wage and price controls in an effort to gain control of price levels in the U.S. economy; at the same time, prompted by worries over the soundness of U.S. currency, Nixon took the dollar off the gold standard and let it float against other countries' currencies. On July 20, 1969, astronauts Neil Armstrong and Buzz Aldrin became the first humans to walk on the Moon, while fellow astronaut Michael Collins orbited in the Apollo 11 command module. Nixon made what has been termed the longest-distance telephone call ever made to speak with the astronauts from the Oval Office. And on September 28, 1971, Nixon signed legislation abolishing the military draft. In addition to such weighty affairs of state, Nixon's first term was also full of lighter-hearted moments. On April 29, 1969, Nixon awarded the Presidential Medal of Freedom, the nation's highest civilian honor, to Duke Ellington, and then led hundreds of guests in singing "Happy Birthday" to the famed band leader. On June 12, 1971, Tricia became the sixteenth White House bride when she and Edward Finch Cox of New York married in the Rose Garden. (Julie had wed Dwight David Eisenhower II, grandson of President Eisenhower, on December 22, 1968, in New York's Marble Collegiate Church, while her father was President-elect.) Perhaps most famous was Nixon's meeting with Elvis Presley on December 21, 1970, when the president and the king discussed the drug problem facing American youth. Re-election, Second Term, and Watergate In his 1972 bid for re-election, Nixon defeated South Dakota Senator George McGovern, the Democratic candidate for president, by one of the widest electoral margins ever, winning 520 electoral college votes to McGovern's 17 and nearly 61 percent of the popular vote. Just a few months later, investigations and public controversy over the Watergate scandal had sapped Nixon's popularity. 
The Watergate scandal began with the June 1972 discovery of a break-in at the Democratic National Committee offices in the Watergate office complex in Washington, D.C., but media and official investigations soon revealed a broader pattern of abuse of power by the Nixon administration, leading to his resignation. The Watergate burglars were soon linked to officials of the Committee to Re-elect the President, the group that had run Nixon's 1972 re-election campaign. Soon thereafter, several administration officials resigned; some, including former attorney general John Mitchell, were later convicted of offenses connected with the break-in and other crimes and went to jail. Nixon denied any personal involvement with the Watergate burglary, but the courts forced him to yield tape recordings of conversations between the president and his advisers indicating that the president had, in fact, participated in the cover-up, including an attempt to use the Central Intelligence Agency to divert the FBI's investigation into the break-in. (For more information about Watergate, please visit the Ford Presidential Library and Museum's online Watergate exhibit.) Investigations into Watergate also revealed other abuses of power, including numerous warrantless wiretaps on reporters and others, campaign "dirty tricks," and the creation of a "Plumbers" unit within the White House. The Plumbers, formed in response to the leaking of the Pentagon Papers to news organizations by former Pentagon official Daniel Ellsberg, broke into the office of Ellsberg's psychiatrist. Adding to Nixon's worries was an investigation into Vice President Agnew's ties to several campaign contributors. The Department of Justice found that Agnew had taken bribes from Maryland construction firms, leading to Agnew's resigning in October 1973 and his entering a plea of no contest to income tax evasion. Nixon nominated Gerald Ford, Republican leader in the House of Representatives, to succeed Agnew. 
Ford was confirmed by both houses of Congress and took office on December 6, 1973. Such controversies all but overshadowed Nixon's other initiatives in his second term, such as the signing of the Paris peace accords ending American involvement in the Vietnam war in January 1973; two summit meetings with Brezhnev, in June 1973 in Washington and in June and July 1974 in Moscow; and the administration's efforts to secure a general peace in the Middle East following the Yom Kippur War of 1973. The revelations from the Watergate tapes, combined with actions such as Nixon's firing of Watergate special prosecutor Archibald Cox, badly eroded the president's standing with the public and Congress. Facing certain impeachment and removal from office, Nixon announced his decision to resign in a national televised address on the evening of August 8, 1974. He resigned effective at noon the next day, August 9, 1974. Vice President Ford then became president of the United States. On September 8, 1974, Ford pardoned Nixon for "all offenses against the United States" which Nixon "has committed or may have committed or taken part in" during his presidency. In response, Nixon issued a statement in which he said that he regretted "not acting more decisively and forthrightly in dealing with Watergate."
<urn:uuid:83e3cd95-9b04-47c5-bec6-208cac680d84>
CC-MAIN-2013-20
http://www.nixonlibrary.gov/thelife/apolitician/thepresident/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.972684
1,729
3.65625
4
An annular pancreas is a ring of pancreatic tissue that encircles the duodenum (the first part of the small intestine). Normally, the pancreas sits next to, but does not surround, the duodenum. Annular pancreas is a congenital defect, which means it is present at birth. Symptoms occur when the ring of pancreas squeezes and narrows the small intestine so that food cannot pass easily or at all. Newborns may have symptoms of complete blockage of the intestine. However, up to half of people with this condition do not have symptoms until adulthood. There are also cases that are not detected because the symptoms are mild. Conditions that may be associated with annular pancreas include: Newborns may not tolerate feedings. They may spit up more than normal, not drink enough breast milk or formula, and cry. Adult symptoms may include: Surgical bypass of the blocked part of the duodenum is the usual treatment for this disorder. The outcome is usually good with surgery. Adults with an annular pancreas are at increased risk for pancreatic or biliary tract cancer. Call for an appointment with your health care provider if you or your child has any symptoms of annular pancreas. Semrin MG, Russo MA. Anatomy, histology, embryology, and developmental anomalies of the stomach and duodenum. In: Feldman M, Friedman LS, Brandt LJ, eds. Sleisenger & Fordtran's Gastrointestinal and Liver Disease. 9th ed. Philadelphia, Pa: Saunders Elsevier; 2010:chap 45. Updated by: David C. Dugdale, III, MD, Professor of Medicine, Division of General Medicine, Department of Medicine, University of Washington School of Medicine. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M. Health Solutions, Ebix, Inc. 
<urn:uuid:c685bad5-da59-433c-992d-96246edc9d74>
CC-MAIN-2013-20
http://www.nlm.nih.gov/medlineplus/ency/article/001142.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.917334
510
2.90625
3
Coastal Clash: Defining Public Property and the History of the Public Trust Doctrine
"Coastal Clash" is a one-hour documentary focusing on the urbanization of California's coastline. The activities and lesson plans for the film "Coastal Clash" target students at the high school level and align with the California State Standards for Government. In this lesson plan, students will do research and group work related to the concept of the Public Trust Doctrine.

Enhancing Modern Languages Teaching: Student Participation and Motivation

The Icarus Syndrome: A history of American hubris
The Icarus Syndrome tells a tale as old as the Greeks – a story about the seductions of success. In conversation with Associate Professor Brendan O'Connor from the US Studies Centre, Peter Beinart portrays three extraordinary generations... (Running Time 60:06)

Oberlin History as American History
This site offers exhibits that tell about the lives and histories of the people of Oberlin, Ohio. The website features the story of an Amistad captive, Oberlin women and the struggle for equality, and the city's cooperative tradition. It also includes city maps and pictures, letters and essays related to the city's founding and development, newspaper articles regarding the Niagara movement, and census data.

Ancient and Medieval Philosophy, Fall 2006
This course will concentrate on major figures and persistent themes in ancient and medieval philosophy. A balance will be sought between scope and depth, the latter ensured by a close reading of selected texts.

Ancient Wisdom and Modern Love, Spring 2007
Built around Plato's Symposium, Shakespeare (including A Midsummer Night's Dream), Catholic writings (including Humanae Vitae), and several movies, this course explores the nature of romance and erotic love. 
We will examine such topics as sexuality, marriage, and procreation with an eye towards how we can be better at being in love. The course generally tries to integrate the analytic approach of philosophy with the imaginative approach of literature. Medicine and Public Health in American History, Fall 2007 This course offers an introduction to differing conceptions of disease, health, and healing throughout American history, the changing role and image of medicine and medical professionals in American life, and the changing social and cultural meanings and entanglements of medical science and practice throughout American history. Creating People Centred Schools: Section Two, School organization: a brief history This provides an overview of organizational styles and the importance of cultures as well as structures in organizational models and change. Welsh history and its sources This unit is a teaching and learning resource for anyone interested in Welsh history. It contains study materials, links to some of the most important institutions that contribute to our understanding of the history of Wales, and a pool of resources that Great Unsolved Mysteries in Canadian History This site includes a collection of nine historical mysteries which draw students into Canadian history, critical thinking and archival research through the enticement of solving historical cold crimes. Each of the mystery archives includes an average of 100,000 words in English (and in French), as well as up to several hundred images plus maps. Some of the mystery websites also include 3-D recreations, videos and oral history interviews. Site users can look at the collections of archival materia He who destroyes a good Booke, kills reason itselfe: an exhibition of books which have survived Fire In 1955, Robert Vosper of the University of Kansas Libraries put together what would become an internationally recognized exhibit of materials that have been banned and/or censored. 
This catalog of the exhibit explains why each item was of concern in its time, and includes images of many. Works date from the 1500s to mid-1950s. Research Guide for Doing Undergraduate History A website designed to help undergraduates use internet (and printed) resources in researching and writing history papers at a more sophisticated level than the traditional term paper based on secondary materials. History of Migraine and Risk of Pregnancy Induced Hypertension This peer reviewed article studies the relationship between women who have a history of migraine headaches in relation to developing preeclampsia or gestrational hypertension during pregnancy. The study included 172 women with preeclampsia and 254 with gestrational hypertension. The control included 505 women with no history of hypertension before pregnancy. The study concluded that women who had a history of migraines may be at a higher risk for developing hypertension during pregnancy. History of Science in Latin American and the Caribbean: A Virtual Archive This site is " a comprehensive database of primary sources on the history of science in Latin America and the Caribbean. The site, launched in January 2010, provides a virtual archive of over 200 primary sources along with introductions based on the latest scholarly findings."According to the site, it "is organized into Topics that are organized approximately chronologically, but each one stands alone. The archive, or database of primary sources, is designed in a modular fashion, so viewers from East Asia in World History This site is designed as a resource site for teachers of world history, world geography, and world cultures. It provides background information and curriculum materials, including primary source documents for students.The material is arranged in 14 topic sections. 
The topics and the historical periods into which they are divided follow the National Standards in World History and the Content Outline for the Advanced Placement Course in World History. Seventeen Moments in Soviet History Begins with the Bolshevik seizure of power in 1917 & ends with the dissolution of the Soviet Union in 1991. It includes the Kronstadt uprising (1921), the death of Lenin (1924), the liquidation of the Kulaks as a class (1929), the year of the Stakhanovite (1936), the end of rationing (1947), the virgin lands campaign (1954), Khrushchev's secret speech (1956), the first cosmonaut (1961), the intervention in Czechoslovakia (1968), & Chernobyl (1986). (NEH) The Mongols in World History A sophisticated web site on the history and impact of the Mongols. Separate pages deal with such topics as the nature of nomadic life, key figures, the Mongol Conquests, and the impact of the Mongols on China and the world. An image gallery and set of historical maps as well as other class materials and readings add to the value of the site. That one of the leading experts on the Mongols, Morris Rosabe, was a consultant gives the site much creditability. A Radically Modern Approach to Introductory Physics Volume 2 This is the second part (chapters 13-24) of a pdf textbook for a one-year introductory physics course. The text was developed out of an alternate beginning physics course at New Mexico Tech designed for students with a strong interest in physics. A broad outline of the text is as follows: Newton's Law of Gravitation; Forces in Relativity; Electromagnetic Forces; Generation of Electromagnetic Fields; Capacitors, Inductors, and Resistors; Measuring the Very Small; Atoms; The Standard Mode; Atomic MacTutor History of Mathematics Archive An award-winning site concerning the history of mathematics. In-depth coverage of numerous people, topics, mathematical curves, and more. Extensively cross-linked; powerful search engine. Rich and growing source of materials. 
This Land is Your Land? This Land is My Land! Mapping the History of Territory Acquisition in the US In this lesson, students will research the many territory acquisitions in United States history and create an annotated map that tells the history of U.S. expansion.
http://www.nottingham.ac.uk/xpert/scoreresults.php?keywords=Harvesting%20history,%20Laxton%20:%20the%20medieval%20village%20that%20survived%20the%20modern%20&start=1240&end=1260
Ice Core Gateway: Vostok Ice Core CO2 Data
The Vostok ice core has a long record of global carbon dioxide concentrations, with variations caused by factors other than photosynthesis and human activity. Ice core data sets from three different authors are available for download. Users can also link to other NOAA paleoclimate projects and information.

Phases of the Moon
This site contains a series of visualizations of the sun, moon, and Earth system and how they relate to the changing face of the moon. Animations are in the form of Java applets, forms for field observation of the moon, and a collection of exercises and PDF versions of background material. There are practice questions and quizzes that discuss the animations.

Planetary Climate Exercise
This MS Word document explains roles for a planetary-climate role-playing exercise dealing with the atmospheres of Venus and the Earth. Roles include experts on coal, carbon dioxide, heat balance, spectroscopy, atmospheric transmission, and the water cycle.

Starting Out With Earth History
This activity asks students to place 6-10 events in Earth history on a timeline, first working in small groups and then as a class. Then, through questions, important points such as how certain events are dated, where humanity fits in, and so forth, can be brought up. The Starting Point website builds a context for the exercise by detailing the learning goals, teaching notes and materials (downloadable), and additional resources.

Japan's Nuclear Policy
Ambassador Ryukichi Imai, a journalist, nuclear engineer, and general manager at Japan Atomic Power Company, was Japanese ambassador to the United Nations Disarmament Conference from 1982 to 1987. In this video segment, Imai explains why he believes that Japan will never embark on a nuclear-weapons program. He also predicts that, while Japan stands alone in its reliance on nuclear energy, rising energy prices, even post-Chernobyl, will revive worldwide interest in nuclear power. In the interview he…

From Mutual Assured Destruction to Star Wars
Caspar Weinberger served as U.S. president Ronald Reagan's secretary of defense from 1981 to 1987. In this video segment, Weinberger explains how deployment of the MX missile stopped the Soviet Union from believing it could successfully launch a first strike, which he feels is "the essence of deterrence." A better alternative to "mutual assured destruction," he argues, is the Strategic Defense Initiative, the Reagan administration's hotly contested proposal to design space-based weapons that could…

Bruce Kent, ordained a Catholic minister in 1958, became general secretary of the Campaign for Nuclear Disarmament (CND) in 1980 and chairman in 1987, the year he resigned from the ministry. In this video segment, he challenges the damaging spin that secretary for defense Lord Michael Heseltine used to undermine CND rather than engage in public debate about nuclear policy. Kent also refutes accusations that CND was in support of "one-sided," full unilateral disarmament. Instead, he argues for…

'City Archives' was written and directed by Richard Foreman, founder and director of the Ontological Hysteric Theater. He serves as the narrator for this work, discussing the power of 'the foreign' and images, talking directly into a microphone in a purposely stilted manner and addressing questions to the viewer. A sort of classroom overpopulated by adults sets the stage for the work. Phrases are written and erased on a blackboard, and women gaze out a window, physically supporting planks of wood…

La Femme a la Cafetiere
An acclaimed theater director brings movement to Cezanne's painting, reproduced in the studio for the camera. Suzushi Hanayagi, a dancer from the Kabuki theater, performs the role of the woman, whose slight, almost imperceptible, facial and body movements, together with mysteriously animated objects and strange apparitions, bring the painting alive. A spoon stirs a cup of coffee without the benefit of human assistance. An off-camera figure manipulates objects. The woman eats green candies.

'Barbara Two,' by Patrick Ireland, features a close-up portrait of a woman's face, with light and shadow playing across it through manipulation of the light source. The woman in the piece is Barbara Novak. No master material exists for this piece, which is two minutes long. Patrick Ireland was the pseudonym of Brian O'Doherty, a funder and critic of video art.

Sydney an der Wupper
'Sydney an der Wupper' is a film featuring the Australian dancer Meryl Tankard. Tankard goes through a day in the city, riding the subway, taking a singing lesson, and bathing at a public house. As the work progresses, it becomes harder and harder to distinguish between fantasy and reality. Tankard's character imagines herself dancing with a man across train platforms and through streets. In one scene the two of them are dancing on a hockey rink, sliding across the icy surface. Tankard climbs la…

'Hall's Crossing' refers to a place in the American West where natural rhythms collide with scenic cruisers and tour buses. 'Hall's Crossing' is an electronic 'see America,' set in a place where natural vistas and cultural myths overlap, a place where the canyon meets the road. Scenes of the Grand Canyon portray both the beauty of the area and its invasion by tourists. The tourists attempt to capture the imagery through the medium of photography. At one point a narrator, Dr. Giselda Benda, speak…

Ellis Island (a work in progress)
'Ellis Island (a work in progress)' is a haunting, reflective piece on Ellis Island and the immigrants who passed through there. Black-and-white, near-static shots of actors and actresses realistically portraying turn-of-the-century immigrants are combined with color shots of a modern-day tour guide conducting a tour of the buildings. Re-creations of the medical examinations the immigrants underwent and the conditions they lived through are filmed in the run-down buildings of Ellis Island before…

Artists Beth B. and Ida Applebroog use videotaped performance combined with figurative drawing and captions to create a disturbing, provocative program about the unthinkable yet prevalent occurrence of child victimization. The script for the program is delivered in brief monologues by a cast of several men and women reading statements from various texts, including the writings of Freud and the testimonies of Josef Mengele's victims. It is then intercut with a boy's voice repeating 'I am not a ba…'

…offers the latest science news. The Thursday class provides new lesson plans and activities based on a current headline story and connects the latest NASA research with instruction. Past topics include Buck Rogers, Watch Out!, Goodbye to the MIR, After three attempts, is La Niña retiring?, and more.

Robotics with the XBC Controller
This course offers a brief history of robotics and a definition of robot and robotics. The course includes an introduction to IC and the XBC, downloading firmware, updating the bitstream, and the IC environment and simulator. It concludes with an activity building a Demo-Bot.

Earth's Magnetic Field
The POETRY website explores solar storms and how they affect us, space weather, and the Northern Lights. A 64-page workbook of hands-on activities examines Earth's magnetosphere. Create a classroom magnetometer. Solve the space science problem of the week.

FoilSim: Basic Aerodynamics Software
This is interactive simulation software that determines the airflow around various shapes of airfoils.

This is a primer on scientific efforts to understand the origin, evolution, and fate of the universe. Among the questions it explores: What types of matter and energy fill the universe? What is the age and shape of the universe? How rapidly is it expanding? The website examines the Big Bang theory, as well as tests and limitations of the theory.

Center for Educational Resources (CERES) Project
This is an extensive library of on-line and interactive K-12 science education materials for teaching astronomy. The site contains both classroom science projects and reference materials.
http://www.nottingham.ac.uk/xpert/scoreresults.php?keywords=Why%20study%20Rudolf&start=9120&end=9140
National Teachers Initiative
The National Teachers Initiative is a project of StoryCorps, the American oral history project. Each month this school year, "Weekend Edition Sunday" will celebrate stories of public school teachers across the country.

Learning Works charter school in California takes an unorthodox approach to getting young people to graduate. Students who had previously dropped out get mentors who help with everything from getting to class on time to staying up late studying. Now, some of those who graduated are helping others. December 25, 2011

Teacher John Hunter invented the World Peace Game to get his elementary students to think about major world issues. He also wanted to teach them compassion and kindness. At least two of his former students are on the path he helped to pave. October 30, 2011

Ayodeji Ogunniyi's family came to the U.S. from Nigeria in 1990. His father worked as a cab driver in Chicago, and he always wanted his son to become a doctor. But while Ogunniyi was studying pre-med in college, his father was murdered on the job. At that point, he says, his life changed course. September 25, 2011

As a middle-school student in the '80s, Lee Buono stayed after school one day to remove the brain and spinal cord from a frog. He did such a good job that his science teacher told him he might be a neurosurgeon someday. That's exactly what Buono did. September 25, 2011

StoryCorps is homing in on lessons about learning with a new project for the academic year called the National Teachers Initiative. It'll feature conversations with teachers across the country: teachers talking to each other, students interviewing the teachers who changed their lives, and more.
http://www.npr.org/series/142967497/national-teachers-initiative
Why separation of powers matters: Is freedom inevitable?

The answer to that question is obvious but essential. Freedom is not inevitable. Historically, freedom is a temporary condition enjoyed by only a fraction of the earth's population. Since freedom is not inevitable (indeed, the opposite is true; freedom is rare), we must ask, "Why are we free when others are not?"

As a nation (and state) of immigrants, we can't claim we are free because of our genetics. Our nation (and state) is blessed with natural resources, but so is Russia. Wealth does not produce freedom.

In America (and in Nevada), we are free because our founders recognized, as Lord Acton stated, that "power tends to corrupt, and absolute power corrupts absolutely," and designed a government with three branches. While these branches each have different functions, they also have the ability to check the power exercised by another branch. To ensure that no person or group would amass too much power, the founders established a government in which the powers to create, implement, and adjudicate laws were separated. Each branch of government is balanced by powers in the other two coequal branches: The President can veto the laws of the Congress; the Congress confirms or rejects the President's appointments and can remove the President from office in exceptional circumstances; and the justices of the Supreme Court, who can overturn unconstitutional laws, are appointed by the President and confirmed by the Senate.

Because we're so used to this system of government, it's easy to forget how important it is to ensuring freedom. Government is needed to secure an individual's right to life, liberty, and property. But those wielding governmental power tend to corruption, which harms the very rights government was created to defend.
But by using the checks and balances contained within three separate branches of government, you have a system in which the tendency of government officials to amass power is checked by other government officials who usually aren't interested in giving up their own power.

That's also why it's so dangerous for one individual to work in two branches of government at the same time. Both the separation of powers and the checks and balances in the system go out the window if one person has authority in two branches of government. Instead of separating power, power is consolidated. Instead of one branch checking another, it could collude with it.

The idea of separating powers is so important that it's explicitly required in Nevada's constitution, in Article 3, Section 1:

The powers of the Government of the State of Nevada shall be divided into three separate departments,-the Legislative,-the Executive and the Judicial; and no persons charged with the exercise of powers properly belonging to one of these departments shall exercise any functions, appertaining to either of the others...

And that's exactly why NPRI's Center for Justice and Constitutional Litigation has sued Mo Denis, the Public Utilities Commission, and the State of Nevada for violating the separation-of-powers clause in Nevada's constitution. Even the smallest encroachment on the separation-of-powers clause opens the door for larger and larger encroachments. Hello, Wendell Williams, Chris Giunchigliani, and Mark Manendo. Once you remove the bright-line standard, it's only a matter of time before incremental "exceptions" render the provision meaningless.

And once you've removed the structural protections against what James Madison called "tyranny," you're left with a system of government that depends entirely on the character of its elected officials to keep it free from corruption and abuse of power. As "power tends to corrupt, and absolute power corrupts absolutely," this is a problem.

Freedom isn't inevitable.
Freedom is rare, and we should do everything in our power to protect the form and structure of our government, including a clear separation-of-powers provision, which has provided us with freedom.
http://www.npri.org/blog/detail/why-separation-of-powers-matters-is-freedom-inevitable
Reading 1: Three Days of Carnage at Gettysburg

(Refer to Map 2 as you read the description of the battle.)

Units of the Union and the Confederate armies met near Gettysburg on June 30, 1863, and each quickly requested reinforcements. The main battle opened on July 1, with early morning attacks by the Confederates on Union troops on McPherson Ridge, west of the town. Though outnumbered, the Union forces held their position. The fighting escalated throughout the day as more soldiers from each army reached the battle area. By 4 p.m., the Union troops were overpowered, and they retreated through the town, where many were quickly captured. The remnants of the Union force fell back to Cemetery Hill and Culp's Hill, south of town. The Southerners failed to pursue their advantage, however, and the Northerners labored long into the night regrouping their men. Throughout the night, both armies moved their men to Gettysburg and took up positions in preparation for the next day.

By the morning of July 2, the main strength of both armies had arrived on the field. Battle lines were drawn up in sweeping arcs similar to a "J," or fishhook, shape. The main portions of both armies were nearly a mile apart on parallel ridges: Union forces on Cemetery Ridge, Confederate forces on Seminary Ridge, to the west. General Robert E. Lee, commanding the Confederate troops, ordered attacks against the Union left and right flanks (the ends of the lines). Starting in late afternoon, Confederate General James Longstreet's attacks on the Union left made progress, but they were checked by Union reinforcements brought to the fighting from the Culp's Hill area and other uncontested parts of the Union battle line. To the north, at the bend and barb of the fishhook (the other flank), Confederate General Richard Ewell launched his attack in the evening as the fighting at the other end of the fishhook was subsiding. Ewell's men seized part of Culp's Hill, but elsewhere they were repulsed.
The day's results were indecisive for both armies.

In the very early morning of July 3, the Union army forced out the Confederates who had successfully taken Culp's Hill the previous evening. Then General Lee, having attacked the ends of the Union line the previous day, decided to assail the Union center. The attack was preceded by a two-hour artillery bombardment of Cemetery Hill and Ridge. For a time, the massed guns of both armies were engaged in a thunderous duel for supremacy. The Union defensive position held. In a final attempt to gain the initiative and win the battle, Lee sent approximately 12,000 soldiers across the mile of open fields that separated the two armies near the Union center. General George Meade, commander of the Union forces, anticipated such a move and had readied his army. The Union lines did not break. Only about half of the Southerners who participated in this action returned to safety. Despite great courage, the attack (sometimes called Pickett's Charge or Longstreet's assault) was repulsed with heavy losses. Crippled by extremely heavy casualties in the three days at Gettysburg, the Confederates could no longer continue the battle, and on July 4 they began to withdraw from Gettysburg.

1. Which army had the advantage after the first day of fighting? What were some reasons for their success? Could they have been even more successful?
2. What was the situation by the evening of July 2?
3. What evidence from the previous day's fighting brought General Lee to decide on the strategy for Pickett's Charge on July 3? What was the result of that assault?
4. Why did General Lee decide to withdraw from Gettysburg?

Reading 1 was adapted from the National Park Service's visitor's guide for Gettysburg National Military Park.
http://www.nps.gov/nr/twhp/wwwlps/lessons/44gettys/44facts1.htm
Biomass Technology Analysis

Conducting full life-cycle assessments for biomass products, including electricity, biodiesel, and ethanol, is important for determining environmental benefits. NREL analysts use a life-cycle inventory modeling package and supporting databases to conduct life-cycle assessments. These tools can be applied on a global, regional, local, or project basis.

Integrated system analyses, technoeconomic analyses, life-cycle assessments (LCAs), and other analysis tools are essential to our research and development efforts. They provide an understanding of the economic, technical, and even global impacts of renewable technologies. These analyses also provide direction, focus, and support to the development and commercialization of various biomass conversion technologies. The economic feasibility and environmental benefits of biomass technologies revealed by these analyses are useful for the government, regulators, and the private sector.

Technoeconomic analyses (TEAs) are performed to determine the potential economic viability of a research process. Evaluating the costs of a given process compared to the current technology can assess the economic feasibility of a project. These analyses can be useful in determining which emerging technologies have the highest potential for near-, mid-, and long-term success. The results of a TEA are also useful in directing research toward areas in which improvements will result in the greatest cost reductions. As the economics of a process are evaluated throughout the life of the project, advancement toward the final goal of commercialization can be measured.
TEAs performed in previous years have determined the technical and economic feasibility of various biomass-based systems, including:

- Direct combustion
- Gasification combined cycle power systems

NREL's analysis capabilities include proficiency with the following software packages:

- ASPEN Plus©: models continuous processes to obtain material and energy balances
- GateCycle™: performs detailed steady-state and off-design analyses of thermal power systems
- Questimate©: performs detailed process plant cost estimates
- MATLAB® and MathCAD®: perform numeric calculations and mathematical solutions
- Crystal Ball®: operates within Microsoft Excel® and incorporates uncertainties in forecasting analysis results

Life-cycle assessment (LCA) is an analytic method for identifying, evaluating, and minimizing the environmental impacts of emissions and resource depletion associated with a specific process. When such an assessment is performed in conjunction with a technoeconomic feasibility study, the total economic and environmental benefits and drawbacks of a process can be quantified. Material and energy balances are used to quantify the emissions, resource depletion, and energy consumption of all processes, including raw material extraction, processing, and final disposal of products and by-products, required to make the process of interest operate. The results of this inventory are then used to evaluate the environmental impacts of the process so efforts can focus on mitigation.

LCA studies have been conducted on the following systems:

- Biomass-fired integrated gasification combined-cycle system using a biomass energy crop
- Pulverized coal boiler representing an average U.S. coal-fired power plant
- Cofiring biomass residue with coal
- Natural gas combined-cycle power plant
- Direct-fired biomass power plant using biomass residue
- Anaerobic digestion of animal waste

Biofuels production technologies:

- Ethanol from corn stover
- Comparison of biodiesel and petroleum diesel used in an urban bus

Hydrogen production technologies:

- Natural gas-hydrogen production

For these analyses, the software package used to track the material and energy flows between the process blocks in each system was Tools for Environmental Analysis and Management (TEAM®).

Learn more about our Biomass capabilities and current projects in this area. Access more information on all of our Staff Analysts.
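At its core, the cost side of a TEA comes down to comparing a candidate process against the incumbent on a levelized-cost basis. The following is a minimal sketch of that kind of calculation, not NREL's modeling toolchain (ASPEN Plus, Crystal Ball, etc.); all plant figures are hypothetical and chosen only to illustrate the arithmetic.

```python
# Toy levelized-cost-of-energy (LCOE) comparison of the kind a TEA performs.
# All numbers below are hypothetical illustration values.

def capital_recovery_factor(rate: float, years: int) -> float:
    """Annualize an up-front capital cost over the plant lifetime."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex: float, opex_per_year: float, annual_kwh: float,
         rate: float = 0.08, years: int = 20) -> float:
    """Levelized cost of energy in $/kWh for a simple single-plant model."""
    annualized_capex = capex * capital_recovery_factor(rate, years)
    return (annualized_capex + opex_per_year) / annual_kwh

# Hypothetical biomass gasification plant vs. an incumbent technology,
# both producing 350 million kWh per year.
biomass = lcoe(capex=120e6, opex_per_year=9e6, annual_kwh=350e6)
incumbent = lcoe(capex=80e6, opex_per_year=12e6, annual_kwh=350e6)
print(f"biomass: {biomass:.3f} $/kWh, incumbent: {incumbent:.3f} $/kWh")
```

A real TEA layers uncertainty on top of point estimates like these, which is what a Monte Carlo add-in such as Crystal Ball® provides: each input becomes a distribution rather than a single number, and the output is a distribution of LCOE values.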
http://www.nrel.gov/analysis/tech_bio_analysis.html
Algorithm Positions Solar Trackers, Movie Stars

March 30, 2011

Math and programming experts at a federal laboratory took an algorithm used to track the stars and rewrote its code to precisely follow the sun, even taking into consideration the vagaries of the occasional leap second. Now, the algorithm and its software are helping solar power manufacturers build more precise trackers, orchards keep their apples spotless, and moviemakers keep the shadows off movie stars.

The Solar Position Algorithm (SPA) was developed at the U.S. Department of Energy's National Renewable Energy Laboratory to calculate the sun's position with an unmatched low uncertainty of +/- 0.0003 degrees at vertex, over the period of years from -2000 to 6000 (2001 B.C. until just short of 4,000 years from now). That's more than 30 times more precise than the uncertainty levels of all other algorithms used in solar energy applications, which claim no better than +/- 0.01 degrees and are valid for a maximum of only 50 years. And those uncertainty claims cannot be validated, because of the need to add an occasional leap second to compensate for the randomly increasing length of the mean solar day. The SPA does account for the leap second.

That difference in uncertainty levels is no small change, because an error of 0.01 degrees at noon can throw calculations off by 2 or 3 percent at sunrise or sunset, said NREL Senior Scientist Ibrahim Reda, the leader on the project. "Every uncertainty of 1 percent in the energy budget is millions of dollars uncertainty for utility companies and bankers," Reda said. "Accuracy is translated into dollars. When you can be more accurate, you save a lot of money."

"Siemens Industry Inc. uses NREL's SPA in its newest and smallest S7-1200 compact controller," says Paul Ruland of Siemens Industry, Inc.
"Siemens took that very complex calculation, systemized it into our code and made a usable function block that its customers can use with their particular technologies to track the sun in the most efficient way. The end result is a 30 percent increase in accuracy compared to other technologies."

Science, Engineering and Math All Add to Breakthroughs

An algorithm is a set of rules for solving a mathematical problem in a finite number of steps, even though those steps can number in the hundreds or thousands. NREL is known more for its solar, wind, and biofuel researchers than for its work in advanced math. But algorithms are key to so many scientific and technological breakthroughs today that a scientist well-versed in the math of algorithms is behind many of NREL's big innovations.

Since SPA was published on NREL's website, more than 4,000 users from around the world have downloaded it. In the European Union, for the past three years, it has been the reference algorithm for calculating the sun's position in both solar energy and atmospheric science applications. It has been licensed to, and downloaded by, major U.S. manufacturers of sun trackers, military equipment, and cell phones. It has been used to boost agriculture and to help forecast the weather. Archaeologists, universities, and religious organizations have employed SPA, as have other national laboratories.

Fewer Dropped Cell-Phone Calls

Billions of cell-phone calls are made each day, and they stay connected only because algorithms help determine exactly when to switch signals from one satellite to another. Cell-phone companies can use the SPA to know the exact moments when the phone, the satellite, and the bothersome sun are in the same alignment, vulnerable to disconnections or lost calls. "The cell phone guys use SPA to know the specific moment to switch to another satellite so you're not disconnected," said Reda, who has a master's degree in electrical engineering/measurement from the University of Colorado.
"Think of how many millions of people would be disconnected if there's too much uncertainty about the sun's position."

From a Tool for Solar Scientists to Widespread Uses

SPA sprang from NREL's need to calibrate solar measuring instruments at its Solar Radiation Research Laboratory. "We characterize the instruments based on the solar angle," Reda said. "It's vital that instruments get a precise read on the amount of energy they are getting from the sun at a precise solar angle." That will become even more critical in the future, when utilities add more energy garnered from the sun to the smart grid. "The smart grid has to know precisely what your budget is for each resource you are using — oil, coal, solar, wind," Reda said.

Making an Astronomy Algorithm One for the Sun

Reda borrowed from the "Astronomical Algorithms," which is based on the Variations Séculaires des Orbites Planétaires theory (VSOP87), developed in 1982 and then modified in 1987. Astronomers trust it to let them know exactly where to point their telescopes to get the best views of Jupiter, Alpha Centauri, the Magellanic Clouds or whatever celestial bodies they are studying. "We were able to separate and modify that global astronomical algorithm and apply it just to solar energy, while making it less complex and easy to implement," said Reda, highlighting the role of his colleague, Afshin Andreas, who has a degree in engineering physics from the Colorado School of Mines, as well as expertise in computer programming.

They spent an intense three or four weeks of programming to make sure the equations were accurate before distributing the 1,100 lines of code, Andreas said. They used almanacs and historical data to ensure that what the algorithm was calculating agreed with what observers from previous generations said about the sun's position on a particular day. "We did spot checks so we would have a good comfort level that the future projections are accurate," Reda said.
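The separation Reda describes can be illustrated, at far lower fidelity, with the classic textbook approximation of the sun's position. The sketch below is not the SPA: it uses Spencer's declination series and the standard equation-of-time fit, both well-known approximations, and its accuracy is on the order of a degree rather than 0.0003 degrees. It only shows the kind of geometry the full algorithm refines.

```python
import math

def solar_elevation(day_of_year, hour_utc, lat_deg, lon_deg):
    """Approximate solar elevation angle in degrees.

    A textbook sketch, not NREL's SPA: it ignores leap seconds,
    nutation, aberration and refraction, so expect errors on the
    order of a degree rather than SPA's +/- 0.0003 degrees.
    """
    # Fractional year in radians
    gamma = 2.0 * math.pi / 365.0 * (day_of_year - 1 + (hour_utc - 12) / 24.0)

    # Solar declination (Spencer's Fourier series, radians)
    decl = (0.006918
            - 0.399912 * math.cos(gamma) + 0.070257 * math.sin(gamma)
            - 0.006758 * math.cos(2 * gamma) + 0.000907 * math.sin(2 * gamma)
            - 0.002697 * math.cos(3 * gamma) + 0.001480 * math.sin(3 * gamma))

    # Equation of time (minutes): offset between clock time and sun time
    eqtime = 229.18 * (0.000075
                       + 0.001868 * math.cos(gamma) - 0.032077 * math.sin(gamma)
                       - 0.014615 * math.cos(2 * gamma) - 0.040849 * math.sin(2 * gamma))

    # True solar time in minutes (longitude positive east), then hour angle
    tst = hour_utc * 60.0 + eqtime + 4.0 * lon_deg
    ha = math.radians(tst / 4.0 - 180.0)

    lat = math.radians(lat_deg)
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(ha))
    return math.degrees(math.asin(sin_elev))

# Around the June solstice at 40 N, 0 E, the sun stands high at solar noon
print(round(solar_elevation(172, 12.0, 40.0, 0.0), 1))
```

A degree of error is harmless for a demonstration like this, but, as Reda notes above, even 0.01 degrees matters for utility-scale energy budgets.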
"We used our independent math and programming skills to make sure that our results agreed," Reda said.

Available for Licensing, Free Public Use

The new SPA algorithm simply served the needs of NREL scientists until the day it was put on NREL's public website. "A lot of people started downloading it," so NREL established some rules of use, Reda said. Individuals and universities could use SPA free of charge, but companies with commercial interests would have to pay for the software.

Factoring in Leap Seconds Improves Accuracy

NREL's SPA knows the position of the sun in the sky over an 8,000-year period partly because it has learned when to add those confounding leap seconds. Solar positioners that don't factor in the leap second can only calculate accurately for a few years or a few decades.

The length of an Earth day isn't determined by an expensive watch, but by the actual rotation of the Earth. Almost immeasurably, the Earth's rotation is slowing down, meaning the solar day is getting just a tiny bit longer. But it's not doing so at a constant rate. "It happens in unpredictable ways," Reda said. Sometimes a leap second is added every year; sometimes there isn't a need for another leap second for three or four years. For example, the International Earth Rotation and Reference Systems Service (IERS) added six leap seconds over the course of seven years between 1992 and 1998, but has added just one extra second since 2006.

The algorithm calculates exactly when to add a leap second because included in its equations are rapid, monthly, and long-term data on the solar day provided by IERS, Reda and Andreas said. "IERS receives the data from many observatories around the world," Reda added. "Each observatory has its own measuring instruments to measure the Earth's rotation. A consensus correction is then calculated for the fraction of a second. As long as we know the time, and how much the Earth's rotation has slowed, we know the sun's position precisely."
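The time bookkeeping Reda and Andreas describe can be sketched as a table lookup. The dates and offsets below are real IERS values for the period the article covers (the cumulative TAI-UTC offset reached 32 seconds in 1999, 33 in 2006, and 34 in 2009), but the table is deliberately partial and the function name is illustrative, not part of SPA's interface:

```python
import datetime

# A partial table of IERS leap-second steps: each entry is the date on
# which the cumulative TAI-UTC offset (in whole seconds) took effect.
LEAP_SECOND_STEPS = [
    (datetime.date(1999, 1, 1), 32),
    (datetime.date(2006, 1, 1), 33),
    (datetime.date(2009, 1, 1), 34),
]

def tai_minus_utc(day):
    """Whole-second TAI-UTC offset for dates from 1999 onward.

    Illustrative only: SPA combines a complete table like this with
    IERS's measured fraction-of-a-second Earth-rotation corrections
    to convert civil time to the uniform timescale its equations use.
    """
    offset = 32  # value in force before the later steps in the table
    for effective_date, value in LEAP_SECOND_STEPS:
        if day >= effective_date:
            offset = value
    return offset

print(tai_minus_utc(datetime.date(2010, 6, 1)))  # prints 34
```

A positioner that skips this step accumulates a whole second of timing error per missed leap second, which is why, as the article notes, such devices drift out of specification within years rather than millennia.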
That precision has proved useful in unexpected fields.

Practical Uses in Agriculture, Movie Making

One person who bought a license for the SPA software has an apple orchard, and wanted to keep off his apples the black spots that turn off finicky consumers and make wholesale buyers hesitate, Reda said. The black spots appear when too much sun hits a particular apple, a particular tree or a particular row of trees in an orchard. The spots can be prevented by showering the apples with water, but growers don't want to use more water than necessary. SPA's precise tracking of the sun tells the grower exactly when the automatic sprinkler should spray for a few moments on a particular set of trees, and when it's OK to shut off that sprayer and turn on the next one. SPA communicates with the sprinkler system so, "instead of spraying the whole orchard, the spray moves minute by minute," Reda said. "He takes our tool and plugs it into the software that controls the sprinkler system. And he saves a lot of water."

Religious groups with traditions of praying at a particular time of day have even turned to SPA to help with precision.

A movie-camera manufacturer has purchased the SPA software to help cinematographers combat the costly waste when shadows disrupt outdoor shooting. "They have cameras on those big cranes and booms, and typically they'd have to manually change them based on the shadows," Reda said. "This company that bought it has an automatic camera positioner." Combining the positioner with the SPA's calculations, the camera can tell the precise moment when the sun will, say, peek above the tall buildings of an outdoor set. "They don't have to make so many judgments on their own about where the camera should be positioned," Reda said. "It gives them a clearer picture."

Learn more about NREL's solar radiation research and the Electricity, Resources, and Building Systems Integration Center.

— Bill Scanlon
NEW ULM - A historical impersonator of St. Paul's very first public school teacher provided a snapshot of a time and place in history Saturday during the Junior Pioneers Winter Social.

Suzanne de la Houssaye, of the Minnesota Historical Society, performed as Harriet Bishop, who was instrumental in making St. Paul's school into a public school, promoting the national profile of Minnesota and pushing the temperance movement. She was also part of the initial rush of writers and intellectuals to write about the U.S.-Dakota Conflict, writing her own record, titled "Dakota War Whoop."

Bishop was an intensely religious Baptist woman who believed in the imminent coming of the Rapture and the need to exuberantly preach the Gospel to all who could hear it. She was also considered an intellectual in the Twin Cities at that time and was instrumental in helping to build up the major cities. She started the St. Paul school in 1847 in essentially a log cabin, rapidly growing the number of students until her successful efforts made it the town's first public school. She even raised Minnesota's profile as a healthy destination to settle due to its "sturdy weather," erroneously claiming certain diseases of the time simply did not exist in Minnesota.

She had several forms of compassion for the Dakota people, but was equally a product of her time in believing the only right way forward was for them to wholly adopt European culture and traditions. She wrote her book from a very emotional standpoint, aimed at conjuring the image of women and children hiding in the basements of New Ulm. Her book also carried many inaccuracies believed at the time and tried to paint Charles Flandrau as the sole savior of the battle at New Ulm at the start of the Conflict.
Her spin on the Conflict is largely believed to be due to its elements making a serious impact on her. But she still fell in the middle of white settlers' beliefs after the Conflict, neither advocating for the extermination of the Dakota people nor being among those who fully accepted the Dakota's right to an independent heritage.

Interestingly, she had an almost comically stern view of New Ulm citizens at the time, believing the myths that they were all progressive atheists who forbade priests in their city limits. She literally referred to them as "the infidel Germans" in her book and alluded to a belief somewhat held at the time that the Conflict was God's judgment on the town. She also judged New Ulm for having dance halls, which some strict religious sects objected to around that time, and because she believed the townspeople would often perform the taboo act of drinking on holy days.

She married a widower who served in the U.S. Civil War. The common practice during the time was for widowers to quickly remarry, which was sometimes a sheer matter of survival. Remarriage was also more common during that era due to the high rate of deaths during childbirth, often caused by doctors trying to help women without knowing about the deadly germs hiding on their hands. However, she eventually undertook the uncommon act of divorcing her husband due to his abusive alcoholism. Her husband's circumstance was frighteningly common, largely because numerous Civil War veterans returned without any help for psychological issues from their service. This led to her heavy advocacy for the temperance movement, which would eventually see Prohibition passed after her death. She personally was one of the founding members of the Woman's Christian Temperance Union.
Her cause in that infamous movement was aimed at combating a very real issue of the day: the prospect of the husband, in that era the only one allowed to earn the family's wages, drinking away all the month's food money due to alcoholism. People of that time routinely drank more than three times as much alcohol per week as most people drink today. The movement believed the end of alcohol would address the majority of the terrible acts of abuse inflicted on women and children at the time. Bishop's belief in the temperance movement even largely influenced how she saw alcohol negatively affecting the Dakota people. She never lived to see Prohibition, but she did see a roughly one-year implementation of the "Maine law" that banned alcohol in select locations in the Twin Cities. The temperance movement was also intrinsically linked to intense advocacy for suffrage and abolition.

Suzanne de la Houssaye said Bishop was a fascinating woman of her time. She said the Minnesota Historical Society is interested in telling her story, as well as not glossing over her issues in the depiction of the Dakota Conflict, to provide a better dialogue about the events.

Josh Moniz can be e-mailed at [email protected].
Tuberculosis (TB) is a chronic bacterial infection that usually infects the lungs, although other organs such as the kidneys, spine, or brain are sometimes involved. TB is primarily an airborne disease. There is a difference between being infected with the TB bacterium and having active tuberculosis disease.

There are three important ways to describe the stages of TB. They are as follows:

- Exposure. This occurs when a person has been in contact with, or exposed to, another person who is thought to have or does have TB. The exposed person will have a negative skin test, a normal chest X-ray, and no signs or symptoms of the disease.
- Latent TB infection. This occurs when a person has the TB bacteria in his or her body, but does not have symptoms of the disease. The infected person's immune system walls off the TB organisms, and they remain dormant throughout life in 90 percent of people who are infected. This person would have a positive skin test but a normal chest X-ray.
- TB disease. This describes the person who has signs and symptoms of an active infection. The person would have a positive skin test and a positive chest X-ray.

The predominant TB bacterium is Mycobacterium tuberculosis (M. tuberculosis). Many people infected with M. tuberculosis never develop active TB and remain in the latent TB stage. However, in people with weakened immune systems, especially those with HIV (human immunodeficiency virus), TB organisms can overcome the body's defenses, multiply, and cause an active disease. TB affects all ages, races, income levels, and both genders.
Those at higher risk include the following:

- People who live or work with others who have TB
- Medically underserved populations
- Homeless people
- People from other countries where TB is prevalent
- People in group settings, such as nursing homes
- People who abuse alcohol
- People who use intravenous drugs
- People with impaired immune systems
- The elderly
- Health care workers who come in contact with high-risk populations

The following are the most common symptoms of active TB. However, each individual may experience symptoms differently.

- Cough that will not go away
- Chest pain
- Loss of appetite
- Unintended weight loss
- Poor growth in children
- Coughing blood or sputum
- Chills or night sweats

The symptoms of TB may resemble other lung conditions or medical problems. Consult a physician for a diagnosis.

The TB bacterium is spread through the air when an infected person coughs, sneezes, speaks, sings, or laughs; however, repeated exposure to the germs is usually necessary before a person will become infected. It is not likely to be transmitted through personal items, such as clothing, bedding, a drinking glass, eating utensils, a handshake, a toilet, or other items that a person with TB has touched. Adequate ventilation is the most important measure to prevent the transmission of TB.

TB is diagnosed with a TB skin test. In this test, a small amount of testing material is injected into the top layer of the skin. If a certain size bump develops within two or three days, the test may be positive for tuberculosis infection. Additional tests to determine if a person has TB disease include X-rays and sputum tests.

TB skin tests are suggested for those:

- In high-risk categories.
- Who live or work in close contact with people who are at high risk.
- Who have never had a TB skin test.

For skin testing in children, the American Academy of Pediatrics recommends:

- If the child is thought to have been exposed in the last five years.
- If the child has an X-ray that looks like TB.
- If the child has any symptoms of TB.
- If a child is coming from countries where TB is prevalent.

Yearly skin testing:

- For children with HIV.
- For children who are in jail.

Testing every two to three years:

- For children who are exposed to high-risk people.

Consider testing in children from ages 4 to 6 and 11 to 16:

- If a child's parent has come from a high-risk country.
- If a child has traveled to high-risk areas.
- Children who live in densely populated areas.

Specific treatment will be determined by your physician based on:

- Your age, overall health, and medical history
- Extent of the disease
- Your tolerance for specific medications, procedures, or therapies
- Expectations for the course of the disease
- Your opinion or preference

Treatment may include:

- Short-term hospitalization
- For latent TB which is newly diagnosed: usually a six- to 12-month course of an antibiotic called isoniazid will be given to kill off the TB organisms in the body.
- For active TB: your doctor may prescribe three to four antibiotics in combination for a period of six to nine months. Examples include isoniazid, rifampin, pyrazinamide, and ethambutol. Patients usually begin to improve within a few weeks of the start of treatment. After two weeks of treatment with the correct medications, the patient is not usually contagious, provided that treatment is carried through to the end, as prescribed by a physician.
Does Thinness Raise Alzheimer's Risk?

Nov. 23, 2011 -- In the search for early markers of Alzheimer's disease - in hopes of eventually preventing it - researchers have found that low body weight may somehow play a role.

In a study published this week in the journal Neurology, people with early signs of Alzheimer's disease were more likely to be underweight or have a low body mass index (BMI). Earlier studies found that people who are overweight in middle age or earlier are at higher risk for Alzheimer's later in life. Other studies have shown that being overweight later in life seems to protect against the disease.

More research needed

What the latest study findings mean for diagnosing or preventing Alzheimer's disease is unclear. "A long history of declining weight or BMI could aid the diagnostic process," says study author Eric Vidoni, Ph.D., at the University of Kansas. But, he adds, it's too early "to make body composition part of the diagnostic toolbox."

Dr. Vidoni and colleagues studied brain imaging and analyzed cerebrospinal fluid in 506 people. Study participants ranged from those with no memory problems to others with Alzheimer's.

Impact of body weight

People who had evidence of Alzheimer's - either in brain scans or protein levels in the cerebrospinal fluid - were more likely to have a lower BMI than those who did not show early evidence of the disease. The researchers aren't sure why body weight might have a bearing on Alzheimer's risk. They speculate that the disease may affect the hippocampus, the area of the brain that controls metabolism and appetite. Or, they say, perhaps inflammation is driving both the drop in BMI and the cognitive changes that are the hallmark of Alzheimer's.

Although you can't control certain risk factors for Alzheimer's disease, like advancing age, you can reduce your odds of developing the condition.
The latest findings show you can reduce risk by:

Always talk with your health care provider to find out more information.
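BMI, the measure the study above relies on, is simply weight in kilograms divided by the square of height in meters. A minimal sketch; the 18.5 underweight cutoff in the comment is the standard WHO convention, not a figure from this study:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / (height_m ** 2)

# The WHO convention classifies a BMI below 18.5 as underweight.
print(round(bmi(50.0, 1.75), 1))  # prints 16.3
```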
Guide to Tanzanian Legal System and Legal Research

By Bahame Tom Nyanduga and Christabel Manning

Bahame Tom Nyanduga* is an Advocate of the High Court of Tanzania, and was the President of the East Africa Law Society between October 2004 and October 2006. The main research for this compilation was conducted by Ms. Christabel Manning, LL.B, a graduate of the University of Dar Es Salaam currently working in the Legal Department at KPMG (T) Limited and a member of the Tanzania Women Lawyers Association (TAWLA).

The United Republic of Tanzania is situated on the eastern seaboard of the African continent, about one degree south of the Equator. Its eastern border is the Indian Ocean; it shares its northern border with the Republic of Kenya and Uganda, and to the west it borders the Democratic Republic of the Congo, the Republic of Rwanda and the Republic of Burundi. The Republic of Zambia and the Republic of Malawi share its borders on the southwest, while in the south it shares a border with Mozambique. It is the union of two historical countries, Tanganyika and Zanzibar.

The United Republic of Tanzania was formed in 1964 through the union of two independent states, namely the Republic of Tanganyika and the Peoples' Republic of Zanzibar. Zanzibar is an autonomous part of the United Republic, and is made up of two islands, namely Unguja and Pemba, which are found in the territorial waters of the United Republic, in the Indian Ocean. Another island to the south east of Tanzania, Mafia, is an integral part of mainland Tanzania.

Tanganyika gained its independence on 9th December 1961 from the British, who administered her after the end of WWII under a United Nations Trusteeship; she became a Republic on 9th December 1962. Zanzibar became independent on 10th December 1963. Prior to her independence, Zanzibar, which was ruled by an Arab Sultanate, enjoyed protectorate status under the British.
One month after she gained independence, the Arab Sultanate regime of Zanzibar was overthrown by a popular revolution on 12th January 1964, which led to the creation of the Revolutionary Government of Zanzibar. The Republic of Tanganyika and the Peoples' Republic of Zanzibar entered into a union on 26th April 1964 to form the United Republic of Tanganyika and Zanzibar, which was renamed the United Republic of Tanzania on 29th October 1964.

At the time of the Union, Tanganyika was governed by a political party known as the Tanganyika African National Union (TANU), the nationalist party which won the country its independence, while Zanzibar was ruled by the Afro Shirazi Party (ASP), which had led the popular revolution. The two states were by then governed under the one-party system of government, i.e. the one-party state democracy, which was then very prevalent in Africa. In 1977 TANU and the ASP merged to form the Chama Cha Mapinduzi (CCM) party, otherwise known as the Revolutionary Party, which continued to exercise political control throughout the country under the one-party regime.

The United Republic of Tanzania remained under the one-party system until 1992, when constitutional amendments enabled the organization of pluralist political parties; hence in 1995 the first multiparty democratic elections were held in the country. Since 1995 the country has held such multiparty elections in 2000 and 2005.

Tanzania's legal system is based on the English Common Law system. It derived this system from its British colonial legacy, as it does the system of government, which is based to a large degree on the Westminster parliamentary model. Unlike the unwritten British constitutional system, the first source of law for the United Republic of Tanzania is the 1977 Constitution. The constitutional history of Tanganyika traces its background to the 1961 Independence Constitution, which was adopted at the time of independence.
In 1962 Tanganyika adopted the Republican Constitution, which operated from 1962 up to 1965. These two were based on the traditional Lancaster House-style constitutions negotiated with the British at independence upon handover of state power to newly independent states. In 1965 Tanganyika adopted an Interim Constitution while the country awaited a new constitution to be drafted, after it abolished the multi-party political system and adopted a one-party state system. The process lingered longer than it was meant to, and thus that constitution lasted from 1965 until 1977, when a new constitution was adopted; it has remained applicable to date, with fourteen subsequent amendments.

The Constitution provides for a bill of rights, notwithstanding the fact that it also makes provision for a number of claw-back clauses. In other words, the enjoyment of certain rights and freedoms under the Constitution is not absolute, but is subject to legal regulation. The Bill of Rights is found in part three of the first Chapter of the Constitution: the fundamental rights and freedoms are stipulated in articles 12 to 24, while articles 25 to 28 impose on every individual duties and obligations to respect the rights of others and society. Article 29 establishes the obligations of society to every individual. Article 30 of the Constitution limits the application of these rights, subject to law and due process of law, as the case may be. The Constitution allows any person to challenge any law or act/omission which contravenes his or her rights or the Constitution.

The second source of law is the Statutes or Acts of Parliament. The Laws Revision Act of 1994, Chapter 4 of the Laws of Tanzania [R.E. 2002], established that all legislation previously known as Ordinances, i.e. those enacted by the pre-independence colonial administration as Orders in Council, are now legally recognized as Acts.
These principal legislations, and subsidiary legislations thereto, are published in the Government Gazette and printed by the Tanzania Government Printers.

The third source is case law: cases from the High Court and Court of Appeal, whether reported or unreported, are used as precedents and bind the lower courts. Reported Tanzanian cases are found in the Tanzania Law Reports, High Court Digests and East Africa Law Reports.

The fourth source is the Received Laws established under Section 2(3) of the Judicature and Application of Laws Act, Chapter 358 of the Laws of Tanzania [R.E. 2002] (JALA). These include the Common Law, the Doctrines of Equity, and the Statutes of General Application of England applicable before 22nd July 1920, which is deemed to be the reception date for English law in Tanzania.

The fifth source is customary and Islamic law, established under section 9 of JALA. Customary law is in effect only when it does not conflict with statutory law, whilst Islamic law is applicable to Muslims under the Judicature and Application of Laws Act, which empowers courts to apply Islamic law to matters of succession in communities that generally follow Islamic law in matters of personal status and inheritance.

International law, that is, treaties and conventions, is not self-executing: treaties and conventions to which Tanzania is a party can be applied by the courts in Tanzania only after ratification and enactment by an Act of Parliament.

The United Republic of Tanzania is a unitary state based on a multiparty parliamentary democracy. In 1992 the Tanzanian government introduced constitutional reforms permitting the establishment of opposition political parties. All matters of state in the United Republic are exercised and controlled by the Government of the United Republic of Tanzania and the Revolutionary Government of Zanzibar.
The Government of the United Republic of Tanzania has authority over all Union matters in the United Republic, as stipulated under the Constitution, and it also runs all non-Union matters on mainland Tanzania, i.e. the territory formerly known as Tanganyika. Non-Union matters are all those which do not appear in the Schedule to the Constitution, which stipulates the list of Union matters. The Revolutionary Government of Zanzibar, similarly, has authority in Tanzania Zanzibar, i.e. the territory composed of the islands of Unguja and Pemba, over all matters which are not Union matters. In this respect the Revolutionary Government of Zanzibar has a separate Executive; a legislature, known as the House of Representatives; and a judicial structure, which functions from the Primary Court level to the High Court of Zanzibar, all of which are provided for under the 1984 Constitution of Zanzibar.

There are three organs of the central government of the United Republic of Tanzania: the Executive, the Judiciary and the Legislature. Local government authority is exercised through Regional and District Commissioners. The functions and powers of each of the three organs are laid out in the 1977 Constitution of the United Republic of Tanzania: Parliament is established under Chapter Three, the Executive under Chapter Two and the Judiciary under Chapter Five.

The Executive of the United Republic comprises the President, the Vice-President, the President of Zanzibar, the Prime Minister and the Cabinet Ministers. The President of the United Republic is the Head of State, the Head of Government and the Commander-in-Chief of the Armed Forces.
The President is the leader of the Executive of the United Republic of Tanzania. The Vice-President is the principal assistant to the President in all matters of the United Republic. The Prime Minister of the United Republic is the leader of Government business in the National Assembly; he controls, supervises and executes the daily functions and affairs of the Government of the United Republic, and any other matters the President directs to be done. The President of Zanzibar is the Head of the Executive for Zanzibar, i.e. the Revolutionary Government of Zanzibar, and is the Chairman of the Zanzibar Revolutionary Council. The Cabinet of Ministers, which includes the Prime Minister, is appointed by the President from among members of the National Assembly. The Government executes its functions through Ministries led by Cabinet Ministers.

President Jakaya M Kikwete became the current President of the United Republic on 21st December 2005 after a historic victory, winning 80.3% of the total votes, and Dr Ali Mohammed Shein is the Vice-President of the United Republic of Tanzania. Dr Shein had previously served as Vice-President since 5th July 2001, prior to the 2005 General Elections.

Since independence, Tanzania has held peaceful elections. Under the one-party system, elections were held in 1965, 1970, 1975, 1980, 1985, and 1990; in the first elections, held in 1962, the ruling party captured all seats, hence the de facto one-party state emerged, to be later regularized by law in 1965. Following the constitutional reforms described herein above, the formation and organization of political parties is now conducted under the Political Parties Act 1992. About 18 political parties have been registered since then, and multiparty general elections were held under the new multiparty system in 1995, 2000, and 2005.

The Legislature, or the Parliament of the United Republic of Tanzania, consists of two parts, i.e.
the President and the National Assembly. The President exercises the authority vested in him by the Constitution to assent to bills passed by Parliament in order to complete the enactment process before they become law. The National Assembly, which is the principal legislative organ of the United Republic, has authority on behalf of the people to oversee and advise the Government of the United Republic and all its organs in the discharge of their particular duties.

The Parliament is headed by the Speaker, who is assisted by the Deputy Speaker, and the Clerk as the head of the Secretariat of the National Assembly. The National Assembly also has various standing Committees to support its various functions. The National Assembly of Tanzania is constituted by one chamber, with members elected from various constituencies across mainland Tanzania and Zanzibar. Under the Constitution, women's representation is provided for as a special category, in order to increase the participation of women in national politics. Elections are supervised by the National Electoral Commission, which is established under the Constitution.

The legal system of Tanzania is largely based on common law, as stated previously, but it also accommodates Islamic and customary laws, the latter sources of law being called upon in personal or family matters. The judiciary is formed by the various courts of judicature and is independent of the government. Tanzania adheres to and respects the constitutional principle of separation of powers. The Constitution makes provision for the establishment of an independent judiciary, and for respect for the principles of the rule of law, human rights and good governance. The Judiciary in Tanzania can be illustrated as follows. The Judiciary in Tanzania has four tiers: the Court of Appeal of the United Republic of Tanzania; the High Courts for mainland Tanzania and Tanzania Zanzibar; and the Magistrates Courts, which are at two levels, i.e.
the Resident Magistrates Courts and the District Courts, both of which have concurrent jurisdiction; and the Primary Courts, which are the lowest in the judicial hierarchy. The structure can be illustrated as follows:

Court of Appeal
High Court of Tanzania (with its Specialized Divisions) — High Court of Zanzibar
Resident Magistrates Courts and District Courts
Primary Courts

Court of Appeal

The Court of Appeal of Tanzania, established under Article 108 of the Constitution, is the highest court in the judicial hierarchy of Tanzania. It consists of the Chief Justice and other Justices of Appeal, and it is the court of final appeal at the apex of the judiciary.

The High Court of Tanzania (for mainland Tanzania) and the High Court of Zanzibar are courts of unlimited original jurisdiction, and appeals therefrom go to the Court of Appeal. The High Court of Tanzania was established under Article 107 of the Constitution and has unlimited original jurisdiction to entertain all types of cases. The High Courts exercise original jurisdiction on matters of a constitutional nature and have powers to entertain election petitions. The High Court's Main Registry (which includes the sub-Registries) caters for all civil and criminal matters. The High Court (mainland Tanzania) has established 10 sub-Registries in different zones of the country. It also has two specialised divisions, the Commercial Division and the Land Division.

All appeals from subordinate courts go to the High Court of Tanzania. These include the Resident Magistrates Courts and the District Courts, which enjoy concurrent jurisdiction and are established under the Magistrates Courts Act of 1984. The District Courts, unlike the Resident Magistrates Courts, are found in all the districts of Tanzania (the local government unit). They receive appeals from the Primary Courts, several of which will be found in one district.
The Resident Magistrates Courts are located in major towns, municipalities and cities, which serve as the regional (provincial) headquarters.

The Primary Courts are the lowest courts in the hierarchy and are established under the Magistrates Courts Act of 1984. They deal with both criminal and civil cases. Civil cases on property and family law matters which apply customary law and Islamic law must be initiated at the level of the Primary Court, where the Magistrate sits with lay assessors. (The jury system does not apply in Tanzania.)

There are specialized tribunals which form part of the judicial structure. These include, for example, the District Land and Housing Tribunal, the Tax Tribunal and the Tax Appeals Tribunal, the Labour Reconciliation Board, the Tanzania Industrial Court, and Military Tribunals for the armed forces. Military Courts do not try civilians. A party dissatisfied with any decision of the Tribunals may refer the matter to the High Court for judicial review.

The High Court of Zanzibar has exclusive original jurisdiction for all matters in Zanzibar, as is the case for the High Court on mainland Tanzania. The Zanzibar court system is quite similar to the mainland system, except that Zanzibar retains Islamic courts, which adjudicate Muslim family cases such as divorce, child custody and inheritance. All other appeals from the High Court of Zanzibar go to the Court of Appeal of Tanzania. The structure of the Zanzibar legal system is as follows:

Court of Appeal of Tanzania
High Court of Zanzibar
Magistrates Courts ↔ Kadhi's Appeal Courts
Primary Courts ↔ Kadhi's Courts

Court of Appeal of Tanzania

The Court of Appeal of Tanzania handles all matters appealed from the High Court of Zanzibar. The High Court of Zanzibar has the same structure as the High Court of mainland Tanzania and handles all appeals from the lower subordinate courts.
These courts have jurisdiction to entertain cases of various kinds, except for cases under Islamic law, which they have no jurisdiction to try; such cases are tried in the Kadhi's Courts.

Kadhi's Appeal Court

The main role of the Kadhi's Appeal Court of Zanzibar is to hear all appeals from the Kadhi's Courts, which adjudicate on Islamic law.

Kadhi's Courts

These are the lowest courts in Zanzibar; they adjudicate all Islamic family matters, such as divorce, distribution of matrimonial assets, custody of children and inheritance, but only between Muslim families.

Primary Courts

These have the same rank as the Kadhi's Courts, and they deal with criminal and civil cases of a customary nature.

There are a number of places where one can obtain legal materials in Tanzania: the Library of the Court of Appeal of Tanzania, the High Court Library, the High Court Land Division Library, the Commercial Division of the High Court Library, the Attorney General's Office at the Ministry of Justice and Constitutional Affairs, the University of Dar es Salaam, the National Archives, the Government Bookshop, the Dar es Salaam Bookshop, the United Nations Information Centre, the International Criminal Tribunal for Rwanda in Arusha, Mzumbe University and many others.

Reported cases in Tanzania can be found in a number of law reports. Between 1957 and 1977, cases reported from the High Court of Tanzania and the East African Court of Appeal appeared in the East Africa Law Reports. Law Africa, a law report publishing company, has updated the reports for cases from the three East African jurisdictions of Kenya, Uganda and Tanzania up to 2007. Current editions of the law reports can be sourced from Law Africa Publishers, email [email protected]. Their corporate headquarters address is: Law Africa Publishing (K) Ltd, Coop Trust Plaza, 1st Floor, Lower Hill Road, P.O. Box 4260-00100, GPO, Nairobi, Kenya. The Tanzania Law Reports between 1983 and 1997 can be bought online from [email protected].
A complete set of the Statutes of Tanzania, the Laws of Tanzania - Revised Edition of 2002 (21 volumes), including supplementary and subsidiary legislation, can be bought online from the same references above.

The Tanzania Government Printer publishes the government's Official Gazette, which carries bills, legislative enactments before and after assent, subsidiary legislation, announcements of all official government appointments and the dates of entry into force of all legislation. The same can be ordered through the Government Publications Agency.

Other information on Tanzania can be accessed online, including through the website of Parliament, where one can find parliamentary information such as Acts and Bills of Law. Other sites include the government's public administration page and the Tanzanian Law Reform Commission website.

Legal textbooks are available in areas such as:
Constitutional and Administrative Law
Contract, Commercial and Company Law
Criminal Law and Procedure
Civil Law and Procedure
Family Law, Equity and Succession

To pursue a legal career in Tanzania, one may start with a Certificate in Law, particularly for persons who have discontinued secondary education, followed by a Diploma in Law and a Degree in Law (LL.B), and continue with a Postgraduate Diploma in Law (PGDL), a Master of Laws (LL.M), the Degree of Doctor of Philosophy (Ph.D) and the Doctor of Laws (LL.D), which is the highest doctorate awarded. Students who have successfully completed advanced secondary education with good academic grades can also join law degree courses offered at any of the universities in the country. A number of universities offer courses in law, such as the University of Dar es Salaam, Mzumbe University, the Open University, Tumaini University and Ruaha University under St. Augustine, and other institutes offer diplomas in law, such as Mzumbe University and the Lushoto Institute of Judicial Administration.
Certificate in Law courses are taught at other institutes of learning, such as the Police College, and have enabled successful candidates to pursue law degree courses. Any LL.B degree holder who has completed internship and pupillage over two years may apply to sit the Bar exam, which is held three times a year. The Bar exam is an oral interview conducted by a panel of the Council for Legal Education, which is composed of representatives of the Chief Justice of the United Republic of Tanzania, the Attorney General of the United Republic, the Dean of the Faculty of Law of the University of Dar es Salaam, and two representatives of the Law Society. A successful candidate is sworn in and enrolled as an Advocate of the High Court of Tanzania and the subordinate courts thereto. Advocates do not have the right of audience before the Primary Courts in Tanzania. More information can be found at the University of Dar es Salaam's website.

Any person enrolled as an advocate under the Advocates Act, Chapter 341 of the Laws of Tanzania [R.E. 2002], and listed as a member of the Tanganyika Law Society, established pursuant to the Tanganyika Law Society Act, Chapter 307 of the Laws of Tanzania [R.E. 2002], can practice law as an Advocate and shall be subject to the disciplinary rules and etiquette promulgated under the said laws, and subject to the Ethics Committee of the Law Society and the Advocates Disciplinary Committee established under the Advocates Act, Cap. 341. Any inquiries as to the practice of law in Tanzania may be addressed to the Executive Secretary, Tanganyika Law Society; email: [email protected].

* I wish to acknowledge with thanks the industry and time given by Christabel Manning in conducting the research and putting together the basic draft for this compilation; without her assistance, this article would not have been possible.
Guidelines for small-scale fruit and vegetable processors (FAO Agricultural Services Bulletin 127, 1997)
Part 2 - Processing for sale
2.6 Contracts with suppliers and retailers

Many small-scale processors buy fruits and vegetables daily from their nearest public market. Although this is simple and straightforward, it creates a number of problems for a business. The processors have little control over the price charged by traders each day and, because of the large seasonal price fluctuations that characterise these raw materials, financial planning and control over cashflow become more difficult (Section 2.3.4). The processor is also unable to schedule raw materials in the quantities required, and it is common for production to fall short of a target simply because there are not enough fruits and vegetables for sale on a particular day. Additionally, the processor has no control over the way fruits and vegetables are handled during harvest and transport to the markets, and therefore no influence over the quality of the raw materials that are available (see also Section 2.7.2).

To address these problems, a processor can arrange contracts with either traders or farmers, in an attempt to gain greater control over the amount of raw materials available for processing each day and over their quality and price. This is not a common arrangement at present in most developing countries, possibly because commercial food processing is a relatively recent activity and there is no history of collaboration and formal contracts. However, where it has been done, there are benefits to both processor and suppliers, provided that the arrangements are made honourably and there is mutual trust. The benefits to farmers are a guaranteed price for their crop, based on a sliding scale of quality, and a guaranteed market when it is harvested.
However, the traders who tour an area to buy crops provide a number of benefits to farmers that processors should not ignore when arranging contracts. For example, traders frequently buy the whole crop regardless of quality, and either sort it themselves for different markets or sell it on to wholesalers who do the sorting. From the farmers' perspective, they receive payment at the farm, without having to worry about marketing their crop or disposing of substandard items. Although farmers have a guaranteed market by selling to traders, they have virtually no control over the prices offered and can be exploited, particularly at the peak of a growing season when there is an over-supply of a particular crop.

Traders also provide a number of other services that farmers may find difficult to obtain elsewhere: they may be the only realistic source of farming tools and other inputs such as seeds, and they are a source of immediate informal credit, which farmers may require to buy inputs or for other needs such as funerals and weddings. Although the interest payments on such loans may be much higher than those charged on commercial loans, farmers often have no access to banks or other lenders and in practice have no choice. In many countries, large numbers of farmers are permanently indebted to traders for their lifetimes and are only released from the debt by sale of land. When processors begin to negotiate contracts with farmers, they should therefore be aware that farmers may be unwilling to break existing arrangements with traders, either because of genuine fears that they will lose the services provided or because they are indebted to traders and have no ability to make other arrangements.
The local power of traders should not be underestimated: their responses may range from a refusal to offer further loans to farmers, to a threat not to buy the crop again if sales are made directly to processors, to a demand that farmers repay loans immediately and, in extreme cases, to physical violence.

Despite the problems described above, there are possibilities for processors to agree contracts with individual farmers, or with groups of farmers working cooperatively, to supply fruits and vegetables of a specified variety and quality. Typically a specification would include the variety to be grown, the degree of maturity at harvest, freedom from infection, etc. The price paid for the crop is agreed in advance and may be set between the mid-season lowest point and the pre- and post-season high points. Alternatively, a sliding scale of prices is agreed, based on one or more easily measurable characteristics such as minimum size or an agreed colour range, with an independent person present to confirm the agreement in case of later disputes. The agreement may also specify the minimum or maximum amount that will be bought. In a formal contract, these agreements are written down and signed by both parties, although such formal contracts are rare in most developing countries.

Processors should also consider other forms of assistance that could be offered to farmers. For example, in larger-scale processing such as tea and coffee production, processors offer training and an extension service to address problems with the crop as they arise throughout the growing season. Although this may be beyond the resources of small-scale processors, more limited types of assistance may include purchasing tools, fertilizer or other requirements in bulk, with the savings passed on to farmers. Alternatively, part-payment for the crop can be made in advance so that farmers can buy inputs without the need for credit and the consequent indebtedness.
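The sliding scale of prices mentioned above is simple arithmetic once the grading criteria are fixed in the contract. The sketch below is purely illustrative: the grade bands, multipliers and base price are invented for the example and are not taken from the bulletin, which leaves these figures to be negotiated between processor and farmers.

```python
# Illustrative sketch of a contract sliding-scale price, as described in the
# text: a price agreed in advance, scaled by an easily measurable quality
# grade (e.g. minimum size or agreed colour range). All numbers are
# hypothetical, not from the FAO bulletin.

def contract_price(weight_kg, grade, base_price_per_kg=0.50):
    """Payment for a delivery under a hypothetical sliding scale.

    grade: 'A' (within agreed size/colour range), 'B' (marginal),
           'C' (outside specification, rejected).
    base_price_per_kg: the price fixed in the contract, set somewhere
    between the mid-season low and the pre-/post-season highs.
    """
    multipliers = {"A": 1.00, "B": 0.75, "C": 0.0}  # hypothetical bands
    return round(weight_kg * base_price_per_kg * multipliers[grade], 2)

# A farmer delivering 120 kg of grade-A and 40 kg of grade-B produce:
payment = contract_price(120, "A") + contract_price(40, "B")
print(payment)  # 75.0
```

Because the grade is based on measurable characteristics, an independent witness to the weighing and grading (as the text suggests) can settle later disputes about what was owed.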
The advantages to the processor are greater control over the quality of raw materials and the varieties planted, some control over the amounts supplied, and an advance indication of likely raw material costs, which assists in both financial control and production planning (Sections 2.3.4 and 2.7.1). The advantage to the farmer is the security of a guaranteed market for the crop at a known price, together with any other incentives that may be offered by the processor. However, this type of arrangement can only operate successfully when both processors and farmers honour their side of the agreement.

In the author's experience, there have been a number of occasions when these forms of agreement have been tried but have failed because one party breaks its part of the contract. Typically, farmers sell part of their crop to traders at each end of the season, when the price is higher than that offered by the processor; the expected volume of crop is then unavailable to the processor and planned production capacity cannot be achieved, seriously damaging both sales and cashflow. Alternatively, the processor delays payment to farmers, forcing them to take another loan and fall into greater indebtedness. The processor may also fail to buy the agreed amount of crop, leaving farmers to find alternative markets without the option of supplying traders, who may refuse to buy the crop or may offer an insignificant price.

A slightly different approach is one in which the processor takes a greater degree of control over production of the crop: the processor specifies the types of fruit or vegetable to be grown and supplies seeds and other inputs, even including labour. In effect, farmers are paid by the processor for the use of their land.
Although this involves greater organisational complexity and higher operating costs for the processor, the benefits of an assured supply of raw materials having the correct qualities for processing may outweigh the disadvantages, particularly in situations where the demand for a crop outstrips the supply. A further development of the approach is for the processor to rent or buy land and set up a separate operation to supply the processing unit. This often happens in reverse when an existing farmer diversifies into processing but retains the farm. In either case the processor hires the labour and supplies all inputs needed to operate the farm. The bulk of the produce supplies the processing unit with any excess being sold in local markets or to traders.
The Department of Energy (DOE) is committed to expanding the conversation on energy issues and upholding open government principles of transparency, participation and collaboration. One of the key ways we seek to accomplish this is through the use of social media. "Social media" is a broad term for the wide spectrum of interactive and user-driven content technologies (i.e., social networks, blogs, wikis, podcasts, online videos, etc). Like many government agencies, the Department is exploring how best to use social media to accomplish our mission, engage the public in discussion, include people in the governing process and collaborate internally and externally. The Office of Digital Strategy and Communications (formerly the New Media Office) in the Office of Public Affairs is leading the Department's social media efforts. The purpose of this document is to provide guidance on how to take advantage of these social media platforms by defining the broad Department of Energy vision and strategy for social media use, detailing the means by which to contribute to the Department's social media presence, outlining the various rules of the road for utilizing social media in the government space and last but not least, sharing best practices for various social media tools. It's worth noting that while the primary focus of this guidance is on external facing social media, many of the principles and requirements outlined below can be used as a roadmap for inward facing social media activities. Vision and Strategy "The Department of Energy has an urgent role to play in creating a new, clean energy economy that will spark job creation and reduce our dependence on oil, while cutting our greenhouse gas emissions. The Department will also meet its critical responsibilities of reducing nuclear dangers and environmental risks. The foundation of all our work is a commitment to lead the world in science, technology and engineering." 
- Secretary Steven Chu

The Department of Energy's mission is to become the "department of innovators" and discover the solutions to power and secure America's future now. We're building the new clean energy economy, reducing nuclear dangers and environmental risk and expanding the frontiers of knowledge with innovative scientific research. The objective of the Digital Office in Public Affairs is not only to communicate our mission online but also to develop and foster relationships with the public, outside stakeholders and each other around that mission. With that focus, the primary goals of the Digital Office are to amplify the Department's message, promote transparency and accessibility and provide services and engagement opportunities. Social media is integral to achieving these goals, providing the platform for real-time conversation, collaboration and idea sharing.

You know how the saying goes - our whole is much greater than the sum of our parts. The entire Department benefits from a strong enterprise brand. And in many ways, that enterprise brand and culture of brand cultivation already exist throughout the Department, powered by the Office of Public Affairs. We're just extending them online and into the social media sphere. The Digital Office in the Office of Public Affairs is responsible for managing the Department's enterprise brand online, including social media. Leading by example, this office will push the Department into new social media spaces and drive innovation and online communication programming in this arena. Offices and labs across the Department should help build the enterprise brand by contributing content and ideas to the Digital Office. A strong, well-developed, supported and executed enterprise social media brand is the primary tier of the Department's social media strategy.
The Digital Office also serves as a support center for driving the second core component of the Department's social media strategy: empowering social media innovation across the Department. Program offices, field offices and labs are encouraged to take full advantage of the opportunities social media offers. The Digital Office provides clear guidance on how to do so -- assisting with compliance of federal rules regulating social media in government, sharing social media best practices and helping offices develop and execute high quality social media strategies. Contributing to the Department of Energy Enterprise Social Media Accounts The foundation of the Department of Energy enterprise social media brand is our mission - and the work being done everyday across the Department to achieve that mission drives the content for our social media accounts. Offices and labs are enthusiastically encouraged to contribute to our enterprise social media accounts and share what they're doing to achieve our mission. These contributions are integral to the success of our enterprise brand. One of the primary reasons Department of Energy enterprise social media accounts were established was to break down some of the resource and regulatory barriers for communicating in this sphere. In that spirit, it's also simple to contribute to our core enterprise accounts: YouTube, Flickr, Twitter and Facebook. Just submit your suggestion to the Digital Office in the Office of Public Affairs via the Department of Energy Social Media Hub (http://energy.gov/socialmedia) and a member of the Digital Office will follow-up as needed within a reasonable timeframe. 
Establishing an Official DOE Social Media Account

To streamline the process of social media account creation, a dedicated Department of Energy Social Media Hub (http://energy.gov/socialmedia) has been developed to empower program offices and labs to review the social media and application vendors with whom we currently have GSA-approved terms of service and to request permission to create a new account or verify an existing one. All social media sites require active oversight to ensure proper management. Department personnel should take these commitments into account when weighing whether to create a new social media presence. Before requesting an account, personnel should consult with the appropriate actors within their program office or lab to ensure that the proper authorizations and procedures are in place. This includes reaching out to supervisors and the points of contact for records management, privacy, communications/new media and the program's representative from General Counsel.

To be granted an account or have your current account recognized by the Department, fill out the Social Media Request form, which includes fields such as:

For all requests
- Name of the person submitting the request
- Title of the person submitting the request/office
- Contact e-mail
- Contact phone number
- Are you authorized to make this request?
- Social media application(s) you want to utilize
- Existing account? (y/n)
- Justification for needing an account
- Proposed or current account username/URL
- Proposed or current account bio
- Criteria for following others, friending others, etc.
- Content and feedback strategy
- Staff management plan, including post frequency
- Sample post (if applicable)

For new accounts only
- Desired launch date
- Roll-out plan

For existing accounts only
- Length of existence
- Have you completed a Privacy Impact Assessment (PIA)?
- Are you currently covered under DOE's amended terms of service?
- What is your current records process?
The Digital Office in the Office of Public Affairs will assess and respond to requests within a reasonable time period. The Digital Office approves accounts and will assist as needed with implementation and compliance. Accounts that consistently fail to meet the best practices outlined in this document are subject to review by the Digital Office, which will work with supervisors in that program office or lab to determine appropriate next steps. You can also use the online form to request that the Department pursue a terms of service agreement with a social media tool or application that is offered by apps.gov but not currently part of our portfolio. Should you determine that you would like to forgo the account creation process and simply have your content featured as part of the larger enterprise presence, you can contact the Digital Office to discuss options for assisting with outreach and amplifying your message.

From the Privacy Act of 1974 to the Office of Management and Budget policies on third-party sites and multi-session cookies, Federal agencies have specific requirements regarding privacy and Personally Identifiable Information (PII). These policies require the Department to file Privacy Impact Assessments (PIAs) in order to utilize social media platforms like Facebook or UserVoice or Twitter for official business. The Digital Office in the Office of Public Affairs has filed several PIAs for the Department as a whole in order to empower others to take advantage of these communication tools. They include the following:

- Google Analytics

Personnel seeking to verify existing social media presences or establish new ones on the platforms above must consult the existing PIA for that platform to make sure that presence is compliant. If you're interested in using a social media platform that's not on this list or have questions about any of the PIAs above, reach out to the Digital Office for assistance.
And if you have questions about federal privacy requirements, contact the privacy officer assigned to your office.

The Freedom of Information Act (FOIA), 5 U.S.C. 552, provides a right of access to federal agency records, including any information created or maintained by the Department. Voluntary disclosure of information through a social media platform outside the federal government may waive the application of statutory privileges under federal law and compromise the Department's ability to withhold such information in the future. If you are concerned about making information publicly available through social media or have any questions regarding federal information law, contact the Office of General Counsel or the Office of Public Affairs.

Comment Policy and Moderation

The Department of Energy respects different opinions and hopes to foster conversation within our online presences. To that end, the Department does not pre-moderate users' comments on our enterprise accounts. This means that users' comments are automatically published, but they may be removed by a Department of Energy official if they violate our commenting policy. Comments may be removed from Department of Energy blogs or social media accounts if they:

- Contain obscene, indecent, or profane language;
- Contain threats or defamatory statements;
- Contain hate speech directed at race, color, sex, sexual orientation, national origin, ethnicity, age, religion, or disability;
- Contain sensitive or personally identifiable information; and/or
- Promote or endorse specific commercial services or products.

All Department of Energy generated content is subject to National Archives and Records Administration (NARA) requirements for retention, storage and publication. Federal records management policies regarding social media are still evolving. The CIO has issued interim guidance for the Department of Energy regarding the management of social media records.
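The comment policy above describes post-moderation: comments publish automatically and are removed only if an official finds they violate one of the listed criteria. The sketch below illustrates that flow in miniature. It is a hypothetical toy, not DOE tooling: the term lists are invented placeholders, and in practice the judgment calls (defamation, hate speech, PII) are made by a human reviewer, not keyword matching.

```python
# Hypothetical sketch of post-moderation. Comments are published first;
# a removal pass then drops those flagged against the policy categories.
# Term lists are invented placeholders for illustration only.

POLICY_TERMS = {
    "profanity": {"profane-word"},            # obscene/indecent/profane language
    "commercial": {"buy now", "discount"},    # endorsing products/services
}

def violates_policy(comment: str) -> bool:
    """Flag a comment if it matches any placeholder policy term."""
    text = comment.lower()
    return any(term in text for terms in POLICY_TERMS.values() for term in terms)

def moderate(comments):
    """Return the comments that remain after policy-violating ones are removed."""
    return [c for c in comments if not violates_policy(c)]

print(moderate(["Great initiative!", "Buy NOW at a discount!"]))
# ['Great initiative!']
```

The design mirrors the stated policy: nothing is blocked before publication, and removal is tied to explicit, enumerable criteria rather than an official's general discretion.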
We can expect additional updates to these policies as our work continues to evolve in the social media sphere. For specific questions regarding records management, contact the records management officer assigned to your office.

Access to and Use of Social Media

The Department of Energy encourages the responsible use of social media consistent with current laws, policies and guidance that govern information and information technology. Department organizations will not arbitrarily ban access to or the use of social media. Department of Energy personnel are encouraged to access and contribute content on social media sites in their official capacity. However, personnel should obtain supervisory approval prior to creating or contributing significant content to external social media sites or to engaging in recurring exchanges with the public.

Employees are subject to the applicable Standards of Conduct for Employees of the Executive Branch (5 C.F.R. Part 2635) and the Hatch Act (5 U.S.C. 7321-7326), which governs partisan political activity of Executive Branch employees. Personnel are encouraged to review the Office of Special Counsel's "Frequently Asked Questions Regarding Social Media and the Hatch Act" for further guidance or contact the Office of the Assistant General Counsel for General Law (GC-77). Non-public, sensitive, Personally Identifiable Information (PII) and classified information should not be disclosed on public social media platforms. Personal use of social media while on government time is subject to DOE Order 203.1, Limited Personal Use of Office Equipment Including Information Technology, which provides guidance on appropriate and inappropriate use of Government resources. If you have questions about this section, please contact GC-77.
Security Requirements and Risk Management

The Federal CIO Council's Guidelines for Secure Use of Social Media by Federal Departments and Agencies outlines recommendations for using social media technologies in a manner that minimizes risk while also embracing the opportunities these technologies provide. Federal Government information systems are targeted by persistent, pervasive, aggressive threats. In order to defend against rapidly evolving social media threats, Department of Energy program offices, laboratories and sites should adopt a defense-in-depth, multi-layered risk management approach, addressing risks to the user, risks to the Department and risks to the federal infrastructure. Organizations should incorporate risk mitigation strategies such as (1) controlled access to social media, (2) user awareness and training, (3) user rules of behavior, (4) host and/or network controls and (5) secure configuration of social media software, and should determine their overall risk tolerance for use of social media technologies. Cyber Security personnel should be consulted before the implementation of any social media technology, to allow the new technology to be incorporated into the current risk management framework. In addition, Cyber Security should help determine secure technical configurations and monitor published vulnerabilities in social media software. For questions regarding cyber security, contact your security officer.
In the event of an Emergency, social media tools should be utilized in accordance with the forthcoming Emergency Public Affairs Plan, which calls for a coordinated messaging effort between the Headquarters Office of Public Affairs and any programs, sites or facilities that may be involved: "When Department of Energy headquarters or a DOE site/facility declares an emergency, it is expected to meet the public information obligations of the Department of Energy Orders, guidance and requirements and the comprehensive emergency management plans developed by each site. This guidance and requirement includes the timely provision of media informational materials to the Public Affairs staff at Department headquarters. Every effort should be made by the designated public affairs officers at the site level to consult with the Headquarters Public Affairs Office on the initial dissemination of information to the public and media. From the DOE O 151.1C "Comprehensive Emergency Management System": "Initial news releases or public statements must be approved by the Cognizant Field Element official responsible for emergency public information review and dissemination. Following initial news releases and public statements, updates must be coordinated with the DOE/NNSA (as appropriate) Director of Public Affairs and the Headquarters Emergency Manager." For more information on Emergency communication protocols, reference the Emergency Public Affairs Plan or contact your public affairs representative.
Global climate change presents challenges associated with balancing potential environmental impacts with a wide variety of economic, technical, and lifestyle changes that may be necessary to address the issue. A government-industry task force is working to develop technologies and infrastructure for carbon capture and sequestration with the goal of reducing greenhouse gas (GHG) emissions that can contribute to global climate change.

US Outer Continental Shelf oil and gas development opponents complained that the Department of the Interior and Minerals Management Service’s preliminary final 5-year OCS plan goes too far, while proponents declared that it doesn’t go far enough.

Newark East field in North Texas, center of the Mississippian Barnett shale play, was Texas’s largest gas-producing field in 2006 and could become the largest in terms of ultimate recovery in the Lower 48.

A recent study of the European refining industry from Concawe (Conservation of Clean Air and Water in Europe) concludes that the imbalance between demand for gasoline and middle distillates will continue to increase.

Changes in the vertical relative position of two liquids pipelines laid in the same trench (one crude, one products) produce only small changes in the temperature of the crude oil, allowing this approach to be used as a viable alternative to dual trenching.
"Almanac"—the word comes from the Arabic al-manakh, meaning "the calendar," earlier "the weather," deriving ultimately from ma-, "a place," and nakha, "to kneel," or a place where camels kneel, a seasonal stopping place, a camp or settlement. Coming as it does from a nomadic human society, it is a fitting word as we talk about our bird life, and their travels and destinations, all as they are influenced by the season of the year. In order to understand the vital interplay of time and space as they determine which birds they'll bring us, let us first set aside time to deal with space. For birds, Ohio's longitude has less to do with time, except as it determines the diurnal rhythms of night and day, and as it figured eons ago in the shifting of continents, where our present longitude marks our place between mountain ranges, at the edge of the feathering-out of the great prairies and the great forests, and consequently midway between the great north- and southbound rivers of birds in the Mississippi and Atlantic flyways. Our latitude, by contrast, is all about time for birds—their seasonal movements north and south, their life cycles along the way, the timing of migrations and even vagrancy, the changing length of daylight and the intensity of Earth's magnetic fields, even their habitats as developed in the topography of our land as formed by mile-high glaciers moving latitudinally thousands of years ago, forming our plains and hills, Lake Erie, and the Ohio River. Survival for birds means successful breeding, and for this success timing is everything. For migrants, early arrival at the breeding grounds is balanced against the risk of arriving too soon to find adequate food; attempting a second brood must be balanced by the risk of an early reduction in food sources. 
The phenology of predators, frosts, food sources, leafing of local plants, rain cycles, etc., all affect breeding success, and the species we see have successfully adapted to these influences to remain with us today. Humans have recently (here, over the past two hundred years) radically altered some of these influences, upsetting delicate balances, and our bird life is changing as a result. We have removed some predators, and encouraged the proliferation of others. We have apparently caused climatic warming, with earlier springs and later winters. We have introduced exotic animals and plants. We have bulldozed and burned and filled in and poisoned bird habitats. We allow birds to be killed in great numbers, but not, we reassure ourselves, in numbers too great to diminish them. Our effect on the life cycles of birds is dramatic, ongoing, and uncertain as to ultimate outcome. Reassuringly, it is still possible to discern primeval patterns of birds' natural life cycles throughout the year. Birders find the continuation of these cycles deeply satisfying as a continuous manifestation of the renewal of life, and a way to measure and better understand, during our short span, the passage of time. Fifty or more species of our birds remain pretty much equally abundant year-round, present in good numbers in every month. Many are the most familiar of our familiar birds, but even to them the calendar brings profound changes. The crows, robins, blue jays, and song sparrows we see year-round are not always the same birds, as these are at least in part migratory species, with different cohorts inhabiting different places at different times of year. Their behavior, too, may change radically over the calendar year: robins that are solitary worm-eaters in summer will flock in winter to eat fruit. The breeding cycle, with all its changes over time, governs all—migrating, singing, incubating, fledging, flocking, molting.
Many species have expanded their ranges over recent time—mockingbirds, titmice, cardinals, house finches—and many once-common birds have receded beyond Ohio's borders: prairie-chickens, Bachman's sparrows, and Bewick's wrens are no longer to be found here. Time has claimed some of our birds forever—the passenger pigeon, the Eskimo curlew, the Carolina parakeet—but there is time to save the rest.
1933 Unemployment Relief New! Search the database of more than 100,000 individuals listed in the Unemployment Relief records. There are 27 Oklahoma counties included. Search now » 1940 US Census The 1940 US Federal Census records for Oklahoma have now been indexed. Search and view census records online now at familysearch.org/1940census/1940-census-oklahoma/ 1890 Oklahoma Territorial Census The OHS Research Center has completed the index to the 1890 Oklahoma Territorial Census. While the previous index listed only the head of household, this index includes every individual included in the census. Most of the 1890 US Federal Census was destroyed by fire in 1921, making the 1890 Oklahoma Territorial Census one of the few remaining census records from the time. The Oklahoma Historical Society Research Division collections include the original 1890 OT Census pages. Search the index » Own the Complete 1890 Oklahoma Territorial Census Now you can access the 1890 Oklahoma Territorial census in its entirety as part of 1890 Resources, a newly released DVD from the OHS Research Center. This easy-to-use disc includes: - A complete index to the 1890 OT census and more than 1,200 color pages of the census, scanned from the original documents. Just locate your ancestor in the index and click on the page number to see the original document. View a sample census page. - Smith's First Directory of Oklahoma Territory for the Year Commencing August 1, 1890, complete with index/name-finding list linked to color scans of the entire directory. View a sample page from Smith's. - A PDF of Bunky's The First Eight Months of Oklahoma City. Beginning with the land run of 1889, this publication explores area businesses, churches, newspapers, politics and citizens. This resource is now available for $45 plus $2 shipping & handling. To order use our printable order form or call (405) 522-5225 - please have your credit card ready.
Special Census on Microfilm at OHS - 1890 Oklahoma Territorial Census - 1860 Lands West of Arkansas - 1890 Union Veterans & Widows Census - 1900 US Census - Oklahoma Territory - 1900 US Census - Indian Schedule - Various Mortality Schedules - Additional special censuses for numerous states Online Subscription Services The Research Center offers free access to Ancestry Library Edition® and HeritageQuest Online™. These sites allow patrons visiting the Research Center to search, view and print various items pertaining to genealogy. Ancestry Library® offers US Census, ship logs and passenger indexes, WWI draft registration cards, vital records, and the Social Security Death Index. HeritageQuest™ also includes US Census as well as Revolutionary War pension & bounty-land warrant applications; the Freedman's Bank (1856-1874); and PERSI (Periodical Source Index), an index of almost 2 million genealogical and local history articles.
Dear OncoLink "Ask The Experts," Carolyn Vachani RN, MSN, AOCN, OncoLink's Medical Correspondent, responds: Magnification colonoscopy uses fiberoptic technology to magnify the view of the colon to about 75 to 100 times its normal size. As a point of comparison, standard colonoscopy uses 45-fold magnification. This test can be particularly helpful to diagnose "flat adenomas" (cancers that do not form as a polyp) or dysplasia (abnormal-appearing tissue). During the colonoscopy, the physician sprays a dye into the colon which highlights areas of dysplasia, based on the shape and appearance of the colon (called the "pit pattern") and the uptake of dye. Magnification is needed to help the physician more fully visualize any areas of dye deposition. This procedure is used to screen for dysplasia and/or cancer in patients who are at high risk for colon cancer. High-risk patients typically have chronic colon inflammation (e.g., inflammatory bowel disease such as Crohn's disease or ulcerative colitis, or primary sclerosing cholangitis). Magnification colonoscopy is not yet used for general polyps. The test is currently only available at a limited number of medical centers. The physician performing the test must be trained to recognize the "pit patterns" that signify dysplasia. If this sounds like a test for you, I would try the Gastroenterology department at large academic centers in your area.
A safe place to play If you ask your child what he likes most about school, the answer you are likely to get is, “Recess!” It is important for kids to be active, get some fresh air, and release their pent up energy during and after the school day, and playgrounds are a great place to do so. However, faulty equipment, unsafe surfaces, and lack of appropriate supervision can result in injury. Each year, more than 200,000 children are treated in hospital emergency rooms for playground-related injuries. Schools are addressing this by developing rules for safe outdoor play on and off the playground. There are also a few things that you should keep in mind and convey to any other caregivers of your child about play on and around the playground. Tips for injury-free outdoor fun - Know the school rules. Depending on the amount of outdoor space, the size of the student body, and staff limitations, your child’s school may limit the games students can play on the playground. Games like tag and unsupervised sports such as dodgeball are increasingly being banned due to injuries. Find out what your school’s playground rules are and explain them to your child. If your child wants to take a ball, jump rope, or other equipment to share with friends, be sure to check with the school first. - Find out about supervision. Adequate supervision is the best way to reduce the number of injuries on the playground. The National Program for Playground Safety advises that children be supervised when playing on playground structures, whether these are located in your home, in the community, or at school. Adults in charge should be able to direct children to use playground equipment properly and respond to emergencies appropriately. Make sure your child is supervised on the playground at all times, at and outside of school. - Know what is age appropriate. The Consumer Product Safety Commission requires that playground equipment be separated for 2-5 year-olds and 5-12 year-olds. 
It is recommended that children be further separated according to age group: Pre K, grades K-2, grades 3-4, and grades 5-6. Most schools separate outdoor play times by grades. If you take your child to the playground, make sure he is playing on equipment that he is able to use comfortably. Encourage your child to use equipment appropriately and to take turns. Beware of clothing that could get caught or that your child could trip over, such as untied shoelaces, hoods, or drawstrings. - Keep an eye on the equipment. Before you let your child play on playground structures, check the equipment and its surrounding area to make sure that it is safe. Check the structure to make sure it is not damaged or broken. Look out for any objects that can cause injuries, such as broken glass, rocks, animal feces, or other debris. According to the National Program for Playground Safety, the surface of a play structure should be made of loose or soft materials that will cushion a fall, such as wood chips or rubber. - Know how to respond. Even a fall of one foot can cause a broken bone or concussion. If your child is injured while playing on the playground, check him carefully for bruises. If you are not sure of the extent of your child’s injury, take him to the pediatrician or the emergency room. If you think your child may have a head or neck injury or if he appears to have a broken bone and you are afraid to move him, call for help. For a playground safety checklist, visit the Consumer Product Safety Commission at http://www.cpsc.gov This information was compiled by Sunindia Bhalla, and reviewed by the Program Staff of the Massachusetts Children’s Trust Fund.
Online Physiology Degree

An online physiology degree is a degree in physiology offered by online universities located in different parts of the world. With this kind of degree, you can remain in a remote corner of the world and take lessons from a university located somewhere else entirely. These degrees are very much in demand and are recognized by organizations around the world.

What is Physiology?

The term physiology refers to the study of the mechanical, physical, and biochemical functions of living organisms. In other words, physiology covers the physical, mechanical, and biochemical processes that take place in the body of any living organism. Traditionally, physiology is divided into two broader parts: plant physiology and animal physiology. Physiology degrees are much sought after by students interested in the life sciences. The principles studied in physiology are universal, however, irrespective of any particular organism. Human physiology is an important part of the study of animal physiology, too. Several major branches have grown out of physiology and can now be studied individually, including biochemistry, paleobiology, biomechanics, pharmacology and biophysics. In an online physiology degree program you may also have the chance to study a few of these branches.

Who is eligible to study for an Online Physiology Degree?

An online physiology degree is also beneficial to working professionals who find it hard to devote a set amount of time every day, or even once or twice a week, to part-time courses. An online physiology degree lets them study at night or even between working hours. With the course material available online, an online physiology degree gives them the option of flexible timing for their studies.
Thus online physiology degree courses are very beneficial to them.

Why choose an Online Physiology Degree?

Choosing an online physiology degree can be a wise decision for a busy professional as well as for any modern-day student. The course material used in an online physiology degree is designed with a global approach in mind, so the degrees are usually recognized across the globe. Online physiology degree courses also offer you the flexibility to choose your own time and pace of study. Thus an online physiology degree course has an edge over the conventional degrees that are available. To know more about online science degrees, keep surfing the links of ONLINEDEGREESHUB.
Analogue Tachographs: A Brief History Note: Since May 2006, analogue tachographs are being phased out in favour of digital versions which record data on a smart card. Find out more about Digital Tachographs. A tachograph displays vehicle speed and makes a record of all speeds during an entire trip. The name ‘tachograph’ comes from the graphical recording of speed. Analogue units record the driver’s periods of duty on a waxed paper disc – a tachograph chart. An ink pen records the speed on circular graph paper that automatically advances according to the internal clock of the tachograph. This graph paper is removed on a regular basis and maintained by the fleet owner for government records. In the 1950s, there was an increasing number of road accidents attributed to sleep-deprived and tired truck drivers. Concerns for safety led to the rapid spread of the tachograph in the commercial vehicle market, but at this point it was voluntary and not legislated. Fleet operators then found that tachographs helped them to monitor driver hours more reliably, and safety also improved. In Europe, use of tachographs has been compulsory for all trucks over 3.5 tonnes since 1970. For safety reasons, most countries also have limits on the working hours of drivers of commercial vehicles. Tachographs are used to monitor drivers’ working hours and ensure that appropriate breaks are taken. Legislation relating to tachographs has been in force in the UK for 16 years. The tachograph is now an indispensable tool for managing fleets and ensuring the safety of drivers of commercial vehicles. Find out more about Digital Tachographs.
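The working-hours monitoring described above can be sketched in code. The example below is purely illustrative: it assumes a simplified log of (minute, speed) samples rather than any real tachograph chart or file format, and the 4.5-hour continuous-driving limit and 45-minute break used as thresholds are the commonly cited EU figures, included here only as example values.

```python
# Illustrative sketch: totalling continuous driving time from a
# simplified tachograph-style log of (minutes_since_start, speed_kmh).
# The thresholds below are example values, not a legal reference.

DRIVING_LIMIT_MIN = 270  # assumed 4.5 h continuous-driving limit
BREAK_MIN = 45           # assumed minimum qualifying break

def longest_continuous_driving(samples):
    """Return the longest uninterrupted driving stretch, in minutes.

    Only a stop of at least BREAK_MIN minutes resets the counter;
    shorter pauses do not count as a break.
    """
    longest = current = 0
    rest_start = None
    prev_t = None
    for t, speed in samples:
        if prev_t is not None:
            if speed > 0:
                rest_start = None      # pause was too short to qualify
                current += t - prev_t  # accumulate driving minutes
            else:
                if rest_start is None:
                    rest_start = t
                if t - rest_start >= BREAK_MIN:
                    current = 0        # full break: counter resets
        longest = max(longest, current)
        prev_t = t
    return longest

# 300 one-minute samples with only a 10-minute pause at t=100:
log = [(t, 0 if 100 <= t < 110 else 80) for t in range(300)]
minutes = longest_continuous_driving(log)
print(minutes, minutes > DRIVING_LIMIT_MIN)  # → 289 True
```

A digital tachograph applies the same idea continuously, so a driver can be warned before the limit is reached rather than after the chart is read back.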
LESSON ONE: Transforming Everyday Objects Marcel Duchamp: Bicycle Wheel, bicycle wheel on wooden stool, 1963 (Henley-on-Thames, Richard Hamilton Collection); © 2007 Artists Rights Society (ARS), New York/ADAGP, Paris, photo credit: Cameraphoto/Art Resource, NY Man Ray: Rayograph, gelatin silver print, 29.4×23.2 cm, 1923 (New York, Museum of Modern Art); © 2007 Man Ray Trust/Artists Rights Society (ARS), New York/ADAGP, Paris, photo © The Museum of Modern Art, New York Meret Oppenheim: Object (Le Déjeuner en fourrure), fur-lined cup, diam. 109 mm, saucer, diam. 237 mm, spoon, l. 202 mm, overall, h. 73 mm, 1936 (New York, Museum of Modern Art); © 2007 Artists Rights Society (ARS), New York/ProLitteris, Zurich, photo © Museum of Modern Art/Licensed by SCALA/Art Resource, NY Dada and Surrealist artists questioned long-held assumptions about what a work of art should be about and how it should be made. Rather than creating every element of their artworks, they boldly selected everyday, manufactured objects and either modified and combined them with other items or simply selected them and called them “art.” In this lesson students will consider their own criteria for something to be called a work of art, and then explore three works of art that may challenge their definitions. Students will consider their own definitions of art. Students will consider how Dada and Surrealist artists challenged conventional ideas of art. Students will be introduced to Readymades and photograms. Ask your students to take a moment to think about what makes something a work of art. Does art have to be seen in a specific place? Where does one encounter art? What is art supposed to accomplish? Who is it for? Ask your students to create an individual list of their criteria. Then, divide your students into small groups to discuss and debate the results and come up with a final list.
Finally, ask each group to share with the class what they think is the most important criterion and what is the most contested criterion for something to be called a work of art. Write these on the chalkboard for the class to review and discuss. Show your students the image of Bicycle Wheel. Ask your students if Marcel Duchamp’s sculpture fulfills any of their criteria for something to be called a work of art. Ask them to support their observations with visual evidence. Inform your students that Duchamp made this work by fastening a bicycle wheel to a kitchen stool. Ask your students to consider the fact that Duchamp rendered these two functional objects unusable. Make certain that your students notice that there is no tire on the bicycle wheel. To challenge accepted notions of art, Duchamp selected mass-produced, often functional objects from everyday life for his artworks, which he called Readymades. He did this to shift viewers’ engagement with a work of art from what he called the “retinal” (there to please the eye) to the “intellectual” (“in the service of the mind.”) [H. H. Arnason and Marla F. Prather, History of Modern Art: Painting, Sculpture, Architecture, Photography (Fourth Edition) (New York: Harry N. Abrams, Inc., 1998), 274.] By doing so, Duchamp subverted the traditional notion that beauty is a defining characteristic of art. Inform your students that Bicycle Wheel is the third version of this work. The first, now lost, was made in 1913, almost forty years earlier. Because the materials Duchamp selected to be Readymades were mass-produced, he did not consider any Readymade to be “original.” Ask your students to revisit their list of criteria for something to be called a work of art. Ask them to list criteria related specifically to the visual aspects of a work of art (such as “beauty” or realistic rendering).
Duchamp said of Bicycle Wheel, “In 1913 I had the happy idea to fasten a bicycle wheel to a kitchen stool and watch it turn.” [John Elderfield, ed., Studies in Modern Art 2: Essays on Assemblage (New York: The Museum of Modern Art, 1992), 135.] Bicycle Wheel is a kinetic sculpture that depends on motion for effect. Although Duchamp selected items for his Readymades without regard to their so-called beauty, he said, “To see that wheel turning was very soothing, very comforting . . . I enjoyed looking at it, just as I enjoy looking at the flames dancing in a fireplace.” [Francis M. Naumann, The Mary and William Sisler Collection (New York: The Museum of Modern Art, 1984), 160.] By encouraging viewers to spin Bicycle Wheel, Duchamp challenged the common expectation that works of art should not be touched. Show your students Rayograph. Ask your students to name recognizable shapes in this work. Ask them to support their findings with visual evidence. How do they think this image was made? Inform your students that Rayograph was made by Man Ray, an American artist who was well-known for his portrait and fashion photography. Man Ray transformed everyday objects into mysterious images by placing them on photographic paper, exposing them to light, and oftentimes repeating this process with additional objects and exposures. When photographic paper is developed in chemicals, the areas blocked from light by objects placed on the paper earlier on will remain light, and the areas exposed to light will turn black. Man Ray discovered the technique of making photograms by chance, when he placed some objects in his darkroom on light-sensitive paper and accidentally exposed them to light. He liked the resulting images and experimented with the process for years to come. He likened the technique, now known as the photogram, to “painting with light,” calling the images rayographs, after his assumed name.
Now that your students have identified some recognizable objects used to make Rayograph, ask them to consider which of those objects might have been translucent and which might have been opaque, based on the tone of the shapes in the photogram. Now show your students Meret Oppenheim’s sculpture Object (Déjeuner en fourrure). Both Rayograph and Object were made using everyday objects and materials not traditionally used for making art, which, when combined, challenge ideas of reality in unexpected ways. Ask your students what those everyday objects are and how they have been transformed by the artists. Ask your students to name some traditional uses for the individual materials (cup, spoon, saucer, fur) used to make Object. Ask your students what choices they think Oppenheim made to transform these materials and objects. In 1936, the Swiss artist Oppenheim was at a café in Paris with her friends Pablo Picasso and Dora Maar. Oppenheim was wearing a bracelet she had made from fur-lined, polished metal tubing. Picasso joked that one could cover anything with fur, to which Oppenheim replied, “Even this cup and saucer.” [Bice Curiger, Meret Oppenheim: Defiance in the Face of Freedom (Zurich, Frankfurt, New York: PARKETT Publishers Inc., 1989), 39.] Her tea was getting cold, and she reportedly called out, “Waiter, a little more fur!” Soon after, when asked to participate in a Surrealist exhibition, she bought a cup, saucer, and spoon at a department store and lined them with the fur of a Chinese gazelle. [Josephine Withers, “The Famous Fur-Lined Teacup and the Anonymous Meret Oppenheim” (New York: Arts Magazine, Vol. 52, November 1977), 88-93.] Duchamp, Oppenheim, and Man Ray transformed everyday objects into Readymades, Surrealist objects, and photograms. Ask your students to review the images of the three artworks in this lesson and discuss the similarities and differences between these artists’ transformation of everyday objects.
Art and Controversy At the time they were made, works of art like Duchamp’s Bicycle Wheel and Oppenheim’s Object were controversial. Critics called Duchamp’s Readymades immoral and vulgar—even plagiaristic. Overwhelmed by the publicity Object received, Oppenheim sank into a twenty-year depression that greatly inhibited her creative production. Ask your students to conduct research on a work of art that has recently been met with controversy. Each student should find at least two articles that critique the work of art. Have your students write a one-page summary of the issues addressed in these articles. Students should consider how and why the work challenged and upset critics. Was the controversial reception related to the representation, the medium, the scale, the cost, or the location of the work? After completing the assignment, ask your students to share their findings with the class. Keep a list of shared critiques among the work’s various receptions. Make a Photogram If your school has a darkroom, have your students make photograms. Each student should collect several small objects from school, home, and the outside to place on photographic paper. Their collection should include a range of translucent and opaque objects to allow different levels of light to shine through. Students may want to overlap objects or use their hands to cover parts of the light-sensitive paper. Once the objects are arranged on the paper in a darkroom, have your students expose the paper to light for several seconds (probably about five to ten seconds, depending on the level of light) then develop, fix, rinse, and dry the paper. Allow for a few sheets of photographic paper per student so that they can experiment with different arrangements and exposures. After the photograms are complete, have your students discuss the different results that they achieved.
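The exposure logic at work here can also be mimicked with a toy simulation. This is a hypothetical illustration, not real photographic chemistry: a small grid of "paper" cells where exposed cells turn dark, cells under an opaque object stay white, and a translucent object yields a mid-tone. The function names and the 0.0-1.0 tone scale are invented for the sketch.

```python
# Toy photogram: tone 0.0 = unexposed white, 1.0 = fully exposed black.
# `objects` maps (row, col) cells to an opacity between 0.0 and 1.0.

def expose(width, height, objects):
    """Simulate one exposure of a blank sheet under the given objects."""
    paper = [[1.0] * width for _ in range(height)]  # light hits everything
    for (row, col), opacity in objects.items():
        paper[row][col] = 1.0 - opacity  # blocked cells stay lighter
    return paper

def negative(paper):
    """Re-expose through a finished photogram: every tone inverts."""
    return [[1.0 - tone for tone in row] for row in paper]

# A 4x4 sheet with one opaque object cell and one translucent one.
sheet = expose(4, 4, {(0, 0): 1.0, (1, 1): 0.5})
print(sheet[0][0], sheet[1][1], sheet[2][2])  # → 0.0 0.5 1.0
print(negative(sheet)[0][0])                  # → 1.0
```

The inversion in `negative` is the same effect students see when a fresh sheet is exposed through a finished photogram: light and dark areas trade places.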
Students may also make negatives of their photograms by placing them on top of a fresh sheet of photographic paper and covering the two with a sheet of glass. After exposing this to light, they can develop the paper to get the negative of the original photogram. Encourage your students to try FAUXtogram, an activity available on Red Studio, MoMA's Web site for teens. GROVE ART ONLINE: Suggested Reading Below is a list of selected articles which provide more information on the specific topics discussed in this lesson.
Asthma and Exercise What is exercise-induced asthma? Most people diagnosed with asthma will experience asthma symptoms when exercising. In addition, some who are not diagnosed with asthma will experience asthma symptoms, but only during exercise. This is a condition called exercise-induced asthma. Long-distance running may aggravate exercise-induced asthma. Exercise-induced asthma is different from the typical asthma that is triggered by allergens and/or irritants. Some people have both types of asthma, while others only experience exercise-induced asthma. Asthma is a chronic, inflammatory lung disease that leads to three airway problems: obstruction, inflammation, and hyper-responsiveness. Unfortunately, the basic cause of asthma is still not known. How does exercise cause asthma symptoms? When breathing normally, the air that enters the airways is first warmed and moistened by the nasal passages to prevent injury to the delicate lining of the airways. However, for someone with asthma, the airways may be extremely sensitive to allergens, irritants, infection, weather, and/or exercise. When asthma symptoms begin, the airways' muscles constrict and narrow, the lining of the airways begins to swell, and mucus production may increase. When exercising (especially outside in cold weather), the increased breathing in and breathing out through the mouth may cause the airways to dry and cool, which may irritate them and cause the onset of asthma symptoms. In addition, when breathing through the mouth during exercise, a person will inhale more airborne particles, including pollen, which can trigger asthma. What are the symptoms of exercise-induced asthma? Exercise-induced asthma is characterized by asthma symptoms, such as coughing, wheezing, and tightness in the chest within five to 20 minutes after starting to exercise. Exercise-induced asthma can also include symptoms such as unusual fatigue and feeling short of breath while exercising.
However, exercise should not be avoided because of asthma. In fact, exercise is very beneficial to a person with asthma, improving their airway function by strengthening their breathing muscles. Consult your doctor for more information. How can exercise-induced asthma be controlled? Stretching and proper warm-up and cool-down exercises may relieve any chest tightness that occurs with exercising. In addition, breathing through the nose and not the mouth will help warm and humidify the air before it enters the airways, protecting the delicate lining of the airways. Other ways to help prevent an asthma attack due to exercise include the following: Your doctor may prescribe an inhaled asthma medication to use before exercise, which may also be used after exercise if symptoms occur. Avoid exercising in very low temperatures. If exercising during cold weather, wear a scarf over your mouth and nose, so that the air breathed in is warm and easier to inhale. Avoid exercising when pollen or air pollution levels are high (if allergy plays a role in the asthma). If inhaling air through the mouth, keep the mouth pursed (lips forming a small "O" close together), so that the air is less cold and dry when it enters the airways during exercise. Carry an inhaler, just in case of an asthma attack. Wear an allergy mask during pollen season. Avoid exercise when experiencing a viral infection. What sports are recommended for people with asthma? According to the American Academy of Allergy, Asthma, and Immunology, the recommended sport for people with asthma is swimming, due to the warm, humid environment, the toning of the upper muscles, and the horizontal position (which may actually loosen mucus from the bottom of the lungs). Other recommended activities and sports include: Sports that may aggravate exercise-induced asthma symptoms include: However, with proper management and preparation, most people with asthma can participate in any sport.
<urn:uuid:8ae56ef0-e686-40bc-be33-23f48b727742>
CC-MAIN-2013-20
http://www.palomarhealth.org/ContentPage.aspx?nd=18&parm1=P00016&parm2=85&doc=true
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.951047
798
3.328125
3
What is endometriosis? Endometriosis (say "en-doh-mee-tree-OH-sus") is a problem many women have during their childbearing years. It means that a type of tissue that lines your uterus is also growing outside your uterus. This does not always cause symptoms. And it usually isn't dangerous. But it can cause pain and other problems. The clumps of tissue that grow outside your uterus are called implants. They usually grow on the ovaries, the fallopian tubes, the outer wall of the uterus, the intestines, or other organs in the belly. In rare cases they spread to areas beyond the belly. How does endometriosis cause problems? Your uterus is lined with a type of tissue called endometrium (say "en-doh-MEE-tree-um"). Each month, your body releases hormones that cause the endometrium to thicken and get ready for an egg. If you get pregnant, the fertilized egg attaches to the endometrium and starts to grow. If you do not get pregnant, the endometrium breaks down, and your body sheds it as blood. This is your menstrual period. When you have endometriosis, the implants of tissue outside your uterus act just like the tissue lining your uterus. During your menstrual cycle, they get thicker, then break down and bleed. But the implants are outside your uterus, so the blood cannot flow out of your body. The implants can get irritated and painful. Sometimes they form scar tissue or fluid-filled sacs (cysts). Scar tissue may make it hard to get pregnant. What causes endometriosis? Experts don't know what causes endometrial tissue to grow outside your uterus. But they do know that the female hormone estrogen makes the problem worse. Women have high levels of estrogen during their childbearing years. It is during these years—usually from their teens into their 40s—that women have endometriosis. Estrogen levels drop when menstrual periods stop (menopause). Symptoms usually go away then. What are the symptoms?
The most common symptoms are: - Pain. Where it hurts depends on where the implants are growing. You may have pain in your lower belly, your rectum or vagina, or your lower back. You may have pain only before and during your periods or all the time. Some women have more pain during sex, when they have a bowel movement, or when their ovaries release an egg (ovulation). - Abnormal bleeding. Some women have heavy periods, spotting or bleeding between periods, bleeding after sex, or blood in their urine or stool. - Trouble getting pregnant (infertility). This is the only symptom some women have. Endometriosis varies from woman to woman. Some women don't know that they have it until they go to see a doctor because they can't get pregnant or have a procedure for another problem. Some have mild cramping that they think is normal for them. In other women, the pain and bleeding are so bad that they aren't able to work or go to school. How is endometriosis diagnosed? Many different problems can cause painful or heavy periods. To find out if you have endometriosis, your doctor will: - Ask questions about your symptoms, your periods, your past health, and your family history. Endometriosis sometimes runs in families. - Do a pelvic exam. This may include checking both your vagina and rectum. If it seems like you have endometriosis, your doctor may suggest that you try medicine for a few months. If you get better using medicine, you probably have endometriosis. To find out if you have a cyst on an ovary, you might have an imaging test like an ultrasound, an MRI, or a CT scan. These tests show pictures of what is inside your belly. The only way to be sure you have endometriosis is to have a type of surgery called laparoscopy (say "lap-uh-ROSS-kuh-pee").
During this surgery, the doctor puts a thin, lighted tube through a small cut in your belly. This lets the doctor see what is inside your belly. If the doctor finds implants, scar tissue, or cysts, he or she can remove them during the same surgery. How is it treated? There is no cure for endometriosis, but there are good treatments. You may need to try several treatments to find what works best for you. With any treatment, there is a chance that your symptoms could come back. Treatment choices depend on whether you want to control pain or you want to get pregnant. For pain and bleeding, you can try medicines or surgery. If you want to get pregnant, you may need surgery to remove the implants. Treatments for endometriosis include: - Over-the-counter pain medicines like ibuprofen (such as Advil or Motrin) or naproxen (such as Aleve). These medicines are called anti-inflammatory drugs, or NSAIDs. They can reduce bleeding and pain. - Birth control pills. They are the best treatment to control pain and shrink implants. Most women can use them safely for years. But you cannot use them if you want to get pregnant. - Hormone therapy. This stops your periods and shrinks implants. But it can cause side effects, and pain may come back after treatment ends. Like birth control pills, hormone therapy will keep you from getting pregnant. - Laparoscopy to remove implants and scar tissue. This may reduce pain, and it may also help you get pregnant. As a last resort for severe pain, some women have their uterus and ovaries removed (hysterectomy and oophorectomy). If you have your ovaries taken out, your estrogen level will drop and your symptoms will probably go away. But you may have symptoms of menopause, and you will not be able to get pregnant. If you are getting close to menopause, you may want to try to manage your symptoms with medicines rather than surgery.
Endometriosis usually stops causing problems when you stop having periods. By: Healthwise Staff. Last Revised: July 7, 2011. Medical Review: Adam Husney, MD - Family Medicine; Kirtly Jones, MD - Obstetrics and Gynecology.
<urn:uuid:930954bd-4123-46c8-949a-9554f717d332>
CC-MAIN-2013-20
http://www.pamf.org/healtheducation/healthinfo/index.cfm?A=C&hwid=hw102998
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.936317
1,419
3.4375
3
What is pancreatitis? Pancreatitis is inflammation of the pancreas, an organ in your belly that makes the hormones insulin and glucagon. These two hormones control how your body uses the sugar found in the food you eat. Your pancreas also makes other hormones and enzymes that help you break down food. Usually the digestive enzymes stay in one part of the pancreas. But if these enzymes leak into other parts of the pancreas, they can irritate it and cause pain and swelling. This may happen suddenly or over many years. Over time, it can damage and scar the pancreas. What causes pancreatitis? Most cases are caused by gallstones or alcohol abuse. The disease can also be caused by an injury, an infection, or certain medicines. Long-term, or chronic, pancreatitis may occur after one attack. But it can also happen over many years. In Western countries, alcohol abuse causes most chronic cases. In some cases doctors don't know what caused the disease. What are the symptoms? The main symptom of pancreatitis is medium to severe pain in the upper belly. Pain may also spread to your back. Some people have other symptoms too, such as nausea, vomiting, a fever, and sweating. How is pancreatitis diagnosed? Your doctor will do a physical exam and ask you questions about your symptoms and past health. You may also have blood tests to see if your levels of certain enzymes are higher than normal. This can mean that you have pancreatitis. Your doctor may also want you to have a complete blood count (CBC), a liver test, or a stool test. Other tests include an MRI, a CT scan, or an ultrasound of your belly (abdominal ultrasound) to look for gallstones. A test called endoscopic retrograde cholangiopancreatography, or ERCP, may help your doctor see if you have chronic pancreatitis.
During this test, the doctor can also remove gallstones that are stuck in the bile duct. How is it treated? Most attacks of pancreatitis need treatment in the hospital. Your doctor will give you pain medicine and fluids through a vein (IV) until the pain and swelling go away. Fluids and air can build up in your stomach when there are problems with your pancreas. This buildup can cause severe vomiting. If buildup occurs, your doctor may place a tube through your nose and into your stomach to remove the extra fluids and air. This will help make the pancreas less active and swollen. Although most people get well after an attack of pancreatitis, problems can occur. Problems may include cysts, infection, or death of tissue in the pancreas. You may need surgery to remove your gallbladder or a part of the pancreas that has been damaged. If your pancreas has been severely damaged, you may need to take insulin to help your body control blood sugar. You also may need to take pancreatic enzyme pills to help your body digest fat and protein. If you have chronic pancreatitis, you will need to follow a low-fat diet and stop drinking alcohol. You may also take medicine to manage your pain. Making changes like these may seem hard. But with planning, talking with your doctor, and getting support from family and friends, these changes are possible. By: Healthwise Staff. Last Revised: October 31, 2011. Medical Review: Kathleen Romito, MD - Family Medicine; Peter J. Kahrilas, MD - Gastroenterology.
<urn:uuid:90504251-6342-478a-a075-692aad14e4c2>
CC-MAIN-2013-20
http://www.pamf.org/teen/healthinfo/index.cfm?A=C&hwid=uf4337
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.918222
797
3.15625
3
Gary McConkey from Knightdale, N.C., writes: I often park my car in the sun. When I get back inside, it feels warmer than the outside temperature. Why is that? This is a good example of the “greenhouse effect,” which is essential to life on Earth. Without it, our planet wouldn’t be warm enough for living things to survive. In the case of a car, the sun’s rays enter through the window glass. Some of the heat is absorbed by interior components, such as the dashboard, seats, and carpeting. But the heat they radiate is at a different wavelength from the sunlight that came in through the glass, and the glass doesn’t let as much of that longer-wavelength radiation pass back out. As a result, more energy goes into the car than goes out, and the inside temperature increases.
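The wavelength shift described above can be checked with Wien's displacement law (peak wavelength = b / T, where b ≈ 2.898×10⁻³ m·K). The numbers below are a rough back-of-the-envelope illustration, not figures from the column itself:

```python
# Wien's displacement law: the hotter a body, the shorter its peak
# emission wavelength. Ordinary window glass transmits visible light
# (~0.4-0.7 um) but is largely opaque to far infrared (~5-50 um).
WIEN_B = 2.898e-3  # Wien's displacement constant, in meter-kelvins

def peak_wavelength_um(temp_kelvin: float) -> float:
    """Peak blackbody emission wavelength, in micrometers."""
    return WIEN_B / temp_kelvin * 1e6

# Sunlight (sun's surface ~5778 K) peaks in the visible range, so it
# passes through the glass; a hot dashboard (~330 K, about 135 F)
# re-radiates in the far infrared, which the glass traps.
print(f"Sun: {peak_wavelength_um(5778):.2f} um")       # ~0.50 um
print(f"Dashboard: {peak_wavelength_um(330):.1f} um")  # ~8.8 um
```

Because the dashboard's radiation peaks at roughly 17 times the wavelength of incoming sunlight, the glass acts as a one-way valve for energy.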
<urn:uuid:80c83852-1a5c-4531-886d-34b711c519f9>
CC-MAIN-2013-20
http://www.parade.com/askmarilyn/2012/05/13-sunday-column.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.960281
183
3.34375
3
Work Out With Your Dog - How Animal Agility Training Can Burn Calories For You The University of Massachusetts studied human oxygen consumption during canine agility training. John Ales Vigorous Exercise for Dog and Human Researchers at the University of Massachusetts Department of Kinesiology have studied the impact on humans during canine agility training, and their findings were recently highlighted on Zoom Room Dog Agility Training Center's website. The researchers looked at oxygen consumption (using a face mask and battery-operated, portable metabolic system that measures breath-by-breath gas exchange) as well as heart rate (detected and recorded using a Polar heart rate monitor). The data collected was translated into Metabolic Equivalents, or METs, a way of comparing how much energy a person expends at rest versus during a given activity.
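One MET is defined as roughly 3.5 ml of oxygen per kilogram of body weight per minute, which gives a widely used rule of thumb for converting a MET value into calories. The sketch below applies that generic formula to made-up example numbers; it is not data from the UMass study:

```python
def calories_burned(mets: float, weight_kg: float, minutes: float) -> float:
    """Estimate energy expenditure from a MET value.

    Uses the common approximation:
        kcal per minute = METs * 3.5 * body weight (kg) / 200
    """
    return mets * 3.5 * weight_kg / 200 * minutes

# Hypothetical example: a 70 kg handler working at 4 METs
# (moderate-intensity activity) for a 30-minute agility session.
print(round(calories_burned(4, 70, 30)))  # ~147 kcal
```

The actual MET values measured for agility handlers would come from the researchers' breath-by-breath gas-exchange data; the formula above only converts such a value into an everyday calorie figure.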
<urn:uuid:0297752d-7d90-4cbe-8341-2b2fb2430ffc>
CC-MAIN-2013-20
http://www.pawnation.com/2010/11/09/work-out-with-your-dog-how-animal-agility-training-can-burn-ca/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.95639
163
3.03125
3
Geology and Geography Information about Portage County Wisconsin We will provide as much historical map information as possible about the county. Google Map of Our Museums. How Wisconsin Was Surveyed The methods used to survey land are largely unknown to the general public. But the Wisconsin Public Land Survey Records: Original Field Notes and Plat Maps site offers a complete explanation of this method as well as access to the original field notes and maps compiled by the surveyors. This section has links to our maps as well as external links to free printable map providers. - Map of Portage County (33k). - Map of Wisconsin (285k). - Map of Central Wisconsin (30k). - Map of Townships (7k). A Portage County Plat Book for 1895 has been photographed using a digital camera. The following maps are from "Page-Size Maps of Wisconsin" published by: University of Wisconsin - Extension and Wisconsin Geological and Natural History Survey, 3817 Mineral Point Road, Madison, WI 53705-5100. - Bedrock Geology of Wisconsin (168k). - Ice Age Deposits of Wisconsin (150k). - Early Vegetation of Wisconsin (126k). - Landforms of Wisconsin (109k). - Soil Regions of Wisconsin (180k). The following maps are in pdf format. Maps are available from the Wisconsin Historical Society as well. - British Era fur trading posts 1760-1815. - American Era fur trading posts 1815-1850. - American Forts and Exploration ca 1820. - Military Roads 1815-1862. - Wisconsin counties 1835. - Wisconsin counties 1850. - Wisconsin counties 1870. - Wisconsin counties 1901. - Wisconsin Railroads 1865. - Wisconsin Railroads 1873. - Wisconsin Railroads 1936. - From National Atlas, a government agency, printable maps of all the states and more. - This link, located in France, provides free printable maps covering all countries. The Society will embark on a project beginning in the summer of 2009 and continuing onward to provide county maps with geotag information locating: - Small Communities. - Cemetery Locations.
- Locations of School Houses, one-room and others of historic value. - Catholic Churches. - Lutheran Churches. - Other Churches. - Historic sites within the communities. Portage County Ice Age Trail. Here is a list of all the Historical Markers in the State of Wisconsin.
<urn:uuid:babed0a1-fbec-430c-8a50-1dd2d6f4add6>
CC-MAIN-2013-20
http://www.pchswi.org/archives/geology.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.824916
523
3.015625
3
Hepatitis A is a virus that can infect the liver. In most cases, the infection goes away on its own and doesn't lead to long-term liver problems. In rare cases, it can be more serious. The hepatitis A virus is found in the stool of an infected person. It is spread when a person eats food or drinks water that has come in contact with infected stool. Sometimes a group of people who eat at the same restaurant can get hepatitis A. This can happen when an employee with hepatitis A doesn't wash his or her hands well after using the bathroom and then prepares food. It can also happen when a food item is contaminated by raw sewage or by an infected garden worker. The disease can also spread in day care centers. Children, especially those in diapers, may get stool on their hands and then touch objects that other children put into their mouths. And workers can spread the virus if they don't wash their hands well after changing a diaper. Some things can raise your risk of getting hepatitis A, such as eating raw oysters or undercooked clams. If you're traveling in a country where hepatitis A is common, you can lower your chances of getting the disease by avoiding uncooked foods and untreated tap water. You may also be at risk if you live with or have sex with someone who has hepatitis A. After you have been exposed to the virus, it can take from 2 to 7 weeks before you see any signs of it. Symptoms usually last for about 2 months but may last longer. Common symptoms are: All forms of hepatitis have similar symptoms. Only a blood test can tell if you have hepatitis A or another form of the disease. Call your doctor if you have reason to think that you have hepatitis A or have been exposed to it. (For example, did you recently eat in a restaurant where a server was found to have hepatitis A? Has there been an outbreak at your child's day care? Does someone in your house have hepatitis A?) Your doctor will ask questions about your symptoms and where you have eaten or traveled. 
You may have blood tests if your doctor thinks you have the virus. These tests can tell if your liver is inflamed and whether you have antibodies to the hepatitis A virus. These antibodies prove that you have been exposed to the virus. Hepatitis A goes away on its own in most cases. Most people get well within a few months. While you have hepatitis: If hepatitis A causes more serious illness, you may need to stay in the hospital to prevent problems while your liver heals. Be sure to take steps to avoid spreading the virus to others. You can only get the hepatitis A virus once. After that, your body builds up a defense against it. Learning about hepatitis A: Preventing hepatitis A: American Liver Foundation (ALF), 39 Broadway, Suite 2700, New York, NY 10006. The American Liver Foundation (ALF) funds research and informs the public about liver disease. A nationwide network of chapters and support groups exists to help people who have liver disease and to help their families. ALF also sponsors a national organ-donor program to increase public awareness of the continuing need for organs. You can send an email by completing a form on the contact page on the ALF website: www.liverfoundation.org/contact. Centers for Disease Control and Prevention (CDC): Division of Viral Hepatitis. The Division of Viral Hepatitis provides information about viral hepatitis online and by telephone 24 hours a day. Pamphlets also are available. Information is available in English and in Spanish. Hepatitis Foundation International, 504 Blick Drive, Silver Spring, MD 20904-2901. This organization is a grassroots communication and support network for people with viral hepatitis. It provides education to patients, professionals, and the public about the prevention, diagnosis, and treatment of viral hepatitis. The organization will make referrals to local doctors and support groups. Immunization Action Coalition, 1573 Selby Avenue, St. Paul, MN 55104. The Immunization Action Coalition (IAC) works to raise awareness of the need for immunizations to help prevent disease. The website has videos and photos about how vaccines work and the diseases the vaccines prevent. The site also offers information about vaccine safety and common concerns and myths about vaccines. National Digestive Diseases Information Clearinghouse, 2 Information Way, Bethesda, MD 20892-3570. This clearinghouse is a service of the U.S. National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), part of the U.S. National Institutes of Health. The clearinghouse answers questions; develops, reviews, and sends out publications; and coordinates information resources about digestive diseases. Publications produced by the clearinghouse are reviewed carefully for scientific accuracy, content, and readability. - Centers for Disease Control and Prevention (2007). Update: Prevention of hepatitis A after exposure to hepatitis A virus and in international travelers. Updated recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR, 56(RR-41): 1080–1084. Also available online: http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5641a3.htm. Other Works Consulted - American Academy of Pediatrics (2009). Hepatitis A. In LK Pickering et al., eds., Red Book: 2009 Report of the Committee on Infectious Diseases, 28th ed., pp. 329–337. Elk Grove Village, IL: American Academy of Pediatrics. - Centers for Disease Control and Prevention (2006). Prevention of hepatitis A through active or passive immunization: Recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR, 55(RR-7): 1–23. Also available online: http://www.cdc.gov/mmwr/PDF/rr/rr5507.pdf. - Centers for Disease Control and Prevention (2009). Updated recommendations from the Advisory Committee on Immunization Practices (ACIP) for use of hepatitis A vaccine in close contacts of newly arriving international adoptees. MMWR, 58(36): 1006–1007.
Also available online: http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5836a4.htm?s_cid=mm5836a4_e. - Centers for Disease Control and Prevention (2010). Sexually transmitted diseases treatment guidelines, 2010. MMWR, 59(RR-12): 1–110. Also available online: http://www.cdc.gov/mmwr/preview/mmwrhtml/rr5912a1.htm?s_cid=rr5912a1_w. - Curry MP, Chopra S (2010). Acute viral hepatitis. In GL Mandell et al., eds., Mandell, Douglas, and Bennett's Principles and Practice of Infectious Diseases, 7th ed., vol. 1, pp. 1577–1592. Philadelphia: Churchill Livingstone Elsevier. - Weller PF (2009). Health advice for international travelers. In EG Nabel, ed., ACP Medicine, Clinical Essentials, chap. 7. Hamilton, ON: BC Decker. Primary Medical Reviewer: E. Gregory Thompson, MD - Internal Medicine. Specialist Medical Reviewer: W. Thomas London, MD - Hepatology. Last Revised: August 30, 2012. To learn more visit Healthwise.org. © 1995-2013 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
<urn:uuid:5dd77189-0c5b-4757-9c68-7df0db39b8fb>
CC-MAIN-2013-20
http://www.peacehealth.org/xhtml/content/special/hw124783.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.864282
1,641
3.5625
4
Peoria Tribe of Indians of OklahomaThe Peoria Tribe of Indians of Oklahoma is a confederation of Kaskaskia, Peoria, Piankeshaw and Wea Indians united into a single tribe in 1854. The tribes which constitute The Confederated Peorias, as they then were called, originated in the lands bordering the Great Lakes and drained by the mighty Mississippi. They are Illinois or Illini Indians, descendants of those who created the great mound civilizations in the central United States two thousand to three thousand years ago. Forced from their ancestral lands in Illinois, Michigan, Ohio and Missouri, the Peorias were relocated first in Missouri, then in Kansas and, finally, in northeastern Oklahoma. There, in Miami, Ottawa County, Oklahoma is their tribal headquarters. The Peoria Tribe of Indians of Oklahoma is a federally-recognized sovereign Indian tribe, functioning under the constitution and by-laws approved by the Secretary of the U.S. Department of the Interior on August 13, 1997. Under Article VIII, Section 1 of the Peoria Constitution, the Peoria Tribal Business Committee is empowered to research and pursue economic and business development opportunities for the Tribe. The increased pressure from white settlers in the 1840’s and 1850’s in Kansas brought cooperation among the Peoria, Kaskaskia, Piankashaw and Wea Tribes to protect these holdings. By the Treaty of May 30, 1854, 10 Stat. 1082, the United States recognized the cooperation and consented to their formal union as the Confederated Peoria. In addition to this recognition, the treaty also provided for the disposition of the lands of the constituent tribes set aside by the treaties of the 1830’s; ten sections were to be held in common by the new Confederation, each tribal member received an allotment of 160 acres; the remaining or “surplus” land was to be sold to settlers and the proceeds to be used by the tribes. The Civil War caused considerable turmoil among all the people of Kansas, especially the Indians. 
After the war, most members of the Confederation agreed to remove to the Indian Territory under the provisions of the so-called Omnibus Treaty of February 23, 1867, 15 Stat. 513. Some of the members elected at this time to remain in Kansas, separate from the Confederated Tribes, and become citizens of the United States. The lands of the Confederation members in the Indian Territory were subject to the provisions of the General Allotment Act of 1887. The allotment of all the tribal land was made by 1893, and by 1915, the tribe had no tribal lands or any lands in restricted status. Under the provisions of the Oklahoma Indian Welfare Act of 1936, 49 Stat. 1967, the tribes adopted a constitution and by-laws, which was ratified on October 10, 1939, and they became known as the Peoria Tribe of Indians of Oklahoma. As a result of the "Termination Policy" of the Federal Government in the 1950's, the Federal Trust relationship over the affairs of the Peoria Tribe of Indians of Oklahoma and its members, except for claims then pending before the Indian Claims Commission and Court of Claims, was ended on August 2, 1959, pursuant to the provisions of the Act of August 2, 1956, 70 Stat. 937, and Federal services were no longer provided to the individual members of the tribe. More recently, however, the Peoria Tribe of Indians of Oklahoma was reinstated as a federally recognized tribe by the Act of May 15, 1978, 92 Stat. 246.
<urn:uuid:29264399-5a65-4210-b5f7-94445de0d2e9>
CC-MAIN-2013-20
http://www.peoriatribe.com/history.php
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.960678
734
3.296875
3
The scabs that won't heal: Racial injustice, stereotypes and social ills By DeShuna Spencer new america media America is in denial. Every day millions of us of various ethnicities, religions and sexual orientations congregate together in our places of work, at our schools/universities and public spaces exchanging politically correct pleasantries as we interact with each other. All the while, boiling deep down inside many of us lie unconscious, deep-seated stereotypes and misconceptions about the very people (co-workers, neighbors, store patrons, etc.) we come in contact with on a daily basis; and we've been carrying these racial wounds since childhood. Don't believe me? Two years ago, CNN conducted a study on children's attitudes on race. In one of the segments, a 5-year-old white girl from Georgia was asked a series of questions based on a board that had pictures of identical-looking cartoon-type girls that ranged in skin color from light to dark. When the interviewer asked the girl who is smart, the 5-year-old pointed to the lightest child. When she was asked who was mean, she pointed to the darkest child. According to CNN, the 5-year-old's answers were a reflection of one of the major findings of the survey. It revealed that "white children have an overwhelming bias to whites, and black children ALSO have a bias toward whites, but not nearly as strong as the bias shown by white children." In a world where whites feel as if they have to walk on eggshells when discussing race for fear of being classified as a narrow-minded racist, and where blacks are afraid to report or verbally express when they have experienced a form of prejudice for fear of "pulling out the race card," people have decided to remain silent on the subject. It is not until a tragedy—like the Trayvon Martin case—happens that people come out of the shadows.
This case has forced many Americans to face America's painful, dysfunctional relationship with race and prejudice, a subject that is rarely discussed in some households. In 2007, the Journal of Marriage and Family found that 75 percent of white families with kindergartners never, or almost never, talk about race. The stats were reversed for black parents: 75 percent of them discuss race with their children. Just when we think the racial scabs of this country are finally healing, something happens that reopens an already slow-healing wound, causing further pain. Not since the arrest of Dr. Henry Louis Gates that resulted in the "beer summit" has the issue of race so polarized the American public. The Trayvon Martin murder has sparked an outrage from people of all races questioning how someone who killed an unarmed young man could still walk around freely; it has forced people to look at themselves in the mirror and question how they stereotype others; and it has unfortunately turned into a political circus, with players on both sides of the aisle using his death as a way to take on other issues. Through all of this, Trayvon's family is seeking just one thing: justice. While many see this as an opportunity for a great debate, I'm sure if Trayvon's parents had their ultimate wish—instead of the TV specials, editorials (like this one) and radio commentary on this issue—their son would be alive and they would be helping him sort through college acceptance letters instead of sorting through dozens of media appearance requests from every Tom, Dick and Harry news outlet looking to get a piece of this story. But unfortunately this is a cruel world and unfair things happen to innocent people. So here we are in a supposedly post-racial America debating a decades-old issue: racial profiling. How we address this tragedy can either help America turn over a new leaf or drive us further apart as a nation.
Mirror, Mirror on the Wall, Who's the Most Prejudiced of Them All? If there were a national poll that asked Americans of all ethnic backgrounds whether they were racist, it would be safe to say that most people would answer that they are in fact not prejudiced. But is that reality? While people are attacking George Zimmerman for how his preconceived notions about black males caused someone's death, many of us are blind to our own prejudices. Don't we all harbor some form of prejudice, great or small? I was having this conversation with a group of friends one weekday evening over dinner. A black male admitted that he felt uncomfortable getting on a plane with someone who looked Middle Eastern. I proposed a question: What if a white person did not want to ride in your carpool for fear of getting robbed? "Well, that's racist," he said. And your thoughts aren't? In a way it's silly if you think about it. No one in their right mind would assume that a well-dressed African American male who pulls up to a DC Metro (subway) station at 7 a.m. to drive people to the District would attack or rob them. But many people look that way every day at the thousands of airplane passengers from Middle Eastern countries, even as they travel with their small children and elderly parents. They are law-abiding citizens who want to land safely at their destination just as much as you do. Just because there are extremists in the Muslim community who want to harm others, we can't assume every person who has olive skin or wears certain religious attire is out to take down the plane. I'm sure George Zimmerman never considered himself to be a racist, just as my friend doesn't. I don't know Zimmerman personally, so I can't possibly know what he harbors in his heart, but the reality is that we let images we see in the media dictate how we view others.
You see faces of black males' mug shots on the nightly news, so when one walks toward you on the street you clutch your purse a little tighter just in case he tries to snatch it. You see images of Latino men standing in front of Home Depots looking for work, or read stories about them getting pulled over without a license, so you assume that all Hispanics are illegal day laborers. You hear about another terrorist threat from a Muslim extremist group, so when a Middle Eastern man sits next to you on the plane, for a split second you wonder if he's wearing a bomb. You see images of black teens participating in flash mobs, so you follow a group of black females who walk into your store just in case someone slips an item into their bag. Zimmerman took one look at Trayvon and assumed the worst about him: he was on drugs and up to no good (as recorded on the 911 tape). His paranoia came after a string of burglaries—in a span of 15 months—all committed by young black males, according to his neighbor and supporter Frank Taafee. All along, Zimmerman's friends and family have contended that this shooting was not about race but self-defense. But listening to Taafee discuss the case in an interview with Soledad O'Brien, it looks as if Zimmerman judged Trayvon based on previous incidents in the gated community. When O'Brien pressed him on how the prior incidents related to Trayvon's death, Taafee responded, "There's an old saying: if you plant corn, you get corn." Later in the interview he added, "It is what it is. It is what it is." Now, how do you judge others?
Don't know if you knew this, but there are other places that have sweet onions too that are indigenous to their area. Texas has their 1015 Super Sweet (planted on Oct. 15) and Washington has what's called Walla Wallas. I think Hawaii has a particular sweet onion too.

That's not a post I'd expect from someone who likes to chide people for not reading posts before posting. Expanding on what I wrote two posts before yours (where I commented on both 1015's and Maui onions), none of the sweet onions we have in the US today are indigenous. It all started in 1898 when Bermuda onions were first planted in Texas. Ironically, the seeds were from the Canary Islands, not Bermuda. By the 1920's they were growing so many onions in Texas that the demand for seed brought in new, inexperienced seed growers who drove the Canary Island seed quality down. Per-acre yields in Texas became so low that growers began looking at other varieties, the most important of which was the Grano from Spain. Because of low yields, there have been few, if any, Bermuda onions grown commercially in the US since the late 1940's, despite what you might see advertised at your grocery store. One of the most important super sweet onions is the Granex, an F1 hybrid that was developed in Texas from the Excel Yellow Bermuda and the Texas Early Grano 951. This onion has a host of names including Vidalia, Maui, Noonday, etc. The Grano 1015Y (a.k.a. Texas 1015) is not a hybrid; rather, it is an improved Grano 951 that was developed for resistance to pink root, not sweetness per se, while maintaining early maturity. Attempts to further improve the 1015Y have resulted in later maturity, which is highly undesirable from a commercial growing perspective. The Walla Walla onion, on the other hand, was developed from seed brought from Corsica off the coast of Italy. Interestingly, Bermuda onions are also of Italian origin.
In any large city just a handful of bars give the police far more trouble than all the rest put together. The same is true of many other types of establishments, such as schools, convenience stores, and parking lots. In each case, just a few produce far more crime, disorder, and calls for police assistance than the rest of the group combined. This phenomenon—called “risky facilities”—has important implications for many problem-oriented policing projects. In particular, it can help police focus their energies where they are needed most and can help in selecting appropriate preventive measures. This guide serves as an introduction to risky facilities and shows how the concept can aid problem-oriented policing efforts by providing answers to the following key questions. We open with a definition of facilities and provide some examples. We then discuss risky facilities and explain how this concept is related to other crime concentration theories. Facilities are places with specific public or private functions, such as stores, bars, restaurants, mobile home parks, bus stops, apartment buildings, public swimming pools, ATM locations, libraries, hospitals, schools, parking lots, railway stations, marinas, and shopping malls. Facilities vary greatly in the crimes they experience. Medical facilities, for example, are likely to have different types and levels of crime than do police booking facilities. In addition, there is likely to be a great variation within any broad category of facility. For example, although both are medical facilities, dental offices are likely to have different levels and types of crime than are emergency rooms. Because such distinctions are critical to the success of risky facility analyses, it is important to begin by carefully defining the type of facility that is to be examined; only then proceed to an examination of the type and frequency of crime that the particular type of facility experiences. 
One important principle of crime prevention holds that crime is highly concentrated among particular people, places, and things; as this principle suggests, focusing resources on these concentrations is likely to yield the greatest preventive benefits. This principle has spawned a number of related concepts that are routinely used by police in problem-solving projects, including: Risky facilities is another recently described theory of crime concentration that holds great promise for problem-oriented policing.1 The theory postulates that only a small proportion of any specific type of facility will account for the majority of crime and disorder problems experienced or produced by the group of facilities as a whole. As a rule of thumb, about 20 percent of the total group will account for 80 percent of the problems. This is known as the 80/20 rule: in theory, 20 percent of any particular group of things is responsible for 80 percent of outcomes involving those things.2 The 80/20 rule is not peculiar to crime and disorder; rather, it is almost a universal law. For example, a small portion of the earth’s surface holds the majority of life on the planet; a small proportion of earthquakes cause most earthquake damage; a small number of people hold most of the earth’s wealth; a small proportion of police officers produce the most arrests; and so forth. In practice, of course, the proportion is seldom exactly 80/20; however, it is always true that some small percentage of a group produces a large percentage of any particular result involving that group. Later in the guide we will show you how to determine whether the 80/20 rule holds true for any particular group of facilities. The 80/20 rule can be a useful initial assumption: when confronting a problem, start by assuming that most of the problem is created by a few individuals, places, or events. 
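As a minimal sketch of how this starting assumption can be checked against incident counts (the helper name and the sample data below are invented for illustration, not from the guide):

```python
# Sketch: does a small share of facilities account for most incidents?
# Rank facilities by incident count, then compute the share of all
# incidents produced by the top 20 percent. All data is hypothetical.

def share_from_top(counts, top_fraction=0.2):
    """Fraction of all incidents produced by the top `top_fraction`
    of facilities, ranked from most to least incidents."""
    ranked = sorted(counts, reverse=True)
    n_top = max(1, round(len(ranked) * top_fraction))
    total = sum(ranked)
    return sum(ranked[:n_top]) / total if total else 0.0

# Hypothetical incident counts for ten bars
bar_incidents = [120, 45, 8, 5, 4, 3, 2, 2, 1, 0]
share = share_from_top(bar_incidents)
print(f"Top 20% of bars account for {share:.0%} of incidents")
```

If the computed share is far above 20 percent, the concentration consistent with the 80/20 rule is present in the data.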
Although this first approximation is not always correct, it is probably correct more often than assuming that the problem is spread evenly across individuals, places, or events. Careful analysis can then test whether this starting assumption is correct. The first paper to discuss the concept of risky facilities identified nearly 40 studies of specific types of facilities that included data about variations in the risks of crime, disorder, or misconduct.3 These studies covered a wide range of facilities and many different types of crime and deviance, including robbery, theft, assault, and simple disorder. All the studies showed wide variations in risk in the facilities studied and in many there was clear evidence of high concentrations of risk consistent with the definition of risky facilities.† There follow a few examples. † Not every study provided clear evidence that a small proportion of the facilities accounted for a large proportion of the crime, disorder, or misconduct. Rather, some reported differences between facilities in crime numbers or rates; for example, Matthews, Pease & Pease (2001) [PDF] reported that “4 percent of banks had robbery rates four to six times that of other banks.” Although consistent with the concept of risky facilities, these figures do not satisfy a key component of the definition: they do not demonstrate that a small number of high-risk banks accounted for a large part of the robbery problem. However, this does not mean that risks for the facilities studied were not highly skewed. Rather, it only means that the data did not allow the distribution of risk to be examined. Although the studies in this list are just a few of those that have produced evidence of risky facilities, such results make it clear that this form of crime concentration is quite widespread. Low Cost Motel: The risk of crime varies a great deal among facilities of the same type. 
Photo Credit: John Eck When analysts plot the number of crimes at each facility under investigation, they almost always create a graph with a reclining-J shape. This can be seen in the example in Figure 1, based on the work of crime analysts in Chula Vista, California. In that study, all parks over two acres in Chula Vista were ranked from the most crime (on left) to the least. The heights of the bars show the number of crimes in each park. As can be seen, three parks had far more crime than any of the rest and most parks had very little crime. Risky facilities can show up as hot spots on a city’s crime map. Indeed, specific hospitals, schools, and train stations are often well-known examples. But simply treating these facilities as hot spots misses an important analytical opportunity: comparing the risky facilities with other like facilities. Such a comparison can reveal important differences between facilities that can account for the differences in risk, thereby providing important pointers to preventive action. In addition, risky facilities are sometimes treated as examples of repeat victimization. However, this can create confusion when it is not the facilities that are being victimized, but rather the people who are using them. Thus, a tavern that repeatedly requests police assistance in dealing with fights is not itself being repeatedly victimized, unless it routinely suffers damage in the course of these fights or if members of staff are regularly assaulted. Even those participating in the fights may not be repeat victims, as different patrons might be involved each time. Indeed, no one need be victimized at all, as would be the case if the calls were about drugs, prostitution, or stolen property sales. Calling the tavern a repeat victim can be more than just confusing, however, because it might also divert attention from the role mismanagement or poor design plays in causing the fights. 
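The ranking behind a reclining-J chart like Figure 1 is straightforward to compute. Below is a minimal sketch (the facility names, counts, and helper name are invented, not the Chula Vista data):

```python
# Sketch: build the ranked table that underlies a reclining-J chart,
# with cumulative percentages of events and facilities. Data is invented.

def concentration_table(names_counts):
    """Rank facilities by incident count and attach cumulative percentages."""
    ranked = sorted(names_counts, key=lambda nc: nc[1], reverse=True)
    total_events = sum(c for _, c in ranked) or 1
    rows, cum_events = [], 0
    for i, (name, count) in enumerate(ranked, start=1):
        cum_events += count
        rows.append({
            "facility": name,
            "events": count,
            "cum_pct_events": round(100 * cum_events / total_events, 1),
            "cum_pct_facilities": round(100 * i / len(ranked), 1),
        })
    return rows

parks = [("Park A", 60), ("Park B", 25), ("Park C", 10), ("Park D", 4), ("Park E", 1)]
for row in concentration_table(parks):
    print(row)
```

Plotting the `events` column in this ranked order produces the reclining-J shape; the two cumulative columns are exactly what Tables 1 and 2 later in the guide report.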
By keeping the concepts of repeat victimization and risky facilities separate, it may be possible to determine whether or not repeat victimization is the cause of a risky facility and thereby to design responses accordingly. The concept of risky facilities can be helpful in two types of policing projects. First, the concept can be useful in crime prevention projects that focus on a particular class of facilities, such as low-rent apartment complexes or downtown parking lots. In the scanning stage, the objective is to list the facilities involved along with the corresponding number of problem incidents in order to see which facilities experience the most and which the fewest problems. This might immediately suggest some contributing factors. For example, a study of car break-ins and thefts in downtown parking facilities in Charlotte, North Carolina revealed that the number of offenses in each parking lot was not merely a function of size.14 Rather, it was discovered that some smaller facilities experienced a large number of thefts because of some fairly obvious security deficiencies. This finding was explored in more depth in the analysis stage by computing theft rates for each facility based on its number of parking spaces. The analysis found that the risk of theft was far greater in surface lots than in parking garages, a fact that had not been known previously. Subsequent analysis compared security features between the multilevel and surface lots, and then within the members of each category, in an effort to determine which aspects of security (e.g., attendants, lighting, security guards) explained the variation. This analysis guided the selection of measures that were to have been introduced at the response stage; had these been implemented as planned (which was not the case), the assessment stage would have examined not merely whether theft rates declined overall, but whether those at the previously riskiest facilities had declined most.
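The size adjustment used in the Charlotte parking study can be sketched as follows; all names and figures here are hypothetical:

```python
# Sketch: normalize raw theft counts by facility size before comparing,
# as in the Charlotte rate-per-space analysis. All figures are invented.

def thefts_per_100_spaces(thefts, spaces):
    """Theft rate normalized to 100 parking spaces."""
    return 100.0 * thefts / spaces

lots = [  # (name, thefts, spaces, kind) -- hypothetical survey data
    ("Garage 1", 40, 800, "garage"),
    ("Lot 2", 35, 150, "surface"),
    ("Lot 3", 12, 60, "surface"),
]
for name, thefts, spaces, kind in lots:
    rate = thefts_per_100_spaces(thefts, spaces)
    print(f"{name} ({kind}): {rate:.1f} thefts per 100 spaces")
```

In this invented example the garage has the most raw thefts but the lowest per-space rate, the same kind of pattern the Charlotte analysis uncovered when it found surface lots riskier than garages.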
Obviously, this type of analysis can be conducted within any group of facilities. Second, risky facilities analysis can be helpful to crime prevention efforts that focus on a particular troublesome facility. In this sort of analysis, the scanning stage consists of comparing the problems at a particular facility with those at similar nearby facilities. For example, in a project that won the Herman Goldstein Award for Excellence in Problem-Oriented Policing in 2003,15 police in Oakland, California discovered that a particular motel experienced nearly 10 times as many criminal incidents as did any other comparable motel in the area. Although in this case the analysis convinced Oakland police to address the problems at the motel in question, in other cases analysis might reveal that some other facilities have far greater problems than the one which was the initial focus of the project. Comparing the facility being addressed in the project with other group members can also be useful in the analysis, response, and assessment stages described above. Police reports and calls for service data are the most common sources of information about crime and disorder events. However, using these data can lead to errors if care is not taken to check for some of the following potential problems.†

† Many of these data problems are also encountered when studying hot spots and repeat victimization. For further information see Deborah Weisel (2005), Analyzing Repeat Victimization, Problem Solving Tools Series No. 4.

Incident reporting forms and police records can be revised to improve geographical information gathering; moreover, the increased use of geocoding for crime reports will gradually help resolve some of these difficulties. A study in England in 1964 found that absconding rates for residents in 17 training schools for delinquent boys ranged from 10 percent to 75 percent.
To determine whether this variation was random, researchers reexamined the absconding rates two years later (1966) to see if the variation was much the same. They found that by and large the variation was consistent between the two years. For example, School 1 had the lowest absconding rate and School 17 the highest rate in both years (see the table below). In fact, the correlation was 0.65 between the two years.† Because the variation was relatively stable and because very few boys would have been residents in both years, researchers determined that the variation was probably due to differences in management practices rather than to differences in the student populations.

† Correlation coefficients can be calculated quite simply from an Excel spreadsheet.

Training School | Absconding Rate
Adapted from: Clarke and Martin (1975).

Once a satisfactory measure of the problematic events for a defined group of facilities has been obtained, the following six-step procedure can be used to determine whether the 80/20 rule applies.†

† Reproduced with permission from Clarke and Eck (2003)

In order to analyze crime concentrations, it is first necessary to define the type of facility to be examined; only then is it possible to create a list of facilities that meets the definition. Ideally, all places that fit the definition and that are in the area of study will be on the list once and only once. In addition, facilities that do not fit the definition will not be on the list. The further the list departs from this ideal, the more likely it is that the results will be misleading. Identifying all facilities of a particular type in any given area can be troublesome: not only can it sometimes be difficult to develop an appropriate working definition of the type of facility at issue, but problems can also arise in regard to the data management practices of relevant public and private agencies. Here is an example of creating a list of facilities that illustrates these points.
A research team at the University of Cincinnati, Ohio wanted to determine why a few bars had numerous violent incidents, whereas most of the others had none or only a very few. To do this, they needed a definition of “bar” and a list of facilities that met this definition. Researchers defined “bar” as a place that met four conditions: (1) it had to be open to the general public, rather than restricted to members or rented out to private parties; (2) it had to serve alcohol for onsite consumption; (3) some patrons had to come to the place for the primary purpose of consuming alcohol; and (4) there had to be a designated physical area within the place that served as a drinking area. Locations that did not meet all four conditions were excluded from the study. To obtain a list of locations meeting this definition, researchers began by consulting records from the Ohio Division of Liquor Control. These records showed that 633 places within the city limits were licensed to serve hard liquor. Based upon their personal knowledge, researchers were able to exclude a number of locations from consideration, reducing the list to 391 possible bars. To isolate the real bars, researchers then compared the remaining locations to the most recent bar guide in a local weekly tabloid that catered to young adults, which contained both a brief written description of the locations and numerous commercial advertisements. The tabloid information revealed that at least 198 of the 391 places fit the definition used. The tabloid list was incomplete, however, as there were an unknown number of city bars that were not reviewed by the tabloid staff. A check of the online Yellow pages verified several more bars. Private fraternal organizations were eliminated from consideration because they were not open to the general public. For most of the remaining places, researchers phoned or visited the sites, examining the physical locations and interviewing owners and employees. 
Onsite visits revealed several restaurants had areas that looked like bars, but these were eventually eliminated from consideration when it became clear from interviews that they were more decorative than functional or that they were used for other purposes (e.g., to hold carryout orders for customer pickup or to provide overflow seating where customers could eat). Ultimately, researchers identified 264 facilities that fit the definition of bar. These then became the subjects of the study.

Table 1: The Distribution of 121 Assaults in 30 Pubs (excerpt)
Pub | No. of Assaults | % of Assaults | Cumulative % Assaults | Cumulative % Pubs
George & Dragon | 6 | 5.0 | 76.9 | 23.3
Hare & Hounds | 1 | 0.8 | 96.7 | 46.7
Rose & Crown | 0 | 0 | 100 | 63.3
Dog and Fox | 0 | 0 | 100 | 76.7

Because there is no single reason why facilities vary in risk, it is important to determine which reasons are in operation in each particular case. The most important sources of variation in risk follow.

Table 2: Reported Shopliftings by Store, Danvers, Mass., October 2003 to September 2004 (excerpt)
Store | Shopliftings | % of Shopliftings | Cumulative % of Shopliftings | Cumulative % of Stores | Shopliftings per 1,000 Sq. Ft.
7 stores with 2 incidents | 14 | 4.7 | 90.6 | 30.8 | 0.08
28 stores with 1 incident | 28 | 9.4 | 100.0 | 66.7 | 0.06
26 stores with 0 incidents | 0 | 0.0 | 100.0 | 100.0 | 0.00
Total stores = 78 | 298 | 100.0 | 100.0 | 100.0 | 0.15

Unfortunately, it is not always easy to obtain the data needed to correct for the size of the facilities under study. For example, a study of downtown parking lot thefts in Charlotte, North Carolina was impeded when the city was unable to provide data about the number of spaces in each lot.16 As a result, police officers had to visit each lot and count the spaces by hand.

† See Clarke, Ronald (1999) [PDF]. Hot Products. Police Research Series, Paper 112. London: Home Office.
† See Mike Scott, The Problem of Robbery at Automated Teller Machines, Problem Specific Guide No.
8 (Washington, D.C.: Office of Community Oriented Policing Services, U.S. Department of Justice, 2001).

A Sign Outside a Bar: How managers regulate patron conduct can have a big influence on crime risk. Credit: John Eck

In every large city, a few low-cost rental apartment buildings make extraordinary demands on police time. These "risky facilities" are often owned by slumlords — unscrupulous landlords who purchase properties in poor neighborhoods and who make a minimum investment in management and maintenance. Building services deteriorate, respectable tenants move out, and their place is taken by less respectable ones — drug dealers, pimps, and prostitutes — who can afford to pay the rent but who cannot pass the background checks made by more responsible managements. In the course of a problem-oriented policing project in Santa Barbara, California, Officers Kim Frylsie and Mike Apsland analyzed arrests made at 14 rental apartment buildings owned by a slumlord, before and after he had purchased them. The table clearly shows a large increase in the number of people arrested at the properties in the years after he acquired them. There was also some evidence that the increased crime and disorder in these properties spilled over to infect other nearby apartment buildings — a finding that supports the widespread belief that slumlords contribute to neighborhood blight.

Property | Year Acquired | No. of Units | Average Yearly Arrests Pre-Owning | Average Yearly Arrests Post-Owning
Source: Clarke, Ronald and Gisela Bichler-Robertson (1998). "Place Managers, Slumlords and Crime in Low Rent Apartment Buildings." Security Journal, 11: 11-19.

Table 3: Responses to Risky Facilities
Size | Facility is large and attracts many users, some of whom become victims. | If the number of crimes per user is very small compared to most other facilities, then one option is to do nothing. Alternatively, identify those most likely to become victims and the circumstances associated with their victimization, then focus on these individuals and circumstances.
Hot Products | Facility contains a large number of things that are particularly vulnerable to theft or vandalism. | Remove hot products. Provide additional protection to hot products.
Location | Facility may be located in close proximity to offenders. | Hire additional security. Tailor management practices to the peculiarities of the area.
Repeat Victims | Facility contains a few victims who are involved in a large proportion of crimes. | Provide victims with the information or inducements they need to make behavioral changes that will reduce their likelihood of victimization. Provide information or protection to victims so that they are not victimized again.
Crime Attractor | Facility attracts many offenders or a few high-rate offenders. | Remove offenders through enforcement and incapacitation or rehabilitation. Deny access to repeat offenders.
Poor Design | Physical layout makes offending easy, rewarding, or low-risk. | Change the physical layout in conformity with principles of Crime Prevention through Environmental Design (CPTED)†.
Poor Management | Management practices or processes enable or encourage offending. | Change management procedures, paying particular attention to practices that influence repeat victimization.

† For additional information on CPTED principles see Response Guide #6.

There is no single reason that explains why some facilities have far more crime than other facilities of the same type. Rather, the full explanation usually involves a combination of the seven factors discussed above; remember, though, that the relative contribution of each will vary from case to case.
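One rough way to probe which of the factors in Table 3 is at work is to compare measured attributes of the highest-crime facilities against the rest. The sketch below uses invented motel survey data and a hypothetical helper name:

```python
from statistics import mean

def factor_contrast(facilities, attribute, top_n=3):
    """Compare the mean of `attribute` between the `top_n` highest-crime
    facilities and the remainder -- a crude first screen for risk factors."""
    ranked = sorted(facilities, key=lambda f: f["crimes"], reverse=True)
    top, rest = ranked[:top_n], ranked[top_n:]
    return mean(f[attribute] for f in top), mean(f[attribute] for f in rest)

motels = [  # invented survey data: rooms and overnight staffing (1 = staffed)
    {"name": "A", "crimes": 90, "rooms": 50, "night_staff": 0},
    {"name": "B", "crimes": 70, "rooms": 45, "night_staff": 0},
    {"name": "C", "crimes": 60, "rooms": 55, "night_staff": 1},
    {"name": "D", "crimes": 5, "rooms": 48, "night_staff": 1},
    {"name": "E", "crimes": 3, "rooms": 52, "night_staff": 1},
]
hi, lo = factor_contrast(motels, "night_staff")
print(f"Mean night staffing: high-crime {hi:.2f}, low-crime {lo:.2f}")
```

In this invented data, night staffing is much lower among the high-crime motels while size ("rooms") is similar, pointing toward a management factor rather than a size factor.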
In many problem-oriented projects it might not be possible to explain completely the variations in risk between facilities, because such analysis is usually only possible after detailed research that can take weeks or months to complete. However, it is usually possible to get some idea of how each of the seven factors contributes to the problem by comparing high- and low-crime facilities. We previously explained how to do this when we discussed the various ways of testing the influence of location, hot products, repeat victimization, and crime attractors. In some cases, quantitative data such as facility size will be readily available. In others, it might be necessary to survey the facilities to discover the relevant information. For example, in the project mentioned above that focused on thefts from cars in Charlotte's downtown parking facilities, police surveyed the lots to gather information about hours of operation, attendants, fencing, lighting, and other security measures. This provided many ideas for reducing crime in the riskiest facilities. In another Charlotte study, a police survey found that the theft of household appliances from construction sites was much lower when builders delayed installation until the homes were ready for occupancy.19 Direct observation and discussions with managers and police familiar with the facilities (see Box 4) can yield valuable insights into the reasons for variations in risk between facilities. In addition, interviews with apprehended offenders can reveal how they evaluate the difficulties, rewards, and risks of preying upon the facilities in the sample.† Similarly, interviews with victims—particularly repeat victims—can be revealing.

† See Scott Decker, Using Offender Interviews to Inform Police Problem Solving, Problem Solving Tools Series No. 3 (Washington, D.C.: Office of Community Oriented Policing Services, 2005).

In Newark, New Jersey, a project funded by the U.S.
Department of Justice Office of Community Oriented Policing Services (the COPS Office) focused on drug dealing in low-cost private rental apartment complexes.20 During the scanning stage, 22 possible sites for intervention (out of a total of 506 private apartment complexes) were identified through an analysis of police data and interviews with officers in the Newark Police Department's Safer Cities Task Force and Special Investigations Unit. Subsequent interviews with district commanders revealed a special problem with four apartment complexes located close to entry and exit ramps for Interstate 78, which provided out-of-town buyers with easy access to drug markets. The buyers could briefly enter the city, purchase drugs at the complexes, drive around in a loop, and quickly exit again. Authorities implemented a traffic management plan that disrupted the loop by creating one-way streets and dead-ends. The traffic plan was reinforced with additional enforcement at the four sites and will eventually dovetail with a long-term project by the state to rebuild the ramps to route traffic away from residential areas. Your ability to understand the reasons for the variations in risk will be greatly assisted where there is an existing Problem-Oriented Policing Guide that deals with the facilities that are the focus of your own project. Although it will not tell you which factors are important in your sample, it will provide more specific suggestions than are provided by the general discussion above. As of June 2006, ten guides focused on problems within specific types of facilities.†

† New guides are constantly being added; a list of those in preparation is available at www.popcenter.org.

Although there are many ways to reduce risk (see Table 3), it is important to focus on those that are most likely to succeed. For example, it is usually impossible to do anything about the size and location of specific facilities.
Similarly, changing a facility's physical design can be difficult or costly and would only be justified in an extreme case. On the other hand, it may be easier to change business practices that facilitate or encourage crime and disorder; this, however, cannot be done without the full cooperation of those who own or manage the facilities, as they are usually the ones who must implement and pay for the measures. Before moving on to a discussion of the various ways of convincing facility managers to make the changes necessary to reduce crime or disorder, it is important to understand some of the reasons why they might not have done these things on their own. The reasons can include the following. Although it is always best to assume that managers and owners want to reduce crime and disorder in their facilities and that they will be open to working with the police and others to implement the necessary changes, the list above suggests that they will sometimes resist implementing remedial measures. Consequently, it will sometimes be necessary to exert a certain amount of coercion, either directly or indirectly. There are several ways that this can be done.

† See Clarke, Ronald (1999) [PDF]. Hot Products. Police Research Series, Paper 112. London: Home Office. (Accessible at www.popcenter.org)

Demolition of a Former Bar and Drug Dealing Hot Spot: Removing a very risky facility can be the best way to reduce crime. Credit: John Eck

Table 4: Calls for Police Service, Oakland Airport Motel
Year | Calls for Service
*Through March 2003

In practice, a combination of approaches—both a carrot and a stick—might be the most effective strategy. Because business owners can be politically powerful, it may be far easier to reduce crime if management is induced to cooperate without engaging in a political battle. In this regard, it is important to recall the guiding principle of this guide, the 80/20 rule: most of the problem is likely to be the result of a few facilities.
So it might be that enlisting the support of the majority of facility owners and managers—whose contributions to the problem are minor—to change the behavior of the few—whose contributions to the problem are major—can aid police in winning the political struggle. This can also reduce costs by focusing resources where they are needed most, which can aid in tailoring responses to particular settings, thereby increasing the chances that interventions will be effective.

Endnotes
Koch (1999).
National Association of Convenience Stores (1991).
Sherman, Schmidt, and Velke (1992).
Lindstrom (1997).
Bowers et al. (1998).
Hirschfield and Bowers (1998).
Newton (2004); Loukaitou-Sideris and Eck (in press).
Chula Vista Police Department (2004).
Madensen et al. (2005).
Eck (2002).
Chula Vista Police Department (2004).

References
Bowers, K., A. Hirschfield and S. Johnson (1998). "Victimization Revisited: A Case Study of Non-Residential Repeat Burglary in Merseyside." British Journal of Criminology 38(3): 429-452.
Chula Vista Police Department, Chief's Community Advisory Committee (2004). The Chula Vista Motel Project. Chula Vista, Calif.: Chula Vista Police Department.
Clarke, R.V. (1999). Hot Products: Understanding, Anticipating and Reducing Demand for Stolen Goods. Police Research Series, Paper 112. London: Home Office, Research Development and Statistics Directorate. [Full Text]
---- (2002). Shoplifting. Problem-Oriented Guides for Police Series, Problem-Specific Guide No. 11. Washington, D.C.: U.S. Department of Justice, Office of Community Oriented Policing Services. [Full Text]
Clarke, R.V., and G. Bichler-Robertson (1998). "Place Managers, Slumlords and Crime in Low Rent Apartment Buildings." Security Journal 11(1): 11-19.
Clarke, R.V., and J.E. Eck (2003). Become a Problem-Solving Crime Analyst: In 55 Small Steps. London: Jill Dando Institute of Crime Science. [Full Text]
Clarke, R.V., and H. Goldstein (2002). "Reducing Theft at Construction Sites: Lessons from a Problem-Oriented Project." In N. Tilley (ed.), Analysis for Crime Prevention, Crime Prevention Studies, Vol. 13. Monsey, N.Y.: Criminal Justice Press. [Full Text]
---- (2003). "Thefts from Cars in Center-City Parking Facilities: A Case Study in Implementing Problem-Oriented Policing." In J. Knutsson (ed.), Problem-Oriented Policing: From Innovation to Mainstream, Crime Prevention Studies, Vol. 15. Monsey, N.Y.: Criminal Justice Press. [Full Text]
Clarke, R.V., and D. Martin (1975). "A Study of Absconding and Its Implications for the Residential Treatment of Delinquents." In J. Tizard, I. Sinclair and R.V. Clarke (eds.), Varieties of Residential Experience. London: Routledge and Kegan Paul.
Decker, S. (2005). Using Offender Interviews to Inform Police Problem Solving. Problem-Oriented Guides for Police Series, Problem Solving Tools Series No. 3. Washington, D.C.: U.S. Department of Justice, Office of Community Oriented Policing Services. [Full Text]
Eck, J.E. (2002). "Preventing Crime at Places." In L.W. Sherman, D. Farrington, B. Welsh and D.L. MacKenzie (eds.), Evidence-Based Crime Prevention. New York: Routledge.
---- (2003). "Police Problems: The Complexity of Problem Theory, Research and Evaluation." In J. Knutsson (ed.), Problem-Oriented Policing: From Innovation to Mainstream, Crime Prevention Studies, Vol. 15. Monsey, N.Y.: Criminal Justice Press. [Full Text]
Eck, J., R.V. Clarke and R. Guerette (2007). "Risky Facilities: Crime Concentration in Homogeneous Sets of Facilities." Crime Prevention Studies, Vol. 21. Monsey, N.Y.: Criminal Justice Press. [Full Text]
Felson, M., R. Berends, B. Richardson and A. Veno (1997). "Reducing Pub Hopping and Related Crime." In R. Homel (ed.), Policing for Prevention: Reducing Crime, Public Intoxication and Injury, Crime Prevention Studies, Vol. 7. Monsey, N.Y.: Criminal Justice Press. [Full Text]
Hirschfield, A., and K. Bowers (1998). "Monitoring, Measuring and Mapping Community Safety." In A. Marlow and J. Pitts (eds.), Planning Safer Communities. Lyme Regis: Russell House Publishing.
Homel, R., M. Hauritz, G. McIlwain, R. Wortley and R. Carvolth (1997). "Preventing Drunkenness and Violence Around Nightclubs in a Tourist Resort." In R.V. Clarke (ed.), Situational Crime Prevention: Successful Case Studies (2nd ed.). Guilderland, N.Y.: Harrow and Heston.
Koch, R. (1999). The 80-20 Principle: The Secret to Success by Achieving More with Less. New York: Doubleday.
La Vigne, N. (1994). "Gasoline Drive-Offs: Designing a Less Convenient Environment." In R.V. Clarke (ed.), Crime Prevention Studies, Vol. 2. Monsey, N.Y.: Criminal Justice Press. [Full Text]
Lindstrom, P. (1997). "Patterns of School Crime: A Replication and Empirical Extension." British Journal of Criminology 37(1): 121-130.
Loukaitou-Sideris, A., and J.E. Eck (in press). "Crime Prevention and Active Living." American Journal of Health Promotion.
Madensen, T., M. Skubak, D. Morgan and J.E. Eck (2005). Open-Air Drug Dealing in Cincinnati, Ohio: Executive Summary and Final Recommendations. Cincinnati, Ohio: University of Cincinnati, Division of Criminal Justice. (Available at www.uc.edu/criminaljustice/ProjectReports/FINAL_RECOMMENDATIONS.pdf)
Matthews, R., C. Pease and K. Pease (2001). "Repeat Bank Robbery: Theme and Variations." In G. Farrell and K. Pease (eds.), Repeat Victimization, Crime Prevention Studies, Vol. 12. Monsey, N.Y.: Criminal Justice Press. [Full Text]
National Association of Convenience Stores (1991). Convenience Store Security Report and Recommendations. Alexandria, Va.: National Association of Convenience Stores.
Newton, A. (2004). Crime and Disorder on Buses: Toward an Evidence Base for Effective Crime Prevention. PhD dissertation, University of Liverpool.
Oakland Police Department (2003). "The Oakland Airport Motel Project." Submission for the Herman Goldstein Award for Excellence in Problem-Oriented Policing. [Full Text]
Perrone, S. (2000). Crimes Against Small Business in Australia: A Preliminary Analysis. Trends & Issues in Crime and Criminal Justice, No. 184. Canberra: Australian Institute of Criminology. [Full Text]
Scott, M. (2001). The Problem of Robbery at Automated Teller Machines. Problem-Oriented Guides for Police Series, Problem-Specific Guide No. 8. Washington, D.C.: U.S. Department of Justice, Office of Community Oriented Policing Services. [Full Text]
Scott, M., and H. Goldstein (2005). Shifting and Sharing Responsibility for Public Safety Problems. Problem-Oriented Guides for Police, Response Guide Series No. 3. Washington, D.C.: U.S. Department of Justice, Office of Community Oriented Policing Services. [Full Text]
Sherman, L., J. Schmidt and R. Velke (1992). High Crime Taverns: A RECAP Project in Problem-Oriented Policing. Washington, D.C.: Crime Control Institute.
Smith, D., M. Gregson and J. Morgan (2003). Between the Lines: An Evaluation of the Secured Park Award Scheme. Home Office Research Study, No. 266. London: Home Office Research, Development and Statistics Directorate. [Full Text]
Stedman, J. (2005). "Alcohol Issues in City Parks." Unpublished presentation to the Chula Vista City Council. Chula Vista, Calif.: Chula Vista Police Department (November).
Weisel, D. (2005). Analyzing Repeat Victimization. Problem-Oriented Guides for Police, Problem Solving Tools Series No. 4. Washington, D.C.: U.S. Department of Justice, Office of Community Oriented Policing Services. [Full Text]
Zanin, N., J. Shane and R.V. Clarke (2004). "Reducing Drug Dealing in Private Apartment Complexes in Newark, New Jersey." A final report to the U.S. Department of Justice, Office of Community Oriented Policing Services, on the field applications of the Problem-Oriented Guides for Police project. Washington, D.C.: Office of Community Oriented Policing Services, U.S. Department of Justice. [Full Text]

You may order free bound copies in any of three ways:
Phone: 800-421-6770 or 202-307-1480
Allow several days for delivery.
Q. What's wrong with hot dogs?
A. Nitrite additives in hot dogs form carcinogens.

Three different studies have come out in the past year finding that the consumption of hot dogs can be a risk factor for childhood cancer.

Peters et al. studied the relationship between the intake of certain foods and the risk of leukemia in children from birth to age 10 in Los Angeles County between 1980 and 1987. The study found that children eating more than 12 hot dogs per month have nine times the normal risk of developing childhood leukemia. A strong risk for childhood leukemia also existed for those children whose fathers' intake of hot dogs was 12 or more per month.

Researchers Sarasua and Savitz studied childhood cancer cases in Denver and found that children born to mothers who consumed hot dogs one or more times per week during pregnancy had approximately double the risk of developing brain tumors. Children who ate hot dogs one or more times per week were also at higher risk of brain cancer.

Bunin et al. also found that maternal consumption of hot dogs during pregnancy was associated with an excess risk of childhood brain tumors.

Q. How could hot dogs cause cancer?
A. Hot dogs contain nitrites, which are used as preservatives, primarily to combat botulism. During the cooking process, nitrites combine with amines naturally present in meat to form carcinogenic N-nitroso compounds. There is also evidence that nitrites can combine with amines in the human stomach to form N-nitroso compounds. These compounds are known carcinogens and have been associated with cancer of the oral cavity, urinary bladder, esophagus, stomach and brain.

Q. Some vegetables contain nitrites, do they cause cancer too?
A. It is true that nitrites are commonly found in many green vegetables, especially spinach, celery and green lettuce. However, the consumption of vegetables appears to be effective in reducing the risk of cancer. How is this possible? The explanation lies in the formation of N-nitroso compounds from nitrites and amines.
Nitrite-containing vegetables also have Vitamin C and D, which serve to inhibit the formation of N-nitroso compounds. Consequently, vegetables are quite safe and serve to reduce your cancer risk.

Q. Do other food products contain nitrites?
A. Yes, all cured meats contain nitrites. These include bacon and fish.

Q. Are all hot dogs a risk for childhood cancer?
A. No. Not all hot dogs on the market contain nitrites. Because of modern refrigeration methods, nitrites are now used more for the red color they produce (which is associated with freshness) than for preservation. Nitrite-free hot dogs, while they taste the same as nitrite hot dogs, have a brownish color that has limited their popularity among consumers. When cooked, nitrite-free hot dogs are perfectly safe and healthy.

HERE ARE FOUR THINGS THAT YOU CAN DO:
- Do not buy hot dogs containing nitrite. It is especially important that children and potential parents do not consume 12 or more of these hot dogs per month.
- Request that your supermarket stock nitrite-free hot dogs.
- Contact your local school board and find out whether children are being served nitrite hot dogs in the cafeteria.
- Write the FDA and express your concern that nitrite hot dogs are not labeled for their cancer risk to children. You can cite the petition to ban nitrite hot dogs, docket #: 95P 0112/CP1.

Cancer Prevention Coalition
School of Public Health, M/C 922
University of Illinois at Chicago
2121 West Taylor Street
Chicago, IL 60612
Tel: (312) 996-2297, Fax: (312) 413-9898

References:
1. Peters J, et al. "Processed meats and risk of childhood leukemia (California, USA)," Cancer Causes & Control 5: 195-202, 1994.
2. Sarasua S, Savitz D. "Cured and broiled meat consumption in relation to childhood cancer: Denver, Colorado (United States)," Cancer Causes & Control 5: 141-8, 1994.
3. Bunin GR, et al. "Maternal diet and risk of astrocytic glioma in children: a report from the children's cancer group (United States and Canada)," Cancer Causes & Control 5: 177-87, 1994.
4. Lijinsky W, Epstein S.
"Nitrosamines as environmental carcinogens," Nature 225 (5227): 2112, 1970.
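The risk figures quoted in the studies above ("nine times the normal risk," "approximately double the risk") are relative risks: the disease rate among the exposed divided by the rate among the unexposed. A minimal sketch of the arithmetic, using made-up cohort numbers rather than the studies' actual data:

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Relative risk: incidence in the exposed group divided by
    incidence in the unexposed group."""
    exposed_rate = exposed_cases / exposed_total
    unexposed_rate = unexposed_cases / unexposed_total
    return exposed_rate / unexposed_rate

# Hypothetical cohort (NOT the studies' data): 9 cases per 1,000
# children eating 12+ hot dogs a month vs. 1 case per 1,000 others.
rr = relative_risk(9, 1000, 1, 1000)
print(f"Relative risk: {rr:.1f}")
```

A ratio like this says nothing about absolute risk, which is why epidemiologists report both; the sketch only shows where a "nine times" figure comes from.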
PPPL scientists propose a solution to a critical barrier to producing fusion Posted April 23, 2012; 05:00 p.m. Physicists from the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) have discovered a possible solution to a mystery that has long baffled researchers working to harness fusion. If confirmed by experiment, the finding could help scientists eliminate a major impediment to the development of fusion as a clean and abundant source of energy for producing electric power. An in-depth analysis by PPPL scientists zeroed in on tiny, bubble-like islands that appear in the hot, charged gases — or plasmas — during experiments. These minute islands collect impurities that cool the plasma. And these islands, the scientists report in the April 20 issue of the journal Physical Review Letters, are at the root of a longstanding problem known as the "density limit" that can prevent fusion reactors from operating at maximum efficiency. Fusion occurs when plasmas become hot and dense enough for the atomic nuclei contained within the hot gas to combine and release energy. But when the plasmas in experimental reactors called tokamaks reach the mysterious density limit, they can spiral apart into a flash of light. "The big mystery is why adding more heating power to the plasma doesn't get you to higher density," said David Gates, a principal research physicist at PPPL and co-author of the proposed solution with Luis Delgado-Aparicio, a postdoctoral fellow at PPPL and a visiting scientist at the Massachusetts Institute of Technology's Plasma Science Fusion Center. "This is critical because density is the key parameter in reaching fusion and people have been puzzling about this for more than 30 years." A discovery by Princeton Plasma Physics Laboratory physicists Luis Delgado-Aparicio (left) and David Gates could help scientists eliminate a major impediment to the development of fusion as a clean and abundant source of energy for producing electric power. 
Listen to a podcast with the scientists discussing their discovery. (Photo by Elle Starkman) The scientists hit upon their theory in what Gates called "a 10-minute 'Aha!' moment." Working out equations on a whiteboard in Gates' office, the physicists focused on the islands and the impurities that drive away energy. The impurities stem from particles that the plasma kicks up from the tokamak wall. "When you hit this magical density limit, the islands grow and coalesce and the plasma ends up in a disruption," said Delgado-Aparicio. These islands actually inflict double damage, the scientists said. Besides cooling the plasma, the islands act as shields that block out added power. The balance tips when more power escapes from the islands than researchers can pump into the plasma through a process called ohmic heating — the same process that heats a toaster when electricity passes through it. When the islands grow large enough, the electric current that helps to heat and confine the plasma collapses, allowing the plasma to fly apart. Gates and Delgado-Aparicio now hope to test their theory with experiments on a tokamak called Alcator C-Mod at MIT, and on the DIII-D tokamak at General Atomics in San Diego. Among other things, they intend to see if injecting power directly into the islands will lead to higher density. If so, that could help future tokamaks reach the extreme density and 100-million-degree temperatures that fusion requires. The scientists' theory represents a fresh approach to the density limit, which also is known as the "Greenwald limit" after MIT physicist Martin Greenwald, who has derived an equation that describes it. Greenwald has another potential explanation for the source of the limit. He thinks it may occur when turbulence creates fluctuations that cool the edge of the plasma and squeeze too much current into too little space in the core of the plasma, causing the current to become unstable and crash. 
"There is a fair amount of evidence for this," Greenwald said. However, he added, "We don't have a nice story with a beginning and end and we should always be open to new ideas." Gates and Delgado-Aparicio pieced together their model from a variety of clues that have developed in recent decades. Gates first heard of the density limit while working as a postdoctoral fellow at the Culham Centre for Fusion Energy in Abingdon, England, in 1993. The limit had previously been named for Culham scientist Jan Hugill, who described it to Gates in detail. Separately, papers on plasma islands were beginning to surface in scientific circles. French physicist Paul-Henri Rebut described radiation-driven islands in a mid-1980s conference paper, but not in a periodical. German physicist Wolfgang Suttrop speculated a decade later that the islands were associated with the density limit. "The paper he wrote was actually the trigger for our idea, but he didn't relate the islands directly to the Greenwald limit," said Gates, who had worked with Suttrop on a tokamak experiment at the Max Planck Institute for Plasma Physics in Garching, Germany, in 1996 before joining PPPL the following year. In early 2011, the topic of plasma islands had mostly receded from Gates' mind. But a talk by Delgado-Aparicio about the possibility of such islands erupting in the plasmas contained within the Alcator C-Mod tokamak reignited his interest. Delgado-Aparicio spoke of corkscrew-shaped phenomena called snakes that had first been observed by PPPL scientists in the 1980s and initially reported by German physicist Arthur Weller. Intrigued by the talk, Gates urged Delgado-Aparicio to read the papers on islands by Rebut and Suttrop. An email from Delgado-Aparicio landed in Gates' inbox some eight months later. In it was a paper that described the behavior of snakes in a way that fit nicely with the C-Mod data. "I said, 'Wow! He's made a lot of progress,'" Gates remembered. 
"I said, 'You should come down and talk about this.'" What most excited Gates was an equation for the growth of islands that hinted at the density limit by modifying a formula that British physicist Paul Harding Rutherford had derived back in the 1980s. "I thought, 'If Wolfgang (Suttrop) was right about the islands, this equation should be telling us the Greenwald limit,'" Gates said. "So when Luis arrived I pulled him into my office." Then a curious thing happened. "It turns out that we didn't even need the entire equation," Gates said. "It was much simpler than that." By focusing solely on the density of the electrons in a plasma and the heat radiating from the islands, the researchers devised a formula for when the heat loss would surpass the electron density. That in turn pinpointed a possible mechanism behind the Greenwald limit. Delgado-Aparicio became so absorbed in the scientists' new ideas that he missed several turnoffs while driving back to Cambridge, Mass., that night. "It's intriguing to try to explain Mother Nature," he said. "When you understand a theory you can try to find a way to beat it. By that I mean find a way to work at densities higher than the limit." Conquering the limit could provide essential improvements for future tokamaks that will need to produce self-sustaining fusion reactions, or "burning plasmas," to generate electric power. Such machines include proposed successors to ITER, a $20 billion experimental reactor that is being built in Cadarache, France, by the European Union, the United States and five other countries. Why hadn't researchers pieced together a similar theory of the density-limit puzzle before? The answer, said Gates, lies in how ideas percolate through the scientific community. "The radiation-driven islands idea never got a lot of press," he said. "People thought of them as curiosities. The way we disseminate information is through publications, and this idea had a weak initial push."
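For readers curious about the equation Greenwald derived: the Greenwald limit is conventionally written n_G = I_p / (pi * a^2), with the plasma current I_p in megaamperes, the minor radius a in meters, and n_G in units of 10^20 particles per cubic meter. A short sketch of the calculation; the ITER-like numbers are illustrative assumptions, not values taken from this article:

```python
import math

def greenwald_density(plasma_current_ma, minor_radius_m):
    """Greenwald density limit n_G = I_p / (pi * a^2).

    Returns n_G in units of 10^20 particles per cubic meter, given
    the plasma current in MA and the tokamak minor radius in meters.
    """
    return plasma_current_ma / (math.pi * minor_radius_m ** 2)

# Illustrative ITER-like parameters (assumed for this sketch):
# 15 MA of plasma current and a 2.0 m minor radius.
n_g = greenwald_density(15.0, 2.0)
print(f"Greenwald limit: about {n_g:.2f} x 10^20 particles per cubic meter")
```

The puzzle the article describes is why plasmas disrupt as they approach this value even when more heating power is added; the radiation-driven island model is one proposed answer.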
PPPL, in Plainsboro, N.J., is devoted both to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Through the process of fusion, which is constantly occurring in the sun and other stars, energy is created when the nuclei of two lightweight atoms, such as those of hydrogen, combine in plasma at very high temperatures. When this happens, a burst of energy is released, which can be used to generate electricity. PPPL is managed by Princeton University for the U.S. Department of Energy's Office of Science.
PRLog (Press Release) - Apr. 10, 2012 - Many are not aware that on the day Titanic collided with an iceberg in the North Atlantic, the ship had received no fewer than six wireless transmissions describing the extent of the dangerous ice fields and bergs, that not all of these messages made it to the bridge, and that the captain therefore had an incorrect mental picture which did not match the reality on the ocean in front of him.

Author David Warner Mathisen, a professional analyst and former US Army infantry officer, observes that this type of failure to "connect the dots" is well known in the Army, and that military concepts such as "situational awareness" and Clausewitz's phrase "the fog of war" are valuable tools for extracting lessons from the disaster that we can apply today. He points out that in many situations, the information needed for accurate analysis is actually available but overlooked, or not placed into the proper framework or context, so that the dots are not connected. This happens so often that we can conclude that gaining true situational awareness is exceedingly difficult, even though it might at first appear simple.

He then goes on to argue that the data we may be overlooking from a civilizational perspective may be creating a dangerous "false picture," which should encourage greater efforts to "connect the dots" using tools that can facilitate better analysis.

While many theories of greater or lesser merit have been put forward to explain the 1912 Titanic disaster, including recent analysis suggesting that the position of the earth in relation to both the moon and the sun may have played a role, ultimately the sinking and the tragic loss of life were the result of a lack of situational awareness - not just prior to the collision but in the fatal aftermath as well.
# # # David Warner Mathisen is a professional analyst and former US Army officer, and the author of the book "The Mathisen Corollary" and of the recently-released essay "Titanic and the Fall of Civilizations."
CAMBRIDGE – Public-opinion polls show that citizens in many democracies are unhappy with their leaders. This is particularly true in Great Britain, where a number of members of Parliament have used their housing allowances to enhance their income, sometimes legally and sometimes not. Some analysts predict that only half of Britain’s MPs will be returned in next year’s election. But, whatever the failures of particular British legislators, the issues go further than merely allowing voters to “throw the rascals out.” There is also a question of how successful leadership is taught and learned in a democracy. A successful democracy requires leadership to be widespread throughout government and civil society. Citizens who express concern about leadership need to learn not only how to judge it, but how to practice it themselves. Many observers say that leadership is an art rather than a science. Good leadership is situational. In my book The Powers to Lead , I call this skill “contextual intelligence.” The ability to mobilize a group effectively is certainly an art rather than a predictive science, and varies with situations, but that does not mean that it cannot be profitably studied and learned. Music and painting are based in part on innate skills, but also on training and practice. And artists can benefit not merely from studio courses, but also from art appreciation lessons that introduce them to the full repertoires and pallets of past masters. Learning leadership occurs in a variety of ways. Learning from experience is the most common and most powerful. It produces the tacit knowledge that is crucial in a crisis. But experience and intuition can be supplemented by analytics, which is the purpose of my book. As Mark Twain once observed, a cat that sits on a hot stove will not sit on a hot stove again, but it won’t sit on a cold one, either. Consequently, learning to analyze situations and contexts is an important leadership skill. 
The United States Army categorizes leadership learning under three words: "be, know, do." "Be" refers to the shaping of character and values, and it comes partly from training and partly from experience. "Know" refers to analysis and skills, which can be trained. "Do" refers to action and requires both training and fieldwork. Most important, however, is experience and the emphasis on learning from mistakes and a continuous process that results from what the military calls "after-action reviews." Learning can also occur in the classroom, whether through case studies, historical and analytic approaches, or experiential teaching that simulates situations that train students to increase self-awareness, distinguish their roles from their selves, and use their selves as a barometer for understanding a larger group. Similarly, students can learn from the results of scientific studies, limited though they may be, and by studying the range of behaviors and contexts that historical episodes can illuminate. In practice, of course, few people occupy top positions in groups or organizations. Most people "lead from the middle." Effective leadership from the middle often requires attracting and persuading those above, below, and beside you. Indeed, leaders in the middle frequently find themselves in a policy vacuum, with few clear directives from the top. A passive follower keeps his head down, shuns risk, and avoids criticism. An opportunist uses the slack to feather his own nest rather than help the leader or the public. Bureaucratic entrepreneurs, on the other hand, take advantage of such opportunities to adjust and promote policies. The key moral question is whether, and at what point, their entrepreneurial activity exceeds the bounds of policies set from the top. Since they lack the legitimate authority of elected or high-level appointed officials, bureaucratic entrepreneurs must remain cognizant of the need to balance initiative with loyalty.
Leaders should encourage such entrepreneurship among their followers as a means of increasing their effectiveness. After all, the key to successful leadership is to surround oneself with good people, empower them by delegating authority, and then claim credit for their accomplishments. To make this formula work, however, requires a good deal of soft power. Without the soft power that produces attraction and loyalty to the leader’s goals, entrepreneurs run off in all directions and dissipate a group’s energies. With soft power, however, the energy of empowered followers strengthens leaders. Leadership is broadly distributed throughout healthy democracies, and all citizens need to learn more about what makes good and bad leaders. Potential leaders, in turn, can learn more about the sources and limits of the soft-power skills of emotional IQ, vision, and communication, as well as hard-power political and organizational skills. They must also better understand the nature of the contextual intelligence they will need to educate their hunches and sustain strategies of smart power. Most important, in today’s age of globalization, revolutionary information technology, and broadened participation, citizens in democracies must learn more about the nature and limits of the new demands on leadership.
As millions across India thronged Durga Puja marquees on the penultimate day of the festival Wednesday, so did the Jaintias, a largely Christian indigenous tribe of Meghalaya, continuing a 400-year-old unique tradition. Worshipping Goddess Durga with the same fervour and devotion but with a different set of rituals, hundreds of Jaintias, both Christians and believers of an indigenous faith, thronged the ancient temple at Nartiang, about 65 km east of Shillong. The Pnar people, as Jaintias are known, were also joined by tourists. The tradition goes back over 400 years. Perched on a hilltop overlooking the Myntang stream, the Durga Bari at Nartiang in the Jaintia Hills district was built by the Jaintia kings in the 16th-17th centuries. "Twenty-two generations of Jaintia kings worshipped Durga and Jayanteswari, the ancestral deity of the Jaintia kings," said the young temple priest, Molay Desmukh. Desmukh, 20, took charge of the Durga temple five years ago after the demise of his father Gopendra Desmukh. Interestingly, Desmukh priests were brought to Nartiang by the Jaintia kings from Bengal, not Maharashtra as the surname may suggest. The dilapidated centuries-old temple structure was demolished recently, and a new one was built in its place with minimal change in design and material. Durga and Jayanteswari are placed side by side and worshipped together. Both idols are made of astadhatu (eight precious metals), and each is about six to eight inches tall. "The rituals and religious functions during the Durga Puja are performed as per the Hindu way," the priest said. The ceremony begins with ablution of both idols, which are then draped in colourful new attire and ornaments before the rituals. On the fourth day of the five-day festival, animal sacrifice is carried out. "However, during the royal Jaintia rule there used to be a scary practice of human sacrifice," the priest said, pointing to a small square hole.
He has been told by his father that "the severed head used to be rolled through the hole connected to a secret tunnel that falls into the adjacent river Myntang". It's believed that the practice was stopped by the British, after the sacrifice of a British subject. "Instead, now water gourds are sacrificed, along with animals and birds such as goats, chicken and pigeons," Desmukh said. A human mask is placed on the gourds, as a symbolic act of human sacrifice. Apart from this unique tradition, there is another indigenous feature that marks Durga Puja at Nartiang -- the Durga idol is permanent and is not sent for immersion after the last day of worship. However, the priest installs a young banana plant beside the Durga idol, which is taken out after the completion of the worship and immersed in the nearby river Myntang. The entire expenditure of the Durga Puja is borne by the Dolloi (traditional village chief, who is non-Christian) of Nartiang. Even though the majority of the tribal population in the state of Meghalaya has embraced Christianity, a sizeable section of the community has retained its indigenous culture, religion and customs. "Nartiang was the summer capital of the Jaintia kingdom, which was set up at Jaintiapur, now in Sylhet district of Bangladesh," said historian J.B. Bhattacharjee. "The palace, though in ruins, still stands there as a testimony to the Jaintia heritage," he said. The Jaintia kings spent the summer in the hills to escape the unbearable heat in the plains and return to Jaintiapur after Durga Puja. The royal tradition continued till the British annexed the Jaintia territories in 1835, thereby ending Jaintia reign in the plains.
Trees and Shrubs that Tolerate Saline Soils and Salt Spray Drift

Concentrated sodium (Na), a component of salt, can damage plant tissue whether it contacts above-ground or below-ground parts. High salinity can reduce plant growth and may even cause plant death. Care should be taken to avoid excessive salt accumulation from any source on tree and shrub roots, leaves or stems. Sites with saline (salty) soils, and those that are exposed to coastal salt spray or de-icing materials from paving, present challenges to landscapers and homeowners.

Related publications:
- Trees and Shrubs that Tolerate Saline Soils and Salt Spray Drift (May 1, 2009, publication 430-031)
- Urban Forestry Issues (May 1, 2009, publication 420-180)
- Value, Benefits, and Costs of Urban Trees (May 1, 2009, publication 420-181)
Lama Ole Nydahl

The six liberating actions are a motivational teaching for direct use in one's life. As is generally known, Buddhism has a very practical aim and its view is exceedingly clear. No one gets enlightened from only hearing teachings. Lasting results come from real experiences and the changes they bring about. Because this is so important, Buddha gave much practical advice, which should never be seen as commandments but as help from a friend. Being neither a creator nor a judging god, he wants neither followers nor students who behave like a flock of sheep. Instead he wants colleagues: mature people who share his enlightenment and the massive responsibility it entails are his real goal. For those who mainly think of themselves, his advice is contained in the Noble Eightfold Path. Starting with a useful lifestyle, it culminates in proper concentration. Whoever has reached the level of compassion and insight, and wishes to be useful to others, finds the Six Paramitas or Six Liberating Actions more useful. 'Param' means 'beyond' and 'ita' means 'gone'. The paramitas develop love which takes one beyond the personal. It is the view which sets one free: the deep insight that seer, things seen, and the act of seeing are interdependent and one; that subject, object and action cannot be separated. The Paramitas liberate not because bad pictures in the mirror of one's mind are replaced with good ones, but because the confident states the latter produce allow one to go behind the good and the bad and recognize the mirror itself: shining, perfect and more fantastic than anything that it may reflect. The actions are liberating because they bring a recognition of the ultimate nature of mind. If one only filled the mind with good impressions, that would of course bring future happiness, but it would not go beyond the conditioned. With the view of the oneness of subject, object and action, whatever is undertaken for the benefit of others will bring the doer timeless benefit.
The First Liberating Action: Generosity. Generosity opens up every situation. The world is full of spontaneous richness, but no matter how good the music is, there is no party if no one dances. If no one shares anything of themselves, nothing meaningful will happen. That is why generosity is so important. In Buddha's time, people were much less complicated than today. They also did not have amazing machines working for them. At that time, generosity was a question of helping others survive, of assuring that they had enough to eat. This meant the act was often focused on material things. Today, in the free and non-overpopulated part of the world, this is not the case; one usually dies from too much fat around the heart. Due to a lack of clear thinking, people develop inner problems as the outer ones diminish, and start to feel lonely and insecure. Instead of worrying about necessities, they develop complicated inner lives and many have never tasted the joy of their physical freedom. Thus in the Western world and parts of Asia, where material things are abundant, generosity refers mostly to the emotional. It means sharing one's power, joy and love with others, from the beyond-personal levels from which there is no falling down. If one meditates well and taps into the unconditioned states of mind, there is no end to the good that one may pass on to others. Sharing one's ultimate certainty is the finest gift of all - giving beings one's warmth - and though one cannot take one's car or fame past the grave, not everything is lost at death. The qualities developed during former lives are easily regained in later ones, and there is no richness that is passed more directly from one existence to another than joyful energy. Squeezing the juice out of life pays, and a few more mantras or prostrations, or some more love for one's partner than usual, not only bring power here and now but speed up enlightenment.
As already mentioned, the finest and only lasting richness one may bring beings is an insight into their unconditioned nature. But how to do that? How does one show others their innate perfection? The best mirror is Buddha's teachings, and this is why no activity is more beneficial than the making of meditation centers. The practical wisdom they disseminate acquaints many with the clear light of their consciousness, and the seeds thus planted will grow over all future lives until enlightenment. Though many socially minded people claim that such teachings are a luxury and that one should first give people something to eat, this is not true. There is ample space for both. When the mind functions well, the stomach will digest food better, and perhaps then one can understand the reasons for having fewer children. In any case, the body will disappear while the mind continues on. The Second Paramita: A life that is aware, meaningful and useful to others. As terms like morality and ethics are employed by governing classes to control those below, many prefer not to use them. People are consciously intimidated by this, and often think, "If the state doesn't get you in this life, the church will get you afterwards." Even when only advice is given, as in the case of the Buddha, and the full development of beings is the only goal, one has to choose words which instruct clearly, without employing fear. The best definition of the second liberating action is probably living meaningfully and for the benefit of others. So what does this mean? How can one encompass the countless actions, words and thoughts of just one single day? Buddha, seeing everything from the state of timeless wisdom, had a few unique ideas. Because people have ten fingers for counting and then remembering, he gave ten pieces of advice concerning what is useful and what is not.
Encompassing body, speech and mind, they become meaningful also to independent people once one recognizes that Buddha is not a boss, but a friend wishing one happiness. He wants everybody to share the blissful clear light of mind, the knower of past, present and future. Understanding that everybody is a Buddha who has not realized it yet, and recognizing the outer world to be a pure land, all experience becomes the expression of highest wisdom simply because it can happen. How else could the Buddha act? He never teaches by dogma or from above, but shares his wisdom with beings whom he knows to be his equals in essence. Due to the good karma of those surrounding him, Buddha taught for a full 45 years and died with a smile. He taught many extraordinary students. The questions they asked him were on the level of Socrates, Aristotle and Plato; the best minds of an amazing generation came to test him with the complete range of their philosophical tools and found not only convincing words but also a power so skillful that it changed them in lasting ways. Beyond perfecting their logical abilities, he influenced their whole mind. Introducing them to the timeless experiencer behind the experiences, he left no space for doubt. On the levels of body, speech and mind, it is not difficult to understand what is useful to avoid. When people have problems with the police, they have usually caused some trouble with their body: killing, stealing, or harming others sexually are the main points here. When they are lonely, they have usually said things which disturb others: lying with the intent to harm, spreading gossip, splitting friends or confusing people. If somebody is unhappy, they have usually developed a tendency to dislike others, to feel envy and to permit states of confusion to drag on. The opposites are the ten positive actions of body, speech and mind which only bring happiness. They make one powerful and useful to others.
Here the Buddha advises using one's body as a tool to protect beings, to give them love and whatever else they need. Whoever has success with others now has developed that potential during earlier lives, so the quicker one starts, the better. With today's means of communication, one's speech may touch many more beings. Kind words previously spoken create pleasant experiences now and strengthen good karma. If people listen and receive clear information, then again one will see the benefit, in this life, of telling the truth whenever possible, avoiding lies that harm others, showing people how things work in the world, and bringing them calm. And finally, what to do with one's mind? Good wishes, joy in the good that others do, and clear thinking are the way to go. These qualities brought us the mental happiness we enjoy today, and making a habit of them ensures happiness until enlightenment. The mind is most important of all. Thoughts today become words tomorrow and actions the day after. Every moment here and now is important. If one watches the mind, nothing can stop one's progress. The Third Paramita: How not to lose future happiness through anger. When one is accumulating spiritual richness through generosity and directing it with the right understanding, the third quality needed on one's way is patience: not losing the good energy at work for others and oneself. How may one lose it? Through anger. Anger is the only luxury mind cannot afford. Good impressions gathered over lifetimes - mind's capital and the only source of lasting happiness - may be burnt in no time through fits of hot or cold rage. Buddha said that avoiding anger is the most difficult and most beautiful robe one can wear, and he gave many means to obtain that goal. One which is very useful today is experiencing a situation as a series of separate events to which one reacts without any evaluation.
This "salami tactic" or "strobe-light view" is very effective when reacting to a physical danger. Other methods are also beneficial: feeling empathy with whoever creates bad karma, knowing it will return to them; being aware of the impermanent and conditioned nature of every experience; and imagining how deluded people must be to cause such trouble. Reacting to whatever appears without anger will set free the timeless wisdom of body, speech, and mind, and one's reactions will be right. On the highest level of practice, called the Diamond Way, one lets unwanted emotions float on a carpet of mantras, letting them fall away without forming any bad habits. One may also let the thief "come to an empty house" by simply being aware of the feeling while doing nothing unusual. When it has visited a few times without receiving any energy, it will come less frequently and then stay away. Whoever can be aware as anger appears, plays around and then disappears, will discover a radiant state of mind, showing all things clearly like a mirror. In any case, it is wise to avoid anger as well as one can, and when it bites, to let it go quickly. The decision to stop anger and remove it whenever it appears is the support for the "inner" or Bodhisattva vow. Force is useful to protect and teach, but the feeling of anger is always difficult and causes most of the suffering in the world today. The Buddhist protectors removing harm, or Tilopa and Marpa polishing off their students in record time, fall under the category of forceful action. Probably no teacher could survive without having to resort to it. Meditation centers need this view for a balanced policy toward their visitors. If people appear drunk, on drugs or unwashed, or behave badly, one should make them leave quickly. They disturb others, and the next day they will not remember what they have learned.
The function of a Buddhist center, and especially of the Karma Kagyu lineage, is to offer a spiritual way to those who are too critical and independent for anything else; there are enough churches and places for people searching for help. Not everybody brings the necessary conditions for entering Buddhist practice, however. To practice the Diamond Way one needs a foundation of being at least well-behaved, able not to take things personally, and able to think of others. The Fourth Paramita: Joyful energy ensuring our growth. Next follows joyful energy. Without it, life has no "zap" and one will get older but not wiser. It is a point where one should be conscious and keep feeding body, speech and mind the impressions which give an appetite for further conquest and joy. As most have a strong tendency towards inertia and the status quo, one should make sure to stay alive from the inside out, which actually happens best through the pure view of the Diamond Way. Knowing that all beings are Buddhas just waiting to be shown their richness, and that all existence is the free play of enlightened space: what could be more inspiring than making all that come true? There is an immense joy inherent in constant growth, in never allowing anything to become stale or used. Real development lies beyond the comfort zone, and it pays well to demand little from others and much from oneself. The Fifth Paramita: Meditation which makes life meaningful. The former four points should be evident to everybody. Whoever wants to give life power and meaning has to involve others. This happens best through generosity with body, speech and mind. One needs to direct the energy thus arising through skillful thoughts, words and actions, and then to avoid the anger which destroys all the good seeds one may have planted. Energy also gives that extra push which opens new dimensions. But why meditation? Because one cannot willfully keep the states so joyfully reached at times.
Unwanted emotions often lurk in dark corners of beings' consciousness and may bring them to do, say or experience things they would rather have avoided. Here, the pacifying meditation of calming and holding the mind gives the necessary distance to choose taking roles in life's comedies while avoiding its tragedies. The Sixth Paramita: Wisdom - recognizing the true nature of mind. So far, the five actions mentioned have mainly been kind deeds which fill the mind with good impressions and thus produce conditioned happiness. In themselves, they go no further than that. What makes them liberating or "gone beyond" paramitas is the sixth point, the enlightening wisdom which the Buddha supplies. In its fullness it means the understanding of the sixteen levels of "emptiness" or interdependent origination of all phenomena, outer and inner, which is the subject of many weighty books. In a few short words it may be expressed as the understanding that doing good is natural. Because subject, object and action are all parts of the same totality, what else could one do? They condition one another and share the same space, while no lasting ego, self or essence can be found either in them or elsewhere. This insight makes one realize how all beings wish for happiness, and one will act to bring them benefit in the long run.
Python's flexible, duck-typed object system lowers the cost of architectural options that are more difficult to exercise in more rigid languages (yes, we are thinking of C++). One of these is carefully separating your data model (the classes and data structures that represent whatever state your application is designed to manipulate) from your controller (the classes that implement your user interface). In Python, a design pattern that frequently applies is to have one master editor/controller class that encapsulates your user interface (with, possibly, small helper classes for stateful widgets) and one master model class that encapsulates your application state (probably with some members that are themselves instances of small data-representation classes). The controller calls methods in the model to do all its data manipulation; the model delegates screen-painting and input-event processing to the controller. Narrowing the interface between model and controller makes it easier to avoid being locked into early decisions about either part by adhesions with the other one. It also makes downstream maintenance and bug diagnosis easier.
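The division of labor described above can be sketched in a few lines. This is an illustrative toy whose class and method names are ours, not from any particular toolkit; in a real GUI application the controller would own actual widgets rather than a string:

```python
class Model:
    """Encapsulates application state; knows nothing about widgets."""
    def __init__(self):
        self.items = []
        self.controller = None  # set by the controller that owns the view

    def add_item(self, item):
        self.items.append(item)
        if self.controller is not None:
            self.controller.refresh()  # delegate screen-painting back out


class Controller:
    """Encapsulates the user interface; manipulates data only via the model."""
    def __init__(self, model):
        self.model = model
        model.controller = self
        self.rendered = None  # stands in for a widget's displayed contents

    def on_user_input(self, text):
        # All data manipulation goes through model methods.
        self.model.add_item(text)

    def refresh(self):
        self.rendered = ", ".join(self.model.items)


model = Model()
ui = Controller(model)
ui.on_user_input("hello")
ui.on_user_input("world")
print(ui.rendered)  # -> hello, world
```

Because the model only ever calls `refresh()` on whatever controller it is given, either half can be swapped out (a test double, a different toolkit) without rewriting the other.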
ActiveMQ via C# using Apache.NMS Part 1

Java Message Service (JMS) is the de facto standard for asynchronous messaging between loosely coupled, distributed applications. Per the specification, it provides a common way for Java applications to create, send, receive and read messages. This is great for enterprises or organizations whose architecture depends upon a single platform (Java), but the reality is that most organizations have hybrid architectures consisting of Java and .NET (and others). Oftentimes these systems need to communicate using common messaging semantics: ActiveMQ and Apache.NMS satisfy this integration requirement. The JMS specification outlines the requirements for communication between Java messaging middleware and the clients that use it. Products implement the JMS specification by developing a provider that supports the set of JMS interfaces and messaging semantics. Examples of JMS providers include open source offerings such as ActiveMQ, HornetQ and GlassFish, and proprietary offerings such as SonicMQ and WebSphere MQ. The specification simply makes it easier for third parties to develop providers. All messaging in JMS is peer-to-peer; clients are either JMS or non-JMS applications that send and receive messages via a provider. JMS applications are pure Java-based applications, whereas non-JMS clients use JMS-styled APIs such as Apache.NMS, which uses OpenWire, a cross-language wire protocol that allows native access to the ActiveMQ provider. JMS messaging semantics are divided into two separate domains: queue-based and topic-based applications. Queue-based or, more formally, point-to-point (PTP) clients rely on "senders" sending messages to specific queues and "receivers" registering as listeners to the queue. In scenarios where a queue has more than one listener, messages are delivered in a round-robin fashion between the listeners; only one copy of each message is delivered.
Think of this as something like a phone call between you and another person. Topic-based applications follow the publish/subscribe metaphor, in which (in most cases) a single publisher client publishes a message to a topic and all subscribers to that topic receive a copy. This type of messaging is often referred to as broadcast messaging because a single client sends messages to all client subscribers. It is somewhat analogous to a TV station broadcasting a television show to you and any other people who wish to "subscribe" to a specific channel. JMS API Basics The JMS standard defines a series of interfaces that client applications and providers use to send and receive messages. From a client perspective, this makes learning the various JMS implementations relatively easy: once you learn one, you can apply what you learned to another implementation, and NMS is no exception. The core components of JMS are as follows: ConnectionFactory, Connection, Destination, Session, MessageProducer, and MessageConsumer. The following diagram illustrates communication and creational aspects of each object: NMS supplies similar interfaces to the .NET world, which allows clients to send messages to and from ActiveMQ via OpenWire. A quick rundown of the NMS interfaces is as follows: Note that the Apache.NMS namespace contains several more interfaces and classes, but these are the essential interfaces that map to the JMS specification. The following diagram illustrates the signature that each interface provides: The interfaces above are all part of the Apache.NMS 1.30 API available for download here. In order to use NMS in your .NET code you also need to download the Apache.NMS.ActiveMQ client, and to test your code you will need to download and install the ActiveMQ broker, which is written in Java and so requires the JRE to be installed as well.
The following table provides links to each download: For my examples I will be using the latest release of Apache.NMS and Apache.NMS.ActiveMQ as of this writing. You should simply pick the latest version that is stable. The same applies for ActiveMQ and the JDK/JRE; note that you only need the Java Runtime Environment (JRE) to install and run ActiveMQ. Install the JDK if you want to take advantage of some of the tools it offers for working with JMS providers. To start ActiveMQ, install the JRE (if you do not already have it installed - most people do) and unzip the ActiveMQ release into a directory - any directory will do. Open a command prompt, navigate to the folder with the ActiveMQ release, locate the "bin" folder, then type "activemq". You should see something like the following: Download and install the Apache.NMS and Apache.NMS.ActiveMQ libraries from the links in the table above. Unzip them into a directory on your hard drive so that you can reference them from Visual Studio. Open Visual Studio 2008/2010 and create a new Windows project of type "Class Library": Once the project is created, use the "Add Reference" dialog to browse to the directory where you unzipped the Apache.NMS files and add a reference to Apache.NMS.dll. Do the same for the Apache.NMS.ActiveMQ download. Note that each download contains builds for several different .NET versions; I chose the "net-3.5" version of each dll since I am using VS 2008 and targeting the 3.5 version of .NET. For my examples you will also need to install the latest version of NUnit from www.nunit.org. After you have installed NUnit, add a reference to nunit.framework.dll. Note that any unit testing framework should work. Add three classes to the project:
- A test harness class (ApacheNMSActiveMQTests.cs)
- A publisher class (TopicPublisher.cs)
- A subscriber class (TopicSubscriber.cs)
Your solution explorer should look something like the following: The test harness will be used to demonstrate the use of the two other classes. The TopicPublisher class is a container for a message producer, and TopicSubscriber is a container for a message consumer. The publisher, TopicPublisher, is a simple container/wrapper class that allows a client to easily send messages to a topic. Remember from the earlier discussion that topics allow for broadcast messaging scenarios: a single publisher sends a message to one or more subscribers, and all subscribers receive a copy of the message. Message producers typically have a lifetime equal to the amount of time it takes to send a message; however, for performance reasons you can extend that lifetime to the length of the application's lifetime. Like the TopicPublisher above, the TopicSubscriber class is a container/wrapper class that allows clients to "listen in" on, or "subscribe" to, a topic. The TopicSubscriber class typically has a lifetime equal to the lifetime of the application. The reason is pretty obvious: a publisher always knows when it will publish, but a subscriber never knows when the publisher will send a message. What the subscriber does is create a permanent "listener" to the topic; when a publisher sends a message to the topic, the subscriber will receive and process it. The following unit test shows the classes above used in conjunction with the Apache.NMS and Apache.NMS.ActiveMQ APIs to send and receive messages through ActiveMQ, which is Java based, from the .NET world! Here is a quick rundown of the ApacheNMSActiveMQTests class:
- Declare variables for the required NMS objects and the TopicSubscriber.
- Declare variables for the broker URI, the topic to subscribe/publish to, and the client and consumer ids.
- Create a ConnectionFactory object, create and start a Connection, and then create a Session to work with.
- Create and start the TopicSubscriber, which will be a listener/subscriber to the "TestTopic" topic. To receive messages you must register an event handler or lambda expression with the MessageReceivedDelegate delegate; in this example I in-lined a lambda expression for simplicity.
- In the test method, create a temporary publisher and send a message to the topic.
- Tear down and dispose of the subscriber and Session.
- Tear down and dispose of the Connection.

After you run the unit test you should see something like the following message: Note that ActiveMQ must be up and running for the example to work.
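Independently of the C# setup above, the queue and topic delivery rules described at the start of this article can be shown with a small in-memory sketch. This is plain Python, not NMS: the Queue and Topic classes here are illustrative stand-ins that mimic only the delivery semantics (round-robin versus broadcast), not a broker.

```python
from itertools import cycle


class Queue:
    """Point-to-point: each message goes to exactly one listener, round-robin."""
    def __init__(self):
        self.listeners = []
        self._next = None

    def subscribe(self, listener):
        self.listeners.append(listener)
        self._next = cycle(self.listeners)

    def send(self, message):
        next(self._next)(message)  # one copy, one receiver


class Topic:
    """Publish/subscribe: every subscriber receives its own copy."""
    def __init__(self):
        self.listeners = []

    def subscribe(self, listener):
        self.listeners.append(listener)

    def send(self, message):
        for listener in self.listeners:  # broadcast to all subscribers
            listener(message)


a, b = [], []
q = Queue()
q.subscribe(a.append)
q.subscribe(b.append)
q.send("m1")
q.send("m2")  # round-robin: a received "m1", b received "m2"

x, y = [], []
t = Topic()
t.subscribe(x.append)
t.subscribe(y.append)
t.send("m1")  # broadcast: both x and y received "m1"
```

The phone-call versus TV-broadcast analogy maps directly onto the two `send` methods: the queue hands each message to exactly one listener, while the topic copies it to every listener.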
Nearly everybody with more than a minimum amount of computer knowledge will have used the built-in Windows Task Manager, and knows what an important tool it can be. Whenever a program crashes, hangs, consumes too many resources or just shouldn't be there, often the quickest and easiest way to solve the problem is to use Task Manager to forcefully close the program. The problem with Task Manager is that, because it is such a vital troubleshooting component, malware often targets it and tries to block its use so that the malicious process cannot be terminated. Some more sophisticated malware can even block third-party task management software such as Process Explorer from running. If you're stuck and the default Task Manager has been blocked, or you can't run a third-party task manager tool, then things can become quite tricky. There is, however, a rather interesting solution to get around this problem, which is to use a task manager tool built to run in a Microsoft Excel spreadsheet. Most people would expect a utility like this to be an executable .exe file, but this one is actually a standard Office 97-2003 Worksheet .xls file with some built-in trickery. TaskManager.xls is a small (41KB) and simple task manager that has been created using the Visual Basic for Applications (VBA) programming language built into Excel and other Office applications. While it doesn't show you things like running services, performance graphs or network activity, it can list the currently running processes and terminate, suspend or resume any of them, which is the most important part when dealing with malware. For this to run you have to make sure macros are enabled in Excel, because their usage is disabled by default to protect against potential macro viruses.
If Macro’s are disabled for instance in Excel 2003, and you don’t get asked if you want to enable them for the current sheet, go to Tools -> Options -> Security -> Macro Security, and set the level to medium which will always ask to run a Macro in future. There are only 2 buttons and a blank window in TaskManager.xls to start with. The List processes button will populate the window with a list of all running and active processes on your computer, and the Execute commands button will perform one of the three tasks available of terminate, suspend or resume a process. These are used by entering t, s or r into column A of the worksheet, then pressing the button. The screenshot below shows that the MaliciousProcess.exe is to be suspended and Ransomware.exe terminated when the Execute commands button is pressed. Clicking the button will do just that, then press the List processes button again to update the list. Do note that like a traditional task manager tool, TaskManager.xls is unable to terminate protected processes. For example, nothing will happen if you try to terminate the Client Server Runtime Process (csrss.exe) from TaskManager.xls. TaskManager.xls is very useful but unfortunately it does have problems working in other Office suites. In Libre Office v4 clicking the List Processes button will prompt a runtime error, and Softmaker Office free version doesn’t support VBA. The free version of Kingsoft Office doesn’t support VBA either so won’t run although the professional version does support it and might work. Even the free Excel Viewer provided by Microsoft doesn’t work, so it appears that sadly the TaskManager.xls tool is only compatible with the real Microsoft Excel.
Below you will find several recent observations about the relationship between reading and science process skills. Significant improvement in both science and reading scores occurred when the regular basal reading program was replaced with reading in science that correlated with the science curriculum (Romance and Vitale, 2001). Teachers should help students recognize the important role that prior knowledge plays and teach them to use that knowledge when learning science through reading (Barton and Jordan, 2001). Most students arrive at the science teacher's classroom knowing how to read, but few understand how to use reading for learning science content (Santa, Havens, and Harrison, 1996). The same skills that make good scientists also make good readers: engaging prior knowledge, forming hypotheses, establishing plans, evaluating understanding, determining the relative importance of information, describing patterns, comparing and contrasting, making inferences, drawing conclusions, generalizing, evaluating sources, and so on (Armbruster, 1993). The skills in science are remarkably similar to those used in other subjects, especially reading. When students are doing science, following scientific procedures, and thinking as scientists, they are developing skills that are necessary for effective reading and understanding (Padilla, Muth and Lund Padilla, 1991). Students engaging in hands-on activities are forced to confront currently held cognitive frameworks with new ideas, and thus actively reconstruct meaning from experience (Shymansky, 1989). Because hands-on activities encourage students to generate their own questions whose answers are found by subsequent reading of their science textbook or other science materials, such activities can provide students with both a meaningful purpose for reading (Ulerick, 1989) and context-valid cognitive frames of reference from which to construct meaning from text (Nelson-Herber, 1986).
Reading and activity-oriented sciences emphasize the same intellectual skills and are both concerned with thinking processes. When a teacher helps students develop science process skills, reading processes are simultaneously being developed (Mechling & Oliver, 1983 and Simon & Zimmerman, 1980). Research indicates that a strong experienced-based science program, one in which students directly manipulate materials, can facilitate the development of language arts skills (Wellman, 1978). Science process skills have reading counterparts. For example, when a teacher is working on "describing" in science, students are learning to isolate important characteristics, enumerate characteristics, use appropriate terminology, and use synonyms which are important reading skills (Carter & Simpson, 1978). When students have used the process skills of observing, identifying, and classifying, they are better able to discriminate between vowels and consonants and to learn the sounds represented by letters, letter blends, and syllables (Murray & Pikul ski, 1978). Science instruction provides an alternative teaching strategy that motivates students who may have reading difficulties (Wellman, 1978). Children's involvement with process skills enables them to recognize more easily the contextual and structural clues in attacking new words and better equips them to interpret data in a paragraph. Science process skills are essential to logical thinking, as well as to forming the basic skills for learning to read (Barufaldi & Swift, 1977). Guszak defines reading readiness as a skill-complex. Of the three areas within the skill-complex, two can be directly enhanced by science process skills: (1) physical factors (health, auditory, visual, speech, and motor); and (2) understanding factors (concepts, processes). 
When students see, hear, and talk about science experiences, their understanding, perception, and comprehension of concepts and processes may improve (Barufaldi & Swift, 1977 and Bethel, 1974). The hands-on manipulative experiences science provides are the key to the relationship between process skills in both science and reading (Lucas & Burlando, 1975). Science activities provide opportunities for manipulating large quantities of multi-sensory materials which promote perceptual skills, i.e., tactile, kinesthetic, auditory, and visual (Neuman, 1969). These skills then contribute to the development of the concepts, vocabulary, and oral language skills (listening and speaking) necessary for learning to read (Wellman, 1978). Studies viewed cumulatively suggest that science instruction at the intermediate and upper elementary grades does improve the attainment of reading skills. The findings reveal that students have derived benefits in the areas of vocabulary enrichment, increased verbal fluency, enhanced ability to think logically, and improved concept formation and communication skills (Campbell, 1972; Kraft, 1961; Olson, 1971; Quinn & Kessler, 1976).
http://www.readingeducator.com/content/science/research.htm
Defining Dry Eyes: Doctors Agree That Dry Eyes Involve Water Loss in the Tear Film’s Aqueous Layer

Physicians look for a series of symptoms for dry eyes, not an exact cause or condition, says Bio-Logic Aqua Research Founder Sharon Kleyne.

Grants Pass, OR (PRWEB) April 09, 2012 - In a recent interview, Mrs. Kleyne discussed the latest attempts to define “dry eyes,” “dry eye syndrome” and “dry eye disease.” According to Mrs. Kleyne, the only agreement is that dry eyes involve a loss of water in the tear film’s “aqueous layer,” due either to excessive evaporation or to poor tear production.

The causes and symptoms of dry eyes are so complex and variable that doctors have not agreed on a precise clinical definition of the syndrome. Dry eyes are the most frequently cited reason for visiting an eye doctor and so common that ophthalmologists find it difficult to draw a precise line between normal eyes and abnormal eyes with dry eye disease (Mathers, 2005). That was the conclusion of eye health advocate Sharon Kleyne, host of the Sharon Kleyne Hour Power of Water syndicated radio show and founder of Bio-Logic Aqua Research.

The three-layered tear film covering the eye’s exposed portions is 99% water and extremely complex. The overlying “lipid layer” helps prevent water evaporation from the middle “aqueous (water) layer,” while the lower “mucin layer” adheres the tear film to the eye.

Dry eyes are experienced by nearly everyone, says Mrs. Kleyne. Tear film dehydration (water loss) begins at the moment of birth, when you first open your eyes, and eyes require constant hydration throughout life. Because we are all unique, no two individuals are affected in exactly the same way by eye dehydration. Doctors agree that maintaining a healthy, fully hydrated tear film is becoming an increasing challenge for everyone.
According to Ula Jurkunas, MD, corneal stem cell researcher at Harvard University, “To function well, the cornea (clear part of the eye) must be well hydrated by the tear film. Hydration is also essential to successful corneal stem cell transplants” (Jurkunas, 2011).

Sharon Kleyne notes that no physiologic variable correlates exactly with dry eye symptoms, although most measurable variables correlate to some degree. Instead, she explains, physicians look for a series of symptoms. The presence of one or more symptoms could indicate a dry eye condition (Korb, 2000).

The most common dry eye symptoms include eye irritation, a feeling of dryness in the eyes; itching, burning and grainy or scratchy eyes; increased eye allergies, and blurred vision (especially late in the day). Symptoms such as fatigue, headache, muscle aches and an elevated stress level may not even directly involve the eyes (Mathers, 2005).

This symptom-based definition works reasonably well, according to Mrs. Kleyne. The degree and duration of symptoms are critical since a large percentage of the adult population complains of at least mild dry eye symptoms at any given time. This includes 50% of adult females and a significant percentage of computer users and contact lens patients (Mathers, 2005).

In addition to symptoms, most (but not all) dry eye patients have at least one physiologic parameter outside the range of normal. Typically, tear production has decreased, tear film volume is low, tear film evaporation is high, and/or tear film osmolarity is elevated (Mathers, 2004). In addition, tears produced in dry eyes contain elevated levels of substances (metalloproteases and other proteinaceous compounds) that increase surface inflammation (Barton, 1995).

© 2012 Bio-Logic Aqua Research
For the original version on PRWeb visit: http://www.prweb.com/releases/prweb2012/4/prweb9381612.htm
http://www.redorbit.com/news/science/1112511020/defining_dry_eyes_doctors_agree_that_dry_eyes_involve_water/
A canticle (from the Latin canticulum, a diminutive of canticum, song) is a hymn (strictly excluding the Psalms) taken from the Bible. The term is often expanded to include ancient non-biblical hymns such as the Te Deum and certain psalms used liturgically.

These three canticles (the "Benedictus", the "Magnificat", and the "Nunc dimittis") are sometimes referred to as the "evangelical canticles", as they are taken from the Gospel of St Luke. They are sung every day (unlike those from the Old Testament which, as is shown above, are only of weekly occurrence). They are placed not amongst the psalms (as are the seven from the Old Testament), but separated from them by the Chapter, the Hymn, the Versicle and Response, and thus come immediately before the Prayer (or before the preces, if these are to be said). They are thus given an importance and distinction elevating them into great prominence, which is further heightened by the rubric which requires the singers and congregations to stand while they are being sung (in honour of the mystery of the Incarnation, to which they refer). Further, while the "Magnificat" is being sung at Solemn Vespers, the altar is incensed as at Solemn Mass. All three canticles are in use in the Greek and Anglican churches.

In the Breviary the above-named ten canticles are provided with antiphons and are sung in the same eight psalm-tones and in the same alternating manner as the psalms. To make the seven taken from the Old Testament suitable for this manner of singing, nos. 2-7 sometimes divide a verse of the Bible into two verses, thus increasing the number of Breviary verses. No. 1, however, goes much farther than this. It uses only a portion of the long canticle in Daniel, and condenses, expands, omits, and interverts verses and portions of verses. In the Breviary the canticle begins with verse 57, and ends with verse 56 (Dan., iii); and the penultimate verse is clearly an interpolation, "Benedicamus Patrem, et Filium . . .".
In addition to their Breviary use, some of the canticles are used in other connections in the liturgy; e.g. the "Nunc dimittis" as a tract at the Mass of the Feast of the Purification (when 2 February comes after Septuagesima); the "Benedictus" in the burial of the dead and in various processions. The use of the "Benedictus" and the "Benedicite" at the old Gallican Mass is interestingly described by Duchesne (Christian Worship: Its Origin and Evolution, London, 1903, 191-196). In the Office of the Greek Church the canticles numbered 1, 3, 5, 6, 7, 8, 9 are used at Lauds, but are not assigned to the same days as in the Roman Breviary. Two others (Isaiah 26:9-20, and Jonah 2:2-9) are added for Friday and Saturday respectively.

The ten canticles so far mentioned do not exhaust the portions of Sacred Scripture which are styled "canticles". There are, for example, those of Deborah and Barac, Judith, the "canticle of Canticles"; and many psalms (e.g. xvii, 1, "this canticle"; xxxviii, 1, "canticle of David"; xliv, 1, "canticle for the beloved"; and the first verse of Pss. lxiv, lxv, lxvi, lxvii, etc.). In the first verse of some psalms the phrase psalmus cantici (the psalm of a canticle) is found, and in others the phrase canticum psalmi (a canticle of a psalm). Cardinal Bona thinks that psalmus cantici indicated that the voice was to precede the instrumental accompaniment, while canticum psalmi indicated an instrumental prelude to the voice. This distinction follows from his view of a canticle as an unaccompanied vocal song, and of a psalm as an accompanied vocal song.

It is not easy to distinguish satisfactorily the meanings of psalm, hymn, canticle, as referred to by St. Paul in two places. Canticum appears to be generic - a song, whether sacred or secular; and there is reason to think that his admonition did not contemplate religious assemblies of the Christians, but their social gatherings.
In these the Christians were to sing "spiritual songs", and not the profane or lascivious songs common amongst the pagans. These spiritual songs were not exactly psalms or hymns. The hymn may then be defined as a metrical or rhythmical praise of God; and the psalm, an accompanied sacred song or canticle, either taken from the Psalms or from some less authoritative source (St. Augustine declaring that a canticle may be without a psalm but not a psalm without a canticle).

In addition to the ten canticles enumerated above, the Roman Breviary places in its index, under the heading "Cantica", the "Te Deum" (at the end of Matins for Sundays and Festivals, but there styled "Hymnus SS. Ambrosii et Augustini") and the "Quicumque vult salvus esse" (Sundays at Prime, but there styled "Symbolum S. Athanasii", the "Creed of St. Athanasius"). To these are sometimes added by writers the "Gloria in excelsis", the "Trisagion", and the "Gloria Patri" (the Lesser Doxology). In the "Psalter and Canticles Pointed for chanting" (Philadelphia, 1901), for the use of the Evangelical Lutheran Congregations, occurs a "Table of canticles" embracing Nos. 1, 3, 8, 9, 10, besides certain psalms, and the "Te Deum" and "Venite" (Ps. xciv, used at the beginning of Matins in the Roman Breviary).

The word Canticles is thus seen to be somewhat elastic in its comprehension. On the one hand, while it is used in the common parlance in the Church of England to cover several of the enumerated canticles, the Prayer Book applies it only to the "Benedicite", while in its Calendar the word Canticles is applied to what is commonly known as the "Song of Solomon" (the Catholic "Canticle of Canticles", Vulgate, "Canticum canticorum").

The nine Canticles were originally chanted in their entirety every day, with a short refrain inserted between each verse.
Eventually, short verses (troparia) were composed to replace these refrains, a process traditionally inaugurated by Saint Andrew of Crete. Gradually over the centuries, the verses of the Biblical Canticles were omitted (except for the Magnificat) and only the composed troparia were read, linked to the original canticles by an Irmos. During Great Lent, however, the original Biblical Canticles are still read. Another Biblical Canticle, the Nunc Dimittis, is either read or sung at Vespers.
http://www.reference.com/browse/Canticle+of+Canticles
When a child who has been hospitalized with a serious infection is sent home to complete a prolonged course of antibiotics, they can receive their medicine in two ways — by mouth, or intravenously, via a peripherally inserted central catheter (PICC) line. Though PICC lines can be scary for pediatric patients and require caregivers to be trained in their use and care, many doctors prefer them to oral medicines for long-term antibiotic treatments.

One CHOP researcher, Ron Keren, MD, MPH, director of the Center for Pediatric Clinical Effectiveness, was recently awarded nearly two million dollars from the Patient-Centered Outcomes Research Institute (PCORI) to lead a study examining whether oral antibiotics are as effective at treating infection over an extended period as PICC lines. “These two antibiotic treatment options have major implications for the overall experience of the child, families and caregivers, but there is a lack of real-world evidence on their benefits and drawbacks to help clinicians and patient families make an informed choice,” said Dr. Keren.

A type of intravenous (IV) catheter, a PICC line is a long, flexible tube that is inserted in a peripheral vein, often in the arm or neck, and advanced until its tip rests near the heart. Because they tap directly into the circulatory system, PICC lines offer maximum drug delivery. Unlike regular IV catheters, PICC lines can stay in the body for weeks to months, but they require regular maintenance. PICC lines must be flushed daily, their dressings have to be inspected and changed, and patients with PICC lines must avoid getting them wet or dirty — a tall order for some active pediatric patients. In addition, a variety of equipment is required to use and maintain PICC lines, including infusion pumps and portable IV poles.

PICC lines do have some risks. They can clot, break, or become dislodged.
And because they sit in large blood vessels directly above the heart, any bacteria that are inadvertently introduced into the catheter go directly to the heart and are pumped throughout the body, which can lead to a dangerous infection called sepsis.

Oral antibiotics, on the other hand, are much easier for patients to take and caregivers to manage. However, because oral medications must pass through the digestive system, to have the same efficacy as IV medications, oral antibiotics must have high “bioavailability” — the percentage of the drug that reaches the blood. Drugs administered via PICC lines have, by definition, 100 percent bioavailability.

“If we find that the prolonged IV option is no better than the oral route, we think that most families would prefer for their child to take oral antibiotics,” Dr. Keren noted. “However, if IV antibiotics are marginally better than oral antibiotics, then that benefit will need to be weighed against any reduction in quality of life and complications that we anticipate with the PICC lines.”
http://www.research.chop.edu/blog/comparing-oral-and-intravenous-antibiotics/
When the last oil well runs dry

Just as certain as death and taxes is the knowledge that we shall one day be forced to learn to live without oil. Exactly when that day will dawn nobody knows, but people in middle age today can probably expect to be here for it. Long before it arrives we shall have had to commit ourselves to one or more of several possible energy futures. And the momentous decisions we take in the next few years will determine whether our heirs thank or curse us for the energy choices we bequeath to them.

There will always be some oil somewhere, but it may soon cost too much to extract and burn it. It may be too technically difficult, too expensive compared with other fuels, or too polluting. An article in Scientific American in March 1998 by Dr Colin Campbell and Jean Laherrere concluded: "The world is not running out of oil - at least not yet. What our society does face, and soon, is the end of the abundant and cheap oil on which all industrial nations depend."

They suggested there were perhaps 1,000 billion barrels of conventional oil still to be produced, though the US Geological Survey's World Petroleum Assessment 2000 put the figure at about 3,000 billion barrels.

Too good to burn

The world is now producing about 75 million barrels per day (bpd). Conservative (for which read pessimistic) analysts say global oil production from all possible sources, including shale, bitumen and deep-water wells, will peak at around 2015 at about 90 million bpd, allowing a fairly modest increase in consumption.

On Campbell and Laherrere's downbeat estimate, that should last about 30 years at 90 million bpd, so drastic change could be necessary soon after 2030. And it would be drastic: 90% of the world's transport depends on oil, for a start.
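The 30-year figure is straightforward arithmetic on the two quoted estimates (a sketch for the curious reader; the reserve and production numbers are simply the ones cited in the article, not new data):

```python
# Back-of-the-envelope depletion estimate using the article's figures:
# ~1,000 billion barrels of conventional oil remaining (Campbell & Laherrere),
# consumed at a projected peak rate of ~90 million barrels per day.
remaining_barrels = 1_000e9   # 1,000 billion barrels
daily_production = 90e6       # 90 million barrels per day

years = remaining_barrels / daily_production / 365
print(f"About {years:.0f} years of supply")  # prints "About 30 years of supply"
```

At the USGS's more optimistic 3,000 billion barrels, the same arithmetic gives roughly 90 years, which is why the choice of reserve estimate dominates the debate.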
Most of the chemical and plastic trappings of life which we scarcely notice - furniture, pharmaceuticals, communications - need oil as a feedstock. The real pessimists want us to stop using oil for transport immediately and keep it for irreplaceable purposes like these.

In May 2003 the Association for the Study of Peak Oil and Gas (Aspo), founded by Colin Campbell, held a workshop on oil depletion in Paris. One of the speakers was an investment banker, Matthew Simmons, a former adviser to President Bush's administration. From The Wilderness Publications reported him as saying: "Any serious analysis now shows solid evidence that the non-FSU [former Soviet Union], non-Opec [Organisation of Petroleum Exporting Countries] oil has certainly petered out and has probably peaked...

No cheap oil, no cheap food

"I think basically that peaking of oil will never be accurately predicted until after the fact. But the event will occur, and my analysis is... that peaking is at hand, not years away.

"If I'm right, the unforeseen consequences are devastating... If the world's oil supply does peak, the world's issues start to look very different.

"There really aren't any good energy solutions for bridges, to buy some time, from oil and gas to the alternatives. The only alternative right now is to shrink our economies."

Planning pays off

Aspo suggests the key date is not when the oil runs out, but when production peaks, meaning supplies decline. It believes the peak may come by about 2010. Fundamental change may be closing on us fast.

And even if the oil is there, we may do better to leave it untouched. Many scientists are arguing for cuts in emissions of the main greenhouse gas we produce, carbon dioxide, by at least 60% by mid-century, to try to avoid runaway climate change. That would mean burning far less oil than today, not looking for more.

There are other forms of energy, and many are falling fast in price and will soon compete with oil on cost, if not for convenience.
So there is every reason to plan for the post-oil age. Does it have to be devastating? Different, yes - but our forebears lived without oil and thought themselves none the worse. We shall have to do the same, so we might as well make the best of it. And the best might even be an improvement on today.
http://www.resilience.org/stories/2004-04-18/when-last-oil-well-runs-dry
Water and sediment testing

EPA is currently collecting and analyzing water and sediment samples to help states and other federal agencies understand the immediate and long-term impacts of oil contamination along the Gulf coast. The results and the interpretation of all data collected by EPA will be posted to www.epa.gov/bpspill.

Water and sediment samples are being taken prior to oil reaching the area to determine water quality and sediment conditions that are typical of selected bays and beaches in Louisiana, Mississippi, Alabama, and the Florida panhandle. This data will be used to supplement existing data generated from previous water quality surveys conducted by states, EPA, and others. Water sampling will continue once the oil reaches the shore; periodic samples will be collected to document water quality changes. EPA will make data publicly available as quickly as possible. Other state and federal agencies make beach closure and seafood harvesting and consumption determinations, but the data generated by EPA will assist in their evaluations.

Why is EPA sampling and monitoring the water?

EPA is tracking the prevalence of potentially harmful chemicals in the water as a result of this spill to determine the level of risk posed to fish and other wildlife. While these chemicals can impact ecosystems, drinking water supplies are not expected to be affected. The oil itself can cause direct effects on fish and wildlife, for example when it coats the feathers of waterfowl and other types of birds. In addition, other chemical compounds can have detrimental effects. Monitoring information allows EPA to estimate the amount of these compounds that may reach ecological systems. When combined with available information on the toxicity of these compounds, EPA scientists can estimate the likely magnitude of effects on fish, wildlife, and human health.
http://www.restorethegulf.gov/release/2010/09/07/water-and-sediment-testing
All-metal hip implants can damage soft tissue: FDA

(Reuters) - Metal-on-metal hip implants can cause soft-tissue damage and pain, which could lead to further surgery to replace the implant, the U.S. health regulator said, following several recalls of the artificial hip parts.

All-metal hip implants were developed to be more durable than traditional implants but have become a major cause of concern following several safety issues and patient complaints. The traditional implants combine a ceramic or metal ball with a plastic socket.

The U.S. Food and Drug Administration said all-metal implants can shed metal where two components connect, such as the ball and the cup that slide against each other during walking or running. Such release of metal causes wear and tear of the implant and can damage bone and soft tissue surrounding the implant.

The agency said surgeons should select a metal-on-metal hip implant for their patient only after determining that its benefits outweigh those of an alternative hip system.

Johnson & Johnson, the biggest manufacturer of all-metal devices, recalled its ASR hip implant in 2010 following safety problems. Smith & Nephew withdrew a component of one of its all-metal artificial hip systems last June, following a higher level of patient problems with the device. Stryker Corp began recalling some components of its implant in July due to risks associated with corrosion. Other hip implant makers include Zimmer Holdings Inc and Wright Medical Group.

The regulator, however, added that it does not have enough data to specify the concentration of metal ions in a patient's body or blood necessary to produce adverse effects. The reaction seemed to be specific to individual patients, the FDA said on its website.

(Reporting by Esha Dey in Bangalore; Editing by Don Sebastian)
http://www.reuters.com/article/2013/01/17/us-fda-hips-idUSBRE90G0W520130117
Word of the Day, Website of the Day, Number to Know, This Day in History, Today’s Featured Birthday and Daily Quote.

Word of the Day
Vermicular ver-MIK-yuh-ler (adjective): Resembling a worm in form or motion; of, relating to or caused by worms - www.merriam-webster.com

Website of the Day
Martin Luther King Research and Education Institute
As we approach Martin Luther King Jr. Day, take a moment to learn more about the great American leader. This site collects all kinds of documents pertaining to King, lists the latest news and much more.

Number to Know
1983: Year when President Ronald Reagan signed Martin Luther King Jr. Day into law.

This Day in History
Jan. 16, 2003: The Space Shuttle Columbia takes off for mission STS-107, which would be its final one. Columbia disintegrated 16 days later on re-entry.

Today’s Featured Birthday
Baseball star Albert Pujols (33)

Daily Quote
“I look to a day when people will not be judged by the color of their skin, but by the content of their character.” – Dr. Martin Luther King Jr.
http://www.ridgecrestca.com/article/20130116/NEWS/301169992
Leri-Weil syndrome (medical condition): a rare genetic disorder characterized by short forearms.

Leri-Weil syndrome is listed as a "rare disease" by the Office of Rare Diseases (ORD) of the National Institutes of Health (NIH). This means that Leri-Weil syndrome, or a subtype of Leri-Weil syndrome, affects fewer than 200,000 people in the US population. Source - National Institutes of Health (NIH)
http://www.rightdiagnosis.com/medical/leri_weil_syndrome.htm
Healthy Affordable Food RWJF Priority: Increase access to high-quality, affordable foods through new or improved grocery stores and healthier corner stores and bodegas Research shows that having a supermarket or grocery store in a neighborhood increases residents’ fruit and vegetable consumption and is associated with lower body mass index (BMI) among adolescents. Yet, many families do not have access to healthy affordable foods in their neighborhoods. This is especially true in lower-income communities, where convenience stores and fast-food restaurants are widespread, but supermarkets and farmers’ markets are scarce.
http://www.rwjf.org/en/about-rwjf/program-areas/childhood-obesity/strategy/policy-priority-healthy-affordable-food.html?d=location_type%3A549
Last Friday, The Hill’s Congress Blog highlighted the innovative financing approaches that governments, NGOs and the private sector are using to fund global health. Programs like the Global Alliance for Vaccines and Immunization (GAVI) and The Global Fund to Fight AIDS, TB and Malaria are not only ensuring that health interventions are getting to the people that need them most, they are helping to promote market growth and drive down prices.

Here’s an excerpt on public-private partnerships from the blog:

“Millions of lives are saved today in developing countries because of bold, innovative financing arrangements over last 10 years. These financing mechanisms are good examples of private sector partnership with public sector for common good. These financing initiatives have pooled large public sector funding with private sector resources, thus allowing tax payers funds to have much larger impact than would otherwise be possible. Some of the examples are given below.”

USAID’s Neglected Tropical Disease (NTD) Program is one such collaboration. In a press statement released last fall, Dr. Ariel Pablos-Mendez, Assistant Administrator for USAID’s Global Health Bureau, states:

“To date, USAID’s NTD program is the largest public-private partnership collaboration in our 50 year history. Over the past six years, USAID has leveraged over $3 billion in donated medicines reflecting one of the most cost effective public health programs. Because of this support, we are beginning to document control and elimination of these diseases in our focus countries and we are on track to meet the 2020 goals.”

You can also read about how Sabin is helping countries create sustainable access to immunization financing here.
http://www.sabin.org/updates/blog/innovation-fund-global-health?language=fr
Stat: Of all infertile women, an estimated 15 percent are infertile because of PID.

What is it exactly?
"Pelvic inflammatory disease" is shorthand for any serious, non-specific bacterial infection of the reproductive organs that are housed in the pelvis: the uterus, uterine lining, fallopian tubes, and/or ovaries. These infections usually start in the vagina and, when left untreated, can progressively infect other reproductive organs. 20% of PID cases are found in teens, who often are afraid or unable to get reproductive health care. PID can result in permanent infertility and chronic pain.

About how many people have it?
About one million cases of PID are reported in the United States annually.

How is it spread?
In most cases, other sexually transmitted diseases and infections such as gonorrhea and chlamydia are at the root of PID, especially when they are left untreated. Some cases of PID are due to infections with more than one type of bacteria.

What are its symptoms?
• painful periods that may last longer than previous cycles
• unusual vaginal discharge
• spotting or cramping between periods
• pain or cramping during urination, or blood in the urine
• lower back or abdominal pain
• nausea or vomiting
• pain during vaginal intercourse

How is it diagnosed?
PID is often difficult to diagnose, and it is widely thought that millions of cases each year go undiagnosed. To diagnose PID, you will need a pelvic exam which includes a Pap smear, and a possible laparoscopy (a diagnostic microsurgical procedure that can usually be done in an office visit) in order for your doctor or clinician to take a close look at your reproductive system. It is also imperative that you tell your doctor or clinician if you have been sexually active with a partner and what your sexual history has been.

Is it treatable?
In some cases, antibiotics, bed rest, and sexual celibacy are prescribed. In other cases, surgery may be required, including the possible removal of some reproductive organs.
Is it curable? In some cases, but it can recur even once treated if the person becomes reinfected.

Can it affect fertility? PID can lead to permanent sterility or ectopic pregnancy.

Can it cause death? Almost any bacterial infection, if it becomes serious enough or affects enough of the body's systems, can potentially cause severe injury or death.

How can we protect against it? Using condoms during vaginal intercourse offers a very high level of protection from PID. Annual STD screenings also reduce the risk by finding other STDs or STIs and treating them before they can progress to cause PID. Because PID is caused by other untreated infections, it is one of many reasons why it is so important for women to get gynecological exams and full STI screenings at least once every year, without fail.
Ever wonder what would happen if every single adult in the U.S. took a few hours each month to support a program that promotes the well-being of children? Perhaps you would choose to advocate for a child in an unstable environment; or a child in poor health; or one who is struggling academically; or one who is facing a bully? What kind of impact would that make on the future of our country?

I recently came across some information about National Make a Difference in Children Month, a grassroots call to action sponsored by long-time child advocate Kim Ratz. The intention of this annual observance is to raise awareness of how our actions can make a positive difference to a child. Ms. Ratz outlines 4 key actions we can take to have a direct impact on the life of a child on her website:

1. Pick one (or more) event or activity to do with a child … that will make some kind of positive difference or impact on that child. Need ideas? Read 100+ Ways to Make a Difference to Children.

2. Support an organization that serves children … It could be your local community ed. or schools, YMCA, Boy or Girl Scouts, place of worship, park and recreation or any other organization that serves kids.

3. Tell your policy makers to support initiatives that are good for kids … like your school board, city council, county commissioners, state legislators & congressional delegation; summer is generally a more relaxed time to communicate with them. Share your own story about Making a Difference to Children … and WHY it's important to support programs for children.

4. Tell other people about this campaign … like your neighbors, relatives, friends, people at work, worship, school or play.
Here are some more ideas from Early Childhood News and Resources on how you can make a difference to a child this month:

- Volunteer at a local center that helps teen or single mothers (or fathers)
- Volunteer with your local elementary school
- Help at a soup kitchen for needy families
- Help at church with Sunday School, VBS or another faith-based program
- Locate a service in your area that assists homeless children with school supplies, medical care or social-emotional development
- Volunteer to read to kids at your local library
- Teach classes at a local rec center or community center: arts, crafts, reading, sports, ASL, music, etc.
- Offer your time at the Foundation for the Blind (they often run children's classes)
- Find a local farm that hosts classes for special needs kiddos and volunteer there (horse therapy, etc.)
- Don't have time to volunteer? How about a simple donation?

What can YOU do to help a child in need? Share your ideas and inspiration!
XML and the Second-Generation Web; May 1999; Scientific American Magazine; by Bosak, Bray; 5 Page(s)

Give people a few hints, and they can figure out the rest. They can look at this page, see some large type followed by blocks of small type and know that they are looking at the start of a magazine article. They can look at a list of groceries and see shopping instructions. They can look at some rows of numbers and understand the state of their bank account. Computers, of course, are not that smart; they need to be told exactly what things are, how they are related and how to deal with them. Extensible Markup Language (XML for short) is a new language designed to do just that, to make information self-describing. This simple-sounding change in how computers communicate has the potential to extend the Internet beyond information delivery to many other kinds of human activity. Indeed, since XML was completed in early 1998 by the World Wide Web Consortium (usually called the W3C), the standard has spread like wildfire through science and into industries ranging from manufacturing to medicine.
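To make "self-describing" concrete, here is a minimal sketch in Python; the grocery-list markup and element names are invented for this illustration, not drawn from the article:

```python
# A tiny self-describing document: each element names what it is, so a
# program can act on the structure without human-style guessing.
# (The markup below is a made-up illustration, not from the article.)
import xml.etree.ElementTree as ET

doc = """
<shoppinglist>
  <item quantity="2">milk</item>
  <item quantity="6">eggs</item>
</shoppinglist>
"""

root = ET.fromstring(doc)
for item in root.findall("item"):
    print(item.get("quantity"), item.text)
# prints:
# 2 milk
# 6 eggs
```

Because the tags say what each piece of data is, the same few lines work whether the list holds two items or two thousand; nothing depends on layout hints like type size or position on the page.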
Absolute and Relational Theories of Space and Motion

Since antiquity, natural philosophers have struggled to comprehend the nature of three tightly interconnected concepts: space, time, and motion. A proper understanding of motion, in particular, has been seen to be crucial for deciding questions about the natures of space and time, and their interconnections. Since the time of Newton and Leibniz, philosophers’ struggles to comprehend these concepts have often appeared to take the form of a dispute between absolute conceptions of space, time and motion, and relational conceptions. This article guides the reader through some of the history of these philosophical struggles. Rather than taking sides in the (alleged) ongoing debates, or reproducing the standard dialectic recounted in most introductory texts, we have chosen to scrutinize carefully the history of the thinking of the canonical participants in these debates — principally Descartes, Newton, Leibniz, Mach and Einstein. Readers interested in following up either the historical questions or current debates about the natures of space, time and motion will find ample links and references scattered through the discussion and in the Other Internet Resources section below.

- 1. Introduction
- 2. Aristotle
- 3. Descartes
- 4. Newton
- 5. Absolute Space in the Twentieth Century
- 6. Leibniz
- 7. ‘Not-Newton’ versus ‘Be-Leibniz’
- 8. Mach and Later Machians
- 9. Relativity and Motion
- 10. Conclusion
- Other Internet Resources
- Related Entries

Things change. A platitude perhaps, but still a crucial feature of the world, and one which causes many philosophical perplexities — see for instance the entry on Zeno's Paradoxes. For Aristotle, motion (he would have called it ‘locomotion’) was just one kind of change, like generation, growth, decay, fabrication and so on.
The atomists held on the contrary that all change was in reality the motion of atoms into new configurations, an idea that was not to begin to realize its full potential until the Seventeenth Century, particularly in the work of Descartes. (Of course, modern physics seems to show that the physical state of a system goes well beyond the geometrical configuration of bodies. Fields, while determined by the states of bodies, are not themselves configurations of bodies if interpreted literally, and in quantum mechanics bodies have ‘internal states’ such as particle spin.) Not all changes seem to be merely the (loco)motions of bodies in physical space. Yet since antiquity, in the western tradition, this kind of motion has been absolutely central to the understanding of change. And since motion is a crucial concept in physical theories, one is forced to address the question of what exactly it is.

The question might seem trivial, for surely what is usually meant by saying that something is moving is to say that it is moving relative to something, often tacitly understood between speakers. For instance: the car is moving at 60mph (relative to the road and things along it), the plane is flying (relative) to London, the rocket is lifting off (the ground), or the passenger is moving (to the front of the speeding train). Typically the relative reference body is either the surroundings of the speakers, or the Earth, but this is not always the case. For instance, it seems to make sense to ask whether the Earth rotates about its axis West-East diurnally or whether it is instead the heavens that rotate East-West; but if all motions are to be reckoned relative to the Earth, then its rotation seems impossible. But if the Earth does not offer a unique frame of reference for the description of motion, then we may wonder whether any arbitrary object can be used for the definition of motions: are all such motions on a par, none privileged over any other?
It is unclear whether anyone has really, consistently espoused this view: Aristotle, perhaps, in the Metaphysics; Descartes and Leibniz are often thought to have but, as we'll see, those claims are suspect; possibly Huygens, though his remarks remain cryptic; Mach at some moments perhaps. If this view were correct, then the question of whether the Earth or heavens rotate would be meaningless, merely different but equivalent expressions of the facts. But suppose, like Aristotle, you take ordinary language accurately to reflect the structure of the world; then you could recognize systematic everyday uses of ‘up’ and ‘down’ that require some privileged standards — uses that treat things closer to a point at the center of the Earth as more ‘down’ and motions towards that point as ‘downwards’. Of course we would likely explain this usage in terms of the fact that we and our language evolved in a very noticeable gravitational field directed towards the center of the Earth, but for Aristotle, as we shall see, this usage helped identify an important structural feature of the universe, which itself was required for the explanation of weight.

Now a further question arises: how should a structure, such as a preferred point in the universe, which privileges certain motions, be understood? What makes that point privileged? One might expect that Aristotle simply identified it with the center of the Earth, and so relative to that particular body; but in fact he did not adopt that tacit convention as fundamental, for he thought it possible for the Earth to move from the ‘down’ point. Thus the question arises (although Aristotle does not address it explicitly) of whether the preferred point is somewhere picked out in some other way by the bodies in the universe — the center of the heavens perhaps? Or is it picked out quite independently of the arrangements of matter?
The issues that arise in this simple theory help frame the debates between later physicists and philosophers concerning the nature of motion; in particular, we will focus on the theories of Descartes, Newton, Leibniz, Mach and Einstein, and their interpretations. But similar issues circulate through the different contexts: is there any kind of privileged sense of motion, a sense in which things can be said to move or not, not just relative to this or that reference body, but ‘truly’? If so, can this true motion be analyzed in terms of motions relative to other bodies — to some special body, or to the entire universe perhaps? (And in relativity, in which distances, times and measures of relative motion are frame-dependent, what relations are relevant?) If not, then how is the privileged kind of motion to be understood: as relative to space itself — something physical but non-material — perhaps? Or can some kinds of motion be best understood as not being spatial changes — changes of relative location or of place — at all?

To see that the problem of the interpretation of spatiotemporal quantities as absolute or relative is endemic to almost any kind of mechanics one can imagine, we can look to one of the simplest theories — Aristotle's account of natural motion (e.g., On the Heavens I.2). According to this theory it is because of their natures, and not because of ‘unnatural’ forces, that heavy bodies move down, and ‘light’ things (air and fire) move up; it is their natures, or ‘forms’, that constitute the gravity or weight of the former and the levity of the latter. This account only makes sense if ‘up’ and ‘down’ can be unequivocally determined for each body. According to Aristotle, up and down are fixed by the position of the body in question relative to the center of the universe, a point coincident with the center of the Earth. That is, the theory holds that heavy bodies naturally move towards the center, while light bodies naturally move away.
Does this theory involve absolute or merely relative quantities? It depends on how the center is conceived. If the center were identified with the center of the Earth, then the theory could be taken to eschew absolute quantities: it would simply hold that the natural motions of any body depend on its position relative to another, namely the Earth. But Aristotle is explicit that the center of the universe is not identical with, but merely coincident with the center of the Earth (e.g., On the Heavens II.14): since the Earth itself is heavy, if it were not at the center it would move there! So the center is not identified with any body, and so perhaps direction-to-center is an absolute quantity in the theory, not understood fundamentally as direction to some body (merely contingently as such if some body happens to occupy the center). But this conclusion is not clear either. In On the Heavens II.13, admittedly in response to a different issue, Aristotle suggests that the center itself is ‘determined’ by the outer spherical shell of the universe (the aetherial region of the fixed stars). If this is what he intends, then the natural law prescribes motion relative to another body after all — namely up or down with respect to the mathematical center of the stars. It would be to push Aristotle's writings too hard to suggest that he was consciously wrestling with the issue of whether mechanics required absolute or relative quantities of motion, but what is clear is that these questions arise in his physics and his remarks impinge on them. His theory also gives a simple model of how these questions arise: a physical theory of motion will say that ‘under such-and-such circumstances, motion of so-and-so a kind will occur’ — and the question of whether that kind of motion makes sense in terms of the relations between bodies alone arises automatically. 
Aristotle may not have recognized the question explicitly, but we see it as one issue in the background of his discussion of the center. The issues are, however, far more explicit in Descartes' physics; and since the form of his theory is different, the ‘kinds of motion’ in question are quite different — as they change with all the different theories that we discuss. Descartes argued in his 1644 Principles of Philosophy (see Book II) that the essence of matter was extension (i.e., size and shape) because any other attribute of bodies could be imagined away without imagining away matter itself. But he also held that extension constitutes the nature of space; hence he concluded that space and matter were one and the same thing. An immediate consequence of the identification is the impossibility of the vacuum: if every region of space is a region of matter, then there can be no space without matter. Thus Descartes' universe is ‘hydrodynamical’ — completely full of mobile matter in different sized pieces, rather like a bucket full of water and lumps of ice of different sizes, which has been stirred around. Since fundamentally the pieces of matter are nothing but extension, the universe is in fact nothing but a system of geometric bodies in motion without any gaps. (Descartes held that all other properties arise from the configurations and motions of such bodies — from geometric complexes. See Garber 1992 for a comprehensive study.)

The identification of space and matter poses a puzzle about motion: if the space that a body occupies literally is the matter of the body, then when the body — i.e., the matter — moves, so does the space that it occupies. Thus it doesn't change place, which is to say that it doesn't move after all! Descartes resolved this difficulty by taking all motion to be the motion of bodies relative to one another, not a literal change of space.
Now, a body has as many relative motions as there are bodies but it does not follow that all are equally significant. Indeed, Descartes uses several different concepts of relational motion. First there is ‘change of place’, which is nothing but motion relative to this or that arbitrary reference body (II.13). In this sense no motion of a body is privileged, since the speed, direction, and even curve of a trajectory depends on the reference body, and none is singled out. Next, he discusses motion in ‘the ordinary sense’ (II.24). This is often conflated with mere change of arbitrary place, but it in fact differs because according to the rules of ordinary speech one properly attributes motion only to bodies whose motion is caused by some action, not to any relative motion. (For instance, a person sitting on a speeding boat is ordinarily said to be at rest, since ‘he feels no action in himself’.) Finally, he defined motion ‘properly speaking’ (II.25) to be a body's motion relative to the matter contiguously surrounding it, which the impossibility of a vacuum guarantees to exist. (Descartes’ definition is complicated by the fact that he modifies this technical concept to make it conform more closely to the pre-theoretical sense of ‘motion’; however, in our discussion transference is all that matters, so we will ignore those complications.) Since a body can only be touching one set of surroundings, Descartes (dubiously) argued that this standard of motion was unique. What we see here is that Descartes, despite holding motion to be the motion of bodies relative to one another, also held there to be a privileged sense of motion; in a terminology sometimes employed by writers of the period, he held there to be a sense of ‘true motion’, over and above the merely relative motions. Equivalently, we can say that Descartes took motion (‘properly speaking’) to be a complete predicate: that is, moves-properly-speaking is a one-place predicate. 
(In contrast, moves-relative-to is a two-place predicate.) And note that the predicate is complete despite the fact that it is analyzed in terms of relative motion. (Formally, let contiguous-surroundings be a function from bodies to their contiguous surroundings; then x moves-properly-speaking is analyzed as x moves-relative-to contiguous-surroundings(x).) This example illustrates why it is crucial to keep two questions distinct: on the one hand, is motion to be understood in terms of relations between bodies or by invoking something additional, something absolute; on the other hand, are all relative motions equally significant, or is there some ‘true’, privileged notion of motion? Descartes' views show that eschewing absolute motion is logically compatible with accepting true motion, which is of course not to say that his definitions of motion are themselves tenable.

There is an interpretational tradition which holds that Descartes only took the first, ‘ordinary’ sense of motion seriously, and introduced the second notion to avoid conflict with the Catholic Church. Such conflict was a real concern, since the censure of Galileo's Copernicanism took place only 11 years before publication of the Principles, and had in fact dissuaded Descartes from publishing an earlier work, The World. Indeed, in the Principles (III.28) he is at pains to explain how ‘properly speaking’ the Earth does not move, because it is swept around the Sun in a giant vortex of matter — the Earth does not move relative to its surroundings in the vortex.

The difficulty with the reading, aside from the imputation of cowardice to the old soldier, is that it makes nonsense of Descartes' mechanics, a theory of collisions. For instance, according to his laws of collision if two equal bodies strike each other at equal and opposite velocities then they will bounce off at equal and opposite velocities (Rule I).
On the other hand, if the very same bodies approach each other with the very same relative speed, but at different speeds, then they will move off together in the direction of the faster one (Rule III). But if the operative meaning of motion in the Rules is the ordinary sense, then these two situations are just the same situation, differing only in the choice of reference frame, and so could not have different outcomes — bouncing apart versus moving off together. It seems inconceivable that Descartes could have been confused in such a trivial way. (Additionally, as Pooley 2002 points out, just after he claims that the Earth is at rest ‘properly speaking’, Descartes argues that the Earth is stationary in the ordinary sense, because common practice is to determine the positions of the stars relative to the Earth. Descartes simply didn't need motion properly speaking to avoid religious conflict, which again suggests that it has some other significance in his system of thought.)

Thus Garber (1992, Chapters 6–8) proposes that Descartes actually took the unequivocal notion of motion properly speaking to be the correct sense of motion in mechanics. Then Rule I covers the case in which the two bodies have equal and opposite motions relative to their contiguous surroundings, while Rule VI covers the case in which the bodies have different motions relative to those surroundings — one is perhaps at rest in its surroundings. That is, exactly what is needed to make the rules consistent is the kind of privileged, true sense of motion provided by Descartes' second definition. Insurmountable problems with the rules remain, but rejecting the traditional interpretation and taking motion properly speaking seriously in Descartes' philosophy clearly gives a more charitable reading.
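The frame-dependence point in this argument can be made vivid with a small computational sketch (the velocities and the function name here are illustrative assumptions, not Descartes' own figures): a Galilean change of reference frame turns the setup of Rule I into the setup of Rule III, so if motion meant only frame-relative change of place, the two rules would be describing one and the same encounter.

```python
# Galilean boost: the velocities of bodies as seen from a frame that is
# itself moving at speed u. (Illustrative sketch with invented values.)
def boost(velocities, u):
    return tuple(v - u for v in velocities)

# Rule I's setup: two equal bodies approach at equal and opposite velocities.
rule_I = (1.0, -1.0)

# Seen from a frame moving at u = -1.0, the very same encounter looks like
# Rule III's setup: one body moving at speed 2.0 toward another at rest,
# with the same relative speed of 2.0.
rule_III = boost(rule_I, -1.0)
print(rule_III)  # (2.0, 0.0)
```

Since the two setups differ only by the choice of frame, rules that assign them different outcomes cannot be using the ordinary, frame-relative sense of motion.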
In an unpublished essay — De Gravitatione (Newton, 2004) — and in a Scholium to the definitions given in his 1687 Mathematical Principles of Natural Philosophy (see Newton, 1999 for an up-to-date translation), Newton attacked both of Descartes' notions of motion as candidates for the operative notion in mechanics. (See Stein 1967 and Rynasiewicz 1995 for important, and differing, views on the issue. This critique is studied in more detail in the entry Newton's views on space, time, and motion.) The most famous argument invokes the so-called ‘Newton's bucket’ experiment. Stripped to its basic elements one compares:

- (i) a bucket of water hanging from a cord as the bucket is set spinning about the cord's axis, with
- (ii) the same bucket and water when they are rotating at the same rate about the cord's axis.

As is familiar from any rotating system, there will be a tendency for the water to recede from the axis of rotation in the latter case: in (i) the surface of the water will be flat (because of the Earth's gravitational field) while in (ii) it will be concave. The analysis of such ‘inertial effects’ due to rotation was a major topic of enquiry of ‘natural philosophers’ of the time, including Descartes and his followers, and they would certainly have agreed with Newton that the concave surface of the water in the second case demonstrated that the water was moving in a mechanically significant sense. There is thus an immediate problem for the claim that proper motion is the correct mechanical sense of motion: in (i) and (ii) proper motion is anti-correlated with the mechanically significant motion revealed by the surface of the water. That is, the water is flat in (i) when it is in motion relative to its immediate surroundings — the inner sides of the bucket — but curved in (ii) when it is at rest relative to its immediate surroundings. Thus the mechanically relevant meaning of rotation is not that of proper motion.
(You may have noticed a small lacuna in Newton's argument: in (i) the water is at rest and in (ii) in motion relative to that part of its surroundings constituted by the air above it. It's not hard to imagine small modifications to the example to fill this gap.) Newton also points out that the height that the water climbs up the inside of the bucket provides a measure of the rate of rotation of bucket and water: the higher the water rises up the sides, the greater the tendency to recede must be, and so the faster the water must be rotating in the mechanically significant sense. Suppose, very plausibly, that the measure is unique: any particular height indicates a particular rate of rotation. Then the unique height that the water reaches at any moment implies a unique rate of rotation in a mechanically significant sense. And thus motion in the sense of motion relative to an arbitrary reference body is not the mechanical sense, since that kind of rotation is not unique at all, but depends on the motion of the reference body. And so Descartes’ change of place (and for similar reasons, motion in the ordinary sense) is not the mechanically significant sense of motion.

In our discussion of Descartes we called the sense of motion operative in the science of mechanics ‘true motion’, and the phrase is used in this way by Newton in the Scholium. Thus Newton's bucket shows that true (rotational) motion is anti-correlated with, and so not identical with, proper motion (as Descartes proposed according to the Garber reading); and Newton further argues that the rate of true (rotational) motion is unique, and so not identical with change of place, which is multiple. Newton proposed instead that true motion is motion relative to a temporally enduring, rigid, 3-dimensional Euclidean space, which he dubbed ‘absolute space’.
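Newton's claim that the water's height measures its rate of rotation can be made quantitative with a standard Newtonian result that the text does not derive (our addition, in modern notation): the free surface of water co-rotating at angular speed $\omega$ is a paraboloid.

```latex
% Free surface of water co-rotating at angular speed \omega, where g is the
% gravitational acceleration and r the distance from the rotation axis:
z(r) = z_0 + \frac{\omega^{2} r^{2}}{2g}
% so the climb of the water at the bucket wall (radius R) is
\Delta z = \frac{\omega^{2} R^{2}}{2g}
```

Since $\Delta z$ increases strictly with $\omega$, each height corresponds to exactly one rate of rotation, which is precisely the uniqueness Newton's argument requires.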
Of course, Descartes also defined motion as relative to an enduring 3-dimensional Euclidean space; the difference is that Descartes' space was divided into parts (his space was identical with a plenum of corpuscles) in motion, not a rigid structure in which (mobile) material bodies are embedded. So according to Newton, the rate of true rotation of the bucket (and water) is the rate at which it rotates relative to absolute space. Or put another way, Newton effectively defines the complete predicate x moves-absolutely as x moves-relative-to absolute space; both Newton and Descartes offer the competing complete predicates as analyses of x moves-truly.

Newton's proposal for understanding motion solves the problems that he posed for Descartes, and provides an interpretation of the concepts of constant motion and acceleration that appear in his laws of motion. However, it suffers from two notable interpretational problems, both of which were pressed forcefully by Leibniz (in the Leibniz-Clarke Correspondence, 1715–1716) — which is not to say that Leibniz himself offered a superior account of motion (see below). (Of course, there are other features of Newton's proposal that turned out to be empirically inadequate, and are rejected by relativity: Newton's account violates the relativity of simultaneity and postulates a non-dynamical spacetime structure.) First, according to this account, absolute velocity is a well-defined quantity: most simply, the absolute speed of a body is the rate of change of its position relative to an arbitrary point of absolute space. But the Galilean relativity of Newton's laws means that the evolution of a closed system is unaffected by constant changes in velocity; Galileo's experimenter cannot determine from observations inside his cabin whether the boat is at rest in harbor or sailing smoothly. Put another way, according to Newtonian mechanics, in principle Newton's absolute velocity cannot be experimentally determined.
So in this regard absolute velocity is quite unlike acceleration (including rotation); Newtonian acceleration is understood in absolute space as the rate of change of absolute velocity, and is, according to Newtonian mechanics, in general measurable, for instance by measuring the height that the water ascends the sides of the bucket. (It is worth noting that Newton was well aware of these facts; the Galilean relativity of his theory is demonstrated in Corollary V of the laws of the Principia, while Corollary VI shows that acceleration is unobservable if all parts of the system accelerate in parallel at the same rate, as they do in a homogeneous gravitational field.) Leibniz argued (rather inconsistently, as we shall see) that since differences in absolute velocity were unobservable, they could not be genuine differences at all; and hence that Newton's absolute space, whose existence would entail the reality of such differences, must also be a fiction. Few contemporary philosophers would immediately reject a quantity as meaningless simply because it was not experimentally determinable, but this fact does justify genuine doubts about the reality of absolute velocity, and hence of absolute space.

The second problem concerns the nature of absolute space. Newton quite clearly distinguished his account from Descartes' — in particular with regard to absolute space's rigidity versus Descartes' ‘hydrodynamical’ space, and the possibility of the vacuum in absolute space. Thus absolute space is definitely not material. On the other hand, presumably it is supposed to be part of the physical, not mental, realm. In De Gravitatione, Newton rejected both of the standard philosophical categories of substance and attribute as suitable characterizations.
Absolute space is not a substance, for it lacks causal powers and does not have a fully independent existence; and yet it is not an attribute, since it would exist even in a vacuum, which by definition is a place where there are no bodies in which it might inhere. Newton proposes that space is what we might call a ‘pseudo-substance’, more like a substance than a property, yet not quite a substance. (Note that Samuel Clarke, in his Correspondence with Leibniz, which Newton had some role in composing, advocates the property view, and note further that when Leibniz objects because of the vacuum problem, Clarke suggests that there might be non-material beings in the vacuum in which space might inhere.) In fact, Newton accepted the principle that everything that exists, exists somewhere — i.e., in absolute space. Thus he viewed absolute space as a necessary consequence of the existence of anything, and of God's existence in particular — hence space's ontological dependence. Leibniz was presumably unaware of the unpublished De Gravitatione in which these particular ideas were developed, but as we shall see, his later works are characterized by a robust rejection of any notion of space as a real thing rather than an ideal, purely mental entity. This is a view that attracts even fewer contemporary adherents, but there is something deeply peculiar about a non-material but physical entity, a worry that has influenced many philosophical opponents of absolute space.

After the development of relativity (which we will take up below), and its interpretation as a spacetime theory, it was realized that the notion of spacetime had applicability to a range of theories of mechanics, classical as well as relativistic.
In particular, there is a spacetime geometry — ‘Galilean’ or ‘neo-Newtonian’ spacetime — for Newtonian mechanics that solves the problem of absolute velocity; an idea exploited by a number of philosophers from the late 1960s (e.g., Earman 1970, Friedman 1983, Sklar 1974 and Stein 1968). For details the reader is referred to the entry on spacetime: inertial frames, but the general idea is that although a spatial distance is well-defined between any two simultaneous points of this spacetime, only the temporal interval is well-defined between non-simultaneous points. Thus things are rather unlike Newton's absolute space, whose points persist through time and maintain their distances; in absolute space the distance between p-now and q-then (where p and q are points) is just the distance between p-now and q-now. However, Galilean spacetime has an ‘affine connection’ which effectively specifies, for every point of every continuous curve, the rate at which the curve is changing from straightness at that point; for instance, the straight lines are picked out as those curves whose rate of change from straightness is zero at every point. (Another way of thinking about this space is as possessing — in addition to a distance between any two simultaneous points and a temporal interval between any points — a three-place relation of colinearity, satisfied by three points just in case they lie on a straight line.) Since the trajectories of bodies are curves in spacetime the affine connection determines the rate of change from straightness at every point of every possible trajectory. The straight trajectories thus defined can be interpreted as the trajectories of bodies moving inertially, and the rate of change from straightness of any trajectory can be interpreted as the acceleration of a body following that trajectory.
That is, Newton's Second Law can be given a geometric formulation as ‘the rate of change from straightness of a body's trajectory is equal to the forces acting on the body divided by its mass’. The significance of this geometry is that while acceleration is well-defined, velocity is not — in accord with the empirical determinability of acceleration, but not velocity, according to Newtonian mechanics. (A simple analogy helps to show how such a thing is possible: betweenness but not ‘up’ is a well-defined concept in Euclidean space.) Thus Galilean spacetime gives a very nice interpretation of the choice that nature makes when it decides that the laws of mechanics should be formulated in terms of accelerations, not velocities (as Aristotle and Descartes had proposed). Put another way, we can define the complete predicate x accelerates as trajectory(x) has-non-zero-rate-of-change-from-straightness, where trajectory maps bodies onto their trajectories in Galilean spacetime. And this predicate, defined this way, applies to the water in the bucket if and only if it is rotating, according to Newtonian mechanics formulated in terms of the geometry of Galilean spacetime; it is the mechanically relevant sense of the word in this theory. But all of this formulation and definition has been given in terms of the geometry of spacetime, not relations between bodies; acceleration is ‘absolute’ in the sense that there is a preferred (true) sense of acceleration in mechanics, one which is not defined in terms of the motions of bodies relative to one another. (Note that this sense of ‘absolute’ is broader than that of motion relative to absolute space, which we defined earlier. In the remainder of this article we will use it in the broader sense. The reader should be aware that the term is used in many ways in the literature, and such equivocation often leads to massive misunderstandings.)
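The asymmetry between velocity and acceleration can be put in modern coordinate language (a sketch using present-day notation, not anything found in the historical texts):

```latex
\[ x'^i = x^i + v^i t, \qquad t' = t \qquad \text{(a Galilean boost between inertial frames)} \]
\[ \dot{x}'^i = \dot{x}^i + v^i \qquad \text{(velocity is frame-relative: no absolute velocity)} \]
\[ \ddot{x}'^i = \ddot{x}^i \qquad \text{(acceleration is invariant: an absolute quantity)} \]
\[ F^i = m\,\ddot{x}^i \qquad \text{(the geometric form of the Second Law)} \]
```

Since boosts leave the second derivative untouched, the predicate ‘x accelerates’ is well-defined in Galilean spacetime even though ‘x moves at speed v’ is not.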
Thus if any of this analysis of motion is taken literally then one arrives at a position regarding the ontology of spacetime rather like Newton's regarding space: it is some kind of ‘substantial’ (or maybe pseudo-substantial) thing with the geometry of Galilean spacetime, just as absolute space possessed Euclidean geometry. This view regarding the ontology of spacetime is usually called ‘substantivalism’ (Sklar, 1974). The Galilean substantivalist usually sees himself as adopting a more sophisticated geometry than Newton but sharing his substantivalism (though there is room for debate on Newton's exact ontological views, see DiSalle, 2002). The advantage of the more sophisticated geometry is that although it allows the absolute sense of acceleration apparently required by Newtonian mechanics to be defined, it does not allow one to define a similar absolute speed or velocity — x accelerates can be defined as a complete predicate in terms of the geometry of Galilean spacetime but not x moves in general — and so the first of Leibniz's problems is resolved. Of course we see that the solution depends on a crucial shift from speed and velocity to acceleration as the relevant senses of ‘motion’: from the rate of change of position to the rate of change of the rate of change. While this proposal solves the first kind of problem posed by Leibniz, it seems just as vulnerable to the second. While it is true that it involves the rejection of absolute space as Newton conceived it, and with it the need to explicate the nature of an enduring space, the postulation of Galilean spacetime poses the parallel question of the nature of spacetime. Again, it is a physical but non-material something, the points of which may be coincident with material bodies. What kind of thing is it? Could we do without it? As we shall see below, some contemporary philosophers believe so.
There is a ‘folk-reading’ of Leibniz that one finds either explicitly or implicitly in the philosophy of physics literature which takes account of only some of his remarks on space and motion. The reading underlies vast swathes of the literature: for instance, the quantities captured by Earman's (1999) ‘Leibnizian spacetime’ do not do justice to Leibniz's view of motion (as Earman acknowledges). But it is perhaps most obvious in introductory texts (e.g., Ray 1991 and Huggett 2000, to mention a couple). According to this view, the only quantities of motion are relative quantities, relative velocity, acceleration and so on, and all relative motions are equal, so there is no true sense of motion. However, Leibniz is explicit that other quantities are also ‘real’, and his mechanics implicitly — but obviously — depends on yet others. The length of this section is a measure not so much of the importance of Leibniz's actual views as of the importance of showing what the prevalent folk view leaves out regarding Leibniz's views on the metaphysics of motion and interpretation of mechanics. That said, we shall also see that no one has yet discovered a fully satisfactory way of reconciling the numerous conflicting things that Leibniz says about motion. Some of these tensions can be put down simply to his changing his mind (see Cover and Hartz 1988 for an explication of how Leibniz's views on space developed). However, we will concentrate on the fairly short period in the mid-1680s to 1690s during which Leibniz developed his theory of mechanics, and was most concerned with its interpretation. We will supplement this discussion with the important remarks that he made in his Correspondence with Samuel Clarke around 30 years later (1715–1716); this discussion is broadly in line with the earlier period, and the intervening period is one in which he turned to other matters, rather than one in which his views on space were dramatically evolving.
Arguably, Leibniz's views concerning space and motion do not have a completely linear logic, starting from some logically sufficient basic premises, but instead form a collection of mutually supporting doctrines. If one starts questioning why Leibniz held certain views — concerning the ideality of space, for instance — one is apt to be led in a circle. Still, exposition requires starting somewhere, and Leibniz's argument for the ideality of space in the Correspondence with Clarke is a good place to begin. But bear in mind the caveats made here — this argument was made later than a number of other relevant writings, and its logical relation to Leibniz's views on motion is complex. Leibniz (LV.47 — this notation means Leibniz's Fifth letter, section 47, and so on) says that (i) a body comes to have the ‘same place’ as another once did, when it comes to stand in the same relations to bodies we ‘suppose’ to be unchanged (more on this later). (ii) That we can define ‘a place’ to be that which any such two bodies have in common (here he claims an analogy with the Euclidean/Eudoxan definition of a rational number in terms of an identity relation between ratios). And finally that (iii) space is all such places taken together. However, he also holds that properties are particular, incapable of being instantiated by more than one individual, even at different times; hence it is impossible for the two bodies to be in literally the same relations to the unchanged bodies. Thus the thing that we take to be the same for the two bodies — the place — is something added by our minds to the situation, and only ideal. As a result, space, which is after all constructed from these ideal places, is itself ideal: ‘a certain order, wherein the mind conceives the application of relations’. It's worth pausing briefly to contrast this view of space with those of Descartes and of Newton.
Both Descartes and Newton claim that space is a real, mind-independent entity; for Descartes it is matter, and for Newton a ‘pseudo-substance’, distinct from matter. And of course for both, these views are intimately tied up with their accounts of motion. Leibniz simply denies the mind-independent reality of space, and this too is bound up with his views concerning motion. (Note that fundamentally, in the metaphysics of monads that Leibniz was developing contemporaneously with his mechanics, everything is in the mind of the monads; but the point that Leibniz is making here is that even within the world that is logically constructed from the contents of the minds of monads, space is ideal.) So far (apart from that remark about ‘unchanged’ bodies) we have not seen Leibniz introduce anything more than relations of distance between bodies, which is certainly consistent with the folk view of his philosophy. However, Leibniz sought to provide a foundation for the Cartesian/mechanical philosophy in terms of the Aristotelian/scholastic metaphysics of substantial forms (here we discuss the views laid out in Sections 17-22 of the 1686 Discourse on Metaphysics and the 1695 Specimen of Dynamics, both in Garber and Ariew 1989). In particular, he identifies primary matter with what he calls its ‘primitive passive force’ of resistance to changes in motion and to penetration, and the substantial form of a body with its ‘primitive active force’. It is important to realize that these forces are not mere properties of matter, but actually constitute it in some sense, and further that they are not themselves quantifiable. However because of the collisions of bodies with one another, these forces ‘suffer limitation’, and ‘derivative’ passive and active forces result. (There's a real puzzle here. Collision presupposes space, but primitive forces constitute matter prior to any spatial concepts — the primitive active and passive forces ground motion and extension respectively. 
See Garber and Rauzy, 2004.) Derivative passive force shows up in the different degrees of resistance to change of different kinds of matter (of ‘secondary matter’ in scholastic terms), and apparently is measurable. Derivative active force, however, is considerably more problematic for Leibniz. On the one hand, it is fundamental to his account of motion and theory of mechanics — motion fundamentally is possession of force. But on the other hand, Leibniz endorses the mechanical philosophy, which precisely sought to abolish Aristotelian substantial form, which is what force represents. Leibniz's goal was to reconcile the two philosophies, by providing an Aristotelian metaphysical foundation for modern mechanical science; as we shall see, it is ultimately an open question exactly how Leibniz intended to deal with the inherent tensions in such a view. The texts are sufficiently ambiguous to permit dissent, but arguably Leibniz intends that one manifestation of derivative active force is what he calls vis viva — ‘living force’. Leibniz had a famous argument with the Cartesians over the correct definition of this quantity. Descartes defined it as size times speed — effectively as the magnitude of the momentum of a body. Leibniz gave a brilliant argument (repeated in a number of places, for instance Section 17 of the Discourse on Metaphysics) that it was size times speed² — so (proportional to) kinetic energy. If the proposed identification is correct then kinetic energy quantifies derivative active force according to Leibniz; or looked at the other way, the quantity of virtus (another term used by Leibniz for active force) associated with a body determines its kinetic energy and hence its speed.
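Leibniz's argument against the Cartesian measure can be reconstructed in modern notation roughly as follows (a sketch of the reasoning of Discourse, Section 17; the particular numbers — a 1-unit body raised 4 units versus a 4-unit body raised 1 unit — and the constant g are illustrative conveniences, not Leibniz's symbolism). Both bodies take equal ‘force’ to raise, yet the Cartesian measure assigns them different quantities on falling, while speed-squared assigns them the same:

```latex
% Galileo's law of fall: a body falling from height h acquires speed v = \sqrt{2gh}.
\[ m_A = 1,\; h_A = 4; \qquad m_B = 4,\; h_B = 1 \qquad (m_A h_A = m_B h_B) \]
\[ v_A = \sqrt{2g \cdot 4} = 2\sqrt{2g}, \qquad v_B = \sqrt{2g \cdot 1} = \sqrt{2g} \]
\[ \text{Cartesian measure: } m_A v_A = 2\sqrt{2g} \;\neq\; m_B v_B = 4\sqrt{2g} \]
\[ \textit{Vis viva} \text{ measure: } m_A v_A^2 = 8g = m_B v_B^2 \]
```

So only the quantity proportional to kinetic energy is conserved through such exchanges, which is the core of Leibniz's case.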
As far as the authors know, Leibniz never explicitly says anything conclusive about the relativity of virtus, but it is certainly consistent to read him (as Roberts 2003 does) as claiming that there is a unique quantity of virtus and hence ‘true’ (as we have been using the term) speed associated with each body. At the very least, Leibniz does say that there is a real difference between possession and non-possession of vis viva (e.g., in Section 18 of the Discourse) and it is a small step from there to true, privileged speed. Indeed, for Leibniz, mere change of relative position is not ‘entirely real’ (as we saw for instance in the Correspondence) and only when it has vis viva as its immediate cause is there some reality to it. (However, just to muddy the waters, Leibniz also claims that as a matter of fact, no body ever has zero force, which on the reading proposed means no body is ever at rest, which would be surprising given all the collisions bodies undergo.) An alternative interpretation to the one suggested here might say that Leibniz intends that while there is a difference between motion/virtus and no motion/virtus, there is somehow no difference between any strictly positive values of those quantities. It is important to emphasize two points about the preceding account of motion in Leibniz's philosophy. First, motion in the everyday sense — motion relative to something else — is not really real. Fundamentally motion is possession of virtus, something that is ultimately non-spatial (modulo its interpretation as primitive force limited by collision). If this reading is right — and something along these lines seems necessary if we aren't simply to ignore important statements by Leibniz on motion — then Leibniz is offering an interpretation of motion that is radically different from the obvious understanding. One might even say that for Leibniz motion is not movement at all! (We will leave to one side the question of whether his account is ultimately coherent.)
The second point is that however we should understand Leibniz, the folk reading simply does not and cannot take account of his clearly and repeatedly stated view that what is real in motion is force, not relative motion, for the folk reading allows Leibniz only relative motion (and of course additionally, motion in the sense of force is a variety of true motion, again contrary to the folk reading). However, from what has been said so far it is still possible that the folk reading is accurate when it comes to Leibniz's views on the phenomena of motion, the subject of his theory of mechanics. The case for the folk reading is in fact supported by Leibniz's resolution of the tension that we mentioned earlier, between the fundamental role of force/virtus (which we will now take to mean mass times speed²) and its identification with Aristotelian form. Leibniz's way out (e.g., Specimen of Dynamics) is to require that while considerations of force must somehow determine the form of the laws of motion, the laws themselves should be such as not to allow one to determine the value of the force (and hence true speed). One might conclude that in this case Leibniz held that the only quantities which can be determined are those of relative position and motion, as the folk reading says. But even in this circumscribed context, it is at best questionable whether the interpretation is correct. Consider first Leibniz's mechanics. Since his laws are what is now (ironically) often called ‘Newtonian’ elastic collision theory, it seems that they satisfy both of his requirements. The laws include conservation of kinetic energy (which we identify with virtus), but they hold in all inertial frames, so the kinetic energy of any arbitrary body can be set to any initial value. But they do not permit the kinetic energy of a body to take on any values throughout a process. The laws are only Galilean relativistic, and so are not true in every frame.
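The point about Leibniz's collision laws can be checked with a small numerical sketch (modern notation and terminology, not Leibniz's own; the collision formula is the standard one for one-dimensional elastic collisions):

```python
# Sketch: a 1-D elastic collision, illustrating that conservation of kinetic
# energy (Leibniz's vis viva) holds in every inertial frame, while the *value*
# of the kinetic energy is frame-relative -- the laws fix changes in 'force',
# not absolute initial values.

def elastic_collision(m1, u1, m2, u2):
    """Post-collision velocities for a 1-D elastic collision."""
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

def kinetic_energy(m, v):
    return 0.5 * m * v ** 2

m1, m2 = 1.0, 2.0   # illustrative masses
u1, u2 = 3.0, -1.0  # illustrative velocities in an arbitrary 'lab' frame

for boost in (0.0, 5.0):  # two inertial frames related by a Galilean boost
    a1, a2 = u1 - boost, u2 - boost
    b1, b2 = elastic_collision(m1, a1, m2, a2)
    ke_before = kinetic_energy(m1, a1) + kinetic_energy(m2, a2)
    ke_after = kinetic_energy(m1, b1) + kinetic_energy(m2, b2)
    # Total kinetic energy is conserved in *both* frames (the law is Galilean
    # relativistic), but the conserved value differs from frame to frame.
    print(boost, ke_before, ke_after)
```

Running the loop shows the same conservation in both frames but different totals, exactly the combination that blocks any inference from the laws to a unique ‘true’ speed.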
Furthermore, according to the laws of collision, in an inertial frame, if a body does not collide then its Leibnizian force is conserved while if (except in special cases) it does collide then its force changes. According to Leibniz's laws one cannot determine initial kinetic energies, but one certainly can tell when they change. At the very least, there are quantities of motion implicit in Leibniz's mechanics — change in force and true speed — that are not merely relative; the folk reading is committed to Leibniz simply missing this obvious fact. That said, when Leibniz discusses the relativity of motion — which he calls the ‘equivalence of hypotheses’ about the states of motion of bodies — some of his statements do suggest that he was confused in this way. For another way of stating the problem for the folk reading is that the claim that relative motions alone suffice for mechanics and that all relative motions are equal is a principle of general relativity; could Leibniz — a mathematical genius — really have failed to notice that his laws hold only in special frames? Well, just maybe. On the one hand, when he explicitly articulates the principle of the equivalence of hypotheses (for instance in Specimen of Dynamics) he tends to say only that one cannot assign initial velocities on the basis of the outcome of a collision, which requires only Galilean relativity. However, he confusingly also claimed (On Copernicanism and the Relativity of Motion, also in Garber and Ariew 1989) that the Tychonic and Copernican hypotheses were equivalent. But if the Earth orbits the Sun in an inertial frame (Copernicus), then there is no inertial frame according to which the Sun orbits the Earth (Tycho Brahe), and vice versa: these hypotheses are simply not Galilean equivalent (something else Leibniz could hardly have failed to notice). So there is some textual support for Leibniz endorsing general relativity, as the folk reading maintains.
A number of commentators have suggested solutions to the puzzle of the conflicting pronouncements that Leibniz makes on the subject, but arguably none is completely successful in reconciling all of them (Stein 1977 argues for general relativity, while Roberts 2003 argues the opposite; see also Lodge 2003). So the folk reading simply ignores Leibniz's metaphysics of motion, it commits Leibniz to a mathematical howler regarding his laws, and it is arguable whether it is the best rendering of his pronouncements concerning relativity; it certainly cannot be accepted unquestioningly. However, it is not hard to understand the temptation of the folk reading. In his Correspondence with Clarke, Leibniz says that he believes space to be “something merely relative, as time is, … an order of coexistences, as time is an order of successions” (LIII.4), which is naturally taken to mean that space is at base nothing but the distance and temporal relations between bodies. (Though even this passage has its subtleties, because of the ideality of space discussed above, and because in Leibniz's conception space determines what sets of relations are possible.) And if relative distances and times exhaust the spatiotemporal in this way, then shouldn't all quantities of motion be defined in terms of those relations? We have seen two ways in which this would be the wrong conclusion to draw: force seems to involve a notion of speed that is not identified with any relative speed, and (unless the equivalence of hypotheses is after all a principle of general relativity) the laws pick out a standard of constant motion that need not be any constant relative motion. Of course, it is hard to reconcile these quantities with the view of space and time that Leibniz proposes — what is speed in size times speed² or constant speed if not speed relative to some body or to absolute space?
Given Leibniz's view that space is literally ideal (and indeed that even relative motion is not ‘entirely real’) perhaps the best answer is that he took force and hence motion in its real sense not to be determined by motion in a relative sense at all, but to be primitive monadic quantities. That is, he took x moves to be a complete predicate, but he believed that it could be fully analyzed in terms of strictly monadic predicates: x moves iff x possesses-non-zero-derivative-active-force. And this reading explains just what Leibniz took us to be supposing when we ‘supposed certain bodies to be unchanged’ in the construction of the idea of space: that they had no force, nothing causing, or making real any motion. It's again helpful to compare Leibniz with Descartes and Newton, this time regarding motion. Commentators often express frustration at Leibniz's response to Newton's arguments for absolute space: “I find nothing … in the Scholium that proves or can prove the reality of space in itself. However, I grant that there is a difference between an absolute true motion of a body and a mere relative change …” (LV.53). Not only does Leibniz apparently fail to take the argument seriously, he then goes on to concede the step in the argument that seems to require absolute space! But with our understanding of Newton and Leibniz, we can see that what he says makes perfect sense (or at least that it is not as disingenuous as it is often taken to be). Newton argues in the Scholium that true motion cannot be identified with the kinds of motion that Descartes considers; but both of these are purely relative motions, and Leibniz is in complete agreement that merely relative motions are not true (i.e., ‘entirely real’). 
Leibniz's ‘concession’ merely registers his agreement with Newton against Descartes on the difference between true and relative motion; he surely understood who and what Newton was refuting, and it was a position that he had himself, in different terms, publicly argued against at length. But as we have seen, Leibniz had a very different analysis of the difference from Newton's; true motion was not, for him, a matter of motion relative to absolute space, but the possession of quantity of force, ontologically prior to any spatiotemporal quantities at all. There is indeed nothing in the Scholium explicitly directed against that view, and since it does potentially offer an alternative way of understanding true motion, it is not unreasonable for Leibniz to claim that there is no deductive inference from true motion to absolute space. The folk reading, which belies Leibniz, has it that he sought a theory of mechanics formulated in terms only of the relations between bodies. As we'll see presently, in the Nineteenth Century, Ernst Mach indeed proposed such an approach, but Leibniz clearly did not; though certain similarities between Leibniz and Mach — especially the rejection of absolute space — surely help explain the confusion between the two. But not only is Leibniz often misunderstood, there are influential misreadings of Newton's arguments in the Scholium, influenced by the idea that he is addressing Leibniz in some way. Of course the Principia was written 30 years before the Correspondence, and the arguments of the Scholium were not written with Leibniz in mind, but Clarke himself suggests (CIV.13) that those arguments — specifically those concerning the bucket — are telling against Leibniz. That argument is indeed devastating to a general principle of relativity — the parity of all relative motions — but we have seen that it is highly questionable whether Leibniz's equivalence of hypotheses amounts to such a view.
That said, his statements in the first four letters of the Correspondence could understandably mislead Clarke on this point — it is in reply to Clarke's challenge that Leibniz explicitly denies the parity of relative motions. But interestingly, Clarke does not present a true version of Newton's argument — despite some involvement of Newton in writing the replies. Instead of the argument from the uniqueness of the rate of rotation, he argues that systems with different velocities must be different because the effects observed if they were brought to rest would be different. This argument is of course utterly question begging against a view that holds that there is no privileged standard of rest! As we discuss in Section 8, Mach attributed to Newton the fallacious argument that because the surface of the water curved even when it was not in motion relative to the bucket, it must be rotating relative to absolute space. Our discussion of Newton showed how misleading such a reading is. In the first place he also argues that there must be some privileged sense of rotation, and hence not all relative motions are equal. Second, the argument is ad hominem against Descartes, in which context a disjunctive syllogism — motion is either proper or ordinary or relative to absolute space — is argumentatively legitimate. On the other hand, Mach is quite correct that Newton's argument in the Scholium leaves open the logical possibility that the privileged, true sense of rotation (and acceleration more generally) is some species of relative motion; if not motion properly speaking, then relative to the fixed stars perhaps. (In fact Newton rejects this possibility in De Gravitatione (1962) on the grounds that it would involve an odious action at a distance; an ironic position given his theory of universal gravity.) However the kind of folk-reading of Newton that underlies much of the contemporary literature replaces Mach's interpretation with a more charitable one. 
According to this reading, Newton's point is that his mechanics — unlike Descartes' — could explain why the surface of the rotating water is curved, that his explanation involves a privileged sense of rotation, and that absent an alternative hypothesis about its relative nature, we should accept absolute space. But our discussion of Newton's argument showed that it simply does not have an ‘abductive’, ‘best explanation’ form, but shows deductively, from Cartesian premises, that rotation is neither proper nor ordinary motion. That is not to say that Newton had no understanding of how such effects would be explained in his mechanics. For instance, in Corollaries 5 and 6 to the Laws of the Principia he states in general terms the conditions under which different states of motion are not — and so by implication are — discernible according to his laws of mechanics. Nor is it to say that Newton's contemporaries weren't seriously concerned with explaining inertial effects. Leibniz, for instance, analyzed a rotating body (in the Specimen). In short, parts of a rotating system collide with the surrounding matter and are continuously deflected into a series of linear motions that form a curved path. But the system as Leibniz envisions it — comprised of a plenum of elastic particles of matter — is far too complex for him to offer any quantitative model based on this qualitative picture. (In the context of the proposed ‘abductive’ reading of Newton, note that this point is telling against a rejection of intrinsic rigidity or forces acting at a distance, not narrow relationism; it is the complexity of collisions in a plenum that stymies analysis. And since Leibniz's collision theory requires a standard of inertial motion, even if he had explained inertial effects, he would not have thereby shown that all motions are relative, much less that all are equal.)
Although the argument is then not Newton's, it is still an important response to the kind of relationism proposed by the folk-Leibniz, especially when it is extended by bringing in a further example from Newton's Scholium. Newton considered a pair of identical spheres, connected by a cord, too far from any bodies to observe any relative motions; he pointed out that their rate and direction of rotation could still be experimentally determined by measuring the tension in the cord, and by pushing on opposite faces of the two globes to see whether the tension increased or decreased. He intended this simple example to demonstrate that the project he undertook in the Principia, of determining the absolute accelerations and hence gravitational forces on the planets from their relative motions, was possible. However, if we further specify that the spheres and cord are rigid and that they are the only things in their universe, then the example can be used to point out that there are infinitely many different rates of rotation all of which agree on the relations between bodies. Since there are no differences in the relations between bodies in the different situations, it follows that the observable differences between the states of rotation cannot be explained in terms of the relations between bodies. Therefore, a theory of the kind attributed to the folk's Leibniz cannot explain all the phenomena of Newtonian mechanics, and again we can argue abductively for absolute space. (Of course, the argument works by showing that, granted the different states of rotation, there are states of rotation that cannot merely be relative rotations of any kind; for the differences cannot be traced to any relational differences. That is, granted the assumptions of the argument, rotation is not true relative motion of any kind.)
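The quantitative core of the globes experiment is elementary (a modern illustration, not Newton's own calculation, with hypothetical values for the masses, cord length, and rotation rate): for two globes of mass m joined by a cord of length L and rotating about their midpoint, the cord's tension supplies the centripetal force on each globe, T = m·ω²·(L/2), so measuring T fixes the rate of rotation ω with no reference to any other bodies.

```python
# Sketch: recovering the rate of rotation of Newton's globes from the
# measured tension in the cord alone.

import math

def cord_tension(omega, m, L):
    """Tension supplying the centripetal force on each globe: T = m*omega^2*(L/2)."""
    return m * omega ** 2 * (L / 2)

def rotation_rate(tension, m, L):
    """Angular speed (rad/s) inferred from the measured cord tension."""
    return math.sqrt(2 * tension / (m * L))

omega = 2.0      # hypothetical 'true' rotation rate
m, L = 1.5, 4.0  # hypothetical globe mass and cord length
T = cord_tension(omega, m, L)
print(rotation_rate(T, m, L))  # recovers omega from the tension alone
```

(The direction of rotation, as the text notes, requires the further experiment of pushing on the globes' faces; the tension fixes only the rate.)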
This argument (neither its premises nor its conclusion) is not Newton's, and must not be taken as a historically accurate reading. However, that is not to say that the argument is fallacious, and indeed many have found it attractive, particularly as a defense not of Newton's absolute space, but of Galilean spacetime. That is, Newtonian mechanics with Galilean spacetime can explain the phenomena associated with rotation, while theories of the kind proposed by Mach cannot explain the differences between situations allowed by Newtonian mechanics, but these explanations rely on the geometric structure of Galilean spacetime — particularly its connection, which is needed to interpret acceleration. And thus — the argument goes — those explanations commit us to the reality of spacetime — a manifold of points — whose properties include the appropriate geometric ones. This final doctrine, of the reality of spacetime with its component points or regions, distinct from matter, with geometric properties, is what we earlier identified as ‘substantivalism’. There are two points to make about this line of argument. First, the relationist could reply that he need not explain all situations which are possible according to Newtonian mechanics, because that theory is to be rejected in favor of one which invokes only distance and time relations between bodies, but which approximates to Newton's if matter is distributed suitably. Such a relationist would be following Mach's proposal, which we will discuss next. Such a position would be satisfactory only to the extent that a suitable concrete replacement for Newton's theory is developed; Mach never offered such a theory, but recently more progress has been made.
Second, one must be careful in understanding just how the argument works, for it is tempting to gloss it by saying that in Newtonian mechanics the connection is a crucial part of the explanation of the surface of the water in the bucket, and if the spacetime which carries the connection is denied, then the explanation fails too. But this gloss tacitly assumes that Newtonian mechanics can only be understood in a substantial Galilean spacetime; if an interpretation of Newtonian mechanics that does not assume substantivalism can be constructed, then all Newtonian explanations can be given without a literal connection. Both Sklar (1974) and van Fraassen (1985) have made proposals along these lines. Sklar proposes interpreting ‘true’ acceleration as a primitive quantity not defined in terms of motion relative to anything, be it absolute space, a connection or other bodies. (Notice the family resemblance between this proposal and Leibniz's view of force and speed.) Van Fraassen proposes formulating mechanics as ‘Newton's Laws hold in some frame’, so that the form of the laws and the ways bodies move picks out a standard of inertial motion, not absolute space or a connection, or any instantaneous relations. These proposals aim to keep the full explanatory resources of Newtonian mechanics, and hence admit ‘true acceleration’, but deny any relations between bodies and spacetime itself. Like the actual Leibniz, they allow absolute quantities of motion, but claim that space and time themselves are nothing but the relations between bodies. Of course, such views raise the question of how a motion can be not relative to anything at all, and how we are to understand the privileging of frames; Huggett (2006) contains a proposal for addressing these problems. 
(Note that Sklar and van Fraassen are committed to the idea that in some sense Newton's laws are capable of explaining all the phenomena without recourse to spacetime geometry; that the connection and the metrical properties are explanatorily redundant. A similar view is defended in the context of relativity in Brown 2005.) Between the time of Newton and Leibniz and the 20th century, Newton's mechanics and gravitation theory reigned essentially unchallenged, and with that long period of dominance, absolute space came to be widely accepted. At least, no natural philosopher or physicist offered a serious challenge to Newton's absolute space, in the sense of offering a rival theory that dispenses with it. But like the action at a distance in Newtonian gravity, absolute space continued to provoke metaphysical unease. Seeking a replacement for the unobservable Newtonian space, Neumann (1870) and Lange (1885) developed more concrete definitions of the reference frames in which Newton's laws hold. In these and a few other works, the concept of the set of inertial frames was first clearly expressed, though it was implicit in both remarks and procedures to be found in the Principia. (See the entries on space and time: inertial frames and Newton's views on space, time, and motion) The most sustained, comprehensive, and influential attack on absolute space was made by Ernst Mach in his Science of Mechanics (1883). In a lengthy discussion of Newton's Scholium on absolute space, Mach accuses Newton of violating his own methodological precepts by going well beyond what the observational facts teach us concerning motion and acceleration. Mach at least partly misinterpreted Newton's aims in the Scholium, and inaugurated a reading of the bucket argument (and by extension the globes argument) that has largely persisted in the literature since. 
Mach viewed the argument as directed against a ‘strict’ or ‘general-relativity’ form of relationism, and as an attempt to establish the existence of absolute space. Mach points out the obvious gap in the argument when so construed: the experiment only establishes that acceleration (rotation) of the water with respect to the Earth, or the frame of the fixed stars, produces the tendency to recede from the center; it does not prove that a strict relationist theory cannot account for the bucket phenomena, much less the existence of absolute space. (The reader will recall that Newton's actual aim was simply to show that Descartes' two kinds of motion are not adequate to accounting for rotational phenomena.) Although Mach does not mention the globes thought experiment specifically, it is easy to read an implicit response to it in the things he does say: nobody is competent to say what would happen, or what would be possible, in a universe devoid of matter other than two globes. So neither the bucket nor the globes can establish the existence of absolute space. Both in Mach's interpretations of Newton's arguments and in his replies, one can already see two anti-absolute space viewpoints emerge, though Mach himself never fully kept them apart. The first strain, which we may call ‘Mach-lite’, criticizes Newton's postulation of absolute space as a metaphysical leap that is neither justified by actual experiments, nor methodologically sound. The remedy offered by Mach-lite is simple: we should retain Newton's mechanics and use it just as we already do, but eliminate the unnecessary posit of absolute space. In its place we need only substitute the frame of the fixed stars, as is the practice in astronomy in any case. 
If we find the incorporation of a reference to contingent circumstances (the existence of a single reference frame in which the stars are more or less stationary) in the fundamental laws of nature problematic (which Mach need not, given his official positivist account of scientific laws), then Mach suggests that we replace the 1st law with an empirically equivalent mathematical rival, Mach's Equation (1960, 287):

    d²( ∑ᵢ mᵢrᵢ / ∑ᵢ mᵢ )/dt² = 0

where rᵢ is the scalar distance of the body in question from the mass mᵢ. The sums in this equation are to be taken over all massive bodies in the universe. Since the top sum is weighted by distance, distant masses count much more than near ones. In a world with a (reasonably) static distribution of heavy distant bodies, such as we appear to live in, the equation entails local conservation of linear momentum in ‘inertial’ frames. The upshot of this equation is that the frame of the fixed stars plays exactly the role of absolute space in the statement of the 1st law. (Notice that this equation, unlike Newton's first law, is not vectorial.) This proposal does not, by itself, offer an alternative to Newtonian mechanics, and as Mach himself pointed out, the law is not well-behaved in an infinite universe filled with stars; but the same can perhaps be said of Newton's law of gravitation (see Malament 1995, and Norton 1993). But Mach did not offer this equation as a proposed law valid in any circumstances; he avers, “it is impossible to say whether the new expression would still represent the true condition of things if the stars were to perform rapid movements among one another.” (p. 289) It is not clear whether Mach offered this revised first law as a first step toward a theory that would replace Newton's mechanics, deriving inertial effects from only relative motions, as Leibniz desired. But many other remarks made by Mach in his chapter criticizing absolute space point in this direction, and they have given birth to the Mach-heavy view, later to be christened “Mach's Principle” by Albert Einstein.
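Mach's revised first law is usually reconstructed as requiring that the mass-weighted mean of a body's scalar distances to all other bodies, Q = ∑mᵢrᵢ/∑mᵢ, have vanishing second time derivative. The following numerical sketch (illustrative numbers; the form of the law as just reconstructed is assumed) checks that a body in uniform motion amid distant, static ‘fixed stars’ satisfies the condition to high accuracy, since for uniform motion d²rᵢ/dt² falls off as v²/rᵢ:

```python
import numpy as np

def machian_quantity(x, masses, positions):
    """Q = sum_i m_i * r_i / sum_i m_i, where r_i is the scalar
    distance from the test body at x to the i-th mass."""
    r = np.linalg.norm(positions - x, axis=1)
    return np.dot(masses, r) / masses.sum()

# Static 'fixed stars': unit masses scattered far away in all directions.
rng = np.random.default_rng(0)
directions = rng.normal(size=(200, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
stars = 1e6 * directions
masses = np.ones(200)

# A test body in uniform (inertial) motion: x(t) = v * t.
v = np.array([1.0, 0.5, 0.0])
ts = np.linspace(0.0, 10.0, 101)
Q = np.array([machian_quantity(v * t, masses, stars) for t in ts])

# Second time derivative of Q by finite differences: of order v**2 / R,
# hence negligible when the stars are distant -- uniform motion
# (approximately) obeys the revised law.
dt = ts[1] - ts[0]
d2Q = np.diff(Q, n=2) / dt**2
```

If the stars were instead nearby or moving rapidly, d²Q/dt² would not be small, which is why Mach declined to assert the law in such circumstances.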
The Mach-heavy viewpoint calls for a new mechanics that invokes only relative distances and (perhaps) their 1st and 2nd time derivatives, and is thus ‘generally relativistic’ in the sense sometimes read into Leibniz's remarks about motion. Mach wished to eliminate absolute time from physics too, so he would have wanted a proper relationist reduction of these derivatives also. The Barbour-Bertotti theories, discussed below, provide this. Mach-heavy apparently involves the prediction of novel effects due to ‘merely’ relative accelerations. Mach hints at such effects in his criticism of Newton's bucket: Newton's experiment with the rotating vessel of water simply informs us that the relative rotation of the water with respect to the sides of the vessel produces no noticeable centrifugal forces, but that such forces are produced by its relative rotation with respect to the mass of the earth and the other celestial bodies. No one is competent to say how the experiment would turn out if the sides of the vessel [were] increased until they were ultimately several leagues thick. (1883, 284.) The suggestion here seems to be that the relative rotation in stage (i) of the experiment might immediately generate an outward force (before any rotation is communicated to the water), if the sides of the bucket were massive enough. More generally, Mach-heavy involves the view that all inertial effects should be derived from the motions of the body in question relative to all other massive bodies in the universe. The water in Newton's bucket feels an outward pull due (mainly) to the relative rotation of all the fixed stars around it. Mach-heavy is a speculation that an effect something like electromagnetic induction should be built into gravity theory. (Such an effect does exist according to the General Theory of Relativity, and is called ‘gravitomagnetic induction’.
The recently finished Gravity Probe B mission was designed to measure the gravitomagnetic induction effect due to the Earth's rotation.) Its specific form must fall off with distance much more slowly than 1/r², if it is to be empirically similar to Newtonian physics; but it will certainly predict experimentally testable novel behaviors. A theory that satisfies all the goals of Mach-heavy would appear to be ideal for the vindication of strict relationism and the elimination of absolute quantities of motion from mechanics. Direct assault on the problem of satisfying Mach-heavy in a classical framework proved unsuccessful, despite the efforts of others besides Mach (e.g., Friedländer 1896, Föppl 1904, Reissner 1914, 1915), until the work of Barbour and Bertotti in the 1970s and 80s. (Between the late 19th century and the 1970s, there was of course one extremely important attempt to satisfy Mach-heavy: the work of Einstein that led to the General Theory of Relativity. Since Einstein's efforts took place in a non-classical (Lorentz/Einstein/Minkowski) spacetime setting, we discuss them in the next section.) Rather than formulating a revised law of gravity/inertia using relative quantities, Barbour and Bertotti attacked the problem using the framework of Lagrangian mechanics, replacing the elements of the action that involve absolute quantities of motion with new terms invoking only relative distances, velocities etc. Their first (1977) theory uses a very simple and elegant action, and satisfies everything one could wish for from a Mach-heavy theory: it is relationally pure (even with respect to time: while simultaneity is absolute, the temporal metric is derived from the field equations); it is nearly empirically equivalent to Newton's theory in a world such as ours (with a large-scale uniform, near-stationary matter distribution); yet it does predict novel effects such as the ones Mach posited with his thick bucket.
Among these is an ‘anisotropy of inertia’ effect — accelerating a body away from the galactic center requires more force than accelerating it perpendicular to the galactic plane — large enough to be ruled out empirically. Barbour and Bertotti's second attempt (1982) at a relational Lagrangian mechanics was arguably less Machian, but more empirically adequate. In it, solutions are sought beginning with two temporally-nearby, instantaneous relational configurations of the bodies in the universe. Barbour and Bertotti define an ‘intrinsic difference’ parameter that measures how different the two configurations are. In the solutions of the theory, this intrinsic difference quantity gets minimized, as well as the ordinary action, and in this way full solutions are derived despite not starting from a privileged inertial-frame description. The theory they end up with turns out to be, in effect, a fragment of Newtonian theory: the set of models of Newtonian mechanics and gravitation in which there is zero net angular momentum. This result makes perfect sense in terms of strict relationist aims. In a Newtonian world in which there is a nonzero net angular momentum (e.g., a lone rotating island galaxy), this fact reveals itself in the classic “tendency to recede from the center”. Since a strict relationist demands that bodies obey the same mechanical laws even in ‘rotating’ coordinate systems, there cannot be any such tendency to recede from the center (other than in a local subsystem), in any of the relational theory's models. Since cosmological observations, even today, reveal no net angular momentum in our world, the second Barbour & Bertotti theory can lay claim to exactly the same empirical successes (and problems) that Newtonian physics had. 
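The idea of comparing two instantaneous relational configurations while quotienting out rigid motions can be illustrated with a standard best-matching computation (a Procrustes/Kabsch alignment; this is only an illustration of the underlying idea, not Barbour and Bertotti's actual variational construction):

```python
import numpy as np

def intrinsic_difference(conf_a, conf_b):
    """A Procrustes-style measure of how different two instantaneous
    configurations of n point particles are, once rigid translations
    and rotations (which a strict relationist regards as unphysical)
    are quotiented out.  Returns the minimized summed squared
    displacement after best matching."""
    a = conf_a - conf_a.mean(axis=0)   # remove translations
    b = conf_b - conf_b.mean(axis=0)
    # Kabsch algorithm: the proper rotation best aligning a with b.
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return float(np.sum((b - a @ rot.T) ** 2))

rng = np.random.default_rng(1)
conf = rng.normal(size=(5, 3))       # five particles in space

# A rigidly rotated and translated copy differs intrinsically not at all:
angle = 0.3
rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0,            0.0,           1.0]])
moved = conf @ rz.T + np.array([5.0, -2.0, 1.0])
diff_rigid = intrinsic_difference(conf, moved)        # ~ 0
diff_real = intrinsic_difference(conf, 2.0 * conf)    # > 0: a real change
```

Only genuine changes of the relative configuration register in such a measure, which is the sense in which a quantity of this kind deserves the name ‘intrinsic difference’.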
The second theory does not predict the (empirically falsified) anisotropy of inertia derivable from the first; but neither does it allow a derivation of the precession of the orbit of Mercury, which the first theory does (for appropriately chosen cosmic parameters). Mach-lite, like the relational interpretations of Newtonian physics reviewed in section 5, offers us a way of understanding Newtonian physics without accepting absolute position, velocity or acceleration. But it does so in a way that lacks theoretical clarity and elegance, since it does not delimit a clear set of cosmological models. We know that Mach-lite makes the same predictions as Newton for worlds in which there is a static frame associated with the stars and galaxies; but if asked about how things will behave in a world with no frame of fixed stars, or in which the stars are far from ‘fixed’, it shrugs and refuses to answer. (Recall that Mach-lite simply says: “Newton's laws hold in the frame of reference of the fixed stars.”) This is perfectly acceptable according to Mach's philosophy of science, since the job of mechanics is simply to summarize observable facts in an economical way. But it is unsatisfying to those with stronger realist intuitions about laws of nature. If there is, in fact, a distinguishable privileged frame of reference in which the laws of mechanics take on a specially simple form, without that frame being determined in any way by relation to the matter distribution, a realist will find it hard to resist the temptation to view motions described in that frame as the ‘true’ or ‘absolute’ motions. If there is a family of such frames, disagreeing about velocity but all agreeing about acceleration, she will feel a temptation to think of at least acceleration as ‘true’ or ‘absolute’. 
If such a realist believes motion to be by nature a relation rather than a property (and as we saw in the introduction, not all philosophers accept this) then she will feel obliged to accord some sort of existence or reality to the structure — e.g., the structure of Galilean spacetime — in relation to which these motions are defined. For philosophers with such realist inclinations, the ideal relational account of motion would therefore be some version of Mach-heavy. The Special Theory of Relativity (STR) is notionally based on a principle of relativity of motion; but that principle is ‘special’ — meaning, restricted. The relativity principle built into STR is in fact nothing other than the Galilean principle of relativity, which is built into Newtonian physics. In other words, while there is no privileged standard of velocity, there is nevertheless a determinate fact of the matter about whether a body has accelerated or non-accelerated (i.e., inertial) motion. In this regard, the spacetime of STR is exactly like Galilean spacetime (defined in section 5 above). In terms of the question of whether all motion can be considered purely relative, one could argue that there is nothing new brought to the table by the introduction of Einstein's STR — at least, as far as mechanics is concerned. As Dorling (1978) first pointed out, however, there is a sense in which the standard absolutist arguments against ‘strict’ relationism using rotating objects (buckets or globes) fail in the context of STR. Maudlin (1993) used the same considerations to show that there is a way of recasting relationism in STR that appears to be very successful. 
STR incorporates certain novelties concerning the nature of time and space, and how they mesh together; perhaps the best-known examples are the phenomena of ‘length contraction’, ‘time dilation’, and the ‘relativity of simultaneity.’ Since in STR both spatial distances and time intervals — when measured in the standard ways — are observer-relative (observers in different states of motion ‘disagreeing’ about their sizes), it is arguably most natural to restrict oneself to the invariant spacetime separation given by the interval between two points: [dx² + dy² + dz² − dt²] (in units with c = 1) — the four-dimensional analog of the Pythagorean theorem, for spacetime distances. If one regards the spacetime interval relations between masses-at-times as one's basis on which space-time is built up as an ideal entity, then with only mild caveats relationism works: the ‘relationally pure’ facts suffice to uniquely fix how the material systems are embeddable (up to isomorphism) in the ‘Minkowski’ spacetime of STR. The modern variants of Newton's bucket and globes arguments no longer stymie the relationist because (for example) the spacetime interval relations among bits of matter in Newton's bucket at rest are quite different from the spacetime interval relations found among those same bits of matter after the bucket is rotating. For example, the spacetime interval relation between a bit of water near the side of the bucket, at one time, and itself (say) a second later is smaller than the interval relation between a center-bucket bit of water and itself one second later (times referred to inertial-frame clocks). The upshot is that, unlike the situation in classical physics, a body at rest cannot have all the same spatial relations among its parts as a similar body in rotation. We cannot put a body or system into a state of rotation (or other acceleration) without thereby changing the spacetime interval relations between the various bits of matter at different moments of time.
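The claim about the bucket can be checked directly. In units with c = 1, a bit of water circling at radius r with angular speed ω moves at constant speed ωr, so over an inertial-clock interval t its proper time (the interval along its worldline) is t·√(1 − ω²r²). A minimal sketch with illustrative numbers:

```python
import math

def proper_time(speed, coordinate_time, c=1.0):
    """Spacetime interval (proper time) along a constant-speed
    worldline between two events separated by `coordinate_time`
    in an inertial frame: tau = t * sqrt(1 - v**2 / c**2)."""
    return coordinate_time * math.sqrt(1.0 - (speed / c) ** 2)

omega, r_edge = 0.1, 1.0     # illustrative angular speed and radius
tau_edge = proper_time(omega * r_edge, 1.0)   # water near the side
tau_center = proper_time(0.0, 1.0)            # water at the center
# Rotation registers in the interval relations themselves:
# tau_edge < tau_center.
```

When the bucket is at rest, by contrast, both bits of water have the same proper time over the same inertial-clock second, so the two states of motion differ in their interval relations alone.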
Rotation and acceleration supervene on spacetime interval relations. It is worth pausing to consider to what extent this victory for (some form of) relationism satisfies the classical ‘strict’ relationism traditionally ascribed to Mach and Leibniz. The spatiotemporal relations that save the day against the bucket and globes are, so to speak, mixed spatial and temporal distances. They are thus quite different from the spatial-distances-at-a-time presupposed by classical relationists; moreover they do not correspond to relative velocities (-at-a-time) either. Their oddity is forcefully captured by noticing that if we choose appropriate bits of matter at ‘times’ eight minutes apart, I-now am at zero distance from the surface of the sun (of eight minutes ‘past’, since it took 8 minutes for light from the sun to reach me-now). So we are by no means dealing here with an innocuous, ‘natural’ translation of classical relationist quantities into the STR setting. On the other hand, in light of the relativity of simultaneity (see note), it can be argued that the absolute simultaneity presupposed by classical relationists and absolutists alike was, in fact, something that relationists should always have regarded with misgivings. From this perspective, instantaneous relational configurations — precisely what one starts with in the theories of Barbour and Bertotti — would be the things that should be treated with suspicion. If we now return to our questions about motions — about the nature of velocities and accelerations — we find, as noted above, that matters in the interval-relational interpretation of STR are much the same as in Newtonian mechanics in Galilean spacetime. There are no well-defined absolute velocities, but there are indeed well-defined absolute accelerations and rotations. In fact, the difference between an accelerating body (e.g., a rocket) and an inertially moving body is codified directly in the cross-temporal interval relations of the body with itself. 
So we are very far from being able to conclude that all motion is relative motion of a body with respect to other bodies. It is true that the absolute motions are in 1-1 correlation with patterns of spacetime interval relations, but it is not at all correct to say that they are, for that reason, eliminable in favor of merely relative motions. Rather we should simply say that no absolute acceleration can fail to have an effect on the material body or bodies accelerated. But this was already true in classical physics if matter is modeled realistically: the cord connecting the globes does not merely tense, but also stretches; and so does the bucket, even if imperceptibly, i.e., the spatial relations change. Maudlin does not claim this version of relationism to be victorious over an absolutist or substantivalist conception of Minkowski spacetime, when it comes time to make judgments about the theory's ontology. There may be more to vindicating relationism than merely establishing a 1-1 correlation between absolute motions and patterns of spatiotemporal relations. The simple comparison made above between STR and Newtonian physics in Galilean spacetime is somewhat deceptive. For one thing, Galilean spacetime is a mathematical innovation posterior to Einstein's 1905 theory; before then, Galilean spacetime had not been conceived, and full acceptance of Newtonian mechanics implied accepting absolute velocities and, arguably, absolute positions, just as laid down in the Scholium. So Einstein's elimination of absolute velocity was a genuine conceptual advance. Moreover, the Scholium was not the only reason for supposing that there existed a privileged reference frame of ‘rest’: the working assumption of almost all physicists in the latter half of the 19th century was that, in order to understand the wave theory of light, one had to postulate an aetherial medium filling all space, wave-like disturbances in which constituted electromagnetic radiation. 
It was assumed that the aether rest frame would be an inertial reference frame; and physicists felt some temptation to equate its frame with the absolute rest frame, though this was not necessary. Regardless of this equation of the aether with absolute space, it was assumed by all 19th century physicists that the equations of electrodynamic theory would have to look different in a reference frame moving with respect to the aether than they did in the aether's rest frame (where they presumably take their canonical form, i.e., Maxwell's equations and the Lorentz force law.) So while theoreticians labored to find plausible transformation rules for the electrodynamics of moving bodies, experimentalists tried to detect the Earth's motion in the aether. Experiment and theory played collaborative roles, with experimental results ruling out certain theoretical moves and suggesting new ones, while theoretical advances called for new experimental tests for their confirmation or — as it happened — disconfirmation. As is well known, attempts to detect the Earth's velocity in the aether were unsuccessful. On the theory side, attempts to formulate the transformation laws for electrodynamics in moving frames — in such a way as to be compatible with experimental results — were complicated and inelegant. A simplified way of seeing how Einstein swept away a host of problems at a stroke is this: he proposed that the Galilean principle of relativity holds for Maxwell's theory, not just for mechanics. The canonical (‘rest-frame’) form of Maxwell's equations should be their form in any inertial reference frame. Since the Maxwell equations dictate the velocity c of electromagnetic radiation (light), this entails that any inertial observer, no matter how fast she is moving, will measure the velocity of a light ray as c — no matter what the relative velocity of its emitter. 
Einstein worked out logically the consequences of this application of the special relativity principle, and discovered that space and time must be rather different from how Newton described them. STR undermined Newton's absolute time just as decisively as it undermined his absolute space (see note ). Einstein's STR was the first clear and empirically successful physical theory to overtly eliminate the concepts of absolute rest and absolute velocity while recovering most of the successes of classical mechanics and 19th century electrodynamics. It therefore deserves to be considered the first highly successful theory to explicitly relativize motion, albeit only partially. But STR only recovered most of the successes of classical physics: crucially, it left out gravity. And there was certainly reason to be concerned that Newtonian gravity and STR would prove incompatible: classical gravity acted instantaneously at a distance, while STR eliminated the privileged absolute simultaneity that this instantaneous action presupposes. Several ways of modifying Newtonian gravity to make it compatible with the spacetime structure of STR suggested themselves to physicists in the years 1905-1912, and a number of interesting Lorentz-covariant theories were proposed (set in the Minkowski spacetime of STR). Einstein rejected these efforts one and all, for violating either empirical facts or theoretical desiderata. But Einstein's chief reason for not pursuing the reconciliation of gravitation with STR's spacetime appears to have been his desire, beginning in 1907, to replace STR with a theory in which not only velocity could be considered merely relative, but also acceleration. That is to say, Einstein wanted if possible to completely eliminate all absolute quantities of motion from physics, thus realizing a theory that satisfies at least one kind of ‘strict’ relationism. 
(Regarding Einstein's rejection of Lorentz-covariant gravity theories, see Norton 1992; regarding Einstein's quest to fully relativize motion, see Hoefer 1994.) Einstein began to see this complete relativization as possible in 1907, thanks to his discovery of the Equivalence Principle. Imagine we are far out in space, in a rocket ship accelerating at a constant rate g = 9.8 m/s². Things will feel just like they do on the surface of the Earth; we will feel a clear up-down direction, bodies will fall to the floor when released, etc. Indeed, due to the well-known empirical fact that gravity affects all bodies by imparting a force proportional to their matter (and energy) content, independent of their internal constitution, we know that any experiment performed on this rocket will give the same results that the same experiment would give if performed on the Earth. Now, Newtonian theory teaches us to consider the apparent downward, gravity-like forces in the rocket ship as ‘pseudo-forces’ or ‘inertial forces’, and insists that they are to be explained by the fact that the ship is accelerating in absolute space. But Einstein asked: “Is there any way for the person in the rocket to regard him/herself as being ‘at rest’ rather than in absolute (accelerated) motion?” And the answer he gave is: Yes. The rocket traveler may regard him/herself as being ‘at rest’ in a homogeneous and uniform gravitational field. This will explain all the observational facts just as well as the supposition that he/she is accelerating relative to absolute space (or, absolutely accelerating in Minkowski spacetime). But is it not clear that the latter is the truth, while the former is a fiction? By no means; if there were a uniform gravitational field filling all space, then it would affect all the other bodies in the world — the Earth, the stars, etc., imparting to them a downward acceleration away from the rocket; and that is exactly what the traveler observes.
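The observational equivalence of the two descriptions is easy to exhibit in the Newtonian approximation. The following sketch (illustrative numbers) compares the height of a released ball above the rocket's floor as computed in the inertial-frame story with the same quantity in the rival story, in which the rocket is at rest in a uniform gravitational field:

```python
import numpy as np

G = 9.8                      # rocket's proper acceleration, m/s^2
ts = np.linspace(0.0, 2.0, 50)

# Inertial-frame description: the released ball keeps its velocity
# (here: released at rest at the origin), while the rocket floor
# accelerates 'upward' at G.
ball = np.zeros_like(ts)
floor = 0.5 * G * ts**2

# What the traveler measures: the ball's height above the floor.
height_in_rocket = ball - floor

# Rival description: the rocket is at rest in a uniform gravitational
# field, and the ball falls freely from rest with acceleration -G.
height_free_fall = -0.5 * G * ts**2
# The two stories agree observation for observation.
```

Because gravitational acceleration is independent of a body's constitution, the agreement holds whatever is released, which is exactly why no experiment inside the rocket can decide between the two descriptions.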
In 1907, Einstein published his first gravitation theory (Einstein 1907), treating the gravitational field as a scalar field that also represented the (now variable and frame-dependent) speed of light. Einstein viewed the theory as only a first step on the road to eliminating absolute motion. In the 1907 theory, the theory's equations take the same form in any inertial or uniformly accelerating frame of reference. One might say that this theory reduces the class of absolute motions, leaving only rotation and other non-uniform accelerations as absolute. But, Einstein reasoned, if uniform acceleration can be regarded as equivalent to being at rest in a constant gravitational field, why should it not be possible also to regard inertial effects from these other, non-uniform motions as similarly equivalent to “being at rest in a (variable) gravitational field”? Thus Einstein set himself the goal of expanding the principle of equivalence to embrace all forms of ‘accelerated’ motion. Einstein thought that the key to achieving this aim lay in further expanding the range of reference frames in which the laws of physics take their canonical form, to include frames adapted to any arbitrary motions. More specifically, since the class of all continuous and differentiable coordinate systems includes as a subclass the coordinate systems adapted to any such frame of reference, if he could achieve a theory of gravitation, electromagnetism and mechanics that was generally covariant — its equations taking the same form in any coordinate system from this general class — then the complete relativity of motion would be achieved. If there are no special frames of reference in which the laws take on a simpler canonical form, there is no physical reason to consider any particular state or states of motion as privileged, nor deviations from those as representing ‘absolute motion’. (Here we are just laying out Einstein's train of thought; later we will see reasons to question the last step.) 
And in 1915, Einstein achieved his aim in the General Theory of Relativity (GTR). There is one key element left out of this success story, however, and it is crucial to understanding why most physicists reject Einstein's claim to have eliminated absolute states of motion in GTR. Going back to our accelerating rocket, we accepted Einstein's claim that we could regard the ship as hovering at rest in a universe-filling gravitational field. But a gravitational field, we usually suppose, is generated by matter. How is this universe-filling field linked to generating matter? The answer may be supplied by Mach-heavy. Regarding the ‘accelerating’ rocket which we decide to regard as ‘at rest’ in a gravitational field, the Machian says: all those stars and galaxies, etc., jointly accelerating downward (relative to the rocket), ‘produce’ that gravitational field. The mathematical specifics of how this field is generated will have to be different from Newton's law of gravity, of course; but it should give essentially the same results when applied to low-mass, slow-moving problems such as the orbits of the planets, so as to capture the empirical successes of Newtonian gravity. Einstein thought, in 1916 at least, that the field equations of GTR are precisely this mathematical replacement for Newton's law of gravity, and that they fully satisfied the desiderata of Mach-heavy relationism. But it was not so. (See the entry on early philosophical interpretations of general relativity.) In GTR, spacetime is locally very much like flat Minkowski spacetime. There is no absolute velocity locally, but there are clear local standards of accelerated vs non-accelerated motion, i.e., local inertial frames. In these ‘freely falling’ frames bodies obey the usual rules for non-gravitational physics familiar from STR, albeit only approximately. But overall spacetime is curved, and local inertial frames may tip, bend and twist as we move from one region to another. 
The structure of curved spacetime is encoded in the metric field tensor gab, with the curvature encoding gravity at the same time: gravitational forces are so to speak ‘built into’ the metric field, geometrized away. Since the spacetime structure encodes gravity and inertia, and in a Mach-heavy theory these phenomena should be completely determined by the relational distribution of matter (and relative motions), Einstein wished to see the metric as entirely determined by the distribution of matter and energy. But what the GTR field equations entail is, in general, only a partial-determination relation. We cannot go into the mathematical details necessary for a full discussion of the successes and failures of Mach-heavy in the GTR context. But one can see why the Machian interpretation Einstein hoped he could give to the curved spacetimes of his theory fails to be plausible, by considering a few simple ‘worlds’ permitted by GTR. In the first place, for our hovering rocket ship, if we are to attribute the gravity field it feels to matter, there has got to be all this other matter in the universe. But if we regard the rocket as a mere ‘test body’ (not itself substantially affecting the gravity present or absent in the universe), then we can note that according to GTR, if we remove all the stars, galaxies, planets etc. from the world, the gravitational field does not disappear. On the contrary, it stays basically the same locally, and globally it takes the form of empty Minkowski spacetime, precisely the quasi-absolute structure Einstein was hoping to eliminate. Solutions of the GTR field equations for arbitrary realistic configurations of matter (e.g., a rocket ship ejecting a stream of particles to push itself forward) are hard to come by, and in fact a realistic two-body exact solution has yet to be discovered. 
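For concreteness, the field equations in question can be written (in one common sign convention, with the cosmological constant omitted) as:

```latex
% Einstein field equations; T_{ab} is the stress-energy of matter,
% g_{ab} the metric, R_{ab} and R its Ricci tensor and scalar curvature.
G_{ab} \equiv R_{ab} - \tfrac{1}{2} R\, g_{ab} = \frac{8\pi G}{c^{4}}\, T_{ab}
```

These equations constrain the metric given the matter distribution, but they do not determine it uniquely: boundary conditions must also be supplied, which is one way of seeing why matter-free solutions such as Minkowski spacetime exist at all, and hence why the determination relation is only partial.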
But numerical methods can be applied for many purposes, and physicists do not doubt that something like our accelerating rocket — in otherwise empty space — is possible according to the theory. We see clearly, then, that GTR fails to satisfy Einstein's own understanding of Mach's Principle, according to which, in the absence of matter, space itself should not be able to exist. A second example: GTR allows us to model a single rotating object in an otherwise empty universe (e.g., a neutron star). Relationism of the Machian variety says that such rotation is impossible, since it can only be understood as rotation relative to some sort of absolute space. In the case of GTR, this is basically right: the rotation is best understood as rotation relative to a ‘background’ spacetime that is identical to the Minkowski spacetime of STR, only ‘curved’ by the presence of matter in the region of the star. On the other hand, there is one charge of failure-to-relativize-motion sometimes leveled at GTR that is unfair. It is sometimes asserted that the simple fact that the metric field (or the connection it determines) distinguishes, at every location, motions that are ‘absolutely’ accelerated and/or ‘absolutely rotating’ from those that are not, by itself entails that GTR fails to embody a folk-Leibniz style general relativity of motion (e.g. Earman (1989), ch. 5). We think this is incorrect, and leads to unfairly harsh judgments about confusion on Einstein's part. The local inertial structure encoded in the metric would not be ‘absolute’ in any meaningful sense, if that structure were in some clear sense fully determined by the relationally specified matter-energy distribution. Einstein was not simply confused when he named his gravity theory. (Just what is to be understood by “the relationally specified matter-energy distribution” is a further, thorny issue, which we cannot enter into here.) 
GTR does not fulfill all the goals of Mach-heavy, at least as understood by Einstein, and he recognized this fact by 1918 (Einstein 1918). And yet … GTR comes tantalizingly close to achieving those goals, in certain striking ways. For one thing, GTR does predict Mach-heavy effects, known as ‘frame-dragging’: if we could model Mach's thick-walled bucket in GTR, it seems clear that it would pull the water slightly outward, and give it a slight tendency to begin rotating in the same sense as the bucket (even if the big bucket's walls were not actually touching the water). While GTR does permit us to model a lone rotating object, if we model the object as a shell of mass (instead of a solid sphere) and let the size of the shell increase (to model the ‘sphere of the fixed stars’ we see around us), then as Brill & Cohen (1966) showed, the frame-dragging becomes complete inside the shell. In other words: our original Minkowski background structure effectively disappears, and inertia becomes wholly determined by the shell of matter, just as Mach posited was the case. This complete determination of inertia by the global matter distribution appears to be a feature of other models, including the Friedmann-Lemaître-Robertson-Walker Big Bang models that best match observations of our universe. Finally, it is important to recognize that GTR is generally covariant in a very special sense: unlike all other prior theories (and unlike many subsequent quantum theories), it postulates no fixed ‘prior’ or ‘background’ spacetime structure. As mathematicians and physicists realized early on, other theories, e.g., Newtonian mechanics and STR, can be put into a generally covariant form. But when this is done, there are inevitably mathematical objects postulated as part of the formalism, whose role is to represent absolute elements of spacetime structure. 
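Returning to the frame-dragging effect described above, its magnitude can be sketched. In the weak-field limit, a slowly rotating thin shell of mass M and radius R, spinning at angular velocity ω, drags the inertial frames in its interior at roughly

```latex
% Interior frame-dragging rate for a slowly rotating shell (weak-field sketch);
% the order-unity numerical coefficient depends on how the shell's stresses
% are modelled, so only the scaling is given here.
\omega_{\mathrm{drag}} \sim \frac{GM}{c^{2}R}\,\omega
```

For any laboratory-scale shell the factor GM/c²R is minuscule, which is why Mach's thick-walled bucket would produce only a tiny effect; Brill & Cohen's strong-field analysis shows that as the shell approaches its own Schwarzschild radius the dragging becomes complete, with ω_drag approaching ω.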
What is unique about GTR is that it was the first, and is still the only ‘core’ physical theory, to have no such absolute elements in its covariant equations. The spacetime structure in GTR, represented by the metric field (which determines the connection), is at least partly ‘shaped’ by the distribution of matter and energy. And in certain models of the theory, such as the Big Bang cosmological models, some authors have claimed that the local standards of inertial motion — the local ‘gravitational field’ of Einstein's equivalence principle — are entirely fixed by the matter distribution throughout space and time, just as Mach-heavy requires (see, for example, Wheeler and Ciufolini 1995). Absolutists and relationists are thus left in a frustrating and perplexing quandary by GTR. Considering its anti-Machian models, we are inclined to say that motions such as rotation and acceleration remain absolute, or nearly-totally-absolute, according to the theory. On the other hand, considering its most Mach-friendly models, which include all the models taken to be good candidates for representing the actual universe, we may be inclined to say: motion in our world is entirely relative; the inertial effects normally used to argue for absolute motion are all understandable as effects of rotations and accelerations relative to the cosmic matter, just as Mach hoped. But even if we agree that motions in our world are in fact all relative in this sense, this does not automatically settle the traditional relationist/absolutist debate, much less the relationist/substantivalist debate. Many philosophers (including, we suspect, Nerlich 1994 and Earman 1989) would be happy to acknowledge the Mach-friendly status of our spacetime, and argue nevertheless that we should understand that spacetime as a real thing, more like a substance than a mere ideal construct of the mind as Leibniz insisted. 
Some, though not all, attempts to convert GTR into a quantum theory would accord spacetime this same sort of substantiality that other quantum fields possess. This article has been concerned with tracing the history and philosophy of ‘absolute’ and ‘relative’ theories of space and motion. Along the way we have been at pains to introduce some clear terminology for various different concepts (e.g., ‘true’ motion, ‘substantivalism’, ‘absolute space’), but what we have not really done is say what the difference between absolute and relative space and motion is: just what is at stake? Recently Rynasiewicz (2000) has argued that there simply are no constant issues running through the history that we have discussed here; that there is no stable meaning for either ‘absolute motion’ or ‘relative motion’ (or ‘substantival space’ vs ‘relational space’). While we agree to a certain extent, we think that nevertheless there are a series of issues that have motivated thinkers again and again; indeed, those that we identified in the introduction. (One quick remark: Rynasiewicz is probably right that the issues cannot be expressed in formally precise terms, but that does not mean that there are no looser philosophical affinities that shed useful light on the history.) Our discussion has revealed several different issues, of which we will highlight three as components of the ‘absolute-relative debate’. (i) There is the question of whether all motions and all possible descriptions of motions are equal, or whether some are ‘real’ — what we have called, in Seventeenth Century parlance, ‘true’. There is a natural temptation for those who hold that there is ‘nothing but the relative positions and motions between bodies' (and more so for their readers) to add ‘and all such motions are equal’, thus denying the existence of true motion. 
However, arguably — perhaps surprisingly — no one we have discussed has unreservedly held this view (at least not consistently): Descartes considered motion ‘properly speaking’ to be privileged, Leibniz introduced ‘active force’ to ground motion (arguably in his mechanics as well as metaphysically), and Mach's view seems to be that the distribution of matter in the universe determines a preferred standard of inertial motion. (Again, in general relativity, there is a distinction between inertial and accelerated motion.) That is, relationists can allow true motions if they offer an analysis of them in terms of the relations between bodies. Given this logical point, and given the historical ways thinkers have understood themselves, it seems unhelpful to characterize the issues in (i) as constituting an absolute-relative debate, hence our use of the term ‘true’ instead of ‘absolute’. So we are led to the second question: (ii) is true motion definable in terms of relations or not? (Of course the answer depends on what kind of definitions will count, and absent an explicit definition — Descartes' proper motion for example — the issue is often taken to be that of whether true motions supervene on relations, as Newton's globes are often supposed to refute.) It seems reasonable to call this issue that of whether motion is absolute or relative. Descartes and Mach are relationists about motion in this sense, while Newton is an absolutist. Leibniz is also an absolutist about motion in his metaphysics, and if our reading is correct, also about the interpretation of motion in the laws of collision. This classification of Leibniz's views runs contrary to his customary identification as relationist-in-chief, but we will clarify his relationist credentials below. 
Finally, we have discussed (ii) in the context of relativity, first examining Maudlin's proposal that the embedding of a relationally-specified system in Minkowski spacetime is in general unique once all the spacetime interval-distance relations are given. This proposal may or may not be held to satisfy the relational-definability question of (ii), but in any case it cannot be carried over to the context of general relativity theory. In the case of GTR we linked relational motion to the satisfaction of Mach's Principle, just as Einstein did in the early years of the theory. Despite some promising features displayed by GTR, and certain of its models, we saw that Mach's Principle is not fully satisfied in GTR as a whole. We also noted that in the absence of absolute simultaneity, it becomes an open question what relations are to be permitted in the definition (or supervenience base) — spacetime interval relations? Instantaneous spatial distances and velocities on a 3-d hypersurface? (In recent works, Barbour has argued that GTR is fully Machian, using a 3-d relational-configuration approach. See Barbour, Foster and Ó Murchadha 2002.) The final issue is that of (iii) whether absolute motion is motion with respect to substantival space or not. Of course this is how Newton understood acceleration — as acceleration relative to absolute space. More recent Newtonians share this view, although motion for them is with respect to substantival Galilean spacetime (or rather, since they know Newtonian mechanics is false, they hold that this is the best interpretation of that theory). Leibniz denied that motion was relative to space itself, since he denied the reality of space; for him true motion was the possession of active force. So despite his ‘absolutism’ (our adjective not his) about motion he was simultaneously a relationist about space: ‘space is merely relative’. Following Leibniz's lead we can call this debate the question of whether space is absolute or relative. 
The drawback of this name is that it suggests a separation between motion and space, which exists in Leibniz's views, but which is otherwise problematic; still, no better description presents itself. Others who are absolutists about motion but relationists about space include Sklar (1974) and van Fraassen (1985); Sklar introduced a primitive quantity of acceleration, not supervenient on motions relative to anything at all, while van Fraassen let the laws themselves pick out the inertial frames. It is of course arguable whether any of these three proposals is successful; (even) stripped of Leibniz's Aristotelian packaging, can absolute quantities of motion ‘stand on their own feet’? And under what understanding of laws can they ground a standard of inertial motion? Huggett (2006) defends a similar position of absolutism about motion, but relationism about space; he argues — in the case of Newtonian physics — that fundamentally there is nothing to space but relations between bodies, but that absolute motions supervene — not on the relations at any one time — but on the entire history of relations.

Works cited in text
- Aristotle, 1984, The Complete Works of Aristotle: The Revised Oxford Translation, J. Barnes (ed.), Princeton: Princeton University Press.
- Barbour, J. and Bertotti, B., 1982, "Mach's Principle and the Structure of Dynamical Theories," Proceedings of the Royal Society (London), 382: 295-306.
- –––, 1977, "Gravity and Inertia in a Machian Framework," Nuovo Cimento, 38B: 1-27.
- Brill, D. R. and Cohen, J., 1966, "Rotating Masses and their effects on inertial frames," Physical Review, 143: 1011-1015.
- Brown, H. R., 2005, Physical Relativity: Space-Time Structure from a Dynamical Perspective, Oxford: Oxford University Press.
- Descartes, R., 1983, Principles of Philosophy, R. P. Miller and V. R. Miller (trans.), Dordrecht, London: Reidel.
- Dorling, J., 1978, "Did Einstein need General Relativity to solve the Problem of Space? Or had the Problem already been solved by Special Relativity?," British Journal for the Philosophy of Science, 29: 311-323.
- Earman, J., 1989, World Enough and Spacetime: Absolute and Relational Theories of Motion, Boston: M.I.T. Press.
- –––, 1970, "Who's Afraid of Absolute Space?," Australasian Journal of Philosophy, 48: 287-319.
- Einstein, A., 1918, "Prinzipielles zur allgemeinen Relativitätstheorie," Annalen der Physik, 51: 639-642.
- –––, 1907, "Über das Relativitätsprinzip und die aus demselben gezogenen Folgerungen," Jahrbuch der Radioaktivität und Elektronik, 4: 411-462.
- Einstein, A., Lorentz, H. A., Minkowski, H. and Weyl, H., 1952, The Principle of Relativity, W. Perrett and G. B. Jeffery (trans.), New York: Dover Books.
- Föppl, A., "Über absolute und relative Bewegung," Sitzungsberichte der Münchener Akad., 35: 383.
- Friedländer, B. and J., 1896, Absolute und relative Bewegung, Berlin: Leonhard Simion.
- Friedman, M., 1983, Foundations of Space-Time Theories: Relativistic Physics and Philosophy of Science, Princeton: Princeton University Press.
- Garber, D., 1992, Descartes' Metaphysical Physics, Chicago: University of Chicago Press.
- Garber, D. and J. B. Rauzy, 2004, "Leibniz on Body, Matter and Extension," The Aristotelian Society (Supplementary Volume), 78: 23-40.
- Hartz, G. A. and J. A. Cover, 1988, "Space and Time in the Leibnizian Metaphysic," Nous, 22: 493-519.
- Hoefer, C., 1994, "Einstein's Struggle for a Machian Gravitation Theory," Studies in History and Philosophy of Science, 25: 287-336.
- Huggett, N., 2006, "The Regularity Account of Relational Spacetime," Mind, 115: 41-74.
- –––, 2000, "Space from Zeno to Einstein: Classic Readings with a Contemporary Commentary," International Studies in the Philosophy of Science, 14: 327-329.
- Lange, L., 1885, "Ueber das Beharrungsgesetz," Berichte der Königlichen Sachsischen Gesellschaft der Wissenschaften zu Leipzig, Mathematisch-physische Classe, 37: 333-351.
- Leibniz, G. W., 1989, Philosophical Essays, R. Ariew and D. Garber (trans.), Indianapolis: Hackett Pub. Co.
- Leibniz, G. W., and Samuel Clarke, 1715–1716, "Correspondence", in The Leibniz-Clarke Correspondence, Together with Extracts from Newton's "Principia" and "Opticks", H. G. Alexander (ed.), Manchester: Manchester University Press, 1956.
- Lodge, P., 2003, "Leibniz on Relativity and the Motion of Bodies," Philosophical Topics, 31: 277-308.
- Mach, E., 1883, Die Mechanik in ihrer Entwickelung, historisch-kritisch dargestellt, 2nd edition, Leipzig: Brockhaus. English translation (6th edition, 1960): The Science of Mechanics, La Salle, Illinois: Open Court Press.
- Malament, D., 1995, "Is Newtonian Cosmology Really Inconsistent?," Philosophy of Science, 62(4).
- Maudlin, T., 1993, "Buckets of Water and Waves of Space: Why Space-Time is Probably a Substance," Philosophy of Science, 60: 183-203.
- Minkowski, H., 1908, "Space and Time," in Einstein, et al. (1952), pp. 75-91.
- Nerlich, G., 1994, The Shape of Space (2nd edition), Cambridge: Cambridge University Press.
- Neumann, C., 1870, Ueber die Principien der Galilei-Newton'schen Theorie, Leipzig: B. G. Teubner.
- Newton, I., 2004, Newton: Philosophical Writings, A. Janiak (ed.), Cambridge: Cambridge University Press.
- Newton, I. and I. B. Cohen, 1999, The Principia: Mathematical Principles of Natural Philosophy, I. B. Cohen and A. M. Whitman (trans.), Berkeley; London: University of California Press.
- Norton, J., 1995, "Mach's Principle before Einstein," in J. Barbour and H. Pfister (eds.), Mach's Principle: From Newton's Bucket to Quantum Gravity (Einstein Studies, Vol. 6), Boston: Birkhäuser, pp. 9-57.
- Norton, J., 1993, "A Paradox in Newtonian Cosmology," in M. Forbes, D. Hull and K. Okruhlik (eds.), PSA 1992: Proceedings of the 1992 Biennial Meeting of the Philosophy of Science Association, Vol. 2, East Lansing, MI: Philosophy of Science Association, pp. 412-420.
- –––, 1992, "Einstein, Nordström and the Early Demise of Scalar, Lorentz-Covariant Theories of Gravitation," Archive for History of Exact Sciences, 45: 17-94.
- Pooley, O., 2002, The Reality of Spacetime, D.Phil. thesis, Oxford University.
- Ray, C., 1991, Time, Space and Philosophy, New York: Routledge.
- Roberts, J. T., 2003, "Leibniz on Force and Absolute Motion," Philosophy of Science, 70: 553-573.
- Rynasiewicz, R., 1995, "By their Properties, Causes, and Effects: Newton's Scholium on Time, Space, Place, and Motion — I. The Text," Studies in History and Philosophy of Science, 26: 133-153.
- Sklar, L., 1974, Space, Time and Spacetime, Berkeley: University of California Press.
- Stein, H., 1977, "Some Philosophical Prehistory of General Relativity," in Minnesota Studies in the Philosophy of Science 8: Foundations of Space-Time Theories, J. Earman, C. Glymour and J. Stachel (eds.), Minneapolis: University of Minnesota Press.
- –––, 1967, "Newtonian Space-Time," Texas Quarterly, 10: 174-200.
- Wheeler, J. A. and Ciufolini, I., 1995, Gravitation and Inertia, Princeton, N.J.: Princeton University Press.

Notable Philosophical Discussions of the Absolute-Relative Debates
- Barbour, J. B., 1982, "Relational Concepts of Space and Time," British Journal for the Philosophy of Science, 33: 251-274.
- Belot, G., 2000, "Geometry and Motion," British Journal for the Philosophy of Science, 51: 561-595.
- Butterfield, J., 1984, "Relationism and Possible Worlds," British Journal for the Philosophy of Science, 35: 101-112.
- Callender, C., 2002, "Philosophy of Space-Time Physics," in The Blackwell Guide to the Philosophy of Science, P. Machamer (ed.), Cambridge: Blackwell, pp. 173-198.
- Carrier, M., 1992, "Kant's Relational Theory of Absolute Space," Kant Studien, 83: 399-416.
- Dieks, D., 2001, "Space-Time Relationism in Newtonian and Relativistic Physics," International Studies in the Philosophy of Science, 15: 5-17.
- DiSalle, R., 1995, "Spacetime Theory as Physical Geometry," Erkenntnis, 42: 317-337.
- Earman, J., 1986, "Why Space is Not a Substance (at Least Not to First Degree)," Pacific Philosophical Quarterly, 67: 225-244.
- –––, 1970, "Who's Afraid of Absolute Space?," Australasian Journal of Philosophy, 48: 287-319.
- Earman, J. and J. Norton, 1987, "What Price Spacetime Substantivalism: The Hole Story," British Journal for the Philosophy of Science, 38: 515-525.
- Hoefer, C., 2000, "Kant's Hands and Earman's Pions: Chirality Arguments for Substantival Space," International Studies in the Philosophy of Science, 14: 237-256.
- –––, 1998, "Absolute Versus Relational Spacetime: For Better Or Worse, the Debate Goes on," British Journal for the Philosophy of Science, 49: 451-467.
- –––, 1996, "The Metaphysics of Space-Time Substantivalism," Journal of Philosophy, 93: 5-27.
- Huggett, N., 2000, "Reflections on Parity Nonconservation," Philosophy of Science, 67: 219-241.
- Le Poidevin, R., 2004, "Space, Supervenience and Substantivalism," Analysis, 64: 191-198.
- Malament, D., 1985, "Discussion: A Modest Remark about Reichenbach, Rotation, and General Relativity," Philosophy of Science, 52: 615-620.
- Maudlin, T., 1993, "Buckets of Water and Waves of Space: Why Space-Time is Probably a Substance," Philosophy of Science, 60: 183-203.
- –––, 1990, "Substances and Space-Time: What Aristotle would have Said to Einstein," Studies in History and Philosophy of Science, 531-561.
- Mundy, B., 1992, "Space-Time and Isomorphism," Proceedings of the Biennial Meetings of the Philosophy of Science Association, 1: 515-527.
- –––, 1983, "Relational Theories of Euclidean Space and Minkowski Space-Time," Philosophy of Science, 50: 205-226.
- Nerlich, G., 2003, "Space-Time Substantivalism," in The Oxford Handbook of Metaphysics, M. J. Loux (ed.), Oxford: Oxford University Press, pp. 281-314.
- –––, 1996, "What Spacetime Explains," Philosophical Quarterly, 46: 127-131.
- –––, 1994, What Spacetime Explains: Metaphysical Essays on Space and Time, New York: Cambridge University Press.
- –––, 1973, "Hands, Knees, and Absolute Space," Journal of Philosophy, 70: 337-351.
- Rynasiewicz, R., 2000, "On the Distinction between Absolute and Relative Motion," Philosophy of Science, 67: 70-93.
- –––, 1996, "Absolute Versus Relational Space-Time: An Outmoded Debate?," Journal of Philosophy, 93: 279-306.
- Teller, P., 1991, "Substance, Relations, and Arguments about the Nature of Space-Time," Philosophical Review, 363-397.
- Torretti, R., 2000, "Spacetime Models for the World," Studies in History and Philosophy of Modern Physics, 31B: 171-186.

Other Internet Resources
- St. Andrews School of Mathematics and Statistics Index of Biographies
- The Pittsburgh Phil-Sci Archive of pre-publication articles in philosophy of science
- Ned Wright's Special Relativity tutorial
- Andrew Hamilton's Special Relativity pages

Related Entries
Descartes, René: physics | general relativity: early philosophical interpretations of | Newton, Isaac: views on space, time, and motion | space and time: inertial frames | space and time: the hole argument | Zeno of Elea: Zeno's paradoxes
SAN FRANCISCO, Dec. 29, 2008 -- Facial expressions of emotion are hardwired into our genes, according to a study published today in the Journal of Personality and Social Psychology. The research suggests that facial expressions of emotion are innate rather than a product of cultural learning. The study is the first of its kind to demonstrate that sighted and blind individuals use the same facial expressions, producing the same facial muscle movements in response to specific emotional stimuli. The study also provides new insight into how humans manage emotional displays according to social context, suggesting that the ability to regulate emotional expressions is not learned through observation. San Francisco State University Psychology Professor David Matsumoto compared the facial expressions of sighted and blind judo athletes at the 2004 Summer Olympics and Paralympic Games. More than 4,800 photographs were captured and analyzed, including images of athletes from 23 countries. "The statistical correlation between the facial expressions of sighted and blind individuals was almost perfect," Matsumoto said. "This suggests something genetically resident within us is the source of facial expressions of emotion." Matsumoto found that sighted and blind individuals manage their expressions of emotion in the same way according to social context. For example, because of the social nature of the Olympic medal ceremonies, 85 percent of silver medalists who lost their medal matches produced "social smiles" during the ceremony. Social smiles use only the mouth muscles whereas true smiles, known as Duchenne smiles, cause the eyes to twinkle and narrow and the cheeks to rise. "Losers pushed their lower lip up as if to control the emotion on their face and many produced social smiles," Matsumoto said. "Individuals blind from birth could not have learned to control their emotions in this way through visual learning so there must be another mechanism. 
It could be that our emotions, and the systems to regulate them, are vestiges of our evolutionary ancestry. It's possible that in response to negative emotions, humans have developed a system that closes the mouth so that they are prevented from yelling, biting or throwing insults."
Feb. 8, 2006 Older Americans with high blood pressure and moderate to severe chronic kidney disease have a greater chance of developing heart disease than people with normal kidney function. This finding is one of three in a new paper published in the Feb. 7 issue of the Annals of Internal Medicine. The study also found these patients are at higher risk of developing heart disease than of progressing to kidney failure (end-stage renal disease). Lastly, it found for the first time that new types of drugs such as ACE inhibitors and calcium-channel blockers are no better than older type diuretic drugs, also called water pills, in preventing heart disease, and may be even less effective at preventing heart failure in patients with chronic kidney disease. Lead author of the study is Mahboob Rahman, M.D., M.S., of Case Western Reserve University School of Medicine, University Hospitals of Cleveland and the Louis Stokes Cleveland VA Medical Center. The study was sponsored by the National Heart Lung and Blood Institute and coordinated by the Clinical Trials Center at the University of Texas School of Public Health in Houston. The study looked at more than 31,000 men and women 55 years and older who have high blood pressure and one other risk factor for cardiovascular disease, such as diabetes. A blood test was used to determine kidney function and severity of disease. Patients with moderate chronic kidney disease had a 38 percent greater chance of developing heart disease and a 35 percent increase in overall cardiovascular disease (which includes heart disease, stroke, heart failure and others) compared with those with normal kidney function. In addition, patients with moderate to severe chronic kidney disease were twice as likely to develop heart disease as to experience kidney failure. Rahman said the researchers are not quite sure why moderate and severe kidney disease leads to greater risk of heart disease. 
"It may be related to other factors associated with renal failure, such as anemia or abnormalities of calcium or phosphorus metabolism, for example. We are participating in other ongoing studies to establish the connections," he said. The study also confirmed other earlier findings that diuretics are as effective as, or better than, newer drugs at preventing cardiovascular disease. "Overall, ACE inhibitors and diuretics were about equally likely to protect against heart attacks," said Rahman, "but diuretics seemed more effective at preventing other kinds of cardiovascular diseases, such as stroke and heart failure." Calcium-channel blockers were about equal in protecting against all cardiovascular disease, but diuretics were more effective at preventing heart failure. These results held for all participants regardless of kidney function. Rahman cautioned patients not to stop taking their medications after reading these results, however, and to consult their physicians. He added, "Exercise, maintaining optimal body weight, smoking avoidance, and maintaining low cholesterol levels -- these are all things that should be done with renewed emphasis in most patients with high blood pressure. Most patients with hypertension and chronic kidney disease will require multiple medications to control blood pressure. Our results demonstrate that the risk for cardiovascular disease is lower if one of the medications is a diuretic." He recommends patients who have high blood pressure talk to their doctors about measuring their kidney function to determine if they are suffering from chronic kidney disease.
Jan. 30, 2009 A new way of making LEDs could see household lighting bills reduced by up to 75% within five years. Gallium Nitride (GaN), a man-made semiconductor used to make LEDs (light emitting diodes), emits brilliant light but uses very little electricity. Until now high production costs have made GaN lighting too expensive for wide spread use in homes and offices. However the Cambridge University based Centre for Gallium Nitride has developed a new way of making GaN which could produce LEDs for a tenth of current prices. GaN, grown in labs on expensive sapphire wafers since the 1990s, can now be grown on silicon wafers. This lower cost method could mean cheap mass produced LEDs become widely available for lighting homes and offices in the next five years. Based on current results, GaN LED lights in every home and office could cut the proportion of UK electricity used for lights from 20% to 5%. That means we could close or not need to replace eight power stations. A GaN LED can burn for 100,000 hours so, on average, it only needs replacing after 60 years. And, unlike currently available energy-saving bulbs GaN LEDs do not contain mercury so disposal is less damaging to the environment. GaN LEDs also have the advantage of turning on instantly and being dimmable. Professor Colin Humphreys, lead scientist on the project said: “This could well be the holy grail in terms of providing our lighting needs for the future. We are very close to achieving highly efficient, low cost white LEDs that can take the place of both traditional and currently available low energy light bulbs. That won’t just be good news for the environment. It will also benefit consumers by cutting their electricity bills.” GaN LEDs, used to illuminate landmarks like Buckingham Palace and the Severn Bridge, are also appearing in camera flashes, mobile phones, torches, bicycle lights and interior bus, train and plane lighting. 
Parallel research is also being carried out into how GaN lights could mimic sunlight to help the 3 million people in the UK with Seasonal Affective Disorder (SAD). Ultraviolet rays made from GaN lighting could also aid water purification and disease control in developing countries, identify the spread of cancer tumours and help fight hospital ‘superbugs’. Funding was provided by the Engineering and Physical Sciences Research Council (EPSRC). About GaN LEDs A light-emitting diode (LED) is a semiconductor diode that emits light when an electric current passes through it. LEDs are used for display and lighting in a whole range of electrical and electronic products. Although GaN was first produced over 30 years ago, it is only in the last ten years that GaN lighting has started to enter real-world applications. Currently, the brilliant light produced by GaN LEDs is blue or green in colour. A phosphor coating is applied to the LED to transform this into a more practical white light. GaN LEDs are currently grown on 2-inch sapphire wafers. Manufacturers can get 9 times as many LEDs on a 6-inch silicon wafer as on a 2-inch sapphire wafer. In addition, edge losses are smaller, so the number of good LEDs is about 10 times higher. The processing costs for a 2-inch wafer are essentially the same as for a 6-inch wafer, and a 6-inch silicon wafer is much cheaper to produce than a 2-inch sapphire wafer. Together these factors result in a cost reduction of about a factor of 10. Possible Future Applications - Cancer surgery. Currently, it is very difficult to detect exactly where a tumour ends. As a result, patients undergoing cancer surgery have to be kept under anaesthetic while cells are taken away for laboratory tests to see whether or not they are healthy. This may need to happen several times during an operation, prolonging the procedure considerably. 
But in the future, patients could be given harmless drugs that attach themselves to cancer cells, which can be distinguished when a blue GaN LED is shone on them. The tumour’s edge will be revealed, quickly and unmistakably, to the surgeon. - Water purification. GaN may revolutionise drinking water provision in developing countries. If aluminium is added to GaN, then deep ultra-violet light can be produced, and this kills viruses and bacteria, so fitting such a GaN LED to the inside of a water pipe could instantly eradicate diseases, as well as killing mosquito larvae and other harmful organisms. - Hospital-acquired infections. Shining an ultra-violet GaN torch beam could kill viruses and bacteria, boosting the fight against MRSA and C. difficile. Simply shining a GaN torch at a hospital wall or trolley, for example, could kill any ‘superbugs’ lurking there. The above story is reprinted from materials provided by Engineering and Physical Sciences Research Council (EPSRC). Note: Materials may be edited for content and length. For further information, please contact the source cited above. Note: If no author is given, the source is cited instead.
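The wafer-size arithmetic quoted above is easy to verify: LED count scales with wafer area, which scales with the square of the diameter. A quick sketch (the factor-of-10 figure in the article additionally folds in the reduced edge losses):

```python
# Sanity check of the wafer-cost arithmetic from the article.
# Usable LED count scales with wafer area, i.e. with diameter squared:
# (6 / 2)^2 = 9 times as many LEDs on a 6-inch wafer as on a 2-inch one.
from math import pi

def wafer_area(diameter_inches: float) -> float:
    """Area of a circular wafer in square inches."""
    return pi * (diameter_inches / 2) ** 2

ratio = wafer_area(6) / wafer_area(2)
print(f"LEDs per 6-inch wafer vs 2-inch wafer: ~{ratio:.0f}x")  # ~9x

# The article adds ~10% more good LEDs from smaller edge losses, and notes
# that processing cost per wafer is roughly constant, so cost per LED
# drops by about a factor of 10.
good_led_factor = 10  # from the article: area gain plus reduced edge losses
print(f"Approximate cost reduction per LED: ~{good_led_factor}x")
```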
<urn:uuid:356d9855-1da1-42ce-95c5-c24f941a4519>
CC-MAIN-2013-20
http://www.sciencedaily.com/releases/2009/01/090129090218.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.944461
1,075
3.71875
4
Mar. 6, 2013 Boys are right-handed, girls are left ... Well, at least this is true for sugar gliders (Petaurus breviceps) and grey short-tailed opossums (Monodelphis domestica), according to an article in BioMed Central’s open access journal BMC Evolutionary Biology that shows that handedness in marsupials is dependent on gender. This preference of one hand over another has developed despite the absence of a corpus callosum, the part of the brain which in placental mammals allows one half of the brain to communicate with the other. Many animals show a distinct preference for using one hand/paw/hoof over another. This is often related to posture (an animal is more likely to show manual laterality if it is upright), to the difficulty of the task (more complex tasks elicit a stronger hand preference), or to age. As an example of all three: crawling human babies show less hand preference than toddlers. Some species also show a distinct sex effect in handedness, but among non-marsupial mammals this tendency is for left-handed males and right-handed females. In contrast, researchers from St Petersburg State University show that male quadrupedal marsupials (those that walk on all fours) tend to be right-handed while the females are left-handed, especially as tasks become more difficult. Dr Yegor Malashichev from Saint Petersburg State University, who led this study, explained why they think this has evolved: “Marsupials do not have a corpus callosum, which connects the two halves of the mammalian brain together. Reversed sex-related handedness is an indication of how the marsupial brain has developed different ways for the two halves of the brain to communicate in the absence of the corpus callosum.” - Andrey Giljov, Karina Karenina, Yegor Malashichev. Forelimb preferences in quadrupedal marsupials and their implications for laterality evolution in mammals. 
BMC Evolutionary Biology, 2013; 13 (1): 61. DOI: 10.1186/1471-2148-13-61
<urn:uuid:0da0c3b0-3202-410e-943e-07c344d95981>
CC-MAIN-2013-20
http://www.sciencedaily.com/releases/2013/03/130305200312.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.894542
469
3.515625
4
Web edition: March 4, 2013 Pregnant women taking DHA, an omega-3 fatty acid in fish oil, give birth to babies that score slightly better on several health measurements than those born to women who don’t take the supplement, a study has found. DHA, or docosahexaenoic acid, is a nutrient that promotes brain development (SN Online: 1/13/2009). Susan Carlson of the University of Kansas Medical Center in Kansas City and her colleagues randomly assigned 350 women to take daily capsules of either a placebo or DHA starting midway through pregnancy. Babies born to the women who took DHA were slightly longer and heavier than the other babies and were less apt to spend time in the intensive care unit. Overall rates of preterm birth, defined as birth before the 37th week of gestation, didn’t differ substantially between the groups. But among preterm babies, those in the DHA group spent an average of nine days in the hospital compared with 41 days for those in the placebo group. While only one of 154 babies in the DHA group was born very early — before 34 weeks’ gestation — seven of 147 babies born to non-DHA mothers were born that early, Carlson and colleagues report in the April American Journal of Clinical Nutrition. S. E. Carlson et al. DHA supplementation and pregnancy outcomes. American Journal of Clinical Nutrition. April 2013, in press. doi: 10.3945/ajcn.112.050021. N. Seppa. Omega-3 fatty acid is early boost for female preemies. Science News Online. January 13, 2009.
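The very-preterm counts reported above translate into rates as follows (a quick sketch using only the figures quoted in the article):

```python
# Very-preterm birth rates (before 34 weeks' gestation), from the study's figures.
dha_early, dha_total = 1, 154          # DHA group: 1 of 154 babies born very early
placebo_early, placebo_total = 7, 147  # placebo group: 7 of 147

dha_rate = dha_early / dha_total
placebo_rate = placebo_early / placebo_total

print(f"DHA group:     {dha_rate:.1%} very preterm")      # 0.6%
print(f"Placebo group: {placebo_rate:.1%} very preterm")  # 4.8%
print(f"Relative risk (DHA vs placebo): {dha_rate / placebo_rate:.2f}")  # 0.14
```

Note that with counts this small the difference carries wide uncertainty, which is why the authors report overall preterm rates as not differing substantially.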
<urn:uuid:fa5c6e70-4e13-4fda-a48c-2cf94d43dcc8>
CC-MAIN-2013-20
http://www.sciencenews.org/view/generic/id/348704/description/News_in_Brief_Fish_oil_component_boosts_newborn_health
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.956249
346
2.671875
3
Anthony Stocks, chairman and professor of anthropology at Idaho State University, responds: "The evolution of smiles is opaque and, as with many evolutionary accounts of social behavior, fraught with just-soism. Among human babies, however, the 'tooth-baring' smile is associated less with friendship than with fright--which, one might argue, is related to the tooth-baring threats of baboons. On the other hand, a non-toothy, not-so-broad-but-open-lipped smile is associated with pleasure in human infants. Somehow we seem to have taken the fright-threat sort of smile and extended it to strangers as a presumably friendly smile. Maybe it is not as innocent as it seems. "All cultures recognize a variety of mouth gestures as indexes of inner emotional states. As in our own culture, however, smiles come in many varieties, not all of them interpreted as friendly." Frank McAndrew, professor of psychology at Knox College in Galesburg, Ill., has done extensive research on facial expressions. He answers as follows: "Baring one's teeth is not always a threat. In primates, showing the teeth, especially teeth held together, is almost always a sign of submission. The human smile probably has evolved from that. "In the primate threat, the lips are curled back and the teeth are apart--you are ready to bite. But if the teeth are pressed together and the lips are relaxed, then clearly you are not prepared to do any damage. These displays are combined with other facial features, such as what you do with your eyes, to express a whole range of feelings. In a lot of human smiling, it is something you do in public, but it does not reflect true 'friendly' feelings--think of politicians smiling for photographers. "What is especially interesting is that you do not have to learn to do any of this--it is preprogrammed behavior. Kids who are born blind never see anybody smile, but they show the same kinds of smiles under the same situations as sighted people." 
McAndrew suggests several books that will be of interest to readers seeking more information on this topic: Non-Verbal Communication, edited by R. A. Hinde (Cambridge University Press, 1972); Emotion: A Psychoevolutionary Synthesis, by Robert Plutchik (Harper and Row, 1980); and Emotion in the Human Face, second edition, edited by Paul Ekman (Cambridge University Press, 1982).
<urn:uuid:3af07c55-7258-46ac-a3d2-070c181f11d6>
CC-MAIN-2013-20
http://www.scientificamerican.com/article.cfm?id=it-seems-that-in-almost-a
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.946999
515
2.921875
3
Plants can pull carbon dioxide, the planet-warming greenhouse gas, out of Earth’s atmosphere. But these aren’t the only living organisms that affect carbon dioxide levels, and thus global warming. Nope, I’m not talking about humans. Humble sea otters can also reduce greenhouse gases, by indirectly helping kelp plants. That finding is in the journal Frontiers in Ecology and the Environment. [Christopher C. Wilmers et al., Do trophic cascades affect the storage and flux of atmospheric carbon? An analysis of sea otters and kelp forests] Researchers used 40 years of data to look at the effect of sea otter populations on kelp. Depending on the plant density, one square meter of kelp forest can absorb anywhere from tens to hundreds of grams of carbon per year. But when sea otters are around, kelp density is high and the plants can suck up more than 12 times as much carbon. That’s because otters nosh on kelp-eating sea urchins. In the mammals’ presence, the urchins hide away and feed on kelp detritus rather than living, carbon-absorbing plants. So climate researchers need to note that the herbivores that eat plants, and the predators that eat them, also have roles to play in the carbon cycle. [The above text is a transcript of this podcast.]
<urn:uuid:988dc99a-1448-437e-9ce7-9be18141d267>
CC-MAIN-2013-20
http://www.scientificamerican.com/podcast/episode.cfm?id=sea-otters-fight-global-warming-12-09-14
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.867837
296
3.578125
4
Own an Actual Piece of an American Space Travel Icon The Space Shuttle Atlantis is a retired orbiter in NASA's Space Shuttle fleet. Its last mission was STS-135, the final flight before the Shuttle program ended. By the end of that mission, Atlantis had orbited the Earth 4,848 times, traveling nearly 126,000,000 miles in space, more than 525 times the distance from the Earth to the Moon. This photograph of Atlantis taking off contains a piece of cargo bay liner from the actual space shuttle. A Certificate of Authenticity is included. Dimensions: 8"x10" photograph, 13"x16" wooden frame.
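The mileage figures in the description are mutually consistent, as a quick check shows (assuming the commonly quoted mean Earth-Moon distance of roughly 238,900 miles, which is not stated in the original text):

```python
# Cross-check the Atlantis mileage claim: ~126,000,000 miles traveled,
# described as "more than 525 times" the Earth-Moon distance.
total_miles = 126_000_000
earth_moon_miles = 238_900  # assumed mean Earth-Moon distance in miles

trips = total_miles / earth_moon_miles
print(f"Equivalent one-way Earth-Moon trips: ~{trips:.0f}")  # ~527, i.e. "more than 525"
```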
<urn:uuid:e7c697cb-13c7-448a-83ea-4333bbfab128>
CC-MAIN-2013-20
http://www.scientificsonline.com/review/product/list/id/10980/?cat=444478&laser_color=83
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.867482
142
2.703125
3
OurDocuments.gov. Featuring 100 milestone documents of American history from the National Archives. Includes images of original primary source documents, lesson plans, teacher and student competitions, and educational resources. In 1866 the Russian government offered to sell the territory of Alaska to the United States. Secretary of State William H. Seward, enthusiastic about the prospects of American expansion, negotiated the deal for the Americans. Edouard de Stoeckl, Russian minister to the United States, negotiated for the Russians. On March 30, 1867, the two parties agreed that the United States would pay Russia $7.2 million for the territory of Alaska. For less than 2 cents an acre, the United States acquired nearly 600,000 square miles. Opponents of the Alaska Purchase persisted in calling it “Seward’s Folly” or “Seward’s Icebox” until 1896, when the great Klondike Gold Strike convinced even the harshest critics that Alaska was a valuable addition to American territory. The check for $7.2 million was made payable to the Russian Minister to the United States Edouard de Stoeckl, who negotiated the deal for the Russians. Also shown here is the Treaty of Cession, signed by Tsar Alexander II, which formally concluded the agreement for the purchase of Alaska from Russia.
<urn:uuid:8182aa95-78e2-42b3-a86d-30bb1a0fa8f8>
CC-MAIN-2013-20
http://www.scoop.it/t/on-this-day/p/3018291670/our-documents-check-for-the-purchase-of-alaska-1868
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.934167
279
4.03125
4
Filed under: Foundational Hand After studying the proportions of the Foundational Hand letters, the next step is to start writing the letters. Each letter is constructed rather than written. The letters are made up of a combination of pen strokes, which are only made in a top-to-bottom or left-to-right direction. The pen is never pushed up. When we studied the proportions of the Foundational Hand we could group the letters according to their widths. Now, we can group them according to the order and direction of the pen strokes. You may find it useful to look at the construction grid whilst studying the order and direction of the letters. The first group consists of the letters c, e, and o. These letters are based on the circle shape. This shape is produced with two pen strokes. Visualise a clock face and start the first stroke at approximately the 11, and finish it in an anti-clockwise direction at the 5. The second stroke starts again at the 11 and finishes in a clockwise direction at the 5 to complete the letter o. The first pen-stroke for the letters c and e is the same as the first stroke of the letter o. The second pen-stroke on the c and e is shorter and finishes around the 1 position on the imaginary clock face. Finally, the letter e has a third stroke, starting at the end of the second stroke and finishing when it touches the first stroke. The next group of letters are d, q, b and p. All these letters combine curved and straight pen strokes. When writing these letters it can be useful to think of the underlying circle shape, which your pen will leave or join at certain points depending upon which letter is being written. The first stroke of the b starts at the ascender height of the letter, which can be eyed in at just under half the x-height (the body height of letters with no ascender or descender) above the x-height line. Continue the ascender stroke of the b until it ‘picks up’ the circle shape, then follow round the circle until the pen reaches the 5 on the imaginary clock face. 
The second stroke starts on the first stroke, following the circle round until it touches the end of the first stroke. The letter d is similar to the c except it has a third stroke for the ascender, which touches the ends of the first and second strokes before finishing on the write-line. Letter p starts with a vertical stroke from the x-height down to the imaginary descender line, which is just under half the x-height below the write-line. The second and third strokes are curved, starting on the descender stroke and following round the imaginary circle. The letter q is almost the same as the d, except it has a descender stroke rather than an ascender stroke. Letters a, h, m, n and r also combine curved and straight pen strokes. Once again, think of the underlying circle shape, which your pen will leave or join at certain points depending upon the letter being written. The letter h consists of two pen strokes. The first is a vertical ascender stroke. The second stroke starts curved, follows the circle round, then leaves it and becomes straight. The letter n is produced exactly the same way as the letter h, except the first stroke is not so tall, as it starts on the x-height line. The first two pen strokes of the letter m are the same as those of the letter n. Then a third stroke is added which is identical to the second stroke. The letter r is also written the same way as the letter n, except the second stroke finishes at the point where the curve leaves the circle and the straight stroke would begin. The first stroke of letter a is the same as the second stroke of the letters h, m and n. The second stroke follows the circle. Finally, the third stroke starts at the same point as the second stroke, but is a straight line at a 30° angle and touches the first stroke. The next group of letters are l, u and t. These letters are straightforward. The letter l is the same as the first stroke of letter b. 
The letter u is also similar to the first stroke of letter b, except it starts lower down, on the x-height line. The second stroke starts on the x-height line and finishes on the write-line. Letter t has the same first stroke as letter u. It is completed by a second, horizontal stroke. The following letters k, v, w, x, y and z are made of at least one diagonal pen stroke. The letter k starts with a vertical ascender stroke, then a second, diagonal stroke which joins the vertical stroke. The final stroke is also diagonal; it starts where the first and second strokes meet and stops when it touches the write-line. If you look closely you will see it extends further out than the second stroke. This makes the letter look more balanced: if the ends of these two pen-strokes lined up, the letter would look as if it were about to fall over. Letter v is simply two diagonal strokes, and these are repeated to produce the letter w. The letter y is the same as the v except the second stroke is extended to create a descender stroke. Letter x is a little different: you need to create it in such a way that the two strokes cross slightly above the half-way mark of the x-height. This means the top part will be slightly smaller than the bottom, which gives the letter a better balance. Finally, in this group is letter z. The easiest way to produce this is with the two horizontal pen strokes, then join these two strokes with a diagonal pen-stroke to complete the letter. Now for the hardest letters: f, g and s. Out of these three letters, f is the simplest. It starts with a vertical ascender stroke, except this is not as tall as the other ascender strokes we have produced so far. This is because we have to allow for the second, curved stroke; the overall height of these two strokes should be the same as other letters that have an ascender. Finally, we need a horizontal stroke to complete the letter. Which will you find harder, the letter g or s? 
These are trickier because, unlike all the other letters we have written, they do not relate so well to the grid. The letter g is made of a circle shape, with an oval/bowl shape under the write-line. You can see the letter g is made of four pen-strokes. The first stroke is just like the first stroke of the letter o, for example, except it is smaller. The second stroke starts like the second stroke of the letter o, but when it joins the first stroke it continues and changes direction in the gap between the bottom of the shape and the write-line. The third stroke completes the oval shape. Finally, we have a little fourth stroke to complete the letter. The letter s is made up of three strokes. The first stroke is sort of an s shape! The second and third strokes complete the letter s. These are easier to get right than the first stroke because they basically follow the circle shape on our construction grid. The secret to this letter is to make both ‘ends’ of the first stroke not too curved. Because the other two strokes are curved, they will compensate and give the overall correct shape. Finally, we are left with the letters i and j, which are each made from one pen-stroke. You just need to remember to curve the end of the stroke when writing the letter j.
<urn:uuid:ebc9b632-c27d-4adb-85bd-b11864ab1adf>
CC-MAIN-2013-20
http://www.scribblers.co.uk/blog/tag/starting-calligraphy/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.946402
1,563
4.15625
4
When it comes to Spanish-style colonial charm, few cities in the Western Hemisphere can rival Old San Juan. But that doesn’t mean that Puerto Rico’s historical significance is confined within the capital city’s walls. Roughly 100 miles southwest of San Juan, the lovely town of San Germán holds the venerable distinction of being Puerto Rico’s second oldest city. Founded in 1573 and named after King Ferdinand the Catholic’s second wife, Germaine of Foix, San Germán became the island’s first settlement outside of San Juan. Its significance was such that the island was first divided into the San Juan Party and the San Germán Party. The town also became the focal point from which other settlements were established, thus earning the nickname ‘Ciudad Fundadora de Pueblos’ (roughly, Town-Founding City). But while San Juan went on to grow exponentially beyond the old city walls, and other cities like Ponce, Mayagüez, Arecibo or Caguas grew in population and importance, San Germán remained a sleepy colonial town and one of the best-kept secrets on the island. From a historical perspective, San Germán’s most famous landmark is Porta Coeli Church. One of the earliest examples of Gothic architecture in the Americas, the chapel was originally built as a convent in 1609 by the Dominican Order. It was reconstructed during the 18th century and expanded with a single-nave church of rubble masonry. Listed in 1976 in the U.S. National Register of Historic Places, Porta Coeli was restored by the Institute of Puerto Rican Culture and now houses the Museo de Arte Religioso, which showcases religious paintings and wooden carvings dating back to the 18th and 19th centuries. Porta Coeli overlooks quaint Plazuela Santo Domingo, an elongated, cobblestoned square enclosed by pastel-colored, colonial-style houses. A block away sits the town’s main square, Plaza Francisco Mariano Quiñones, where the operational church of San Germán de Auxerre is located. 
Both Porta Coeli and San Germán de Auxerre are part of the San Germán Historic District, which was also listed in the U.S. National Register of Historic Places in 1994 and includes about 100 significant buildings. Though San Germán has long since lost its 16th-century designation as Puerto Rico’s most important city after San Juan, the town is nonetheless a regional powerhouse in southwestern Puerto Rico, housing important institutions such as the main campus of Universidad Interamericana (Interamerican University). Sports enthusiasts will also appreciate that the city is considered “The Cradle of Puerto Rican Basketball,” as it is home to one of the island’s oldest and most successful basketball franchises, Atléticos de San Germán (San Germán Athletics).
<urn:uuid:6ab19afa-f944-43a1-ab9d-b8cf0b819be4>
CC-MAIN-2013-20
http://www.seepuertorico.com/blog/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.944115
618
3.171875
3
The basic element in solar modules The wafers are further processed into solar cells in the third production step. They form the basic element of the resulting solar modules. The cells already possess all of the technical attributes necessary to generate electricity from sunlight. Positive and negative charge carriers are released in the cells through light radiation, causing an electrical current (direct current) to flow. The "Cell" business division is part of the SolarWorld subsidiaries Deutsche Cell GmbH and SolarWorld Industries America LP. Here, solar cells are produced from the preliminary product, the solar silicon wafer. The group manufactures both monocrystalline and polycrystalline solar cells. The monocrystalline and polycrystalline solar cells are produced around the clock in one of the most advanced solar cell production facilities. The cells are produced in the clean rooms of Deutsche Cell GmbH using the most cutting-edge process facilities with the highest level of automation. Through the fully integrated production concept, it is possible to flexibly control the use of all auxiliary materials necessary for production and to continuously optimize material utilization during operation. This concept allows us to assure the unique quality standard of our solar cells and simultaneously reduce the loss rate compared to conventional processes. This not only lowers production costs, it adds to the expertise in solar cell production within the SolarWorld group. The wafer is first cleaned of all damage caused by cutting and then textured. A p/n junction is created by means of phosphorus diffusion, which makes the silicon conductive. In the next step, the phosphorus glass layer produced by diffusion is removed. An anti-reflection layer, which reduces optical losses and ensures electrical passivation of the surface, is added. Then the contacts are attached to the front, along with a rear contact. 
Finally, every individual solar cell is tested for its optical qualities, and its electrical efficiency is measured.
<urn:uuid:23db29b2-778d-4f3d-8483-c82ac082e7e9>
CC-MAIN-2013-20
http://www.solarworld.de/en/solar-power/from-sand-to-module/solar-cells/?cHash=7b2c190ccf04a8c15ab1640a810ffe72&webtoolPid=5062
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.92405
391
3.140625
3
by Staff Writers Chicago IL (SPX) Jan 11, 2013 Technologically valuable ultrastable glasses can be produced in days or hours with properties corresponding to those that have been aged for thousands of years, computational and laboratory studies have confirmed. Aging makes for higher-quality glassy materials because they have slowly evolved toward a more stable molecular condition. This evolution can take thousands or millions of years, but manufacturers must work faster. Armed with a better understanding of how glasses age and evolve, researchers at the universities of Chicago and Wisconsin-Madison raise the possibility of designing a new class of materials at the molecular level via a vapor-deposition process. "In attempts to work with aged glasses, for example, people have examined amber," said Juan de Pablo, UChicago's Liew Family Professor in Molecular Theory and Simulations. "Amber is a glass that has been aged millions of years, but you cannot engineer that material. You get what you get." de Pablo and Wisconsin co-authors Sadanand Singh and Mark Ediger report their findings in the latest issue of Nature Materials. Ultrastable glasses could find applications in the production of stronger metals and in faster-acting pharmaceuticals. The latter may sound surprising, but drugs with the amorphous molecular structure of ultrastable glass could avoid crystallization during storage and be delivered more rapidly in the bloodstream than pharmaceuticals with a semi-crystalline structure. Amorphous metals, likewise, are better for high-impact applications than crystalline metals because of their greater strength. The Nature Materials paper describes computer simulations that Singh, a doctoral student in chemical engineering at UW-Madison, carried out with de Pablo to follow up on some intriguing results from Ediger's laboratory. 
Growing stable glasses Several years ago, Ediger discovered that glasses grown by vapor deposition onto a specially prepared surface that is kept within a certain temperature range exhibit far more stability than ordinary glasses. Previous researchers must have grown this material under the same temperature conditions, but failed to recognize the significance of what they had done, Ediger said. Ediger speculated that growing glasses under these conditions, which he compares to the Tetris video game, gives molecules extra room to arrange themselves into a more stable configuration. But he needed Singh and de Pablo's computer simulations to confirm his suspicions that he had actually produced a highly evolved, ordinary glass rather than an entirely new material. "There's interest in making these materials on the computer because you have direct access to the structure, and you can therefore determine the relationship between the arrangement of the molecules and the physical properties that you measure," said de Pablo, a former UW-Madison faculty member who joined UChicago's new Institute for Molecular Engineering earlier this year. There are challenges, though, to simulating the evolution of glasses on a computer. Scientists can cool a glassy material at the rate of one degree per second in the laboratory, but the slowest computational studies can only simulate cooling at a rate of 100 million degrees per second. "We cannot cool it any slower because the calculations would take forever," de Pablo said. "It had been believed until now that there is no correlation between the mechanical properties of a glass and the molecular structure; that somehow the properties of a glass are 'hidden' somewhere and that there are no obvious structural signatures," de Pablo said. 
Creating better materials Ultrastable glasses achieve their stability in a manner analogous to the most efficiently packed, multishaped objects in Tetris, each consisting of four squares in various configurations that rain from the top of the screen. "This is a little bit like the molecules in my deposition apparatus raining down onto this surface, and the goal is to perfectly pack a film, not to have any voids left," Ediger said. The object of Tetris is to manipulate the objects so that they pack into a perfectly tight pattern at the bottom of the screen. "The difference is, when you play the game, you have to actively manipulate the pieces in order to build a well-packed solid," Ediger said. "In the vapor deposition, nature does it for us." But in Tetris and experiments alike, when the objects or molecules descend too quickly, the result is a poorly packed, void-riddled pattern. "In the experiment, if you either rain the molecules too fast or choose a low temperature at which there's no mobility at the surface, then this trick doesn't work," Ediger said. "Then it would be like taking a bucket of odd-shaped pieces and just dumping them on the floor. There are all sorts of voids and gaps because the molecules didn't have any opportunity to find a good way of packing." "Ultrastable glasses from in silico vapor deposition," by Sadanand Singh, M.D. Ediger and Juan J. de Pablo, Nature Materials. Funding was provided by the National Science Foundation and the U.S. Department of Energy. 
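The cooling-rate gap de Pablo describes is eight orders of magnitude, which a quick sketch makes concrete (the 300 K temperature drop is an illustrative value, not from the article):

```python
# Compare laboratory and simulated cooling rates from the figures above.
lab_rate = 1            # laboratory: ~1 degree per second
sim_rate = 100_000_000  # simulation: ~100 million degrees per second

delta_T = 300  # illustrative temperature drop in kelvin (assumed, not from the article)
lab_time = delta_T / lab_rate  # 300 s of real cooling
sim_time = delta_T / sim_rate  # only 3 microseconds of simulated time

print(f"Lab cooling: {lab_time:.0f} s; simulated cooling: {sim_time * 1e6:.0f} us")
print(f"Rate gap: {sim_rate // lab_rate:,}x")  # 100,000,000x
```

This is why simulated quenches cannot reach the slowly cooled states available in the lab, and why the vapor-deposition route, which sidesteps cooling altogether, was needed to reproduce ultrastable glasses in silico.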
Mission controllers received confirmation today that NASA's Dawn spacecraft has escaped from the gentle gravitational grip of the giant asteroid Vesta. Dawn is now officially on its way to its second destination, the dwarf planet Ceres. Dawn departed from Vesta at about 11:26 p.m. PDT on Sept. 4 (2:26 a.m. EDT on Sept. 5). Communications from the spacecraft via NASA's Deep Space Network confirmed the departure and that the spacecraft is now traveling toward Ceres. "As we respectfully say goodbye to Vesta and reflect on the amazing discoveries over the past year, we eagerly look forward to the next phase of our adventure at Ceres, where even more exciting discoveries await," said Robert Mase, Dawn project manager, based at NASA's Jet Propulsion Laboratory, Pasadena, Calif. Launched on Sept. 27, 2007, Dawn slipped into orbit around Vesta on July 15, 2011 PDT (July 16 EDT). Over the past year, Dawn has comprehensively mapped this previously uncharted world, revealing an exotic and diverse planetary building block. The findings are helping scientists unlock some of the secrets of how the solar system, including our own Earth, was formed. A web video celebrating Dawn's "greatest hits" at Vesta is available at http://www.nasa.gov/multimedia/videogallery/index.html?media_id=151669301 . Two of Dawn's last looks at Vesta are also now available, revealing the creeping dawn over the north pole. Dawn spiraled away from Vesta as gently as it arrived. It is expected to pull into its next port of call, Ceres, in early 2015. Dawn's mission is managed by JPL for NASA's Science Mission Directorate in Washington. Dawn is a project of the directorate's Discovery Program, managed by NASA's Marshall Space Flight Center in Huntsville, Ala. UCLA is responsible for overall Dawn mission science. Orbital Sciences Corp. in Dulles, Va., designed and built the spacecraft. 
The German Aerospace Center, the Max Planck Institute for Solar system Research, the Italian Space Agency and the Italian National Astrophysical Institute are international partners on the mission team. The California Institute of Technology in Pasadena manages JPL for NASA. More information about Dawn: http://www.nasa.gov/dawn http://dawn.jpl.nasa.gov
Beating Swords Into Plowshares: Converting Military Intercontinental Ballistic Missiles to Peaceful Space Launchers

Russian Submarine Novomoskovsk Launches Satellites From Barents Sea

The Russian nuclear submarine Novomoskovsk used a converted sea-launched ballistic missile to fire two small environmental research satellites into Earth orbit from beneath the Barents Sea in 1998. The unusual launch was the first time a commercial payload had ever been sent into orbit from a submarine, and the first commercial space launch in the history of the Russian Navy. The satellites, named TUBSAT, were launched on a Shtil rocket, a converted sea-launched ballistic missile (SLBM).

The Shtil Rocket

The Shtil rocket family is one of a range of space launch vehicles derived from decommissioned ballistic missiles offered for sale by Russia after the Cold War. The industrial design bureau Makeyev OKB had been formed by the former Soviet Union in the 1950s to produce a storable liquid-fuel rocket family. Back then, those missiles were known as R-11 for use on land and R-11FM for use by the navy. Makeyev went on to design and manufacture descendants of the R-11 family, including the infamous Scud-B missile and nearly all of Russia's submarine-launched ballistic missiles (SLBMs). In the 1990s, Makeyev and other OKBs marketed a variety of space rockets converted from surplus SLBMs, which could be launched from the ground, air, sea surface or underwater. During the Cold War, the military SLBM that would later become the Shtil space rocket was known as R-29RM and SS-N-23; its industrial designation was RSM-54. The Shtil is a three-stage liquid-fuel rocket. The satellites replaced the nuclear warhead inside a standard R-29RM re-entry vehicle atop the SS-N-23.
The submarine launch platform was Novomoskovsk K-407, a 667BDRM Delta-IV-class (Delfin-class) submarine of the Russian Northern Fleet's 3rd Flotilla. The Shtil's maiden flight took place July 7, 1998, while the submarine was in a Barents Sea firing range off the coast of the Kolskiy Peninsula at 69.3 degrees N by 35.3 degrees E. Prior to launch, the space flight had been viewed as a risk because a different one of the Northern Fleet's Delta-class submarines had suffered an accident in one of its rocket tubes on May 5, 1998. The Shtil's former warhead fairing housed an Israeli instrument package and the German satellites TUBSAT-N and TUBSAT-N1. The tiny satellites, referred to as nanosatellites, were built and operated by the Technische Universität Berlin (TUB). Each TUBSAT carried a small store-and-forward communications payload used to track transmitters placed on vehicles, migrating animals and marine buoys. The satellites were dropped off in elliptical orbits ranging from 250 to 500 miles above Earth, circling the planet every 96 minutes. TUBSAT-N, designated internationally as 1998-042A, weighed eighteen pounds, while TUBSAT-N1, designated 1998-042B, weighed seven pounds. Technically, putting satellites into low Earth orbit is only a small step from delivering long-range warheads. The Russians had been offering submarine launch as a commercial service for some time and previously had conducted sub-orbital test flights. The benefits of a submarine launch are safety and ease of putting a payload into a particular orbit; by comparison, there are safety restrictions on the directions toward which land-based rockets can be launched. On the other hand, these submarine-based missiles converted to space rockets are only big enough to launch small research satellites. They aren't able to launch very large and heavy communications satellites or interplanetary space probes.
However, the success of the Shtil launch could open up a valuable small-satellite niche in the space-launch market for the Russians. The Northern Fleet reportedly was paid $111,000 for the launch, which helped the submarine crew sharpen skills diminished by a shortage of training funds. Berlin Technical University's Transport and Applied Mechanics Department plans to launch two more TUBSATs.
In January 1992, a container ship near the International Date Line, headed to Tacoma, Washington from Hong Kong, lost 12 containers during severe storm conditions. One of these containers held a shipment of 29,000 bathtub toys. Ten months later, the first of these plastic toys began to wash up on the coast of Alaska. Driven by the wind and ocean currents, the toys continued to wash ashore over the next several years, and some even drifted into the Atlantic Ocean.

The ultimate cause of the world's surface ocean currents is the sun. The heating of the earth by the sun has produced semi-permanent pressure centers near the surface. When wind blows over the ocean around these pressure centers, surface waves are generated, transferring some of the wind's energy, in the form of momentum, from the air to the water. This constant push on the surface of the ocean is the force that forms the surface currents.

Around the world, there are similarities in the currents. For example, along the west coasts of the continents, the currents flow toward the equator in both hemispheres. These are called cold currents, as they bring cool water from the polar regions into the tropical regions. The cold current off the west coast of the United States is called the California Current. The opposite is true as well: along the east coasts of the continents, the currents flow from the equator toward the poles. These are called warm currents, as they bring warm tropical water north. The Gulf Stream, off the southeast United States coast, is one of the strongest currents known anywhere in the world, with water speeds up to 3 mph (5 kph). These currents have a huge impact on the long-term weather a location experiences. The overall climate of Norway and the British Isles is about 18°F (10°C) warmer in the winter than other cities located at the same latitude, thanks to the Gulf Stream.
While ocean currents are shallow circulations, there is a global circulation that extends to the depths of the sea, called the Great Ocean Conveyor. Also called the thermohaline circulation, it is driven by differences in the density of sea water, which is controlled by temperature (thermo-) and salinity (-haline). In the northern Atlantic Ocean, as water flows north it cools considerably, increasing its density. As it cools to the freezing point, sea ice forms, and the "salts" excluded from the frozen water make the water below more dense. This very salty water sinks to the ocean floor. The deep water is not static, but a slowly southward-flowing current. The route of the deep-water flow is through the Atlantic Basin, around South Africa, into the Indian Ocean, and on past Australia into the Pacific Ocean Basin. If water is sinking in the North Atlantic Ocean, it must rise somewhere else. This upwelling is relatively widespread; however, water samples taken around the world indicate that most of the upwelling takes place in the North Pacific Ocean. It is estimated that once the water sinks in the North Atlantic Ocean, it takes 1,000-1,200 years before that deep, salty bottom water rises to the upper levels of the ocean.
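The density dependence described above (colder and saltier water is denser, so it sinks) can be sketched with a simple linearized equation of state. The coefficient values below are rough, illustrative assumptions for this sketch, not the full formula oceanographers actually use:

```python
# Minimal linearized equation of state for seawater (illustrative
# coefficients -- not a real oceanographic formula such as TEOS-10).
RHO0 = 1027.0        # kg/m^3, reference density (assumed)
T0, S0 = 10.0, 35.0  # reference temperature (deg C) and salinity (psu)
ALPHA = 1.7e-4       # thermal expansion coefficient, per deg C (rough)
BETA = 7.6e-4        # haline contraction coefficient, per psu (rough)

def density(temp_c, salinity_psu):
    """Approximate seawater density in kg/m^3."""
    return RHO0 * (1.0 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

# Cooling northward-flowing Atlantic water increases its density...
print(density(10.0, 35.0))  # reference water
print(density(0.0, 35.0))   # the same water cooled toward freezing
# ...and brine rejection during sea-ice formation raises the salinity
# of the water below, making it denser still -- so it sinks.
print(density(0.0, 36.5))
```

Even with these toy numbers, the ordering comes out as the article describes: cold water is denser than warm, and cold salty water is densest of all.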
Michele Johnson, Ames Research Center

Astronomers have discovered a pair of neighboring planets with dissimilar densities orbiting very close to each other. The planets are too close to their star to be in the so-called "habitable zone," the region in a system where liquid water might exist on the surface, but they have the closest-spaced orbits ever confirmed. The findings are published today in the journal Science. The research team, led by Josh Carter, a Hubble fellow at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., and Eric Agol, a professor of astronomy at the University of Washington in Seattle, used data from NASA's Kepler space telescope, which measures dips in the brightness of more than 150,000 stars, to search for transiting planets. The inner planet, Kepler-36b, orbits its host star every 13.8 days and the outer planet, Kepler-36c, every 16.2 days. At their closest approach, the neighboring duo comes within about 1.2 million miles of each other. This is only five times the Earth-moon distance and about 20 times closer to one another than any two planets in our solar system. Kepler-36b is a rocky world measuring 1.5 times the radius and 4.5 times the mass of Earth. Kepler-36c is a gaseous giant measuring 3.7 times the radius and eight times the mass of Earth. The planetary odd couple orbits a star slightly hotter and a couple billion years older than our sun, located 1,200 light-years from Earth. To read more about the discovery, visit the Harvard-Smithsonian Center for Astrophysics and University of Washington press releases. Ames Research Center in Moffett Field, Calif., manages Kepler's ground system development, mission operations and science data analysis. NASA's Jet Propulsion Laboratory, Pasadena, Calif., managed the Kepler mission's development. Ball Aerospace and Technologies Corp.
in Boulder, Colo., developed the Kepler flight system and supports mission operations with the Laboratory for Atmospheric and Space Physics at the University of Colorado in Boulder. The Space Telescope Science Institute in Baltimore archives, hosts and distributes Kepler science data. Kepler is NASA's 10th Discovery Mission and is funded by NASA's Science Mission Directorate at the agency's headquarters in Washington.
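The distance comparison in the article is easy to check with a couple of lines of arithmetic. The average Earth-moon distance of about 238,900 miles is my figure, not the article's:

```python
# Sanity-check the stated Kepler-36 b/c closest approach against the
# Earth-moon distance.
closest_approach_miles = 1.2e6   # closest approach of Kepler-36b and c (from the article)
earth_moon_miles = 238_900       # average Earth-moon distance (assumed value)

ratio = closest_approach_miles / earth_moon_miles
print(f"Closest approach ~= {ratio:.1f} Earth-moon distances")  # roughly 5, as stated

# The orbital periods quoted (13.8 and 16.2 days) also put the pair
# within about one percent of a 7:6 period ratio.
period_ratio = 16.2 / 13.8
print(f"Period ratio ~= {period_ratio:.3f} (7/6 = {7/6:.3f})")
```

The numbers bear out the article's "only five times the Earth-moon distance" claim; the near-7:6 period ratio is a simple consequence of the two quoted periods.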
During the last 25 years, there has been debate about the value of corporate social responsibility (CSR), particularly as it relates to the rise of “ethical consumers.” These are shoppers who base purchasing decisions on whether a product’s social and ethical positioning — for example, its environmental impact or the labor practices used to manufacture it — aligns with their values. Many surveys purport to show that even the average consumer is demanding so-called ethical products, such as fair trade–certified coffee and chocolate, fair labor–certified garments, cosmetics produced without animal testing, and products made through the use of sustainable technologies. Yet when companies offer such products, they are invariably met with indifference by all but a select group of consumers. Is the consumer a cause-driven liberal when surveyed, but an economic conservative at the checkout line? Is the ethical consumer little more than a myth? Although many individuals bring their values and beliefs into purchasing decisions, when we examined actual consumer behavior, we found that the percentage of shopping choices made on a truly ethical basis proved far smaller than most observers believe, and far smaller than is suggested by the anecdotal data presented by advocacy groups. The trouble with the data on ethical consumerism is that the majority of research relies on people reporting on their own purchasing habits or intentions, whether in surveys or through interviews. But there is little if any validation of what consumers report in these surveys, and individuals tend to dramatically overstate the importance of social and ethical responsibility when it comes to their purchasing habits. As noted by John Drummond, CEO of Corporate Culture, a CSR consultancy, “Most consumer research is highly dubious, because there is a gap between what people say and what they do.” The purchasing statistics on ethical products in the marketplace support this assertion.
Most of these products have attained only niche market positions. The exceptions tend to be relatively rare circumstances in which a multinational corporation has acquired a company with an ethical product or service, and invested in its growth as a separate business, without altering its other business lines (or the nature of its operations). For example, Unilever’s purchase of Ben & Jerry’s Homemade Inc. allowed for the expansion of the Ben & Jerry’s ice cream franchise within the United States, but the rest of Unilever’s businesses remained largely unaffected. Companies that try to engage in proactive, cause-oriented product development often find themselves at a disadvantage: Either their target market proves significantly smaller than predicted by their focus groups and surveys or their costs of providing ethical product features are not covered by the prices consumers are willing to pay. (For a different perspective on these issues, see “The Power of the Post-Recession Consumer,” by John Gerzema and Michael D’Antonio, s+b, Spring 2011.) To understand the true nature of the ethical consumer, we set up a series of generalized experimental polling studies over nearly 10 years that allowed us to gather the social and ethical preferences of large samples of individuals. We then conducted 120 in-depth interviews with consumers from eight countries (Australia, China, Germany, India, Spain, Sweden, Turkey, and the United States). We asked them not just to confirm that they might purchase a product, but to consider scenarios under which they might buy an athletic shoe from a company with lax labor standards, a soap produced in ways that might harm the environment, and a counterfeit brand-name wallet or suitcase. They were also asked how they thought other people from their country might respond to these products — a well-established “projective technique” that often reveals more accurate answers than questions about the respondent’s direct purchases. 
And they were asked about their own past behavior; for example, all the interviewees admitted purchasing counterfeit goods at some point. The interviews asked participants explicitly about the ramifications of these ethical issues, and the inconsistencies between their words and their actions.
History of the Red Mass The “Red Mass” is an historical tradition within the Catholic Church dating back to the Thirteenth Century when it officially opened the term of the court for most European countries. The first recorded Red Mass was celebrated in the Cathedral of Paris in 1245. From there, it spread to most European countries. Around 1310, during the reign of Edward I, the tradition began in England with the Mass offered at Westminster Abbey at the opening of the Michaelmas term. It received its name from the fact that the celebrant was vested in red and the Lord High justices were robed in brilliant scarlet. They were joined by the university professors with doctors among them displaying red in their academic gowns. The Red Mass also has been traditionally identified with opening of the Sacred Roman Rota, the supreme judicial body of the Catholic Church. In the United States, the first Red Mass occurred in New York City on October 6, 1928. This Mass was celebrated at Old St. Andrew’s Church with Cardinal Patrick Hayes presiding. Today, well over 25 cities in the United States celebrate the Red Mass each year, with not only Catholic but also Protestant and Jewish members of the judiciary and legal profession attending the Mass. One of the better-known Red Masses is the one celebrated each fall at the Cathedral of St. Matthew the Apostle in Washington, D.C. It is attended by Justices of the Supreme Court, members of Congress, the diplomatic corps, the Cabinet, and other government departments and, sometimes, the President of the United States. All officials attend in their capacity as private individuals, rather than as government representatives, in order to prevent any issues over separation of church and state. For the most part the Red Mass is like any other Roman Catholic Mass. A sermon is given, usually with a message which has an overlapping political and religious theme. 
The Mass is also an opportunity for the Catholic Church to express its goals for the coming year. One significant difference between the Red Mass and a traditional Mass is that the prayers and blessings are focused on the leadership roles of those present, invoking divine guidance and strength during the coming term of court. It is celebrated in honor of the Holy Spirit as the source of wisdom, understanding, counsel and fortitude, gifts which shine forth preeminently in the dispensing of justice in the courtroom as well as in the individual lawyer’s office.
Reading Classic Literature

Classic literature, even though it may have been written fifty or a hundred years ago, still has the power to affect readers. The capacity of literature to educate and inspire people transcends time. Unfortunately, not everyone likes to read classic literature; sometimes you have to be mature enough to enjoy and comprehend these writings. Although we often read classic literature because we have to write a report for school, we can also read it for enjoyment. You may have heard of famous authors of classic novels on television and the internet; you can check out their writings and their books. If you really want to get into the habit of reading classic literature, you can start by reading 30 minutes every day. Keep a dictionary near you when reading classic novels, since the words used are often unfamiliar or their meanings have changed over time. To better understand the setting and the plot of a story, you can do a little background research on its era or time period. You can also research the background of the author. You really have to follow the structure of the story: most classic literature has complex storylines and plots, which can make the story hard to follow, and the character development is often very extensive. Seeing the overall theme of the story is very important, as is following the basic development of the characters and their story. There are literature companions that you can buy to help you get started; an example is the "Oxford Companion to Classical Literature." Another key to understanding classic literature is understanding the use of footnotes. These works are full of footnotes that reference the social and cultural elements of their time.
Many of us act as though we all see the same reality, yet the truth is we don't. Human beings have cognitive biases, or blind spots. Blind spots are ways that our mind becomes blocked from seeing reality as it is, blinding us to the real truth about ourselves in relation to others. Once we form a conclusion, we become blind to alternatives, even if they are right in front of our eyes. Emily Pronin, a social psychologist, along with colleagues Daniel Lin and Lee Ross at Princeton University's Department of Psychology, coined the term "bias blind spot," named after the visual blind spot.

Passing the Ball - Watch this Video

There is a classic experiment that demonstrates one level of blind spots, attributable to awareness and focused attention. When people are instructed to count how many passes the players in white shirts make on the basketball court, they often get the number of passes correct but fail to see the person in the black bear suit walking right in front of their eyes. Hard to believe, but true!

Blind Spots & Denial

However, the story of blind spots gets more interesting when we factor in the cognitive biases that come from our social need to look good in the eyes of others. When people operate with blind spots, coupled with a strong ego, they often refuse to adjust their course even in the face of opposition from trusted advisors, or incontrovertible evidence to the contrary. Two well-known examples of blind spots are Henry Ford and A&P.
Electrical research institute selects IPP as participant in carbon dioxide study

The Intermountain Power Project has been selected as one of five electric utilities in the United States and Canada to participate in a study of technology for capturing carbon dioxide emissions from coal-fueled electricity generation facilities. Conducted by the Electric Power Research Institute, the study will examine the impacts of retrofitting advanced amine-based post-combustion carbon dioxide capture technology to existing coal-fired power plants, EPRI representatives indicated. As global demand for electricity increases and regulators worldwide look at ways to reduce carbon dioxide emissions, post-combustion capture for new and existing power plants could be an important option. However, retrofitting such systems to an existing plant presents significant challenges, including limited space for new plant equipment, limited heat available for process integration, additional cooling water requirements and potential steam turbine modifications. "EPRI's analyses have shown carbon capture and storage will be an essential part of the solution if we are to achieve meaningful carbon dioxide emissions reductions at a cost that can be accommodated by our economy," pointed out Bryan Hannegan, vice president of generation and environment at the research institute. "Projects such as this, in which a number of utility companies come forward to offer their facilities and form a collaborative to share the costs of research, are critical to establishing real momentum for the technologies that we will need." In addition to IPP, power plants in Ohio, Illinois, North Dakota, and Nova Scotia will participate in the project. Each site offers a unique combination of unit sizes and ages, existing and planned emissions controls, fuel types, steam conditions, boilers, turbines, cooling systems and options for carbon dioxide storage, EPRI representatives noted.
The study - to be completed during 2009 - will provide the participants with valuable information applicable to their own individual power plants. A report for an individual operation will:

• Assess the most practical carbon dioxide capture efficiency configuration based on site constraints.
• Determine the space required for the carbon dioxide capture technology and the interfaces with existing systems.
• Estimate performance and costs for the post-combustion capture plant.
• Assess the features of the facility that materially affect the cost and feasibility of the retrofit.

"The participants in the Intermountain Power Project are committed to maintaining high environmental standards," said general manager James Hewlet. "This study will help us evaluate options for managing the emissions of greenhouse gases in the future. It is a meaningful step in our three-decade track record of continually improving the power plant's environmental performance."
Breast-Feeding a Sick Baby

If your baby becomes ill or develops a minor viral illness, such as a cold, flu, or diarrhea, it is best to continue your breast-feeding routine. Breast milk provides your baby with the best possible nutrition. If your baby is too ill to breast-feed, try cup-feeding. With this technique, you feed your baby collected breast milk. Take your baby to visit a health professional if he or she eats very little or not at all. Even if your baby does not have much appetite or is in the hospital on intravenous (IV) fluids, use a pump or hand express your milk on your normal schedule. This will help to maintain your milk production until your baby's appetite returns.

By: Healthwise Staff. Last revised: April 14, 2011. Medical review: Sarah Marshall, MD - Family Medicine; Kirtly Jones, MD - Obstetrics and Gynecology.
Friday is Earth Day. It’s a good time to consider how to preserve our environment. Have you ever wondered how long it takes a plastic grocery bag to disintegrate? The decomposition rate chart presented on the Commonwealth of Virginia’s website (http://www.deq.virginia.gov/recycle) shows the relative speed of organic and inorganic materials. A banana peel takes 2-5 weeks (put up to three around a rose bush for a healthy fertilizer), a newspaper 3-6 months (shred and add to your compost pile), while a plastic bag will last a decade, a plastic beverage container or tin can a century, and an aluminum can 2-5 centuries. To save space for future generations, recycling is the responsible thing to do. Where can you recycle your disposables? The state’s recycling website provides very helpful information for hard-to-dispose-of items, such as computers and automobile products. Learn how and where to properly dispose of a variety of electronics, including cellphones, used oil, oil filters, antifreeze, and old medications (do not flush them down the toilet). For more ideas about where to recycle what, visit Earth911 at http://earth911.com/. Another resource with much potential is Freecycle at http://www.freecycle.org/. This is a network, organized by zip code, for linking those who have something to dispose of with those who are looking for something. All items offered must be free. Read High Tech Trash: digital devices, hidden toxics, and human health by Elizabeth Grossman for an eye-opening explanation of the science, politics, and crimes in the collection of masses of e-waste. She follows the trail of toxins, including lead, mercury, chlorine and flame retardants, from mining and processing through disposal and dumping in India, China and Nigeria, where unprotected workers boil the refuse to retrieve useful fragments. Humanizing the impact of waste is Paolo Bacigalupi’s award-winning novel for teens, Ship Breaker.
In a futuristic world, teenaged Nailer scavenges copper wiring from grounded oil tankers for a living, but when he finds a beached clipper ship with a girl in the wreckage, he has to decide if he should strip the ship for its wealth or rescue the girl. This is action-packed and very well-written.

Eminent Harvard biologist E. O. Wilson, Pulitzer Prize-winning author of more than twenty works of nonfiction, has written his first novel, Anthill, about the interdependence of life in our biosphere. Raphael Semmes Cody, a lonely child of contentious parents (gentry v. redneck) in south Alabama, relishes summer freedom in a tract of old-growth longleaf pine forest and savanna on Lake Nokobee. He wanders off to observe salamanders and snakes and becomes enthralled by bugs ("every kid has a bug period," says Wilson. "Mine was especially intense and I never grew out of it."). His fascination becomes a lifelong focus, which guides his direction and purpose in mediating competing interests of environmentalists and business. The novel includes "The Anthill Chronicles", a story within the story, which is a riveting account of three colonies of ants, their wars, destruction, and survival, told from their point of view. The simplicity of this satisfying coming-of-age tale belies an admirable complexity in its portrayal of the interrelatedness of all life. Anthill bears comparison to Huck Finn and Homer’s Iliad in the recounting of epic journeys and the clash of civilizations. It is also very funny and full of sly observations about the "gray wool of the Confederacy" and "zircons in the rough". Anthill is destined to become a classic.

For more about caring for the Earth, visit www.tcplweb.org or call 988-2541.
New Zealand grasshoppers belong to the subfamily Catantopinae. A number of species are present including the common small Phaulacridium of the more coastal areas, the larger species of Sigaus of the tussock lands, and the alpine genera Paprides and Brachaspis, which include some quite large species. These inhabit the alpine areas of the South Island, some preferring scree and others tussock areas. They apparently survive the rigorous alpine winter conditions both as nymphs and as adults, and it is possible that they can withstand complete freezing. All species are plant feeders and lay batches of eggs or pods in short holes in the ground which they excavate with their abdomen. After hatching, the young nymphs moult four or five times before becoming adult. by Graeme William Ramsay, M.SC., PH.D., Entomology Division, Department of Scientific and Industrial Research, Nelson.
I’m still following the Assembly Primer for Hackers from Vivek Ramachandran of SecurityTube in preparation for Penetration Testing with BackTrack. In this review I’ll cover data types and how to move bytes, numbers, pointers and strings between labels and registers.

Variables (data/labels) are defined in the .data segment of your assembly program. Here are some of the available data types you’ll commonly use.

Data types in assembly; photo credit to Vivek Ramachandran

# Demo program to show how to use data types and MOVx instructions
.data
HelloWorld:
    .ascii "Hello World!"
ByteLocation:
    .byte 10
Int32:
    .int 2
Int16:
    .short 3
Float:
    .float 10.23
IntegerArray:
    .int 10,20,30,40,50

.bss
.comm LargeBuffer, 10000

.text
.globl _start
_start:
    nop
    # Exit syscall to exit the program
    movl $1, %eax
    movl $0, %ebx
    int $0x80

Moving numbers in assembly

Introduction to mov

This is the mov family of operations. By appending b, w or l you can choose to move 8 bits, 16 bits or 32 bits of data. To demonstrate these operations, we’ll be using the example above.

Moving a byte into a register

movb $0, %al

This will move the integer 0 into the lower 8 bits of the EAX register.

Moving a word into a register

movw $10, %ax

This will move the integer 10 into the lower 16 bits of the EAX register.

Moving a long into a register

movl $20, %eax

This will move the integer 20 into the 32-bit EAX register.

Moving a word into a label

movw $50, Int16

This will move the integer 50 into the 16-bit label Int16.

Moving a label into a register

movl Int32, %eax

This will move the contents of the Int32 label into the 32-bit EAX register.

Moving a register into a label

movb %al, ByteLocation

This will move the contents of the 8-bit AL register into the 8-bit ByteLocation label.

Accessing memory locations (using pointers)

In C we have the concept of pointers. A pointer is simply a variable that points to a location in memory.
Typically that memory location holds some data that is important to us and that’s why we’re keeping a pointer to it so we can access the data later. This same concept can be achieved in assembly.

Moving a label’s memory address into a register (creating a pointer)

movl $Int32, %eax

This will move the memory location of the Int32 label into the EAX register. In effect the EAX register is now a pointer to the data held by the Int32 label. Notice that we use movl because memory addresses on a 32-bit system are 4 bytes. Also notice that to access the memory address of a label you prepend the $ character.

Dereferencing a pointer (accessing the contents of a memory address)

Moving a word into a dereferenced location

movl $9, (%eax)

This will move the integer 9 into the memory location held in EAX. In other words, if this were C, %eax would be considered a pointer and (%eax) would be the way we dereference that pointer to change the contents of the location it points to. The equivalent in C would look something like this:

int Int32 = 2;
int *eax;
eax = &Int32;
*eax = 9;

The only difference in the C example is that we had to define eax as an int pointer before we could copy the address of Int32. In assembly we can just copy the address of Int32 directly into the EAX register, circumventing the need for an additional variable. But line 4 of this C example is the equivalent of the assembly example shown above. So to clarify one more time, EAX does not change at all in this example; EAX still points to the same location! However, the data at that location has changed. So if EAX contains the location of the Int32 label, then Int32 now contains 9. So it’s Int32 that has changed, not EAX. Notice that we use the parentheses to access the memory location stored in the register (dereference the pointer).

Moving a dereferenced value into a register

movl (%eax), %ebx

This will copy the value stored at the memory address held in EAX into the EBX register. In other words, EBX receives the data that EAX points to; EBX itself does not become a pointer.
Notice that to access the memory location stored in the register we’re again enclosing the register name in parentheses.

Moving strings in assembly

I can imagine that reading this you might be thinking, “hey, strings are just bytes of data so why can’t I just move them using the same instructions I just learned?” And the answer to that question is you can! The problem is that strings are oftentimes much larger. A string might be 1 byte, 5 bytes, or 100 bytes. And none of the mov instructions discussed above cover anything larger than 4 bytes. So let’s discuss the string operations that are available to alleviate the pains of copying large strings of data.

A key difference between the standard mov operations and the string series of movs, stos and lods operations is the number of operands. With mov, you specify the source and destination via 2 operands. However, with the movs instructions, the source and destination addresses are placed into the ESI and EDI registers respectively. And with stos and lods, the operations interact directly with the EAX register. This will become clearer with some examples.

The DF flag

DF stands for direction flag. This is a flag stored in the CPU that determines whether to increment or decrement a string’s memory address when string operations are called. When DF is 0 (cleared) the addresses are incremented. When DF is 1 (set) the addresses are decremented. In our examples the DF flag will always be cleared. The usefulness of the DF flag will make more sense in the examples.

Clearing the DF flag

cld

DF is set to 0. Addresses are incremented where applicable.

Setting the DF flag

std

DF is set to 1. Addresses are decremented where applicable.

In the example below, the following variables have been defined:

.data
HelloWorldString:
    .asciz "Hello World of Assembly!"
.bss
.lcomm Destination, 100

movs: Moving a string from one memory location to another memory location

source: %esi; should contain a memory address where the data to be copied resides; the data at this address is not modified, but the address stored in the %esi register is incremented or decremented according to the DF flag
destination: %edi; should contain a memory address where the data will be copied to; after copying, the address stored in the %edi register is incremented or decremented according to the DF flag

movsb: move a single byte
movsw: move 2 bytes
movsl: move 4 bytes

movl $HelloWorldString, %esi
movl $Destination, %edi
movsb
movsw
movsl

In this example, we first move the address of HelloWorldString into the ESI register (the source string). Then we move the address of Destination into EDI (the destination buffer). When movsb is called, it tells the CPU to move 1 byte from the source to the destination, so the ‘H’ is copied to the first byte of the Destination label. However, that is not the only thing that happens during this operation. You may have noticed that I pointed out how the addresses stored in the %esi and %edi registers are both incremented or decremented according to the DF flag. Since the DF flag is cleared, both %esi and %edi are incremented by 1 byte. But why is this useful? Well, what it means is that the next string operation to be called will start copying from the 2nd byte of the source string instead of the first byte. In other words, rather than copying the ‘H’ a second time, we’ll start by copying the ‘e’ in the HelloWorldString instead. This is what makes the movs series of operations far more useful than the mov operations when dealing with strings. So, as you might imagine, when calling movsw the next 2 bytes are copied and Destination now holds “Hel”. And finally the movsl operation copies 4 bytes into Destination, which makes it “Hello W”.
Of course, the memory locations held in both %esi and %edi have now been incremented by 7 bytes each. So the final values are:

%esi: $HelloWorldString+7
%edi: $Destination+7
HelloWorldString: "Hello World of Assembly!"
Destination: "Hello W"

lods: Moving a string from a memory location into the EAX register

source: %esi; should contain a memory address where the data to be copied resides; the data at this address is not modified, but the address stored in the %esi register is incremented or decremented according to the DF flag
destination: %eax; the contents of this register are discarded because the data is copied directly into the register, NOT to any memory address residing in the register; no incrementing or decrementing occurs because the destination is a register and not a memory location

lodsb: move a single byte
lodsw: move 2 bytes
lodsl: move 4 bytes

stos: Moving a string from the EAX register to a memory location

source: %eax; the contents of this register are copied, NOT the contents of any memory address residing in the register; no incrementing or decrementing occurs because the source is a register and not a memory location
destination: %edi; should contain a memory address where the data will be copied to; after copying, the address stored in the %edi register is incremented or decremented according to the DF flag

stosb: move a single byte
stosw: move 2 bytes
stosl: move 4 bytes

rep: Repeating an operation so you can move strings more easily

rep movsb

This will continue executing the movsb operation and decrementing the ECX register until it equals 0.
So if you wanted to copy a string in its entirety, you could follow this pseudo-code:

* set ESI to the memory address of the source string
* set EDI to the memory address of the destination string
* set ECX to the length of the source string
* clear the DF flag so ESI and EDI will be incremented for each call to movsb
* call rep movsb

movl $HelloWorldString, %esi
movl $DestinationUsingRep, %edi
movl $25, %ecx # because HelloWorldString contains 24 characters + a null terminator
cld
rep movsb

Here we have movsb being called 25 times (the initial value of ECX). Because movsb increments both the ESI and EDI registers you don’t have to concern yourself with the memory handling at all. So at the end of the example, the values are:

%esi: $HelloWorldString+25
%edi: $DestinationUsingRep+25
%ecx: 0
DF: 0
HelloWorldString: "Hello World of Assembly!"
DestinationUsingRep: "Hello World of Assembly!"

More to Come

I hope you enjoyed reviewing data types and mov operations. Stay tuned for more assembly tips!
It is not uncommon for patients to have increasing viral loads while on treatment. However, patients can have a disconnect: they may have detectable viral load and yet still be deriving benefit from their failing regimen. Their CD4 T-cell counts are not plummeting and are often still increasing. Overall, their general health remains well. These individuals continue with their daily life and routine, without any clinical consequences. The discordance between T-cells and viral load is often referred to as the Disconnect Syndrome. It is not a new disease, just an observation of the difference between viral and immunologic lab measures (viral load test vs. T-cell count). Usually, patients on antiretroviral treatment demonstrate a drop in viral load (often to undetectable levels) while improving their immune system with an accompanying T-cell rise. The disconnect syndrome of rising viral load along with stable or improving immune markers such as T-cells is more common among patients who have a longer history of being on several antiviral regimens. Viral drug resistance, which is associated with decreased efficacy of treatment, is not uncommon for these patients. They have fewer options than patients on their very first antiviral regimen. Usually, patients with an overtly failing regimen need to undergo changes in their antiviral treatment. This is a basic tenet of care for the chronically HIV-infected individual. This is done to halt progression of HIV disease, to preserve immune system function and to avoid further resistance development. However, in the unique situation of the disconnect syndrome, a question may be posed: Does every discordant patient merit a change in antiretroviral therapy? Sometimes a clinician may consider the fact that the viral load (HIV RNA) has not reached levels high enough to merit exposing their patient to further antiretroviral drugs. Many patients in this disconnect situation have already been exposed to multiple antiviral agents. 
Since undetectability does not mean one is cured, one must weigh the risks and benefits of modifying the regimen in order to lower the viral load. There exists a dilemma when considering altering a regimen in this unique situation. New antivirals to reduce viral load may forestall the emergence of more resistance mutations. On the other hand, one must consider that changing to yet another new regimen will reduce options for the future. This is critical in situations where new options for specific and heavily treated patients are not plentiful. Realistically, formulating a regimen for a heavily treated patient is often challenging because of the presence of multiple resistance mutations. Therefore the likelihood or durability of fully suppressing viral load with a new regimen is in question. Thus management of patients who are highly treatment experienced and who have a discordant response is a real quandary. It is believed that continuing the failing regimen further selects for resistance mutations, therefore further limiting future therapeutic options. But when there is stability in the elevated viral load together with increasing CD4 counts going yet higher, patients are obviously still deriving clinical benefit. No large prospective clinical trial has been performed to help provide insight for this situation. Patients who manifest a disconnect do not have undetectable viral loads, so by definition they generally have mutations or resistance. These mutations occur in the virus itself, usually in response to drugs used against it. The mutations in turn allow HIV to develop drug resistance. This means what it says: HIV can resist the drug or drugs, therefore making the medications less effective in fighting the virus. Individuals with a discordant response usually exhibit high numbers of mutations against the nucleoside drug class, which often includes the M184V mutation. 
This specific M184V mutation (the name refers to an amino acid change in HIV's viral gene strand) is best known as the tell-tale sign of 3TC (Epivir) resistance. But having the 184V resistance mutation has also been associated with sustained responses to antivirals, confirmed in several studies. Generally, cross-resistance would be a concern. Mutations that resist one drug may also resist another, especially one in the same drug class. This may lower the efficacy of new drugs which a patient has never taken before. However, if one has the 184V without other nucleoside mutations, it does not confer resistance to other nucleosides such as ddI, d4T, ddC or abacavir (Videx, Zerit, Hivid or Ziagen). Also, the M184V seems to result in re-sensitization of the virus to AZT (Retrovir) in patients who previously developed resistance to AZT. Finally, the presence of 184V in highly experienced patients is associated with a better antiviral response to the newest HIV agent, tenofovir (Viread). A complex interaction of viral and other factors is at play in discordant responses to HAART (highly active antiretroviral therapy). These include drug resistance mutations, replicative capacity and immunologic aspects. The initial status of the patient, including CD4 T-cell count and presence of the 184V mutation before antiviral treatment, is predictive of responses to HAART and development of discordance. A lower CD4 count is more predictive of discordance. The more damaged one's immune system has become prior to treatment, the more difficult it may be for the immune system to assist in suppressing viral load later. Often, T-cells remain stable or rise despite not obtaining optimally suppressed viral loads because HIV (though resistant) becomes weakened by antiviral drugs, impairing its ability to replicate. Thus the immune system is able to continue its restoration process. In other words, the antiviral treatments cause a decreased replicative capacity of the virus.
In fact, there is a firm relationship between the high numbers of mutations and decreased replicative capacity of virus from people with discordance. The disconnect syndrome can be explained in an alternative way. The M184V and other mutations may result in the virus becoming less fit than wild type. (Wild type is virus that has not mutated, seen usually in non-treated individuals.) The less fit the virus, the less able it is to overcome the effects of other antivirals. Additionally, the reverse transcriptase enzyme, which HIV uses to reproduce itself, is also crippled despite the presence of resistance, and thus becomes less able to help make copies of the virus. HIV cannot process its DNA strand (viral gene), and is therefore unable to replicate. Finally, development of increased mutations does not interfere with immune recovery during HAART. Measured by immune cell proliferation and response to interleukin 15 (a cytokine, a protein produced by immune cells, used in research to measure immune response), researchers found that discordant patients had responses similar to fully responding patients (Stephano Vella and colleagues, 9th Retroviruses Conference, Seattle, February 2002). Without attempting to advise whether patients in a disconnect situation should change or continue their treatment, the questions invoked here are placed on the table. The presence of primary resistance mutations can, oddly enough, be associated with some beneficial effects. However, developing resistance or discordance is not the preferred outcome. When a patient is facing this discordant predicament, the next path may not always be clear. Phenomena are occurring in the disconnect syndrome that are below the surface. A patient's decisions are often complicated by various confounding issues. This is compounded by the fact that data regarding the long-term outlook of patients continuing in this disconnect pattern is sorely lacking.
Some researchers have demonstrated higher progression rates while others concluded that the immunologic deterioration is delayed by an average of three years (Stephen Deeks and colleagues, University of California at San Francisco). However, large trials of disconnected patients who continue to maintain good clinical and immunologic response to HAART for a specified duration would provide greater insight into the risks. It seems that patients manifesting a disconnect who continue their treatment are stable clinically and not developing opportunistic infections. However, with the ongoing epidemic of resistance, it would be helpful to understand what it all means to a patient's health and longevity. Daniel S. Berger, M.D. is Medical Director for NorthStar Healthcare; Clinical Assistant Professor of Medicine at the University of Illinois at Chicago and editor of AIDS Infosource. He also serves as medical consultant and columnist for Positively Aware. Inquiries are welcomed by Dr. Berger; he can be reached at [email protected] or 773.296.2400.
In mathematics, hyperbolic functions are analogs of the ordinary trigonometric, or circular, functions. The basic hyperbolic functions are the hyperbolic sine "sinh" (typically pronounced /ˈsɪntʃ/ or /ˈʃaɪn/), and the hyperbolic cosine "cosh" (typically pronounced /ˈkɒʃ/), from which are derived the hyperbolic tangent "tanh" (typically pronounced /ˈtæntʃ/ or /ˈθæn/), etc., in analogy to the derived trigonometric functions. The inverse hyperbolic functions are the area hyperbolic sine "arsinh" (also called "asinh", or sometimes by the misnomer of "arcsinh") and so on. Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the equilateral hyperbola x^2 − y^2 = 1. Hyperbolic functions occur in the solutions of some important linear differential equations, for example the equation defining a catenary, and Laplace's equation in Cartesian coordinates. The latter is important in many areas of physics, including electromagnetic theory, heat transfer, fluid dynamics, and special relativity. Hyperbolic functions were introduced in the 18th century by the Swiss mathematician Johann Heinrich Lambert.

The hyperbolic functions are:

sinh x = (e^x − e^(−x))/2
cosh x = (e^x + e^(−x))/2
tanh x = sinh x / cosh x

Via complex numbers the hyperbolic functions are related to the circular functions as follows:

sinh x = −i sin(ix), cosh x = cos(ix)

where i is the imaginary unit defined by i^2 = −1. Note that, by convention, sinh^2 x means (sinh x)^2, not sinh(sinh x); similarly for the other hyperbolic functions when used with positive exponents. The hyperbolic cotangent also has alternative notations, though coth x is far more common. Hyperbolic sine and cosine satisfy the identity

cosh^2 x − sinh^2 x = 1

which is similar to the Pythagorean trigonometric identity. It can also be shown that the area under the graph of cosh x from A to B is equal to the arc length of cosh x from A to B. For a full list of integrals of hyperbolic functions, see list of integrals of hyperbolic functions. In the above expressions, C is called the constant of integration.
It is possible to express the above functions as Taylor series:

sinh x = x + x^3/3! + x^5/5! + x^7/7! + ⋯
cosh x = 1 + x^2/2! + x^4/4! + x^6/6! + ⋯

A point on the hyperbola xy = 1 with x > 1 determines a hyperbolic triangle in which the side adjacent to the hyperbolic angle is associated with cosh while the side opposite is associated with sinh. However, since the point (1,1) on this hyperbola is a distance √2 from the origin, the normalization constant 1/√2 is necessary to define cosh and sinh by the lengths of the sides of the hyperbolic triangle. Note also the property that cosh t ≥ 1 for all t.

The hyperbolic functions are periodic with complex period 2πi (πi for hyperbolic tangent and cotangent). The parameter t is not a circular angle, but rather a hyperbolic angle which represents twice the area between the x-axis, the hyperbola and the straight line which links the origin with the point (cosh t, sinh t) on the hyperbola. The function cosh x is an even function, that is, symmetric with respect to the y-axis. The function sinh x is an odd function, that is, −sinh x = sinh(−x), and sinh 0 = 0.

The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule states that one can convert any trigonometric identity into a hyperbolic identity by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term which contains a product of 2, 6, 10, 14, ... sinhs. This yields, for example, the addition theorems

sinh(x + y) = sinh x cosh y + cosh x sinh y
cosh(x + y) = cosh x cosh y + sinh x sinh y

the "double angle formulas"

sinh 2x = 2 sinh x cosh x
cosh 2x = cosh^2 x + sinh^2 x

and the "half-angle formulas"

sinh^2(x/2) = (cosh x − 1)/2
cosh^2(x/2) = (cosh x + 1)/2

The derivative of sinh x is cosh x and the derivative of cosh x is sinh x; this is similar to trigonometric functions, albeit the sign is different (i.e., the derivative of cos x is −sin x). The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers.
The graph of the function a cosh(x/a) is the catenary, the curve formed by a uniform flexible chain hanging freely under gravity.

From the definitions of the hyperbolic sine and cosine, we can derive the following identities:

e^x = cosh x + sinh x
e^(−x) = cosh x − sinh x

These expressions are analogous to the expressions for sine and cosine, based on Euler's formula, as sums of complex exponentials. Since the exponential function can be defined for any complex argument, we can extend the definitions of the hyperbolic functions also to complex arguments. The functions sinh z and cosh z are then holomorphic. Relationships to ordinary trigonometric functions are given by Euler's formula for complex numbers:

e^(ix) = cos x + i sin x

so that cos x = cosh(ix) and sin x = −i sinh(ix).
The road to 'civilisation' is paved with bad intentions. Infestation, in'fes•ta'tion n. the state of being invaded or overrun by pests or parasites. Do people inhabit the lands and forests that they have been living in for thousands of years or do they infest them? The answer to this no-brainer of a question might well lie at the root of the problem being faced by the Jarawas in the Andaman Islands today. The video showing the Jarawa women dancing on the Andaman Trunk Road, apparently for food, is just the latest manifestation of a malaise that is so deep that one might well argue that there is no hope for the Jarawa.

In 1965, the Ministry of Rehabilitation, Government of India, published an important document related to the Andaman & Nicobar Islands: 'The Report by the Inter Departmental Team on Accelerated Development Programme for A&N Islands.' The contents of the report and their purpose were evident in the title itself — it laid out the roadmap for the development of these islands and set the stage for what was to happen over the decades that have followed. This little known report of less than a 100 pages in size is remarkable for the insight it provides into the thinking and the mindset of the times. There is what one might call a shocker on every page of this document and here is just a sampling:

Page 26: …The Jarawas have been uniformly hostile to all outsiders with the result that about half the Middle Andaman is treated as a Jarawa infested (emphasis added) area which is difficult for any outsider to venture… With the present road construction and the colonisation of the forest fringes, friction has become more frequent, and no month passes without a case of attack by the Jarawas.

Page 69: The completion of the Great Andaman Trunk Road would go a long way to help in the extraction of forest produces...

A nation that had just fought its way out of the ignominy of being a colony was well on the way to becoming a coloniser itself.
And those that came in the way could only be pests or parasites infesting the forests that had valuable resources locked away from productive use. It is also pertinent to note here that in 1957 itself, more than 1,000 sq. km of these "Jarawa infested" forests of South and Middle Andaman had already been declared protected as a Jarawa Tribal Reserve under the provisions of the Andaman and Nicobar Protection of Aboriginal Tribes Regulation (ANPATR) — 1956. The 1965 report was in complete violation of, or was a result of complete ignorance of, this legal protection to the Jarawa and the forests that they have inhabited for thousands of years. The seeds that were sown then have bloomed into myriad noxious weeds today and if one knows this history, the latest video that has generated so much heat is not in the least bit surprising.

Much space in the media, both print and electronic, has been occupied in the last few days by a range of claims and counter claims — about the date of the video, about the police involvement in its making, the role of tour operators and about fixing blame and responsibility. A little known fact that lies at the root of the issue has been all but forgotten — the existence of the Andaman Trunk Road, where this infamous video was shot about three years ago. The Andaman Trunk Road that the 1965 report offered as a good way of extracting resources from the forests of the Jarawa had been ordered shut by a Supreme Court order of 2002. It's been a decade now and in what can only be called audacious defiance, the administration of this little Union Territory has wilfully violated orders of the highest court of the land. A series of administrators have come and gone but contempt for the Supreme Court remains. Whenever asked about the order, the administration has tried to hide behind technicalities of interpreting the court order, arguing that the court had never ordered the road shut in the first place.
They forget that in March 2003, a few months after the SC orders had been passed, they had themselves filed an affidavit with a plea to “permit the use/movement through the Andaman Trunk Road.” If it was not ordered shut, why the plea to keep it open? A few months later, in July 2003, the Supreme Court appointed Central Empowered Committee reiterated explicitly that the court orders include those for the closure of the ATR in those parts where it runs through the forests of the Jarawa Tribal Reserve. The A&N administration has clearly violated the court's order both in letter and in spirit. It is a spirit that was evocatively articulated by Dr. R.K. Bhattacharchaya, former Director of the Anthropological Survey of India, in a report he submitted to the Calcutta High Court in 2004. “The ATR”, he said, “is like a public thoroughfare through a private courtyard… In the whole of human history, we find that the dominant group for their own advantage has always won over the minorities, not always paying attention to the issue of ethics. Closure of the ATR would perhaps be the first gesture of goodwill on part of the dominant towards an acutely marginalized group almost on the verge of extinction”. The video in all its perversity offers us another opportunity, when all others in the past have been brushed aside either due to ignorance, arrogance or then sheer apathy. It's still not too late to make that ‘gesture of goodwill' because otherwise there will be many more such videos down the years and much worse will follow. The lessons from history are very clear on this. And it will hardly be a consolation that a few people will be left saying we told you so. (The writer is associated with Kalpavriksh, one of the three NGOs whose petition before the Supreme Court resulted in orders for the closure of the Andaman Trunk Road in 2002. He is also the author of Troubled Islands — Writings on the indigenous peoples and environment of the A&N Islands.)
<urn:uuid:d3dd668b-c291-441d-b935-9480a07d7d9b>
CC-MAIN-2013-20
http://www.thehindu.com/opinion/op-ed/because-andamans-forests-are-jarawa-infested/article2811842.ece
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.969717
1,263
2.59375
3
I was taught at medical school to remember that “what’s common, is common”. So if a patient comes into clinic with a sore throat, the chances are it is simply that — a sore throat. Picking out the unusual diagnoses from seemingly common symptoms is one of the challenges of medicine, particularly when you are in general practice. Back pain is a really common symptom to see in clinic and usually has no sinister cause. But there are some unusual diagnoses that present as back pain for doctors and patients to be mindful of. While lower back pain can be caused by physical injury, or slipped discs, for five per cent of sufferers, the pain is due to inflammation caused by a form of arthritis normally seen in young people. Since back pain is so common, the problem can go undiagnosed for years and lead to long-term damage. But it can be treated. Called axial spondyloarthritis, or axial SpA, it primarily affects the sacroiliac joint (the junction between the spine and the pelvis). Identifying the difference between this pain and mechanical back pain is vital, and there are a range of signals to look for. Typically, symptoms first appear in young people in the prime of their lives — this alone should ring an alarm bell — and often they have back pain or stiffness that has lasted for over three months. This is certainly a trigger to go to the GP. It is common for axial SpA patients to experience pain in the morning which improves with exercise but then worsens with rest. If these symptoms sound like a familiar pattern, a trip to the doctor will be necessary for some blood tests as well as a referral to get an MRI scan. Diagnosis of this condition has been known to take as long as 10 years for some sufferers, but vigilance from doctors in looking for uncommon causes of a common symptom, should hopefully improve the care that sufferers receive.
<urn:uuid:bce777db-55c4-4d61-aa98-dd71307de129>
CC-MAIN-2013-20
http://www.thejc.com/print/89562
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.967141
405
2.65625
3
Solar-powered ion drive asteroid probe set for launch Not much va-va-voom, but very economical NASA has confirmed that the "Dawn" space probe to the asteroid belt will indeed launch on Saturday, ending speculation that the mission might be delayed. After launching, Dawn will spend four years in transit to the asteroids, circling the Sun twice and gaining a "gravity assist" on the way by making a close approach to Mars. Planned voyage of the Dawn space probe In 2011, Dawn will go into orbit around Vesta, one of the larger rock-like asteroids in the Belt. After around five months studying Vesta, the probe will depart in early 2012 and complete most of a further circuit round the Sun, finally catching up with Ceres in 2015 and going into orbit around it. Ceres is a large example of the other main asteroid type, thought to be largely icy in composition. The Dawn mission will be the first to go into orbit around a Belt asteroid, and also the first to orbit two different bodies. This unusual flight profile would be all but impossible, according to NASA, were it not for the spacecraft's innovative propulsion system. Dawn will lift off from Earth aboard a conventional, chemical Delta II rocket, but its journey to the asteroids will be propelled - apart from the helping hand from Martian gravity - by means of an ion drive. In an ion drive, thrust is still provided by throwing stuff out of the exhaust just as with a regular rocket. The difference is that rather than chemicals burning and expanding to throw themselves out of a combustion chamber, Dawn's Xenon propellant is squirted out of its back end using electrical power generated by solar panels. As one might expect, this produces an exceptionally feeble thrust; equivalent to about 0.02 lb. The clever bit is that the ion engine achieves this thrust very economically, using only a tiny amount of propellant, because it accelerates the Xenon-plasma exhaust to such a high velocity. 
Dawn takes ages to squirt out a given amount of fuel, but when the ion drive finally does so it has achieved much more with it than a chemical rocket could have done. All this makes the multi-asteroid flightplan achievable within the NASA budget. Now that NASA has addressed some technical concerns, it appears that Dawn's Saturday launch is a go, and then it's just four years until it gets to Vesta. ®
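The economy the article describes falls out of two textbook relations: thrust is mass flow rate times exhaust velocity, and the Tsiolkovsky rocket equation fixes how much propellant a given velocity change costs. A minimal sketch of the trade-off, using illustrative exhaust velocities and a made-up mission budget rather than Dawn's actual figures:

```python
# Illustrative sketch: why a feeble ion drive beats a chemical rocket on
# propellant economy. Thrust F = mdot * v_e, and the Tsiolkovsky rocket
# equation gives the propellant fraction needed for a given delta-v:
#   m_prop / m0 = 1 - exp(-delta_v / v_e)
# All numbers below are assumptions for illustration, not mission data.
import math

def propellant_fraction(delta_v, v_exhaust):
    """Fraction of initial mass that must be propellant (Tsiolkovsky)."""
    return 1.0 - math.exp(-delta_v / v_exhaust)

delta_v = 10_000.0      # m/s, an assumed deep-space mission budget
v_chemical = 4_500.0    # m/s, typical chemical-rocket exhaust velocity
v_ion = 30_000.0        # m/s, typical xenon ion-thruster exhaust velocity

chem = propellant_fraction(delta_v, v_chemical)
ion = propellant_fraction(delta_v, v_ion)
print(f"chemical: {chem:.0%} of launch mass is propellant")
print(f"ion:      {ion:.0%} of launch mass is propellant")
```

With these assumed numbers, roughly 89 per cent of a chemical craft's launch mass would have to be propellant to cover the budget, versus under 30 per cent for the ion drive: the xenon goes out slowly, but each kilogram of it buys far more velocity.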
<urn:uuid:dc8a9cff-ba2c-40af-89ae-143c7b188d0a>
CC-MAIN-2013-20
http://www.theregister.co.uk/2007/07/04/solar_powered_spaceship_is_go/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.944022
496
2.640625
3
Cloudy outlook for climate models More aerosols - the solution to global warming? Climate models appear to be missing an atmospheric ingredient, a new study suggests. December's issue of the International Journal of Climatology from the Royal Meteorological Society contains a study of computer models used in climate forecasting. The study is by joint authors Douglass, Christy, Pearson, and Singer - of whom only the third mentioned is not entitled to the prefix Professor. Their topic is the discrepancy between troposphere observations from 1979 to 2004 and what computer models have to say about the temperature trends over the same period. While focusing on tropical latitudes between 30 degrees north and south (mostly to 20 degrees N and S), because, they write - "much of the Earth's global mean temperature variability originates in the tropics" - the authors nevertheless crunched through an unprecedented amount of historical and computational data in making their comparison. For observational data they make use of ten different data sets, including ground and atmospheric readings at different heights. On the modelling side, they use the 22 computer models which participated in the IPCC-sponsored Program for Climate Model Diagnosis and Intercomparison. Some models were run several times, to produce a total of 67 realisations of temperature trends. The IPCC is the United Nations' Intergovernmental Panel on Climate Change and published their Fourth Assessment Report [PDF, 7.8MB] earlier this year. Their model comparison program uses a common set of forcing factors. Notable in the paper is a generosity when calculating a figure for statistical uncertainty for the data from the models. In aggregating the models, the uncertainty is derived from plugging the number 22 into the maths, rather than 67. 
The effect of using 67 would be to confine the latitude of error closer to the average trend - with the implication of making it harder to reconcile any discrepancy with the observations. In addition, when they plot and compare the observational and computed data, they also double this error interval. So to the burning question: on their analysis, does the uncertainty in the observations overlap with the results of the models? If yes, then the models are supported by the observations of the last 30 years, and they could be useful predictors of future temperature and climate trends. Unfortunately, the answer according to the study is no. Figure 1 in the published paper available here [PDF] pretty much tells the story. Douglass et al. Temperature time trends (degrees per decade) against pressure (altitude) for 22 averaged models (shown in red) and 10 observational data sets (blue and green lines). Only at the surface are the mean of the models and the mean of observations seen to agree, within the uncertainties. While trends coincide at the surface, at all heights in the troposphere, the computer models indicate that higher trending temperatures should have occurred. And more significantly, there is no overlap between the uncertainty ranges of the observations and those of the models. In other words, the observations and the models seem to be telling quite different stories about the atmosphere, at least as far as the tropics are concerned. So can the disparities be reconciled?
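The 22-versus-67 point is purely about the standard error of the mean, which shrinks as 1/sqrt(N): plugging the smaller N into the maths widens the error interval and makes any mismatch with observations easier to excuse. A toy sketch with invented trend numbers (not the paper's data):

```python
# Toy illustration of the 22-vs-67 choice: the standard error of the mean
# trend is s / sqrt(N), so using N = 22 (models) instead of N = 67
# (realisations) yields a wider, more forgiving uncertainty band.
# The trend values below are randomly generated for illustration only.
import math
import random

random.seed(1)
trends = [random.gauss(0.20, 0.08) for _ in range(67)]  # deg C/decade, fake

def std_error(values, n):
    """Standard error of the mean, using n degrees of freedom for sqrt(N)."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return math.sqrt(var) / math.sqrt(n)

se_22 = std_error(trends, 22)   # the paper's generous choice
se_67 = std_error(trends, 67)   # one entry per realisation
print(f"SE with N=22: {se_22:.3f}, with N=67: {se_67:.3f}")
```

Since the sample variance is the same either way, the ratio of the two error bars is exactly sqrt(67/22), about 1.75: that factor is the "generosity" the authors note, before the interval is doubled again for the plots.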
<urn:uuid:ee3fdce7-621c-4026-a762-105cef4462ab>
CC-MAIN-2013-20
http://www.theregister.co.uk/2007/12/27/anton_wylie_climate_models/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.933174
646
3.265625
3
Education is the transmission of civilization.~ William James Durant (1885–1981) and Ariel Durant, born Chaya Kaufman (1898 - 1981) After the murder of Benazir Bhutto, Pakistan dominated the news for a week or so but that has now faded. We were regaled with speculation about the danger to the world posed by an unstable, nuclear-armed, undemocratic state where fundamentalist Muslims find it easy to integrate into society. It was in Pakistan that the Taliban (which took over the government of Afghanistan and provided shelter to Al Qaeda) were originally able to organize and build a foundation. A recent analysis on television suggests that the Taliban are direct descendants of protestors who instigated the mutiny against British imperial rule and Christian missionary zeal in 19th century India. But Pakistan is also seen as a major bulwark in the "War on Terror" and has been the recipient of $5bn in US aid since the attack on the twin towers. Now the excitement has died down, Pakistan has dropped out of media consciousness but its problems remain. And one of its greatest problems is education. Like the rest of public life in Pakistan, the education system is subject to endemic corruption. And this should trouble the rest of the world because education in Pakistan is being exploited by fundamentalists in their drive to recruit new followers. When it was provided with American aid on a massive scale, Pakistan promised to devote some of the money to improving its education system. The World Bank has also allocated a separate $300mn specifically to support schools and colleges – but fearing that the money will disappear into a sink of corruption, it is reluctant to disburse the funds until proper control systems are put in place. These fears are justified. American officials supervising military aid suspect that invoices for supplies are inflated by as much as 30%, enabling millions of dollars to disappear. 
And in the education system, officials estimate that corruption taps 15% of intended expenditure. Little has been done to improve education in Pakistan. In the Punjab, for example, there are 63,000 state schools, of which:
- 5,000 (8%) have been condemned as dangerous structures.
- 26,000 (41%) have no electricity.
- 16,000 (25%) have no toilets.
Many teachers see their jobs as sinecures and don't turn up to work, while local inspectors distrust the information provided by the ministry of education. Few schools have enough classrooms and some resort to teaching in the open air under trees (possibly safer than sitting in a classroom with cracks in the walls and an unstable roof). Often they have to cope with only one quarter of the desks required. Understandably, parents are reluctant to send their children to these underfunded and under-supervised institutions. Two groups of educators have moved in to fill this vacuum: private schools and religious madrassas. It is the madrassas that have attracted most attention and generated hysteria in the press both inside and outside Pakistan. Some of them are run by fundamentalists, preach Jihad, and groom their students to be revolutionary fighters and suicide bombers. The media in Pakistan and across the world, supported by wild estimates made by Pakistani police, have exaggerated the scale of this problem. A more restrained study by the World Bank and Harvard University has estimated that the true numbers of children being educated in madrassas represent a little less than 1% of children in the 5-19 age group. These figures must be put into context:
- 33% of children are enrolled in state schools.
- a further 12% are enrolled in private schools.
- 87% of children enroll in primary education, but numbers fall sharply at secondary level.
- literacy rates are 63% for men and 36% for women, showing that the standard of education is poor (in comparison, the figures for India are 76% and 54%). 
Wealthily endowed madrassas The development of the private sector is striking. Private schools now educate one third as many children as those educated in the state sector. The population values education and is willing to make sacrifices to give their children the schooling which the state fails to provide. Much has been said about madrassas (wealthily endowed by Saudi money) providing the only chance for the poorest Pakistani families. But private schools are cheap and all but the very poorest can afford them. So is there nothing to worry about? Indeed no. There are dangers and they are serious ones. The WB/Harvard study showed that, while in most areas of Pakistan madrassas account for less than 1% of school enrolments, in the so-called tribal areas (where Pashto is the main language and there are strong links to Afghanistan) the percentage rises to over 7%. These are the areas the state finds most difficult to control and, if madrassas do have a malign influence, it is here that it would be easiest to foment and develop an anti-democratic movement. Children brought up to hate Muslims The survey also estimated that there are about 175,000 students enrolled in madrassas. If we make a guess that 5% of madrassas are run by fundamentalists, this still means that almost 9000 children are being brought up to hate Muslims who do not meet their own "high" standards. The theory propagated by the extremists is this. The only acceptable law is Sharia law and this should be interpreted strictly (hence the enforcement of headscarves and the like for women … among much worse horrors). It is the duty of good Muslims to create a state which accepts and enforces Sharia. Government leaders who do not concur are the enemy. Those who conspire with the West are the enemy. Muslims who support these governments are the enemy. In this way, the fundamentalist madrassas create a justification for killing other Muslims. The suicide bombers are given a target and a cause. 
It has, however, very little to do with the West; the majority of victims are much closer to home. But a flow of almost 9000 young men and women (possibly more – other estimates are higher and my guess of 5% may be optimistic) is more than enough to recruit suicide bombers and build momentum for the movement. Dodging and weaving So let us return to Benazir Bhutto. She and her husband spent the years since she was ousted from power dodging and weaving to avoid convictions for corruption and embezzlement. Indeed, she was convicted of money laundering by a Swiss magistrate, while a British judge found grounds for a prosecution against her and/or her husband for purchasing an estate in the English home counties with the fruits of embezzlement. Despite this track record, the West was keen to have Bhutto as a friend in Pakistan because of the fear that a nation with its own nuclear weapons could fall into the hands of someone worse. The US has provided huge amounts of cash, some of which has been used to buy delivery systems for these weapons of mass destruction, and has only recently begun to worry about whose finger might be on the button. Bhutto provided some hope of a friend to the West and she certainly looked the part, acting like a civilized politician, speaking excellent English, and sending her son to Oxford. She had plenty of support in Pakistan (the first attempt on her life killed more than 130 people because her rally attracted so many supporters). But it is almost certain that she, like other political leaders in Pakistan, was a thief. Some of the money she stole, and the money that leaked away into the pockets of bureaucrats and politicians, was supposed to have been spent on education, on the rebuilding of dangerous schools, and on ensuring that teachers turned up to do their jobs. The public in Pakistan wants education and many people are willing to pay for it. 
Some of them, however, send children to be taught hatred by cynical clerics who tell them that martyring themselves while killing the opponents of whichever fundamentalist branch of Islam they represent will earn them a place in paradise.
<urn:uuid:ca196da4-3cdc-489d-9476-bb650b4c10d6>
CC-MAIN-2013-20
http://www.thinkhard.org/2008/01/index.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.973039
1,648
2.890625
3
Over 8,000 websites created by students around the world who have participated in a ThinkQuest Competition. Compete | FAQ | Contact Us Men in White Our website teaches the history of one of the most well known groups in the United States. The Ku Klux Klan is a group that is prejudiced and unjust. We have covered the history of this group because we just read a book in class that had the Ku Klux Klan in it. We hope that people do not think we support this group because we do not! 12 & under History & Government
<urn:uuid:6c9975c9-4ead-475f-8bed-57df23d2c261>
CC-MAIN-2013-20
http://www.thinkquest.org/pls/html/f?p=52300:100:3420835272614713::::P100_TEAM_ID:501583441
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.945571
113
3.015625
3
Over 8,000 websites created by students around the world who have participated in a ThinkQuest Competition. Compete | FAQ | Contact Us The Kyoto Protocol: Changing Climates This site makes concepts like climate change and the Kyoto Protocol interesting and fun for teenagers. It contains information on climate change, the history and contents of the Kyoto Protocol and visual and oral sources of information. The site aims to help teenagers understand the current and important issue as clearly and enjoyably as possible. 19 & under Science & Technology > Earth Science History & Government > International Politics
<urn:uuid:09d2e0eb-edd6-4c39-a9a9-2c728bec7ac3>
CC-MAIN-2013-20
http://www.thinkquest.org/pls/html/f?p=52300:100:4425064156679575::::P100_TEAM_ID:501580413
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.823303
115
2.640625
3
Named as a place to stay by & National Geographic Episode 13 Season 4 WEATHER IN TRINITY Trinity is surrounded by rocky beaches perfect for searching out treasures such as colourful seashells, sea glass and old clay pipes. Visitors who come in late spring or early summer just might get the chance to witness "the capelin run". This is when thousands of silvery capelin roll up on the beaches to spawn. Not only is it an amazing sight to behold, but it also serves as an indication that the whales, which feed on capelin, will soon be arriving! The Bald Eagle: Bald Eagles are no stranger to Trinity Bay. Visitors can spot eagle's nests while hiking or taking a boat tour. These magnificent birds are often spotted soaring over the town of Trinity and are hard to miss with wingspans sometimes reaching over 6 feet. Immature eagles are brown and do not develop their white heads and tails until they are five years old. Because Trinity Bay is a popular nesting ground, bird watchers can see these majestic creatures at their different ages. The Atlantic Puffin: Atlantic Puffins spend most of their time at sea but return to land during spring and summer to form breeding colonies. One such breeding colony is located in Elliston on route 238 and is just a short drive from Trinity. These white and black birds with colourful beaks are only 10 inches tall and flap their wings at over 400 beats per minute. It can often be difficult to photograph a puffin as it flies, but plenty can be spotted resting on land tending their burrows. The Arctic Tern: This bird is known for making the longest annual migration in the animal kingdom, guaranteeing itself two summers, lending it the nickname "bird of the sun". The Arctic Tern can often be spotted diving for small fish and crustaceans around Trinity's beaches and directly in front of the Twine Loft Restaurant. 
Bird watching Links: Eastern Newfoundland Birdfinder The province's tourism site provides information about birds, seabird ecological reserves, and private tour operators. The Natural History Society of Newfoundland and Labrador's site has a birding checklist for the province, and occasionally information about birding activities. The province's Parks and Natural Areas site lists and describes seabird ecological reserves. This Google group site is where local birders report recent sightings of interest.
<urn:uuid:d39dc008-9771-4524-8e8c-9d65199c8afd>
CC-MAIN-2013-20
http://www.trinityvacations.com/contact-us/frequently-asked-questions/artisan-inn/shoulder-season-may-october/birds-beaches/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.917629
517
2.546875
3