SciNote Spotlight: Libutron

SciNote Spotlight is a weekly post highlighting active and interesting science blogs on tumblr.

This week’s SciNote Spotlight blog of the week is Libutron.

About the Author:

I am a biologist by profession but a naturalist at heart, and I am convinced that every effort to conserve the planet’s biodiversity rests, first of all, on information about and knowledge of species and their environment. We cannot effectively conserve what is not known, and as a society we will make no effort toward conservation if we are not informed and motivated. That’s why this blog is dedicated to showcasing and providing scientific data on all types of organisms, and to addressing topics related to the physical context in which life occurs (geology, mineralogy, meteorology, landscape), as well as artistic expressions reflecting the close relationship between humans and nature.

Featured Posts: 

Nomia iridescens: a bee with colourful abdominal stripes

Agalychnis annae: an extraordinary and endangered frog

View all original posts from Libutron here

Question:
How did we get computers to have a “memory”? I mean, if computers are just made up of chips of metal and electricity, how can they store information?
Asked by anonymous

Answer:
Computers have been around for quite some time— perhaps not in the form we typically think of, but they have been there. At first, it was easy to conceive of a mechanical way to store information; the problem came when we began demanding more of our computers and switched to electronic and magnetic components.
The main principle is storing information in one of two states: either 1 or 0. In terms of electrical components, this is simple: you either have a component in the “on” state or “off” state. The ways to process that information, save it, optimize the process, and make it fully automated vary immensely. 
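To make that two-state principle concrete, here is a minimal sketch (in Python, added for illustration and not part of the original answer) showing how a piece of text can be reduced to a pattern of 1s and 0s and rebuilt again; every storage technology described below is, at bottom, a different physical way of holding on to patterns like this.

```python
# A tiny illustration of the two-state principle: any data can be broken
# down into a pattern of 1s and 0s and faithfully rebuilt from it.

def to_bits(text):
    """Turn each character into its 8-bit binary pattern."""
    return [format(byte, "08b") for byte in text.encode("ascii")]

def from_bits(bits):
    """Rebuild the original text from those bit patterns."""
    return bytes(int(b, 2) for b in bits).decode("ascii")

bits = to_bits("Hi")
print(bits)             # ['01001000', '01101001']
print(from_bits(bits))  # Hi
```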
Back in the good old days of computing, memory worked through purely mechanical means. How exactly did we achieve this? Well, one fairly well-known method was punched cards, or Hollerith cards. These were pieces of stiff paper with holes in them. The holes were punched in predefined positions, allowing early computers— and I mean 1800s computers, not your grandma’s computer— to process data and run automated processes. Note how the concept is fundamentally the same as our modern system: you still have a set of two distinct states (hole or no hole). Several other mechanical ways of accessing and storing information also arose during the early periods of computing, including valves and gears, but these methods were still slow and tedious.
Eventually, we began to need faster, more efficient, and less bulky ways for storing and accessing information.
The first attempt used electrical valves (vacuum tubes), which are basically circuits wired so that one valve can be turned on and another turned off. This approach posed several problems in terms of space efficiency and was incredibly expensive, not to mention highly inefficient in terms of energy consumption. Another concern was how to make these systems “non-volatile”, so that you could restart your machine and still have your information there.
Another idea, known as delay-line memory, was to place a long tube of mercury with a loudspeaker-like transducer at one end. Ideally, you would have waves travel through the tube and detect pulses at the other end. The problem was that you had to constantly recirculate these waves, and you could only detect each pulse for a very brief period, right when the wave was “bouncing back”.
Eventually, we got to the point where we managed to create “cores”, which are basically tiny magnetic rings threaded on wires. Bits of information were stored in the direction of each core’s magnetization. The first core memories were huge— storing 1 MB required the space of a small car— but we got around to making them smaller and more efficient.
To further optimize our computers, we shifted from magnetic cores toward purely electronic components. Namely, modern memory uses arrays of transistors, which either conduct or block current when a voltage is applied, producing the patterns of 1s and 0s that encode our data.
Nowadays, chances are your computer has either a Hard Disk Drive (HDD) or a Solid State Drive (SSD). HDDs are the most common way of storing information on your average computer. They’re basically metal platters with a magnetic coating that stores your information. The platters spin rapidly inside an enclosed case while a read/write arm accesses the data. SSDs are a bit more of a novelty for the average PC user, but they are faster and more reliable. Instead of storing your data in a magnetic coating, an SSD stores it in interconnected flash-memory chips, much like a USB flash drive. Since they rely neither on magnetic coatings nor on moving mechanical parts (like the read/write arm), SSDs are faster and more reliable; the drawback is that, at least for now, they are more expensive than HDDs.
In the end, the history of computers revolves around the same central theme: how do we make information readily available and easy to process? Over time, we’ve been demanding more and more out of our computers. As we do so, we of course face increasingly difficult challenges and are forced (or encouraged, if you like) to reinvent our ways in order to keep up with the demand for power and efficiency.
So how did we do it? We say: ingenuity, that’s how.

Answered by Demian L, Expert Leader.
Edited by Margaret G.

No more ice scrapers? New nanomaterial can be used to keep glass free of ice

Those of you who live in colder climates know the feeling of waking up on a winter’s day and having to scrape ice off the windows of your car in freezing temperatures. For those of you who have the fortune of not knowing the feeling, it’s not exactly the definition of fun.

Researchers have developed a material made of tiny strips of carbon called ‘graphene nanoribbons’ that can be used to coat glass to prevent ice and fog from forming on it, while also keeping it transparent. Graphene is a single layer of graphite (the same material used in pencils), and is essentially made up of a flat layer of carbon just an atom thick. Materials like graphene are known as nanomaterials because they are on the nano-scale, which is thousands of times smaller than the thickness of a strand of human hair.

This new material isn’t just handy for skipping the morning ice-scraping ritual. The coating can also be used on skyscraper windows, which are prone to forming ice layers that can fall off the side of the building and tumble thousands of feet. When painted over a glass sheet, the nanoribbons form a layer that conducts heat and electricity. When an electric voltage is applied to the side of the glass, the ice melts within minutes, even in temperatures far below freezing. The graphene nanoribbons also allow radio signals to pass through, which means that cellphone and WiFi devices could still be used inside a building or car with this coating.

Even though they’re so small that we can’t see them, graphene and other nanomaterials have the potential to change our lives, keeping us safe and keeping our lives convenient.

Submitted by Allison T, Discoverer.

Edited by Peggy K.

Question:
I know we share 98% of our DNA with chimpanzees. How much of our DNA do we share with other living things (like bacteria or plants or other animals) and what are some elements that are most conserved between different species?
Asked by anonymous

Answer:
We’ve all heard that we share a large amount of DNA with chimpanzees. This isn’t too much of a surprise, really— chimpanzees are complex primates, and in their presence, we’re often awestruck at the similarities between them and us.
What may be more shocking, however, is that an onion (a not-so-complex organism) has about 12 times as much DNA as a Harvard professor [1]. Why so much DNA?
As it turns out, some species, such as the fruit fly, have a tight, clutter-free genome that keeps little beyond the genes necessary for survival. Other species, such as the onion, accumulate copies of everything and are left with a genome full of “junk DNA”, or DNA with no known function.
Across all species, the most highly conserved stretches of DNA include those coding for the binding sites of protein receptors and the active sites of enzymes, the regions critical for an enzyme to function as a catalyst [2]. We can think of these parts of the genome as the sequences that are essential to life; they’re so important that organisms with mutated versions often don’t live to pass the mutations on.
Outside of these regions, however, is where the differences between species lie. Even between closely related species, certain DNA sequences can differ dramatically. For instance, “hCONDELs” is the collective term for regions deleted in the human genome that are otherwise highly conserved in other primates. A good example of an hCONDEL is the deletion of a region of DNA that, in other species, suppresses neurogenesis. The loss of this region in humans resulted in the development of more brain cells and a larger frontal cortex for us!
Lastly, if you’d like to learn more about genetic overlap between species, this National Geographic link offers a fun game exploring the percent of genetic code humans share with other species.

Answered by Teodora S., Expert Leader.
Edited by Jenny H.

Question: Why doesn’t ocean water freeze?
Asked by anonymous

Answer:
Great question! The answer boils down to three things: the salt content of the water, currents, and the volume of the ocean. Let’s go through these factors one by one.
The important thing to remember is that the ocean can freeze, but at a lower temperature than pure water: pure water freezes at 0°C, while seawater freezes at about -2°C.
Seawater has a lower freezing point because, unlike pure water, which has nothing dissolved in it, seawater is a solution of salt and water. In fact, there are about 35 grams of salt in every 1,000 grams (one kilogram) of seawater.
To understand why salt lowers the freezing point of water, we have to first understand that ice forms when energy, in the form of heat, is removed from water and the strong intermolecular bonds pull water molecules into a lattice structure. Molecules with higher energy will be less rigidly bonded to their neighbors. For example, liquid water has higher energy than solid ice.
Water is a polar solvent, which means that different ends of the molecule have different electric charges, and NaCl, or salt, is an ionic compound. When NaCl is added to the water, the intermolecular forces between water molecules are disrupted. Basically, the water molecules want to associate with NaCl more than they want to hang out with each other, making it harder for them to organize into the lattice structure we need for ice.
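For readers who like numbers, here is a back-of-the-envelope check (added for illustration, treating sea salt as if it were all NaCl) using the standard freezing-point-depression formula, where i is the number of ions each salt unit splits into, K_f is water’s freezing-point-depression constant, and m is the molality of the solution:

\[
\Delta T_f = i \, K_f \, m \approx 2 \times 1.86\ \tfrac{^{\circ}\mathrm{C}\cdot\mathrm{kg}}{\mathrm{mol}} \times 0.6\ \tfrac{\mathrm{mol}}{\mathrm{kg}} \approx 2.2\ ^{\circ}\mathrm{C}
\]

With roughly 35 g of salt (about 0.6 mol of NaCl) per kilogram of water, the freezing point drops by about 2°C, which matches the seawater value quoted above.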
Ocean currents, the second factor, arise from wind and from temperature and salinity differences between bodies of water. Wind helps move surface water from equatorial regions toward the poles. In addition, water movement in the deep ocean is affected by its density, which depends on the salinity and temperature of the water. Cold water is denser than warm water and tends to sink. The upper, warmer layer of water undergoes evaporation, and part of it freezes once it reaches the poles; both processes increase the salt content of the remaining water, making it denser. This now cold and dense water sinks and flows back toward the equator. The constant motion of water driven by this process of thermohaline circulation prevents large-scale freezing.

Finally, it is important to consider the surface area to volume ratio of the ocean. The overall volume of the ocean is 1.3 billion cubic kilometres and the total surface area of the ocean is 361,900,000 square kilometres. Since the process of freezing occurs largely where the water meets the cooler air, a body of water with a large surface area to volume ratio freezes much faster than one with a smaller ratio. Although the ocean has a very large surface area, it also has a hefty volume, which renders its surface area to volume ratio quite small.
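Dividing those two figures gives the ocean’s average depth, which is another way to see just how small its surface-area-to-volume ratio is (a rough calculation added for illustration):

\[
\text{mean depth} \approx \frac{\text{volume}}{\text{surface area}} = \frac{1.3 \times 10^{9}\ \mathrm{km^3}}{3.619 \times 10^{8}\ \mathrm{km^2}} \approx 3.6\ \mathrm{km}
\]

In other words, for every square kilometre of surface exposed to cold air, there are on average about 3.6 cubic kilometres of water below it that must be chilled.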
Sources:
http://scied.ucar.edu/ocean-move-thermohaline-circulation
http://sciencequestionswithchris.wordpress.com/2013/04/29/why-dont-the-oceans-freeze/
http://oceanservice.noaa.gov/facts/oceanfreeze.html
http://antoine.frostburg.edu/chem/senese/101/solutions/faq/why-salt-melts-ice.shtml
http://www.pa.msu.edu/sciencet/ask_st/030492.html

Answered by Simone A., Expert Leader
Edited by Jenny H.

What is it like to live and do science at a South Pole research station?

Can you imagine living in the frigid and utterly desolate environment of the South Pole for nearly 11 months? Well, we can’t either, but Jason Gallicchio, a postdoctoral researcher at the Amundsen-Scott South Pole Station, has done it.

Gallicchio, an associate fellow of the Kavli Institute for Cosmological Physics at the University of Chicago, is part of an astrophysics experiment at the South Pole Telescope. He knows all about the challenges of building and maintaining such a complex scientific instrument in one of the most unforgiving places on the planet. Gallicchio was primarily responsible for the telescope’s data acquisition and software systems, and he also occasionally assisted with maintenance work.
You might ask why anyone would even put a telescope in such a hostile environment in the first place. It’s not an accident, I promise! Actually, placing the telescope at the South Pole minimizes the interference from the Earth’s atmosphere. One of the primary objectives of the South Pole Telescope is to precisely measure temperature variations in the cosmic microwave background, and getting such precise measurements requires the telescope to be put in a high, dry, and atmospherically stable site. 
The South Pole Telescope is 10 meters across and weighs 280 tons. Researchers use this telescope to study cosmic microwave background radiation (or CMB, as it’s often affectionately called), hoping to uncover hints about the early days of our universe.
As Erik M. Leitch of the University of Chicago explains, the CMB is a sort of faint glow of light that fills the universe, falling on Earth from every direction with nearly uniform intensity. It is the residual heat of creation—the afterglow of the Big Bang—streaming through space for the last 14 billion years, like the heat from a sun-warmed rock, re-radiated at night.
Click here to read more about life at the Amundsen-Scott South Pole Station.
You can learn even more about the topics discussed in this summary at the links below: 
Amundsen-Scott South Pole Station
A brief introduction to the electromagnetic spectrum
Cosmic microwave background
A day in the life of the South Pole Telescope
Big Science With The South Pole Telescope

Submitted by Srikar D, Discoverer.
Edited by Jessica F.

To Infinity and Beyond

Boeing and SpaceX have both been awarded contracts by NASA to fly astronauts to the International Space Station (ISS). Ever since NASA retired its space shuttle fleet in 2011, it has been relying on the Russian Soyuz capsule to fly astronauts to the ISS, an arrangement that was meant to be temporary until NASA’s chosen commercial partners under the Commercial Crew Development Project could supply it with a private spacecraft.
Well, with the strained US-Russia relations of recent months, the timing couldn’t have been better! Boeing is a well-known Chicago-based aerospace company famous for its commercial and military aircraft. Hawthorne-based SpaceX, on the other hand, is a startup run by PayPal co-founder and visionary Elon Musk.
This development is significant because it could mark the beginning of a new era in space exploration. Until recently, only governments could afford to build and fly spacecraft, but now private companies are starting to get in on the action. In fact, SpaceX has already made history: in 2012, the year after the shuttles retired, it became the first private company to launch a spacecraft that docked with the ISS.
With private companies now able to send astronauts and cargo to low-earth orbit destinations like the ISS for relatively low cost, NASA and other government agencies will be able to put more funding into more ambitious exploration missions to uncover even more about our universe.
To read more about the contracts, click on this link.

Submitted by Aram H., Discoverer.
Edited by Peggy K. 

Question:
What ACTUALLY happens when you delete a file from your computer?

Answer:
When a file is created on a hard drive, a new record for it is added to the Master File Table (the file index used on Windows NTFS drives), which is basically a portion of the drive containing a database of each file’s attributes: name, creation date, access permissions, size, and so on.
When a file deletion request is processed, the operating system removes the file’s metadata from the Master File Table and marks the clusters— the hard drive’s physical storage blocks the file occupied— as unused, allowing new files to be written over them later on.
In simpler terms, even if you choose to permanently delete a file, the data itself stays exactly where it was on the disk; only the record pointing to it is removed. You can think of it as your computer putting your deleted data’s space up for sale. Once new files are written into those clusters, the old data is overwritten, so deleted files can only be retrieved within the window before that happens.
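As a rough mental model (a toy Python sketch added for illustration; real file systems are far more involved), “deleting” a file here just removes its table entry and returns its clusters to the free pool, while the stored bytes linger until something new is written over them:

```python
# Toy model of file deletion: removing a file only deletes its entry in the
# "file table" and marks its clusters as free. The bytes stay on the "disk"
# until some later write happens to reuse those clusters.

disk = {}                       # cluster number -> stored data
file_table = {}                 # file name -> clusters the file occupies
free_clusters = set(range(8))   # clusters available for new files

def write_file(name, chunks):
    used = []
    for chunk in chunks:
        cluster = free_clusters.pop()   # claim a free cluster
        disk[cluster] = chunk
        used.append(cluster)
    file_table[name] = used

def delete_file(name):
    # Only the metadata disappears; disk[] itself is left untouched.
    free_clusters.update(file_table.pop(name))

write_file("letter.txt", ["Dear ", "world"])
delete_file("letter.txt")
print("letter.txt" in file_table)  # False -- the file "no longer exists"
print(disk)                        # ...yet its contents still sit on disk
```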
For more information, check out this link.

Answered by Juan C., Expert Leader.
Edited by Margaret G.

Two Thumbs Up: A Look at How And Why Humans Got So Good at Using Everyone’s Favorite Digit

I can probably speak for many members of the human race when I say I don’t think about my thumbs very often. Sure, they’re helpful for gripping objects. And most people will recognize the term “opposable”, even if they are unsure of its meaning. (By the way, it references the thumb’s ability to touch all the other digits on the same hand—what an overachiever.) But most people stop at that.
Luckily, University of Kent doctoral student Alastair Key, along with research associate Christopher Dunmore, was a tad more inquisitive about our friend the thumb. They were curious about its origins: namely, why human thumbs are so much more dexterous than those of our primate cousins. To solve this conundrum, they went back 2.6 million years, to the time when our ancestors are thought to have first used stone tools.
The relationship between human thumb development and stone tool creation has been studied quite a bit. But that research has always focused on the importance of the dominant hand in this process, leaving the non-dominant hand largely unstudied.
The team attached pressure sensors to the hands of eight knappers—modern-day people who make stone tools using the same methods our ancestors did. Once the sensors were attached, the knappers went about their business, using a small rock to chip away bits and pieces of a larger one and fashion it into a sharp point. The data showed that the non-dominant thumb was equally, if not more, important in the process of stone-tool creation, owing to the constant need for dexterity and for repositioning the stone held in the non-dominant hand.
Stone tools were essential in the evolution of modern-day humans. Being able to craft a sharp, strong stone point allowed our ancestors to hunt prey that would otherwise have been impossible to take down because of its size. Having such an excellent food source made stone-tool users healthier, more robust individuals—and in the animal kingdom, healthy individuals get the ladies. This caused tool makers (and, by extension, those with more dexterous thumbs) to be selected for evolutionarily, eventually resulting in the modern human thumb.
If you’re interested in learning more (or just want more fodder for thumb puns), the original article, published in the Journal of Human Evolution, is available below:
 http://www.sciencedirect.com/science/article/pii/S0047248414001845


Submitted by Nick V, Discoverer.
Edited by Carrie K.

Question:

I’ve read previously that scientists have figured out the geometry of the universe and it is flat. How then, can there be wormholes? If wormholes are two sections of spacetime connected, then wouldn’t the fabric of the universe have to be curved?

Asked by anonymous

Answer:

Wormholes are defined, broadly, as theoretical constructs allowed by general relativity that provide a shorter path between two distant regions of space or time by taking advantage of the curvature of spacetime. This definition, together with the theory that the observable universe is flat and that spacetime curves around bodies of macroscopic mass, like planets, stars, and other celestial bodies, automatically presents the idea that spacetime is akin to a uniform, two-dimensional sheet. Many representations of wormholes, like the one below, use this idea.

[Image: A model of ‘folded’ space-time illustrates how a wormhole bridge might form with at least two mouths that are connected to a single throat or tube. Credit: edobric | Shutterstock]

While this model works quite well for explaining the basic concept of a wormhole, it is rather oversimplified. An actual wormhole is much harder to imagine and depict because it operates in at least four dimensions – the three spatial dimensions plus the time dimension. Trying to imagine a wormhole as it actually is introduces complexities that run counter to our common-sense three-dimensional perception, much as the diagram of a three-dimensional wormhole above would give most of the two-dimensional inhabitants of Flatland a headache. Some of them might even be asking this very same question about the sort of hole a worm might bore through an apple.

Wormholes would not actually operate as mere straight-line tunnels from one region of the universe to another, at least not in the conventional, three-dimensional sense that we can picture. The entry and exit points of a wormhole would more likely be spheroidal regions of space that could only be identified topologically, because we are not equipped to observe constructs in higher dimensions— we can only theorize about their existence. Even if actual physical evidence of a wormhole were found, and even if that wormhole remained traversable long enough for you to get through to the other side, you would not be able to see the actual structure of the wormhole – you’d only see a wildly distorted image of whatever is at the other end.

Also, it must be stressed that it is the observable universe that has been measured to be flat. While the “universe” is the set of everything that exists, the “observable universe” is the part of it we can directly observe: the volume around the Earth from which light has had time to reach us. That is a sphere about 93 billion light-years across, taking the expansion of the universe into account, with the Earth at its center. We have no way of knowing what lies beyond that horizon, because light from those regions hasn’t reached us yet; but regardless of the geometry of the entire universe, the part we can observe appears flat as far as we can measure.

So while the observable universe is, on average, flat (at least to our knowledge), spacetime can curve quite significantly on a local scale, depending on the mass and the gravitational field of the object(s) in the vicinity. Wormholes take advantage of this curvature without affecting the average flatness of the observable universe on a macroscopic scale, so their existence is perfectly in keeping with these findings.

More information about wormholes can be found here:

http://www.space.com/20881-wormholes.html

http://physics.about.com/od/glossary/g/wormhole.htm

http://science.howstuffworks.com/science-vs-myth/everyday-myths/time-travel4.htm

For an idea of what it might look like to travel through a wormhole:

http://www.spacetimetravel.org/wurmlochflug/wurmlochflug.html

http://www.vis.uni-stuttgart.de/~muelleta/MTvis/

Answered by Shreniraj A., Expert Leader.

Edited by Yi Z.