
Wednesday 2 May 2012

Innovation

Innovation is inefficient and undisciplined; it contradicts itself, it is iconoclastic, and it feeds on confusion and contradiction. Being innovative is the opposite of what almost all parents expect of their children, almost all executives expect of their companies, and heads of state expect of their countries. Innovative people are certainly uncomfortable to have around. Yet without innovation we are doomed, between boredom and monotony, to decline. One of the fundamentals of a good innovation system is diversity. It can be said that the stronger a culture (national, generational, institutional), the less open it is to innovative thinking. Shared, deeply rooted beliefs, widespread norms, and standards of behavior are all enemies of new ideas. And a society that prides itself on its internal harmony will be very reluctant to accept unorthodox thinking within it.
A highly heterogeneous culture, by contrast, encourages innovation thanks to those who have the ability to look at everything from different points of view. This ability is one of the most important stimuli of creativity. Making great leaps of thought is a gift common to those who first conceive great ideas destined for success. Usually this gift goes hand in hand with a broad culture and a wide range of experiences.
The secret to keeping this flow of great ideas from drying up is to accept these messy truths about the origin of ideas and to keep rewarding innovation and praising emerging technologies.


Nicholas Negroponte
Founder of the MIT Media Lab
(from the preface to: Alessandro Ovi, Top 20. Le tecnologie emergenti,
Luiss University Press, 2006)

Friday 6 November 2009

'Universal' equation describes how materials behave at nanoscale

Understanding how materials behave at tiny length scales is crucial for developing future nanotechnologies, and it continues to be a great challenge for theoretical and experimental physicists alike. Now, a physicist at the Institute of Electronics, Microelectronics and Nanotechnology (IEMN) in Villeneuve d'Ascq, France, has borrowed from 19th-century physics to come up with a new 'universal' equation that predicts how size affects the key physical properties of nanometre-sized structures, which behave very differently from their macroscopic counterparts.

The surface-to-volume ratio of a structure increases dramatically as it is made smaller and therefore surface effects can be very important for tiny devices. 'My equation links size effects not only to this surface-to-volume ratio but also to the intrinsic nature of the nanoparticles involved - that is, whether they are fermions or bosons,' Grégory Guisbiers told physicsworld.com.
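The scaling the article alludes to is easy to see for an idealized sphere, whose surface-to-volume ratio goes as 3/r. A minimal sketch (the function name and the choice of nanometre units are illustrative, not from the article):

```python
import math

def surface_to_volume(radius_nm):
    """Surface-to-volume ratio of a sphere: (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r."""
    area = 4 * math.pi * radius_nm ** 2
    volume = (4 / 3) * math.pi * radius_nm ** 3
    return area / volume

# Shrinking the radius by 10x raises the surface-to-volume ratio by 10x,
# which is why surface effects dominate at the nanoscale.
for r in (1000.0, 100.0, 10.0, 1.0):  # radii in nanometres
    print(f"r = {r:7.1f} nm  ->  S/V = {surface_to_volume(r):.3f} per nm")
```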

Source:
physicsworld.com

"Nanoparticle cancer therapy targets tumours and dodges immune system"

A nanotechnology therapy that targets cancer with a 'stealth smart bomb' is to begin patient trials next year in the first clinical test of a pioneering approach to medicine, The Times has learnt.

The nanoparticle, which targets tumour cells while evading the body's immune system, promises to deliver larger and more effective doses of drugs to cancers, while simultaneously sparing patients many of the distressing side-effects of chemotherapy.

Animal studies have indicated that the treatment can shrink tumours 'essentially to zero', while being better tolerated than conventional cancer treatments. Final toxicology studies are about to begin.

A trial involving about 25 cancer patients is scheduled to start within a year. If successful, it could lead to a licensed drug within five years.

The technology, developed by BIND Biosciences, a company based in Cambridge, Massachusetts, should also be suitable for delivering drugs for treating other conditions, as well as for the chemotherapy agents that it has been set up to carry.

Source:
timesonline.co.uk

Tuesday 13 October 2009

MEMS Market Grows @ EDA Blog

According to iSuppli, Microelectromechanical Systems (MEMS) are making major inroads in the consumer- and mobile-electronics worlds. As a result, shipments of MEMS for consumer and mobile electronics are expected to grow from $1.1 billion (2006) to $2.6 billion (2012). STMicroelectronics ranked first in the global consumer/mobile MEMS market in 2008. Texas Instruments, which was first, now ranks second. In 2008, STMicroelectronics more than doubled its revenue from accelerometers, gyroscopes and pressure sensors for consumer and mobile applications, exceeding $200 million. Other MEMS suppliers include Epson Toyocom, which experienced a 75% increase in MEMS sales due to its new-generation gyroscope for gaming and navigation applications. Bosch Sensortec's revenue exploded by 167.3% in 2008, driven by strong sales in mobile handsets. Kionix, whose sales grew by 29.9% in 2008, has now expanded to 120 employees. Finally, start-up InvenSense only began shipping its MEMS products in 2007 and already exceeded $40 million in revenue in 2008.

Microelectromechanical Systems Market - iSuppli
Microelectromechanical Systems
  • Consumer Electronics Devices
    Shipment revenue for MEMS in consumer electronics devices will increase to $1.1 billion in 2012, up from $699.9 million in 2006. Products in this segment consist of game controllers, digital still cameras, camcorders, MP3 players, personal navigation devices, remote controllers, rear-projection televisions, mini stand-alone projectors, sports equipment, white goods, toys, headsets, and USB sticks.
  • Mobile Handsets
    iSuppli predicts mobile handsets will remain the market’s main driver until 2012, not only for accelerometers but also for other devices like Radio Frequency (RF) MEMS filters, actuators for zoom and autofocus, radio frequency MEMS switches, pressure sensors, gyroscopes, and pico projectors. Global revenue from shipments of all types of MEMS for mobile handsets and smart phones will increase to $1.3 billion by the end of 2012, rising at a Compound Annual Growth Rate (CAGR) of 34.4% from $296.8 million in 2007.
  • Notebooks
    Looking at another type of popular product, notebook PCs, MEMS accelerometers are increasingly being employed to detect freefall and quickly park the heads of the Hard Disk Drive (HDD) to protect it from damage. Until this year, such systems mainly were used in professional notebooks. However, the system is being employed in consumer systems starting in 2009. iSuppli predicts the market for notebook PC MEMS — also including microphones and pressure sensors — will rise to $185.9 million in 2012, up from $37.6 million in 2006.
More information: iSuppli Corporation
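The 34.4% CAGR quoted for the handset segment can be checked directly from the 2007 and 2012 figures. A quick sketch (the function name is illustrative):

```python
def cagr(start_value, end_value, years):
    """Compound Annual Growth Rate: (end/start)**(1/years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

# iSuppli's mobile-handset MEMS forecast: $296.8M in 2007 to $1.3B in 2012.
growth = cagr(296.8, 1300.0, 2012 - 2007)
print(f"CAGR: {growth:.1%}")  # about 34.4%, matching the figure quoted above
```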

Thursday 1 October 2009

A brief guide to DNA sequencing - from ArsTechnica

By John Timmer | Last updated September 27, 2009 10:00 PM CT

It's rare for a month to go by without some aspect of DNA sequencing making the headlines. Species after species has seen its genome completed, and the human genome, whether it's from healthy individuals or cancer cells, has received special attention. A dozen or more companies are attempting to bring new sequencing technology to market that could eventually drop the cost of sequencing down to the neighborhood of a new laptop. Arguably, it's one of the hottest high-tech fields on the planet.
But, although these methods can differ, sometimes radically, in how they obtain the sequence of DNA, they're all fundamentally constrained by the chemistry of DNA itself, which is remarkably simple: a long chain of alternating sugars and phosphates, with each sugar linked to one of four bases. Because the chemistry of DNA is so simple, the process of sequencing it is straightforward enough that anyone with a basic understanding of biology can probably understand the fundamentals. The new sequencing hardware may be very complex, but all the complexity is generally there to just sequence lots of molecules in parallel; the actual process remains pretty simple.
In a series of articles, we'll start with the very basics of DNA sequencing, and build our way up to the techniques that were used to complete the human genome. From there, we'll spend time on the current crop of "next-generation" sequencing hardware, before going on to examine some of the more exotic things that may be coming down the pipeline within the next few years.

The basics of copying DNA


A short stretch of DNA
Anyone who's made it through biology knows a bit about the structure of the double helix. Half of one is shown above, to illustrate its three components: its backbone is made up of alternating sugars (blue) and phosphates (red), and each sugar is linked to one of four bases (green). In this case, all of the bases shown are adenine (A), although they could potentially be guanine (G), cytosine (C), or thymine (T). In the double helix, the bases undergo base pairing to partners on the opposite strand: A with T, C with G.
When a cell divides and DNA needs to be replicated, the double helix is split, and enzymes called polymerases use each of the two halves as a template for a new opposing strand; the base pairing rules ensure that the copying is exact, except for rare errors. Historically, DNA sequencing has relied on the exact same process of copying DNA—in fact, the enzymes that make copies of DNA within a cell are so efficient that biologists have used a modified polymerase to perform sequencing.
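The base-pairing rules above can be sketched in a few lines. This is a toy illustration only; it ignores the antiparallel geometry of real DNA strands:

```python
# Watson-Crick base pairing: A pairs with T, C pairs with G.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement_strand(template):
    """Return the strand a polymerase would build against `template`."""
    return "".join(PAIRS[base] for base in template)

# A run of A's on the template yields a run of T's on the new strand.
print(complement_strand("AAAAGC"))  # -> TTTTCG
```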

Adding a base to DNA
In the animation shown at right, a string of T's is base paired with a partial complement of A's on an opposing strand. The DNA polymerase, which isn't shown, is able to add additional nucleotides (a sugar + base combination) under two conditions: they're in the "triphosphate" form, with three phosphate groups in a row, and they base pair successfully with the complementary strand. As the red highlight indicates, the polymerase causes the hydroxyl group (OH) at the end of the existing strand to react with the triphosphate, linking the two together as part of the growing chain. When that reaction is done, there's a new hydroxyl group ready to react, allowing the cycle to continue. By moving down the strand and repeating this reaction, a new molecule of DNA with a specific sequence is created.

From copying to sequencing

From a sequencing perspective, having a new copy of DNA isn't especially helpful. What we want to know is what the order of the bases along the strand is. Sequencing works because we can get the process to stop in specific places and identify the base where it stops.
The simplest way to do this is to mess with the chemistry. Instead of supplying the DNA with a normal nucleotide, it's possible to synthesize one without the hydroxyl group that the polymerase uses to add the next base. As the animation here shows, the base can be added to the growing strand normally, but, once in place, the process comes to a crashing halt. We've now stopped the process of DNA replication.

Stopping DNA polymerization
Of course, if you supply the polymerase with nothing but terminating bases, it will never get very far. So, for a sequencing reaction, researchers use a mix of nucleotides where the majority are normal but a small fraction lack the hydroxyl group. Now, most of the time, the polymerase adds a normal nucleotide, and the reaction continues. But, at a certain probability, a terminator will be put in place, and the reaction stops. If you perform this reaction with lots of identical DNA molecules, you'll wind up with a distribution of lengths that slowly tails off as fewer and fewer unterminated molecules are left. The point at which this tailing off takes place is dictated by the fraction of terminator nucleotides in the reaction mix.
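The fragment-length distribution described above is straightforward to simulate: each added nucleotide is a terminator with some small probability, so the lengths tail off roughly geometrically. A sketch (parameter names and values are illustrative):

```python
import random

def fragment_lengths(template_len, terminator_fraction, n_molecules, seed=0):
    """Simulate chain termination: at each position the polymerase adds a
    terminator with probability `terminator_fraction`, ending that molecule."""
    rng = random.Random(seed)
    lengths = []
    for _ in range(n_molecules):
        for pos in range(1, template_len + 1):
            if rng.random() < terminator_fraction:
                lengths.append(pos)  # a terminator was incorporated here
                break
        else:
            lengths.append(template_len)  # ran off the end unterminated
    return lengths

lengths = fragment_lengths(template_len=500, terminator_fraction=0.01,
                           n_molecules=10_000)
# With 1% terminators, molecules stop after about 100 bases on average,
# with a long geometric tail beyond that.
print(sum(lengths) / len(lengths))
```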
Now we just need to know what base is present when the reaction stops. This is possible by making sure that only one of the four nucleotides given to the polymerase can terminate the reaction. If all the C's, T's and G's are normal, but some fraction of the A's are terminators, then that reaction will produce a population of DNA molecules that all end at A. By setting up four reactions, one for each base, it's possible to identify the base at every position.
There are only two more secrets to DNA sequencing. First, you need to make sure every polymerase starts copying in the same place; otherwise you'll have a collection of molecules with two randomly located ends. This part is easy, since DNA polymerases can only add nucleotides to an existing strand. So, researchers can "prime" the polymerase by seeding the reaction with a short DNA molecule that base pairs with a known sequence that's next to the one you want to determine.
The other trick is that you need to figure out how long each DNA molecule is in the large mix of reaction products that you're left with. The negative charge on phosphates makes this easy, since it ensures that DNA molecules will move when placed in an electric field. So, if you start the reaction mix on one side of an aqueous polymer mesh (called a gel) and run a current through the solution, the DNA will worm its way through the mesh. Shorter molecules move faster, longer ones slower, allowing the population of molecules to be separated based on their sizes. By running the four reactions down neighboring lanes on a gel, you'll get a pattern that looks like the one below, which can be read off to determine the sequence of the DNA molecule.

DNA sequencing. Given a supply of DNA molecules and primers, the polymerase makes a series of fragments that stop when a terminating base is incorporated. The fragments appear as bands in one of the four lanes that run across the gel at bottom.
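The four-reaction scheme and the gel read-off can be sketched as a toy model (function names are illustrative; a real gel reports band positions by migration distance, not exact integers):

```python
def sanger_lanes(sequence):
    """Four chain-termination reactions, one per terminating base.
    Each lane holds the fragment lengths (band positions) that reaction produces."""
    lanes = {base: [] for base in "ACGT"}
    for position, base in enumerate(sequence, start=1):
        lanes[base].append(position)  # the reaction for `base` terminates here
    return lanes

def read_gel(lanes, length):
    """Read bands shortest-first (the fastest movers) to recover the sequence."""
    by_length = {pos: base for base, positions in lanes.items()
                 for pos in positions}
    return "".join(by_length[pos] for pos in range(1, length + 1))

template_copy = "GATTACA"
lanes = sanger_lanes(template_copy)
print(read_gel(lanes, len(template_copy)))  # -> GATTACA
```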

Going high(er) throughput

We're now at the state of the art from when I was a graduate student back in the early 1990s and, trust me, it was anything but artful. The presence of the DNA, marked by those dark bands, came from a short-lived radioisotope incorporated into the nucleotides. That meant you had to collect everything involved in the process and pay someone to store it until it decayed to background. The gels were flexible enough that they would shift or bend at the slightest provocation, making the order of bases difficult to read. But not so flexible that they wouldn't tear if suitably disturbed. All told, it took a full day to create something from which, if you were lucky, you could read two hundred bases down each lane, making each gel good for about a kilobase of sequence.
The human genome is about 3 gigabases—clearly, this wasn't going to cut it, and people were beginning to discuss all manner of exotic approaches, like reading single molecules with a scanning-tunneling microscope.
Fortunately, a couple of changes breathed new life into the old approach. For starters, people got rid of the radioactivity by replacing it with a fluorescent tag. Not only was this a whole lot more convenient, but it enabled a simple four-fold improvement in throughput. Go to any outdoor event, and the glow sticks should indicate that it's possible to craft molecules that fluoresce in a variety of different colors.
By picking four fluorescent molecules that are decently spread out—blue for G's, green for A's, yellow for T's and red for C's, for example—and linking them to a specific terminating nucleotide, it's possible to link the termination position with the identity of the base there. What once required four separate reactions could now be run at once in a single solution.
The next trick was to get rid of most of the gel. As we noted above, molecules work their way through the gel based on their size, but you needed a long gel if you wanted to image a lot of them at once. The solution, it turned out, was not to image them all at once—something that, before the switch from radioactivity to fluorescence, wasn't really possible.
All you really need is just enough gel to separate things out slightly. You can put a gate at the end of the gel and image the fluorescent activity there. One by one, based on their size, the different molecules will pass through the gate, and glow a specific color based on the base at that position. Instead of a couple hundred bases, it was now possible to get about 700 bases of sequence from a single reaction. Thanks to digital imaging, the data, an example of which is shown below, was easy to interpret. Sequences came as a computer file, ready to be plugged into various analysis programs.
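Reading the detector's color stream back into bases is then a simple lookup. A toy sketch using the example color assignments from the text (real instruments use their own dye chemistries):

```python
# Dye-terminator read-off: each terminating nucleotide carries a base-specific
# fluorophore, so a single reaction in a single lane yields a stream of colors
# as fragments pass the detector, shortest first.
COLOR_TO_BASE = {"blue": "G", "green": "A", "yellow": "T", "red": "C"}

def decode_trace(colors):
    """Translate the colors seen at the detector back into the DNA sequence."""
    return "".join(COLOR_TO_BASE[color] for color in colors)

trace = ["green", "yellow", "red", "blue"]
print(decode_trace(trace))  # -> ATCG
```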

The data generated by fluorescent DNA sequencing.
With all of these in place, DNA sequencing was ready for the same sorts of processes that revolutionized many areas of technology: automation and miniaturization. Instead of a grad student or technician painstakingly adding everything that was needed into individual tubes, a robot could dispense all the reaction ingredients into a small plastic plate that could hold about 100 individual samples. A second robot could then pull the samples and deposit them into a machine that read out the sequencing information. Large gels were replaced by narrow capillaries.
The new sequencing machines could do all of this for many samples in parallel, and the larger sequencing centers had dozens of these machines. As the bottlenecks were opened wider, the human genome project shot past its planned schedule, and a flood of genomes followed.
But with the increased progress came increased expectations. Ultimately, researchers didn't just want to have a human genome, but the ability to sequence any human genome, from an individual with a genetic disease to the genome of a cancer cell, in order to personalize medicine. That, once again, has set off a race for new and exotic sequencing technology. We'll discuss the first wave of these so-called "next generation" sequencers in a future installment.

Wednesday 30 September 2009

Securing the Promise of Nanotechnologies: Towards Transatlantic Regulatory Cooperation

Linda Breggin, Robert Falkner, Nico Jaspers, John Pendergrass and Read Porter, September 2009



This report aims to contribute to the debate on how best to address the risks of emerging nanotechnologies and how to promote coordinated and convergent approaches in the EU and US.
It presents the main findings of a project that was carried out by a consortium of research institutions: Chatham House, the London School of Economics and Political Science (LSE), the Environmental Law Institute (ELI) and the Project on Emerging Nanotechnologies (PEN) at the Woodrow Wilson International Center for Scholars in the United States.


Download Paper here

Thursday 17 September 2009

IBM: A combination of lithographic patterning and self-assembly


This article is from Small Times



Researchers at IBM and the California Institute of Technology say they have come up with a "breakthrough" to solve various problems looming for future semiconductor manufacturing beyond the 22nm node: a combination of lithographic patterning and self-assembly that arranges DNA structures on surfaces compatible with current manufacturing equipment.

DNA origami, they explain, involves folding a long single strand of viral DNA using shorter, synthetic "staple" strands, which they claim display 6nm-resolution patterns, and could "in principle" be used to arrange carbon nanotubes, silicon nanowires, or quantum dots. Making the starting structures, though, depends on an "uncontrolled deposition" which "results in random arrangements" whose properties are difficult to measure, and to integrate with microcircuitry.

Their approach, detailed in the September issue of the journal Nature Nanotechnology, is to use e-beam lithography and a dry oxidative etch to create DNA origami-shaped binding sites on materials such as SiO2 and "diamond-like" carbon. Caltech's techniques for preparing the DNA origami structures cause single DNA molecules to self-assemble via a reaction between the long viral DNA strand and shorter synthetic oligonucleotide strands, which fold the viral DNA strand into 2D shapes; these can be modified to serve as attachment points for nanoscale components. They tout the ability to create squares, triangles, and stars 100-150nm on an edge, with the thickness of a DNA double helix. Processing work at IBM used either e-beam or optical lithography to create arrays of binding sites matching individual origami structures; the key was finding template materials and optimal deposition conditions so that the origami structures bound only to the patterned "sticky patches."

Results, from the journal paper abstract:

In buffer with approx. 100 mM MgCl2, DNA origami bind with high selectivity and good orientation: 70%-95% of sites have individual origami aligned with an angular dispersion (±1 s.d.) as low as ±10° (on diamond-like carbon) or ±20° (on SiO2).



Essentially, the researchers explain, the DNA molecules act as "scaffolding," onto which deposited carbon nanotubes would be stuck and self-assembled, at smaller dimensions than conventional semiconductor manufacturing capabilities. "The combination of this directed self-assembly with today's fabrication technology eventually could lead to substantial savings in the most expensive and challenging part of the chip-making process," said Spike Narayan, manager of science & technology at IBM's Almaden (CA) research center, in a statement.

IBM pushes AFM to image molecular structure - Small Times



August 28, 2009: Researchers at IBM in Zurich, Switzerland, have captured the "anatomy" of a molecule using noncontact atomic-force microscopy (AFM), peering through the surrounding electron cloud to capture images "with unprecedented resolution."
The method, a longtime goal of surface microscopy, involves an AFM operated in an ultrahigh vacuum at very low temperatures (-268°C) to image the chemical structure of individual pentacene molecules (1.4nm in length). The key was using an "atomically sharp" tip apex to measure the forces between the tip and the sample. Picking up single atoms and molecules also showed that the foremost tip atom or molecule governs the AFM contrast and resolution. Terminating the AFM tip with a carbon monoxide (CO) molecule was shown to yield optimum contrast at a height of ~0.5nm, noted IBM scientist Leo Gross, in a statement. Another key: deriving a complete 3D force map of the molecule, enabled by the AFM's mechanical and thermal stability.


Imaging the "anatomy" of a pentacene molecule - 3D rendered view: By using an atomically sharp metal tip terminated with a carbon monoxide (CO) molecule, IBM scientists were able to measure in the short-range regime of forces which allowed them to obtain an image of the inner structure of the molecule. The colored surface represents experimental data. (Image courtesy of IBM Research/Zurich)

Corroborating the results using first-principles density functional theory calculations, the researchers also figured out what caused the atomic contrast: Pauli repulsion between the CO and the pentacene molecule, explained IBM scientist Nikolaj Moll (referring to a quantum mechanical force that prevents two identical electrons from coming too close together). Van der Waals and electrostatic forces, the scientists determined, "only add a diffuse attractive background."
The AFM's imagery, seen below compared to a diagram, is striking -- hexagonal shapes of the five carbon rings and carbon atoms are clearly resolved, and hydrogen atoms also can be discerned. (IBM has posted more pictures on Flickr, and even a video on YouTube.)


Ball-and-stick model of the pentacene molecule: five linearly fused hexagonal rings of benzene, comprised of 22 carbon atoms (inner gray balls) and to which are bound 14 hydrogen atoms (outer white balls). The entire molecule is 1.4nm in length; spacing between neighboring carbon atoms is 0.14nm. (Image courtesy of IBM Research/Zurich)


The delicate inner structure of a pentacene molecule imaged with an atomic force microscope, clearly showing hexagonal shapes of the five carbon rings in the pentacene molecule, and even the positions of the hydrogen atoms around the carbon rings. Pixels correspond to actual data points. (Image courtesy of IBM Research/Zurich)
Most significantly, says IBM, this atomic-scale imaging, combined with similar IBM experiments earlier this summer that measured the charge state of single atoms, helps researchers better understand charge distribution at the atomic scale, pointing the way to molecular-scale devices and networks.
The work was done in collaboration with Peter Liljeroth of Utrecht University, and published in the Aug. 28 issue of the journal Science.


The scanning tunneling/atomic force microscope used to image the "anatomy" or chemical structure of a Pentacene molecule with atomic resolution. (Photo by Michael Lowry; image courtesy of IBM Research/Zurich)

Tuesday 15 September 2009

Decerebrated bacteria as drug vehicles

Instead of using synthetic nanoparticles decorated with antibodies or antigens and loaded with specific drug molecules, researchers at EnGeneIC in Sydney use small buds of cytoplasm they call EDVs (EnGeneIC delivery vehicles), produced when bacteria such as Salmonella divide at their ends instead of their middle. They look like bacteria, but they have no chromosomes: simple cargo ships.
Once thoroughly washed to remove any toxins, they are loaded with specific drugs, exactly as nanoparticles are.
Specific targeting is ensured by two monoclonal antibodies connected with a linker molecule. At less than 400nm in size, these vehicles can easily pass through the slightly porous vessels supplying tumor cells and end up in the tumor tissue.

We will see whether this method turns out to be less expensive than nanobeads or nanovesicles...

www.newscientist.com