Dragon Go! is a revolutionary new mobile application that hears what you say and delivers the results you want within seconds! How’s this possible? Dragon Go! features a smart natural language understanding functionality that knows what you want simply by how you say it!
And it will deliver you directly to the best mobile sites on the Web offering what you want – such as Fandango for movie showtimes and trailers, Accuweather.com for local weather updates, Pandora® internet radio, and Yelp for local reviews, recommendations and more! You can also share your Dragon Go! results via an easy-to-use pop-up toolbar featuring link share options across email, text messages, Facebook and Twitter.
Featuring the dynamic Dragon Carousel™, Dragon Go! not only delivers you to the best mobile web site featuring what you want, it also delivers complementary results that enable you to slide the carousel from side-to-side to compare information across the most relevant sites for your Dragon Go! request.
For example:
You say….
“Cowboys & Aliens showtimes” – Dragon Go! delivers you directly to Fandango featuring movie trailers, showtimes and ticket purchasing for your local theaters. The Dragon Carousel™ also enables you to see what people are saying about the movie on Twitter and flick over to Wikipedia to learn more about the graphic novel the movie is based on.
You say….
“What’s the weather like?” – Dragon Go! takes you to AccuWeather.com delivering your local weather results. And the Dragon Carousel enables you to compare weather results across The Weather Channel and Weather Underground.
And that’s not all! In a place where it’s not convenient to speak? You can also type your Dragon Go! requests for the same fast, accurate results!
Dragon Go! – Control Your Personal Universe with No Boundaries
- Smart! Say it your way, Dragon Go! understands
- Innovative Dragon Carousel™ takes you directly to the mobile web site you want
- No endless blue link options – go direct to the best mobile web sites
- Speak it or type it. Dragon Go! delivers the same accurate results fast!
- Compare results across similar web sites with just the flick of your finger
- No boundaries: shop, request music, find local dining reviews, buy tickets and more
- Sharing is easy – pop-up toolbar enables link share across email, text and social media
Learn More
This blog began as a display of varied writings, scribblings and rantings that today’s technology can easily analyse to present users with a clearer picture of the state of their minds, based on tests run on their input and their use of the technology we are advocating at www.projectbrainsaver.com
Saturday, 3 September 2011
Nuance - Dragon Go! In Action - Nuance Dragon Go! Just Say it. Get it. And Go! One App Access for Everything Across the Mobile Web
Speech Analytics: Speech Recognition Leaps Forward - Is it a revolution?
Wednesday, August 31, 2011
Speech Recognition Leaps Forward - Is it a revolution?
Please comment if you experience this technology and if you indeed view it as a revolution. Thx, Ofer
Speech Recognition Leaps Forward
By Janie Chang
August 29, 2011 12:01 AM PT
During Interspeech 2011, the 12th annual Conference of the International Speech Communication Association, being held in Florence, Italy, from Aug. 28 to 31, researchers from Microsoft Research will present work that dramatically improves the potential of real-time, speaker-independent, automatic speech recognition.
Dong Yu, researcher at Microsoft Research Redmond, and Frank Seide, senior researcher and research manager with Microsoft Research Asia, have been spearheading this work, and their teams have collaborated on what has developed into a research breakthrough in the use of artificial neural networks for large-vocabulary speech recognition.
The Holy Grail of Speech Recognition
Commercially available speech-recognition technology is behind applications such as voice-to-text software and automated phone services. Accuracy is paramount, and voice-to-text typically achieves this by having the user “train” the software during setup and by adapting more closely to the user’s speech patterns over time. Automated voice services that interact with multiple speakers do not allow for speaker training because they must be usable instantly by any user. To cope with the lower accuracy, they either handle only a small vocabulary or strongly restrict the words or patterns that users can say.
The ultimate goal of automatic speech recognition is to deliver out-of-the-box, speaker-independent speech-recognition services—a system that does not require user training to perform well for all users under all conditions.
“This goal has increased importance in a mobile world,” Yu says, “where voice is an essential interface mode for smartphones and other mobile devices. Although personal mobile devices would be ideal for learning their user’s voices, users will continue to use speech only if the initial experience, which is before the user-specific models can even be built, is good.”
Speaker-independent speech recognition also addresses other scenarios where it’s not possible to adapt a speech-recognition system to individual speakers—call centers, for example, where callers are unknown and speak only for a few seconds, or web services for speech-to-speech translation, where users would have privacy concerns over stored speech samples.
Renewed Interest in Neural Networks
Artificial neural networks (ANNs), mathematical models of the low-level circuits in the human brain, have been a familiar concept since the 1950s. The notion of using ANNs to improve speech-recognition performance has been around since the 1980s, and a model known as the ANN-Hidden Markov Model (ANN-HMM) showed promise for large-vocabulary speech recognition. Why then, are commercial speech-recognition solutions not using ANNs?
“It all came down to performance,” Yu explains. “After the invention of discriminative training, which refines the model and improves accuracy, the conventional, context-dependent Gaussian mixture model HMMs (CD-GMM-HMMs) outperformed ANN models when it came to large-vocabulary speech recognition.”
Yu and members of the Speech group at Microsoft Research Redmond became interested in ANNs when recent progress in building more complex “deep” neural networks (DNNs) began to show promise at achieving state-of-the-art performance for automatic speech-recognition tasks. In June 2010, intern George Dahl, from the University of Toronto, joined the team, and researchers began investigating how DNNs could be used to improve large-vocabulary speech recognition.
“George brought a lot of insight on how DNNs work,” Yu says, “as well as strong experience in training DNNs, which is one of the key components in this system.”
A speech recognizer is essentially a model of the fragments of sound that make up speech. An example of such a sound is the “phoneme,” one of the roughly 30 or so pronunciation symbols used in a dictionary. State-of-the-art speech recognizers use shorter fragments, numbering in the thousands, called “senones.”
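The article does not say how senone inventories get so large; a common route (an assumption here, not stated in the article) is that each phoneme is modeled in the context of its neighbors, and the resulting context-dependent states are clustered into a few thousand shared units. The figures below are typical textbook numbers, not Microsoft's:

```python
# Illustrative arithmetic only: why senone inventories number in the
# thousands. Assumed figures (typical, not from the article):
# ~40 phonemes, 3 HMM states per phone.
phonemes = 40
states_per_phone = 3

monophone_states = phonemes * states_per_phone        # 120 context-free states
triphone_contexts = phonemes ** 3                     # 64,000 possible triphones
triphone_states = triphone_contexts * states_per_phone  # 192,000 raw states

print(monophone_states, triphone_contexts, triphone_states)
# In practice these raw triphone states are clustered (tied) into a few
# thousand shared "senones" so each one has enough training data.
```

The clustering step is what brings the count down from hundreds of thousands of raw states to the "thousands" of senones the article mentions.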
Earlier work on DNNs had used phonemes. The research took a leap forward when Yu, after discussions with principal researcher Li Deng and Alex Acero, principal researcher and manager of the Speech group, proposed modeling the thousands of senones, much smaller acoustic-model building blocks, directly with DNNs. The resulting paper, Context-Dependent Pre-trained Deep Neural Networks for Large Vocabulary Speech Recognition by Dahl, Yu, Deng, and Acero, describes the first hybrid context-dependent DNN-HMM (CD-DNN-HMM) model applied successfully to large-vocabulary speech-recognition problems.
“Others have tried context-dependent ANN models,” Yu observes, “using different architectural approaches that did not perform as well. It was an amazing moment when we suddenly saw a big jump in accuracy when working on voice-based Internet search. We realized that by modeling senones directly using DNNs, we had managed to outperform state-of-the-art conventional CD-GMM-HMM large-vocabulary speech-recognition systems by a relative error reduction of more than 16 percent. This is extremely significant when you consider that speech recognition has been an active research area for more than five decades.”
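The core idea the article describes — a neural network whose output layer scores thousands of senones directly — can be sketched in a few lines. This is a minimal illustration, not the researchers' implementation: the layer sizes are invented, the weights are random, and ReLU is used for brevity where the actual CD-DNN-HMM work used sigmoid units with pre-training.

```python
import numpy as np

rng = np.random.default_rng(0)

def dnn_senone_posteriors(features, weights, biases):
    """Feed an acoustic feature frame through a small feed-forward net
    and return a probability distribution over senones (softmax)."""
    h = features
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ W + b)          # hidden layers (ReLU here for brevity)
    logits = h @ weights[-1] + biases[-1]       # one output score per senone
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy sizes: a 39-dim MFCC-like frame, two hidden layers, 2000 senones.
sizes = [39, 256, 256, 2000]
weights = [rng.normal(scale=0.1, size=(a, b)) for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

frame = rng.normal(size=(1, 39))
post = dnn_senone_posteriors(frame, weights, biases)
print(post.shape)   # (1, 2000): one posterior probability per senone
```

In the hybrid scheme the article refers to, these per-frame senone posteriors (divided by senone priors) replace the Gaussian-mixture likelihoods inside the HMM decoder; everything else in the recognizer stays the same.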
The team also accelerated the experiments by using general-purpose graphics-processing units to train and decode speech. The computation for neural networks is similar in structure to 3-D graphics as used in popular computer games, and modern graphics cards can process almost 500 such computations simultaneously. Harnessing this computational power for neural networks contributed to the feasibility of the architectural model.
In October 2010, when Yu presented the paper to an internal Microsoft Research Asia audience, he spoke about the challenges of scalability and finding ways to parallelize training as the next steps toward developing a more powerful acoustic model for large-vocabulary speech recognition. Seide was excited by the research and joined the project, bringing with him experience in large-vocabulary speech recognition, system development, and benchmark setups.
Benchmarking on a Neural Network
“It has been commonly assumed that hundreds or thousands of senones were just too many to be accurately modeled or trained in a neural network,” Seide says. “Yet Yu and his colleagues proved that doing so is not only feasible, but works very well with notably improved accuracy. Now, it was time to show that the exact same CD-DNN-HMM could be scaled up effectively in terms of training-data size.”
The new project applied CD-DNN-HMM models to speech-to-text transcription and was tested against Switchboard, a highly challenging phone-call transcription benchmark recognized by the speech-recognition research community.
First, the team had to migrate the DNN training tool to support a larger training data set. Then, with help from Gang Li, research software-development engineer at Microsoft Research Asia, they applied the new model and tool to the Switchboard benchmark with more than 300 hours of speech-training data. To support that much data, the researchers built giant ANNs, one of which contains more than 66 million inter-neural connections, the largest ever created for speech recognition.
The subsequent benchmarks achieved an astonishing word-error rate of 18.5 percent, a 33-percent relative improvement compared with results obtained by a state-of-the-art conventional system.
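The article gives the new word-error rate (18.5 percent) and the relative improvement (33 percent) but not the baseline; the baseline follows from those two numbers. A quick sanity check of the arithmetic:

```python
# Figures from the article: 18.5% WER, a 33% *relative* improvement.
# The baseline WER is not stated, but it is implied by those two numbers.
new_wer = 18.5
relative_gain = 0.33

# relative gain = (baseline - new) / baseline  =>  baseline = new / (1 - gain)
implied_baseline = new_wer / (1.0 - relative_gain)
print(round(implied_baseline, 1))   # ≈ 27.6% baseline WER

recovered_gain = (implied_baseline - new_wer) / implied_baseline
print(round(recovered_gain, 2))     # 0.33
```

So the conventional system's implied baseline is roughly a 27–28 percent word-error rate, consistent with the "state-of-the-art conventional system" the article compares against.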
“When we began running the Switchboard benchmark,” Seide recalls, “we were hoping to achieve results similar to those observed in the voice-search task, between 16- and 20-percent relative gains. The training process, which takes about 20 days of computation, emits a new, slightly more refined model every few hours. I impatiently tested the latest model every few hours. You can’t imagine the excitement when it went way beyond the expected 20 percent, kept getting better and better, and finally settled at a gain of more than 30 percent. Historically, there have been very few individual technologies in speech recognition that have led to improvements of this magnitude.”
The resulting paper, titled Conversational Speech Transcription Using Context-Dependent Deep Neural Networks by Seide, Li, and Yu, is scheduled for presentation on Aug. 29. The work already has attracted considerable attention from the research community, and the team hopes that taking the paper to the conference will ignite a new line of research that will help advance the state of the art for DNNs in large-vocabulary speech recognition.
Bringing the Future Closer
With a novel way of using artificial neural networks for speaker-independent speech recognition, and with results a third more accurate than what conventional systems can deliver, Yu, Seide, and their teams have brought fluent speech-to-speech applications much closer to reality. This innovation simplifies speech processing and delivers high accuracy in real time for large-vocabulary speech-recognition tasks.
“This work is still in the research stages, with more challenges ahead, most notably scalability when dealing with tens of thousands of hours of training data. Our results represent just a beginning to exciting future developments in this field,” Seide says. “Our goal is to open possibilities for new and fluent voice-based services that were impossible before. We believe this research will be used for services that change how we work and live. Imagine applications such as live speech-to-speech translation of natural, fluent conversations, audio indexing, or conversational, natural language interactions with computers.”
Speech Recognition Leaps Forward - Microsoft Research
The Case History | Dagga Couple
The Case History
August 2010
Raided, arrested, charged.
February 2011Appearance at Magistrates level. Applied to be heard in the Constitutional Court, based on our contention that it is our Human Right to ingest anything we choose to, providing it does no harm to us, or others. Postponed for 60 days by the State.
Our reasoning is based on, and set out in:
May 2011
‘The Report. Cannabis: The Facts, Human Rights and the Law’. Kenn D’Oudney & Joanna D’Oudney. SRC Publishing. ISBN: 978-1-902848-20-4
Affidavit handed in at the Pretoria High Court on the same day as the Magistrate’s Court hearing.
July 2011
Magistrate’s Court gives us leave to appeal to the Pretoria High Court for a Constitutional Court hearing but, at the same time, sets a trial date for the end of July. This was so that we could stand trial in the event of not being successful with the High Court application.
The Honourable Mr Justice Bertelsman grants us 60 days to institute proceedings to challenge the constitutionality of sections 4(b), 5(b) and Part III of Schedule 2 of the Drugs & Drug Trafficking Act, 1992, as it pertains to Dagga.
October 12 2011
Charges of possession and dealing have been struck off the roll at the Magistrate’s Court, pending the outcome of the constitutional challenge.
A date has been set for the filing of papers containing our argument at the Pretoria High Court. This is for the court to decide on the constitutionality of the matter.
IF YOU ARE INVOLVED WITH A DAGGA CHARGE AT THE MOMENT OR KNOW SOMEONE WHO IS, THE FOLLOWING INFORMATION IS FOR YOU:
Our case has already set a precedent in South African Law. Your Magistrate needs to see the following documents:
HIGH COURT ORDER dropping charges, pending constitutionality hearing.
IF ANY OF YOU OUT THERE WOULD LIKE MORE INFORMATION ON HOW WE CAN HELP YOU AND YOUR DAGGA CHARGES, EMAIL US, FACEBOOK US, TWITTER US AND WE WILL REPLY TO YOU.
WE CAN DO THIS TOGETHER.
The Dagga Couple
The Dagga Couple (a phrase coined by the South African press) have, for the last year, been preparing a case to apply for the opportunity to ask some very simple questions in the highest court in South Africa, The Constitutional Court.
How come this benign plant has led to the persecution of so many people, in so many countries, for so long?
Who have we harmed? We were not harming anyone else, and we certainly weren’t harming ourselves.
In August 2010, we had a very heavy-handed visit from the South African Police Service (SAPS) who, acting on a tip-off, raided our property in search of a ‘drug lab’. What they found was a quiet middle-aged couple in their pyjamas and a quantity of Cannabis Sativa (aka Dagga). We were arrested after a five-hour ordeal in our kitchen, jailed, and because we had more than 105g of the substance, were charged with dealing in Dagga. We were subsequently granted bail and released.
You can download a brief synopsis of our case so far here:
(pdf 180kb) DaggaCouple case thus far 12.08.11
THE REASONS WE WANT THIS MATTER TO BE HEARD IN THE CONSTITUTIONAL COURT
- The South African legal system is sufficiently corrupt that we had the option to pay a large sum of money for our case to “disappear”. After our experience at the hands of the police we are not prepared to just pay our way out of this. Corrupt behaviour will ensure that the police will be breaking down our gate sometime in the future.
- We wish to demonstrate the ignorance at all levels of law enforcement when it comes to the prohibition of Dagga.
- We will provide evidence that the laws prohibiting the use of Dagga in South Africa have their origins in the racist colonial laws of the early 20th century. These laws are also dictated by international statutes based on propaganda in the United States and have no bearing on our local culture.
- The enforcement of the prohibition of Dagga costs the South African taxpayer millions every year. These resources could be utilised in a more efficient manner & the re-legalisation of dagga would pave the way for the development of the hemp (which is also the Dagga plant) industry, which would create jobs in the agriculture, bio fuel, textile & medical industries.
- Our Human Rights have been violated by a law that is unjust & irrational, not supported by any empirical evidence & outdated. The punishment far outweighs the “crime”. Smoking Dagga is a “victimless crime” and should not be seen as a crime at all.
- We reserve the right to smoke whatever we want in the privacy of our own property, with whom we wish. We are not harming anybody & no government has the right to treat us, the tax payers, like criminals.
- The prohibition of Dagga leads to organised crime. This is fact and is supported by extensive research, both locally & internationally. Because the makers of the law have been informed by propaganda that is blatantly incorrect, organised crime surrounding the growing & marketing of Dagga is a major problem in South Africa.
- We reserve the right to self medicate. We are both very healthy individuals and we believe that our daily use of Dagga contributes to our healthy immune systems. Dagga has been used as a medication for thousands of years.
- We both contribute in significant ways to the society around us and, far from impairing our abilities, we believe that our use of Dagga contributes to this being so.
- We propose that the law prohibiting the use of Dagga in South Africa is based on propaganda and hearsay, designed to protect the industries that benefit from its prohibition rather than to protect citizens.
In short, the prohibition of Dagga is unscientific, irrational & wrong.
Friday, 2 September 2011
Oracle, IBM loom as spoilers of H-P dream - MarketWatch
Sept. 2, 2011, 1:48 p.m. EDT
Oracle, IBM loom as spoilers of H-P dream
Tech giant’s shift to software, services puts it in crosshairs of rivals
By Benjamin Pimentel, MarketWatch
SAN FRANCISCO (MarketWatch) — Hewlett-Packard Co.’s aggressive push into the high-end corporate tech market has been called a smart move, but the company’s bold dream faces two big would-be spoilers: IBM Corp. and Oracle Corp.
Bitter competitors: from left, H-P CEO Leo Apotheker, IBM chief Sam Palmisano and Oracle CEO Larry Ellison.
This was underscored by developments this week.
On Tuesday, Oracle (ORCL), the software giant that morphed from H-P’s (HPQ) close partner into its bitter competitor, escalated its legal war with the Palo Alto, Calif.-based company by filing a cross-complaint in their dispute over the Itanium platform.
Then, IBM (IBM), which many see as the corporate behemoth H-P is trying to emulate, announced two acquisitions in a row this week.
Both companies IBM has agreed to buy — Cambridge, U.K.-based i2 and Toronto-based Algorithmics — are focused on data analytics, business software geared to helping companies analyze and make useful the enormous amounts of data they collect.
“Analytics is definitely one of the big applications that’s emerging for enterprises,” Sterne Agee analyst Shaw Wu said in an interview. “Companies have lots of information and they need a way to manipulate it and monetize it, to help them make better decisions. It’s very software intensive.”
Two weeks ago, H-P made its own big move into analytics when it said it was buying British software maker Autonomy Corp. for $10 billion.
The acquisition is seen as a step toward H-P’s grand ambition: to build up enough software muscle to attract big corporate customers and win lucrative IT contracts in which the game is not just about helping corporate customers cut costs, but also grow their businesses.
“Most of their [H-P’s] services work is about integration and outsourcing and keeping stuff running,” said Gartner analyst Martin Reynolds. “It’s not about transforming the way you do business — which is the way IBM would go.”
H-P chief speaks out
H-P Chief Executive Officer Leo Apotheker tells The Wall Street Journal why he's spinning off H-P's PC unit.
H-P has sent a strong signal that that’s where it wants to go.
In fact, it unveiled the Autonomy deal the day it also stunned Wall Street by saying that it’s considering spinning off its personal-computer business, effectively getting out of the consumer market.
Autonomy is particularly strong in the area of processing unstructured data, which Gartner’s Reynolds describes as covering a range of mostly random information from “every e-mail to every lunch order” that companies collect and hope to monetize.
“It looks like a very brave move to get into that market,” he said.
An H-P spokeswoman said in an e-mail that the company is “inventing the next-generation information platform to empower enterprises to leverage all information through a natural, search-based interface.”
Pushing deeper into enterprise software also plays into the strengths of H-P Chief Executive Leo Apotheker, former CEO of SAP. “As an executive who has spent most of my career primarily in software, this is a world I know well,” Apotheker told analysts.
But it’s also a world in which IBM and Oracle have established pretty formidable beachheads.
Dutch engineers mull over 2,000m tall man-made mountain (Wired UK)
Dutch engineers mull over 2,000m tall man-made mountain
By Mark Brown
02 September 11
The idea might have started as a tongue-in-cheek remark by a newspaper columnist, but now architects, engineers, construction firms and investors are giving serious consideration to building a 2,000-metre-high artificial mountain in the Netherlands.
In the Dutch daily paper De Pers, former athlete Thijs Zonneveld joked that his fellow countrymen should build their own mountain -- complete with alpine slopes, meadows and villages -- in the notoriously flat plains of the Netherlands (the highest point is just 323m above sea level).
But the day his column went live, Zonneveld received serious responses from experts who had already been mulling over the concept. "It made me realise I was not the only one who'd had that idea," Zonneveld told Reuters.
Now his idea has snowballed -- the Dutch Ski Association, Dutch Climbing and Mountaineering Association and Royal Dutch Cycling Union have shown their support, the architect firm Hoffers and Kruger has drawn up plans for the mountain and a work group has assembled to assess feasibility.
The project is provisionally named "Die Berg Komt Er", ("The Mountain Comes"), Yahoo News reports, and will apparently take 30 years and anywhere between £40bn and £270bn to build. Once done, the monster green peak could hide swimming pools, cinemas, sports facilities and its own water supply.
It's an audacious idea, but Zonneveld insists that the plan is serious: "All kinds of big companies have now stepped in, various municipalities and investors are interested."
It's not the first man-made mountain to be proposed. In 2009, German architect Jakob Tigges wanted to erect a 1,000-metre-high mountain at a disused airport in Berlin. As challenges set in, Tigges has settled for a 60-metre mound.
Cameron and Clegg must now do their moral duty - and save Gary McKinnon | Mail Online
Last updated at 7:58 AM on 27th May 2010
The first acid test for Britain's new government is not the economy, but whether it is capable of an act of simple humanity.
Can it deliver on its repeated promise to end the torment inflicted by the state on Gary McKinnon, the hacker with Asperger's syndrome, whom the Home Office wants to send to lengthy imprisonment and likely suicide in a U.S. jail?
The courtroom cruelty was scheduled to begin again on Monday this week. But Gary has been granted a temporary reprieve by the new Home Secretary Theresa May, who has agreed to reconsider medical evidence on his mental state.
Will Prime Minister David Cameron and Deputy Prime Minister Nick Clegg be able to do their moral duty and save Gary McKinnon?
The reprieve is, of course, welcome. But it is not enough. There is a moral duty - not least on Prime Minister David Cameron and his deputy Nick Clegg, both of whom argued so vociferously on Gary's behalf before the election - to honour that promise and ensure that Gary is never extradited to endure ten years in a U.S. jail.
Last year, Mr Cameron was unequivocal: 'Gary McKinnon is a vulnerable young man, and I see no compassion in sending him thousands of miles away from his home and loved ones to face trial.
'If he has questions to answer, there is a clear argument to be made that he should answer them in a British court.'
Before the election, Damian Green, Cameron's Immigration Spokesman (now Immigration Minister) said psychiatrists believe the extradition 'will amount to a death sentence'. He pointed out that 'it would be illegal to send someone to another country to face an explicit death sentence'.
Nick Clegg rightly made the Gary McKinnon case one of his core campaigning issues. He joined Gary's mother Janis Sharp at demonstrations outside the Home Office and wrote articles in this newspaper arguing for clemency on Gary's behalf. 'The life of a vulnerable man is on the line. Gary McKinnon's case is as serious as that,' he wrote.
Gary McKinnon, who suffers from Asperger's syndrome, is facing extradition to the U.S. on charges of hacking into highly sensitive military computers
'It is the basic duty of a government to protect its citizens. Despite what the U.S. authorities say, Gary McKinnon is no cyberterrorist. He is a computer whiz with a serious medical condition.'
Just five months ago, Nick Clegg stood outside the Home Office alongside Gary's mother, urging the government to halt the extradition.
'It is simply a question of doing the right thing,' he said. 'It is wrong to send a vulnerable young man to his fate in the United States when he could and should be tried here.'
On the strength of argument from these politicians and campaigns like those run in the Daily Mail, many decent people who would otherwise have voted Labour cast their vote for Cameron or Clegg.
Now is the moment for these men of mercy to stand by their fine words and do their democratic duty.
To understand why, let us go back to examine the true injustice of the charges against the so-called cyber-terrorist Gary McKinnon.
In 2002, from a council flat and with a battered first generation laptop, McKinnon hacked into U.S. army computers with a gusto and brilliance attributable to his Asperger's.
He left a polite message of political protest against the post-9/11 Bush administration: 'U.S. foreign policy is akin to government-sponsored terrorism these days.'
He did not realise that the damage he was causing would amount to £350,000. He could have been tried for criminal damage in Britain, where he would have received a compassionate sentence - in all probability a suspended one.
Instead, the Virginia state prosecutors lay in wait for two years until the Extradition Act was changed and then demanded Britain surrender McKinnon for what the courts accept will be an eight to ten-year prison sentence.
From any view this punishment would be cruel and disproportionate, but the Home Office was unmoved. The then Home Secretary Jacqui Smith quite disgracefully refused to give McKinnon even the benefit that Britain insisted upon for the NatWest Three, namely bail when extradited to the U.S., and the right to serve part of the sentence in the UK.
It was then that a leading expert on Asperger's, Dr Simon Baron-Cohen, diagnosed McKinnon's condition and reported that he was likely to commit suicide if extradited.
But that did not bother the Home Office either. It was not that Smith's successor Alan Johnson was incapable of doing the right thing, he was just incapable of working out how.
Mr Clegg with Janis Sharp, mother of Gary McKinnon, at a protest last December
In the 2003 Extradition Act, Parliament had limited the Home Secretary's discretion to refuse extradition to the U.S. to punishment that was 'inhuman and degrading'. These are the weasel words of the European Convention, which cannot apply to Americans (who are not inhuman) or to their prisons (which are no more degrading than ours).
But the uncivil servants intent on harrying McKinnon out of the country have forgotten that Britain has its own Bill of Rights, forged in the Glorious Revolution of 1689 and forbidding punishment that is 'cruel and unusual'.
This law should today protect UK citizens against sanctions that are overly severe by British standards. A ten-year sentence in a foreign jail, imposed on a suicidal man whose crime would, if prosecuted in the UK, probably receive a suspended sentence, is about as cruel and unusual as it can get.
Nick Clegg was not the only Lib Dem to say so: before the election, Chris Huhne (who is now Energy Minister) asked Alan Johnson whether he was ready to 'accept the real risk that you will have the life of a man on your hands'.
Indeed, last year, virtually all the senior Tories and Lib Dems agreed that they saw no compassion in sending Gary McKinnon to America.
So, over to the new coalition government, then.
Its main difficulty will be to override Home Office advisers who have for years fought an unremitting, expensive and merciless battle against this poor man and his indomitable mother.
They will, perhaps, tell their Ministers that if they reverse the decision, the Americans might take them to court for judicial review.
But this is unrealistic: the Obama administration is unlikely to challenge a decision of the new British government. And even if it does, it is unlikely to be successful.
And even if that happens, Parliament is sovereign and can sweep away any adverse court decision simply by passing the Gary McKinnon (Freedom from Extradition) Act (2010).
McKinnon is a rare and talented individual with Asperger's, just like Stieg Larsson's heroine in The Girl With The Dragon Tattoo, who should have been compassionately dealt with eight years ago for reckless hacking.
Yet Home Office officials - Orwell called them 'the striped-trousered ones who rule' - are still out to get him.
In court they intend to argue that because 'he has no history of serious self-harm or suicide attempts', European law cannot save him from ending his life in an American prison.
That may be so.
But British tradition, infused with Portia's admonition in Shakespeare's The Merchant Of Venice that mercy must always season justice, demands that his torment end.
If they do not have the humanity to free McKinnon, this government was elected under false pretences.
Geoffrey Robertson QC is author of the Justice Game.
GARY McKINNON Brown and Cameron's plea to US - YouTube
David Cameron: A World View Interview - YouTube
Nurses hold rallies across the U.S.A on Sept. 1. » Protest In The USA
Posted: September 1, 2011 | By: Protest In The USA
Hundreds of nurses and community members handed out bagged lunches to the hungry and held an “Economy is Giving Us the Blues” concert. This was one of 60 events happening in 21 states across the country. Nurses asked members of Congress to pledge their support for a Wall Street Transaction Tax.
National Day of Action, Sept. 1
10,000 Nurses, Main Street Residents Converge at 61 Congressional Offices in 21 States Sept. 1 – Call for Tax on Wall Street to Heal America
RNs sponsor soup kitchens, street theater, speak-outs on the need for jobs, healthcare, education, housing – and outline plan to pay for it
United States of America
September 1, 2011. 129 photos | 594 views. Items are from between 31 Aug 2011 & 01 Sep 2011.
Flickr - projectbrainsaver
www.flickr.com