Sunday, June 30, 2013

SMB Disaster Recovery: Plan Now or Pay Later

by Michael Krutikov

SMBs have been hearing for years about the need to have a disaster recovery (DR) plan. But this advice largely falls on deaf ears. An alarming number of SMBs, 74 percent, have no plan in place to deal with a disaster that could potentially cost them their livelihoods. But where to start? Many SMBs are unaware of what precisely goes into a disaster recovery plan. Essentially, it’s a matter of understanding where your company is now, where you need to be, and what steps are required to bridge the gap. If that seems overwhelming, creating a plan can be broken down into a few simple steps.

Understanding What You Have and What You Need
First, consider the current state of your business. Take an inventory of your current resources, beginning with hardware – not just desktop computers, but servers, printers, mobile devices and other equipment. Next, assess all the software you are using, especially business-critical applications and databases that allow basic operations for the business to continue uninterrupted. It’s especially important to note any systems or programs that would be difficult or impossible to replace, such as custom-built applications. You should also consider the minimum staff necessary to keep things running until you’re back to business as usual.

With this list, you can begin to determine what must be restored first in case of a disaster, and continue in order of importance. It is also helpful to specify which risks are most likely for your business, considering factors such as geographic location and the natural disasters most likely to occur, as well as your industry and the risk of attack by cyber criminals. One additional advantage of this assessment is that you may find ways to improve your current operational efficiency.

Create a Specific Plan
Once you’ve assessed your company’s current status, you are ready to determine what has to be done, and when, to complete your plan. Within the first three months, you should solicit support from each area of your company and establish a budget for specific action items, creating a well-defined disaster recovery plan. This phase will include steps such as providing for immediate file restoration and dealing with compromised devices. The following three months will see you setting goals for recovery times, such as how soon certain applications need to be made available again. The last six months should be spent preparing for longer-term disasters, such as the physical destruction of your facility in a fire or earthquake, allowing complete restoration of business processes from a secondary site for as long as necessary.


Data Governance Based on Roles and Responsibilities is Key to Avoiding Regulatory Risk

by Jonathan Sander

Data governance is critical to managing the availability, integrity and security of all data across the enterprise. Every organization must comply with today’s copious amounts of external regulations for handling data, and data governance is the discipline that helps the enterprise remain compliant and avoid regulatory risk. A data governance plan defines who is accountable for your unstructured data held in files, folders and shares across NTFS, NAS devices and SharePoint. It also establishes a set of controls and audit procedures that ensure compliance is continuous.

Along with establishing who is accountable for the data, a data governance plan defines the level of access for each of those data stewards. Ideally, this should be based on each employee’s role and responsibilities, and determined by the business stakeholders who have insight into who should have access to different sensitive data, and what kind of risk is posed by that access. The critical need to maintain regulatory compliance has changed the landscape for businesses today. In the past, the business needed IT to perform a task and, as long as the task was executed, nobody really cared how it was done. With today’s transparency and interconnectedness, businesses want governance and oversight to avoid potentially costly compliance breaches.
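
To make the role-based idea concrete, here is a minimal, hypothetical sketch of the kind of check a governance tool might apply: each steward role is given a maximum data-sensitivity level it may touch. The role names and levels are illustrative assumptions, not anything prescribed by the article.

    # Hypothetical role-to-sensitivity ceilings, for illustration only.
    ROLE_MAX_SENSITIVITY = {
        "hr_steward": 3,        # may see personal data
        "finance_steward": 2,   # may see financial data
        "contractor": 1,        # public/internal material only
    }

    def can_access(role: str, data_sensitivity: int) -> bool:
        """Return True if the role's ceiling covers the data's sensitivity."""
        return ROLE_MAX_SENSITIVITY.get(role, 0) >= data_sensitivity

    print(can_access("contractor", 3))   # False - level-3 (personal) data is denied
    print(can_access("hr_steward", 3))   # True

In practice the mapping would be set by the business stakeholders mentioned above and audited continuously, rather than hard-coded.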


Relentless Attacks Demand Continuous Security

by Jason Brvenik

Attackers are using time and patience to their advantage. Yet traditional security technologies can only detect an attack at a point in time, which on its own is limiting when dealing with sophisticated threats that can disguise themselves as safe and become malicious tomorrow, or next week. If that one shot at identifying and blocking a threat is missed, most IT security professionals have no way to continue monitoring a file once it enters the network and to take action if it turns out to be malware.

What’s needed is a new, continuous security model that allows defenders to constantly track, analyze and be alerted to files previously classified as ‘safe’ or ‘unknown’ but subsequently identified as malware. Then, they need to be able to take action to quarantine those files, remediate and create protections to prevent the risk of reinfection.
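
As an illustration of that retrospective approach (a generic sketch of the technique, not the author's or any vendor's implementation; all names and data are hypothetical), a defender can record where every file hash was seen and raise an alert when a later verdict reclassifies one of those hashes as malicious:

    from collections import defaultdict

    # Remember where each file hash was observed, and its current verdict.
    sightings = defaultdict(list)   # sha256 -> hosts that received the file
    verdicts = {}                   # sha256 -> "safe" | "unknown" | "malicious"

    def record_file(sha256, host, verdict="unknown"):
        sightings[sha256].append(host)
        verdicts.setdefault(sha256, verdict)

    def update_verdict(sha256, new_verdict):
        """Return the hosts to quarantine if a previously allowed file turns bad."""
        old = verdicts.get(sha256, "unknown")
        verdicts[sha256] = new_verdict
        if old != "malicious" and new_verdict == "malicious":
            return sightings.get(sha256, [])
        return []

    record_file("abc123...", "laptop-17")            # initially let through as "unknown"
    print(update_verdict("abc123...", "malicious"))  # -> ['laptop-17']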


Why are People Getting Tired of Mainstream Social Media?

by M.K.

All technologies run out of usefulness at some point unless they stay fresh and innovative. While mainstream social networks such as Facebook, Twitter, LinkedIn and others have seen unprecedented growth over the last decade or so, there has not been much innovation to keep users engaged. After all, how many times can you read about your friends' visits to burger joints or coffee runs at Starbucks? Even updates about their jobs and families can grow either boring or overwhelming. And sometimes it's just TMI (too much information) about their lives.

According to a recent study by the Pew Internet and American Life Project, there are clear signs of Facebook fatigue among teenagers. Other studies have found the same trend across the board, as people grow increasingly tired of meaningless posts.

It's not because people don't have enough to talk about. After all, when you meet your friends in person, they can blabber on for hours. If that's the case, then why are they posting simple and stupid comments? The reason is simple: people just don't feel comfortable expressing their true feelings on social media due to the negative implications. If you say something negative about your boss or company, for example, you may get reprimanded or, worse, get fired. If you talk about your political affiliation, you might piss off your colleagues, bosses or even family members. You certainly can't talk about sex, disputes with your significant other, your thoughts on religion/God, your addiction issues, and many other taboo topics.


Cloud ghost schools

type="html" xml:lang="en" xml:base="http://blogs.computerworlduk.com/spannermans-edublog/"> In February Prof Sugata Mitra picked up a $1m prize in LA from TED. What exactly this bloke called Ted was doing in La La land is not totally clear but although he himself comes from Cloud Cuckoo land he is clearly a generous sort.

Thus ended my attempt to explain Cloud Schools and the Technology, Entertainment and Design conference to my mum. Why did I bother? Well simply because Prof Mitra is going to staff his Cloud School with grannies so I thought she might be interested.

To be fair, the Prof too has had problems communicating his vision to his target demographic, who are now convinced he is hiring ghosts to teach their children. Grannies or ghosts: so far, so hilarious, but this is a seriously important project.

You see, Prof Mitra regards it as self-evident that access to the Internet is the key to accessing education for those who cannot access it locally, whether because there are no schools, few or no good teachers, or even no safe roads. More radically, he believes that children can educate themselves if given access to technology ... OK, with a little guidance later on, hence the grannies.

He has form. In 1999 he installed hole-in-the-wall computers in a few Indian slums and watched in amazement as the children found their way around them unaided.

This summer at TEDGlobal in Edinburgh he explains how his new Cloud Schools will work.

We’ll take for granted the futuristic glass kiosks powered by solar cells up mountains, with satellite uplinks and other techy paraphernalia. This is all fun stuff, and the NSA will no doubt get joy from tracking remote terrorists accessing Wikipedia or posting ‘selfies’ on Facebook, but the real genius is as follows.

Most teachers are retired. In the UK there are more qualified teachers not teaching than are practising the profession. Some of the retirees are still sane, have huge amounts of knowledge about stuff and know how to guide children... better still, they are cheap to hire.

Children are better at teaching themselves than schools are at teaching children. Prof Mitra knows that his test 300 will spend their days, at first, just playing games. He also knows that they will move on to more and more complicated things, just because that’s what they do. No-one who grew up in the early PC days can gainsay him. From Turing to Torvalds we have a class of self-taught technocrats.

If you remain sceptical, just look at our attempts to teach computing, ICT and science in schools. Abject failure. Is there a clue here?

At a certain point in development a student needs access to an expert, a mentor, a guru. Schools cannot guarantee to have one on their books, but the Cloud School can. MIT’s edX project is living proof of students’ thirst for access to expertise.

This is a project worth watching. I am sure that it will be a roaring success. I am also sure that it will produce results that we cannot anticipate. Good luck to the ghosts and grannies in the clouds say I.


Prism's existence was not a surprise to everyone

type="html" xml:lang="en" xml:base="http://blogs.computerworlduk.com/infosecurity-voice/">
For years the professional security community has been highlighting how cybercrime has evolved to leverage the advantages of automation. It was only a matter of time before those responsible for defending against this felt they had to do the same. Clandestine surveillance has always been a necessary evil in governed society. What’s new here is the ability to leverage the advantages of automation.

According to the documents revealed by whistleblower Edward Snowden, thanks to the Prism surveillance scheme, the US National Security Agency (NSA) has large-scale access to individuals’ chat logs, stored data, voice traffic, file transfers and social networking data. Whether these records were gleaned legally or illegally is fuelling controversy, as is whether the US has allowed other governments, and particularly GCHQ, access that would be illegal if obtained directly. At this point, the legalities almost seem secondary.

What is shaking public confidence in governments, or even the companies the public suspect could be dragged into this kind of activity, is the fact that the boundaries governing such automated activity are not at all clear. What can the US government see? What about data protection laws? Are we all being watched? Does it make a difference which country you live in?

Foreign Secretary William Hague’s insistence that law-abiding citizens have nothing to worry about may offer little comfort, as he points out that there is no legal way for individual citizens to opt out of such surveillance. I am not sure there ever has been when it comes to public surveillance. But people are less likely to be concerned when the surveillance only affects the few. Now that the technology is there to affect us all and track our every online move, it may be time to clarify some of those boundaries.

This problem is not new. In 1755 Benjamin Franklin said, "they who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety." It’s a sentiment our politicians tend to ignore.

We need an understanding of public expectation. Snowden alleges that the NSA is collecting as much information as possible by default because this is the most efficient way to manage the task. Are we as a society willing to tolerate this in the name of public safety? The desire to protect individual privacy which undermined the UK’s failed ID card scheme would suggest perhaps not. We are however talking about international cyberspace; the UK perspective is not the only one to consider. Neither is that of the US.

Further, the use or even abuse of surveillance is not limited to governments. Companies too are quite likely taking more advantage of their automated ability to track our every online move than they would like us to know. Data protection laws do guide the use of the information, but the amount of information that can be amassed, and how it can be manipulated, is poorly understood by the public, who as a result have not yet had the chance to make their expectations known. Boundaries here too are unclear.

Whether we agree with Snowden’s actions or not, they offer a reminder that, despite becoming an integral part of our developed world, cyberspace is still a new frontier. As we set out to conquer it, we must make the effort to articulate what should be done, not just embrace what can be done. Until the rules are known, everyone will continue to make them up as they go along.

John Colley, managing director EMEA

Saturday, June 29, 2013

Oracle embraces the broader cloud landscape

type="html" xml:lang="en" xml:base="http://blogs.computerworlduk.com/infrastructure-and-operations/"> It's easy to accuse Oracle of trying to lock up its customers, as nearly all its marketing focuses on how Oracle on Oracle (on Oracle) delivers the best everything, but today Ellison's company and Microsoft signed a joint partnership that empowers customer choice and ultimately will improve Oracle's relevance in the cloud world.

The Redwood Shores, California-based software giant signed a key partnership with Microsoft that endorses Oracle on Hyper-V and Windows Azure, and which includes not just bring-your-own licenses but pay-per-use pricing options. The deal came as part of a Java licensing agreement by Microsoft for Windows Azure, which should help Redmond increase the appeal of its public cloud to a broader developer audience.

Forrester's Forrsights Developer Survey Q1 2013 shows that Java and .Net are the #2 and #3 languages used by cloud developers (HTML/Javascript is #1). The Java license does not extend to Microsoft's other products, BTW.

This deal gives Microsoft clear competitive advantages against two of its top rivals as well. It strengthens Hyper-V against VMware vSphere, as Oracle software is only supported on Oracle VM and Hyper-V today. It gives Windows Azure near-equal position against Amazon Web Services (AWS) in the cloud platform wars, as the fully licensed support covers all Oracle software (customers bring their own licenses), and pay-per-use licenses will be resold by Microsoft for WebLogic Server, Oracle Linux, and the Oracle database.

AWS has a similar support relationship with Oracle and resells the middleware, database, and Oracle Enterprise Manager, plus offers RDS for Oracle, a managed database service.

Bring-your-own-license terms aren't ideal in the per-hour world of cloud platforms, so the pay-per-use licensing arrangements are key to Oracle's cloud relevance. While this licensing model is limited today, it opens the door to a more holistic move by Oracle down the line.

Certainly Oracle would prefer that customers build and deploy their own Fusion applications on the Oracle Public Cloud, but the company is wisely acknowledging the market momentum behind AWS and Windows Azure and ensuring Oracle presence where its customers are going. These moves are also necessary to combat the widespread use of open source alternatives to Oracle's middleware and database products on these new deployment platforms.

While we can all argue about Oracle's statements made in last week's quarterly earnings call about being the biggest cloud company or having $1B in cloud revenue, it is clearly no longer up for debate as to whether Oracle is embracing the move to cloud. The company is clearly making key moves to cloud-enable its portfolio. Combine today's moves with its SaaS acquisitions, investments in cloud companies and its own platform as a service, and the picture clearly emerges of a company moving aggressively into cloud.

I guess CEO Ellison no longer feels cloud is yesterday's business as usual.

Posted by James Staten


Download Hosts Withdrawing

type="html" xml:lang="en" xml:base="http://blogs.computerworlduk.com/simon-says/"> English: A download symbol.

With news this week that GitHub is banning storage of any file over 100MB and discouraging files larger than 50MB, their retreat from offering download services is complete. It's not a surprising trend; dealing with downloads is unrewarding and costly. Not only is there a big risk of bad actors using download services to conceal malware downloads for their badware activities, but anyone offering downloads is also duty-bound to police them at the behest of the music and movie industries or be treated as a target of their paranoid attacks. Policing for both of these, for malware and for DMCA violations, is a costly exercise.
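
For maintainers wondering whether a repository would trip those limits, here is a small, hypothetical sketch (not from the original post, and not a GitHub tool) that walks a working tree and flags files above the discouraged and hard limits quoted above:

    import os

    HARD_LIMIT = 100 * 1024 * 1024   # files over 100MB are rejected
    SOFT_LIMIT = 50 * 1024 * 1024    # files over 50MB are discouraged

    def flag_large_files(root="."):
        for dirpath, dirnames, filenames in os.walk(root):
            if ".git" in dirnames:
                dirnames.remove(".git")          # skip repository metadata
            for name in filenames:
                path = os.path.join(dirpath, name)
                size = os.path.getsize(path)
                if size > HARD_LIMIT:
                    print(f"REJECTED  {path} ({size / 2**20:.1f} MB)")
                elif size > SOFT_LIMIT:
                    print(f"WARNING   {path} ({size / 2**20:.1f} MB)")

    if __name__ == "__main__":
        flag_large_files()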

As a consequence we've seen a steady retreat from offering downloads, even by those claiming to serve the open source community. First GitHub bowed out of offering the service, claiming that it was "confusing" for the clients. More recently Google followed suit, bringing Google Code Download services to an end. They stated that “downloads have become a source of abuse, with a significant increase in incidents recently”. Community reactions to this have been mixed.

GitHub didn’t have an alternative plan for its users and clearly has no desire to be a full-service community host. Google suggested using its Drive cloud file storage service to host files, though this is clearly far from ideal as, for a start, no analytics are available for downloads. Small projects are left with a rapidly decreasing number of options. They could of course pay for S3, but for a free download solution SourceForge seem to be the only high-profile answer. SourceForge are doing everything in their power to make it easy for users of Google Code and GitHub to transition across to their service, and GitHub have even included a link to SourceForge in their help pages, recommending them as a viable alternative. SourceForge assures us that they have no intention of shutting down their upload/download services at all.

SourceForge providing an alternative is potentially handy for those whose projects would otherwise be held up by this lapse in services, and SourceForge will no doubt welcome the wave of new users. The issue shouldn’t be coming up at all, though. Confusion for, and abuse by, users may sound like reasonable pretexts, but perhaps the real problem encountered by both the closing services is a somewhat less reasonable one. There’s a growing expectation that they should regulate the downloads, acting the part of police on behalf of copyright holders.

The pressure to behave that way, whether through a desire to preserve a safe harbour status or simply to tread carefully in the eyes of the law, is an unreasonable hack that appears to mend copyright law online but in fact abdicates the responsibility of legislators to properly remake copyright law for the meshed society and over-empowers legacy copyright barons. These changes to downloads are an inconvenience for open source developers, but should serve as a warning to the rest of us that the copyright system is beyond simple patching.

Follow Simon as @webmink on Twitter and Identi.Ca and also on Google+


How Can Any Company Ever Trust Microsoft Again?

type="html" xml:lang="en" xml:base="http://blogs.computerworlduk.com/open-enterprise/"> Irrespective of the details of the current revelations about US spying being provided by Edward Snowden in the Guardian, there is already a huge collateral benefit. On the one hand, the US government is falling over itself to deny some of the allegations by offering its own version of the story. That for the first time gives us official details about programmes that before we only knew through leaks and rumours, if at all. Moreover, the unseemly haste and constantly-shifting story from the US authorities is confirmation, if anyone still needed it, that what Snowden is revealing is important - you don't kick up such a fuss over nothing.

But perhaps even more crucially, other journalists have finally been shamed into asking some of the questions they ought to have asked years and even decades ago. This has resulted in a series of extremely interesting stories about NSA spying, many of which contain ancillary information that is just as important as the main story. Here's a great example that appeared over the weekend on the Bloomberg site.

Among other things, it is about Microsoft, and the extent to which it has been helping the NSA spy on the world. Of course, that's not a new fear. Back in 1999, it was asserted that backdoors had been built into Windows:

A careless mistake by Microsoft programmers has revealed that special access codes prepared by the US National Security Agency have been secretly built into Windows. The NSA access system is built into every version of the Windows operating system now in use, except early releases of Windows 95 (and its predecessors). The discovery comes close on the heels of the revelations earlier this year that another US software giant, Lotus, had built an NSA "help information" trapdoor into its Notes system, and that security functions on other software systems had been deliberately crippled.

More recently, there has been concern about Skype, bought by Microsoft in May 2011. In 2012, there were discussions about whether Microsoft had changed Skype's architecture in order to make snooping easier (the company even had a patent on the idea). The recent leaks seem to confirm that those fears were well founded, as Slate points out:

There were many striking details in the Washington Post’s scoop about PRISM and its capabilities, but one part in particular stood out to me. The Post, citing a top-secret NSA PowerPoint slide, wrote that the agency has a specific “User’s Guide for PRISM Skype Collection” that outlines how it can eavesdrop on Skype “when one end of the call is a conventional telephone and for any combination of 'audio, video, chat, and file transfers' when Skype users connect by computer alone.”

But even that pales into insignificance compared to the latest information obtained by Bloomberg:

Microsoft Corp., the world’s largest software company, provides intelligence agencies with information about bugs in its popular software before it publicly releases a fix, according to two people familiar with the process. That information can be used to protect government computers and to access the computers of terrorists or military foes.

Redmond, Washington-based Microsoft (MSFT) and other software or Internet security companies have been aware that this type of early alert allowed the U.S. to exploit vulnerabilities in software sold to foreign governments, according to two U.S. officials. Microsoft doesn’t ask and can’t be told how the government uses such tip-offs, said the officials, who asked not to be identified because the matter is confidential.

Frank Shaw, a spokesman for Microsoft, said those releases occur in cooperation with multiple agencies and are designed to give government “an early start” on risk assessment and mitigation.

So let's think about that for a moment.

Companies and governments buy Microsoft's software, depending on the company to create programs that are secure and safe. No software is completely bug-free, and serious flaws are frequently found in Microsoft's code (and in open source, too, of course.) So the issue is not about whether software has flaws - every non-trivial piece of code does - but how the people who produce that code respond to them.

What companies and governments want is for those flaws to be fixed as soon as possible, so that they can't be exploited by criminals to wreak damage on their systems. And yet we now learn that one of the first things that Microsoft does is to send information about those vulnerabilities to "multiple agencies" - presumably that includes the NSA and CIA. Moreover, we also know that "this type of early alert allowed the U.S. to exploit vulnerabilities in software sold to foreign governments".

And remember that "foreign governments" mean those in EU countries as well as elsewhere (the fact that the UK government has been spying on "friendly" countries emphasises that everyone is doing it.) Moreover, it would be naïve to think that the US spy agencies are using these zero-day exploits purely to break into government systems; industrial espionage formed part of the older Echelon surveillance system, and there's no reason to think that the US will restrain itself nowadays (if anything, things have got far worse.)

That means it's highly likely that vulnerabilities in Microsoft products are routinely being used to break into foreign governments and companies for the purpose of various kinds of espionage. So every time a company installs a new patch from Microsoft to fix major flaws, it's worth bearing in mind that someone may have just used that vulnerability for nefarious purposes.

The implications of this are really rather profound. Companies buy Microsoft products for many reasons, but they all assume that the company is doing its best to protect them. The latest revelations show that is a false assumption: Microsoft consciously and regularly passes on information about how to break into its products to US agencies. What happens to that information thereafter is, of course, a secret. Not because of "terrorism", but because almost certainly illegal attacks are being made against countries outside the US, and their companies.

That is nothing less than a betrayal of the trust that users place in Microsoft, and I wonder how any IT manager can seriously recommend using Microsoft products again now that we know they are almost certainly vectors of attacks by US spy agencies that potentially could cause enormous losses to the companies concerned (as happened with Echelon.)

But there's another interesting angle. Although not much has been written about it - including by me, to my shame - a new legislative agreement dealing with online attacks is being drawn up in the EU. Here's one aspect of it:

The text would require member states to set their maximum terms of imprisonment at not less than two years for the crimes of: illegally accessing or interfering with information systems, illegally interfering with data, illegally intercepting communications or intentionally producing and selling tools used to commit these offences.

"Illegally accessing or interfering with information systems" seems to be precisely what the US government is doing to foreign systems, presumably including those in the EU too. So that would indicate that the US government will fall foul of these new regulations. But maybe Microsoft will too, since it is clearly making the "illegal access" possible in the first place.

And there's another aspect. Suppose that the US spies used flaws in Microsoft's software to break into a corporate system and to spy on third parties. I wonder whether companies might find themselves accused of all sorts of crimes about which they know nothing, and face prosecution as a result. Proving innocence here would be difficult, since it would be true that the company's systems were used for spying.

At the very least, that risk is yet another good reason never to use Microsoft's software, along with all the others that I have been writing about here for years. Not just that open source is generally cheaper (especially once you take into account the cost of lock-in that Microsoft software brings with it), better written, faster, more reliable and more secure, but that above all, free software respects its users, placing them firmly in control.

It thus frees you from concerns that the company supplying a program will allow others secretly to turn the software you paid good money for against you to your detriment. After all, most of the bug-fixing in open source is done by coders that have little love for top-down authority, so the likelihood that they will be willing to hand over vulnerabilities to the NSA on a regular basis, as Microsoft does, must be vanishingly small.

Follow me @glynmoody on Twitter or identi.ca, and on Google+


Porn Summit Threatens Britain

type="html" xml:lang="en" xml:base="http://blogs.computerworlduk.com/simon-says/"> The government clearly wishes to be seen to be doing something about the issues of children viewing pornography and of child pornography. To this end they have called a summit, to be chaired by Culture Secretary Maria Miller and attended by major Internet service providers including BT, EE, Facebook, Google, Microsoft, O2, Sky, TalkTalk, Three, Twitter, Virgin Media, Vodafone and Yahoo!  Miller will aim to promote her view that “widespread public concern has made it clear that the industry must take action” and likely push for the Prime Minister’s stated objective to "put the heat on" ISP’s to prioritise the filtering and blocking of obscene and indecent material.

I’m sure you’ve seen some of the holes in this approach already. There are several glaringly obvious flaws with the very premise of this summit, which is why I’m confident in stating that any solution devised by a summit built on this foundation is bound to harm the internet along with the freedom of its users.

First of all, if the summit is intended to regulate content, why has Mrs Miller invited only ISPs? That's like only inviting postmen to a summit about hate mail. Yes, some of these companies have attempted to make concessions to the government's approach by posting warning messages when certain sites are accessed or looking into filtering options, but they are making these ineffective gestures merely to assuage government zealotry. Something must be done. This is something. Therefore let’s do it.

They are unable to make a real contribution without infringing heavily on the rights and freedoms of other internet users because they are not the group responsible for the offending material. This is a fact to which politicians on both sides of the aisle seem to be impervious. No matter how the ISPs try to explain the logical holes in the argument, Miller and her ilk continue to assert that ISPs should be held responsible for the content they carry.

I called filtering an “ineffective gesture” in the previous paragraph, and that’s exactly what it is. Porn filters are impossible because porn is subjective and filters are absolute. Demanding porn filters be imposed on all ISP customers is to demand use of a technology that randomly blocks arbitrary content or, worse, imposes the selective view of unaccountable individuals. Despite the trust placed by politicians in filtering systems like those used by the various mobile carriers, it’s clear that filters do not and cannot work.

Just this week, Open Rights Group has published details of ridiculous failures of filtering by the major providers. When Maria Miller talks about “filtering”, it’s these failures she wants to see applied by default to every internet connection in Britain. Worse, since these systems are all managed and imposed by private companies, there’s no oversight and no recourse for their customers. Getting a false block removed is almost impossible. Since every connection will have them, even switching providers is no remedy.

The meeting demonstrates clearly that the government has no clue what the internet does or how important it is to society. They appear to model it as a TV system, with regulated providers sourcing material for passive viewers. This overlooks its main value to society as a global nervous system in which contribution of content is as universal as its consumption. Legislators are still trapped by special-interest pleading over selected uses of the internet as a one-way channel for content, and as a consequence are contemplating laws that would utterly cripple that nervous system.

Their solutions all assume the providers select the content and can be instructed to do it differently. We’re all well aware that this is not the case, and that attempts to make it so will cause orders of magnitude more harm than they prevent. Long ago we decided the solution to hate mail was not to make the postman responsible for it. Why are today’s politicians insisting on the equivalent approach for the internet?

Follow Simon as @webmink on Twitter and Identi.Ca and also on Google+


Eliminate IT spend on shelfware

type="html" xml:lang="en" xml:base="http://blogs.computerworlduk.com/management-briefing/"> A recent Flexera Software survey, prepared jointly with IDC, finds that a significant proportion of enterprises’ software spend is associated with unused software, commonly referred to as ‘shelfware’. More than half (56%) of the enterprises polled said that 11% or more of their software spend in the last 12 months is associated with unused software - clearly wasted expenditure.

An earlier survey confirmed that software over-use, or non-compliance, is also creating waste in the form of software licence audit ‘true-up’ penalties (software used, but not paid for).

While corporations are leveraging software to improve efficiency, shelfware and non-compliant use are IT blind spots. With software licences and maintenance typically representing one third of overall IT budgets, corporations that optimise can save up to 25% of their software spend by eliminating shelfware and non-compliant software use.

Below is a list of the most common mistakes organisations make:


Making ad hoc purchases - Lack of controls over software purchases is common and leads to over-buying when end-users buy software directly, rather than purchasing centrally under a volume purchase agreement.
Not fully leveraging Global Enterprise Agreements - Over-buying occurs when, for example, software is purchased under a regional agreement when the same software is available under a corporate global Enterprise Agreement.
Not tracking installation and use - By not tracking installations of software and its usage, organisations unnecessarily pay maintenance on software that is not being used.
Not tracking and analysing detailed usage data for certain types of licences - Failing to analyse concurrent and named-user licences to determine the optimal number and type for each results in over-buying (see the sketch after this list).
Not tracking renewal dates - Failing to keep track of software licence agreements and renewal dates makes organisations vulnerable to lapses in maintenance, which can prove costly.
Lack of communication between departments - IT operations often don’t work with procurement to ensure that software is installed and used in accordance with the entitlements, causing licence compliance issues.
Not purchasing maintenance at the right time - The right time to purchase maintenance is when organisations are expecting to upgrade and a new release of the software is expected during the term of the maintenance agreement.
Not ascertaining strategic requirements - Lack of a corporate policy to define and manage approved products and standardise on them increases support costs and curtails the benefits of economies of scale.
Not applying product use rights - Product use rights define how software licences can be consumed. Without applying these, enterprises are unable to optimise licences and accurately ascertain whether more licences are in fact required.
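
As a rough illustration of the concurrent-licence analysis mentioned in the list above (a hypothetical sketch, not Flexera's method; the log format and numbers are invented), peak simultaneous usage can be computed from check-out and check-in events to see how many licences are actually needed:

    # Hypothetical usage log: (timestamp, +1 for check-out, -1 for check-in).
    events = [
        ("2013-06-01 09:00", +1),
        ("2013-06-01 09:05", +1),
        ("2013-06-01 09:30", -1),
        ("2013-06-01 10:00", +1),
        ("2013-06-01 11:00", -1),
        ("2013-06-01 12:00", -1),
    ]

    def peak_concurrent(events):
        """Return the maximum number of licences in use at any one time."""
        in_use = peak = 0
        for _, delta in sorted(events):      # process in time order
            in_use += delta
            peak = max(peak, in_use)
        return peak

    print(peak_concurrent(events))   # -> 2

If the organisation owns ten concurrent licences but the measured peak is two, the other eight are shelfware candidates.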

In light of these common mistakes, and the significant risk and cost associated with them, the following best practices are recommended for any organisation seeking to minimise software spend and maximise use of existing licences:

Define Software Licence Optimisation Policies
It’s critical to define and implement asset and licence optimisation policies throughout the business. This means that there must be specific policies on every aspect of licence management, with an aim to reduce costs and limit the business and legal risk related to the ownership of software.

Focus on the Major Software Publishers
Businesses should focus their software asset management and licence optimisation efforts on the highest-value and highest-spend applications, as these pose the most risk of software audits. A true-up with one or more of these represents one of the largest potential unbudgeted expenses if businesses find themselves out of licence compliance.

Carefully Monitor Virtual Environments
Software licensing is often under-managed in virtualised environments. The risk of licence non-compliance is greatly increased in virtual server environments because:


It’s easy to create and move new virtual machines running copies of operating systems and software applications;
Publishers’ licensing rules for virtual environments add significant complexity to the already complicated task of managing software licences.

Automating software licence optimisation is the only way to manage virtual environment licence complexity.

Understand Software Publisher Licence Rules and Product Use Rights
Software use rights can significantly impact an organisation’s licence position. Simply put, product use rights define where, how and by whom a piece of software can be installed and/or used. Businesses should take full advantage of use rights, including their rights to upgrade, rights of second use, virtual use, etc. Equally, it is crucial that software usage restrictions are understood to stay in compliance.

Fundamentally, the complexity of licence management is increasing further with virtualisation, cloud computing and mobility. Enterprises must adopt software licence optimisation as a discipline by implementing technology that automates the process of collecting data and applying licence entitlement rules across vendors and licence models to generate an optimised position.

Posted by Jill Powell, Client Services Director, Flexera Software



Friday, June 28, 2013

What business can learn from analytics in sport

type="html" xml:lang="en" xml:base="http://blogs.computerworlduk.com/si-and-tech-insights/"> Over the past decade sports have taken analytics to heart, using its tools to bring a more scientific approach to tactics, player management and fan engagement. The use of analytics is quickly becoming a prerequisite for success in the fiercely competitive world of professional sport, with teams as diverse as the St. Louis Cardinals baseball team and Chelsea football club working through tens of thousands of data points to help form their strategies.

As sports become more competitive, the difference between winning and losing can be measured in fractions, and one area that has a significant impact is the product at the centre of the game. This can be either mechanical, e.g. an F1 racing car, or human, i.e. the sports players themselves. Taking the former, the vehicle that arrives at the start grid is the output of years of development in wind tunnels, computer simulations and the test track itself, where terabytes of information have to be analysed to establish the factors that give an edge.

Once on the track, ongoing real-time monitoring ensures that edge can be maintained. An example on the human side is that NFL players are now wearing biometric clothing, so that every aspect of their game can be assessed to provide insight into how they can change the way they play, or to provide input to optimise their training regimes, i.e. improving the product.

Similarly, at the heart of a business is the product or commodity it deals in, and markets are now getting crowded and competition fierce. Those companies that can learn from the cutting-edge analytics in sports to reduce time to market with shorter product cycles, react early to changes in market conditions or respond to telemetric or sensor feedback to improve operational efficiency will, like all successful teams, rise to the top of their game.

So, as rugby fans sit down to watch The Lions Tour, the question for many businesses and IT decision makers should be ‘what can we learn from the use of analytics in sports to help enhance our own performance?’

In order to ensure a productive team, existing talent has to be nurtured and new talent has to be discovered to allow the continual replenishment of skill. Whilst the current players are a known entity, and teams have control over which metrics can be captured and the depth of analysis that can be undertaken, the identification of new players provides more of an analytical challenge.

Two options exist: the purchase of a known entity, which commands a high price, and the purchase of a rookie with potential, which comes at a lower price point. The role of analytics is to identify that potential, reduce the risk of a poor performer and give the benefit of a lower overhead. Nowhere is this better illustrated than in the example of the Oakland A’s, the Moneyball team.

With a budget a third of that of the major league teams, the Oakland A’s were forced to identify players who were undervalued or undiscovered and, by putting analytics at the heart of selection, were able to put together a team that competed at the highest level - an approach that has been replicated by many subsequent teams.
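
To show the Moneyball idea in miniature (a hypothetical sketch with invented numbers, not the A's actual model), players can be ranked by a performance measure delivered per unit of salary, which surfaces undervalued candidates:

    # Invented scouting data: (player, on-base percentage, salary in $m).
    players = [
        ("Star Veteran",       0.360, 8.0),
        ("Undervalued Rookie", 0.345, 0.5),
        ("Average Free Agent", 0.330, 4.0),
    ]

    # Rank by on-base percentage delivered per million dollars of salary.
    ranked = sorted(players, key=lambda p: p[1] / p[2], reverse=True)

    for name, obp, salary in ranked:
        print(f"{name:20s}  OBP {obp:.3f}  ${salary:.1f}m  value {obp / salary:.3f}")
    # The rookie tops the list despite a lower raw OBP.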

As in sports, identifying the top talent and ensuring it remains engaged is the key to a business’s success. However, suitable candidates are increasingly scarce - ironically, one of the scarcest areas being analytics itself.

Firms therefore need to look at the role of analytics in identifying attributes that indicate success, and then use techniques seen in sports to search the reams of external data available to identify potential candidates. Once candidates are employed, the companies that are leading on analytics increasingly use it to ensure high productivity, engagement and retention.

As shown in a recent survey, businesses are increasingly looking to turn to high-end analytics to drive a competitive edge, with nearly a threefold increase in the number looking to use predictive analytics. With the boundaries between sports and business rapidly blurring, there is much to be learnt from the increasingly sophisticated use of analytics in sports that can be transferred into business, where analytics needs to move from being a retrospective tool to a forward-looking one. Analytics are here to stay in business and sports and are becoming an integral part of success in both areas.

Posted by Will Gatehouse, Accenture Big Data Lead for EAL

GCHQ Revelations Destroy Case for Snooper's Charter

type="html" xml:lang="en" xml:base="http://blogs.computerworlduk.com/open-enterprise/"> So the revelations from Edward Snowden keep on coming, exposing ever-more profound attacks on privacy and democracy in the UK and elsewhere. News that GCHQ is essentially downloading, storing and searching through the entire flow of Internet traffic that comes into and goes out of the UK without any specific warrant to do so is one side of that. That seems to be taking place through an extremely generous interpretation of the out-of-date RIPA law that is supposed to bring some level of accountability to just this sort of thing. The fact that it doesn't shows that we must reform RIPA and make it fit for the Internet age.

That should be a priority for the future, but here I want to concentrate on a more pressing threat: the Snooper's Charter. Despite the fact that it is disproportionate, will create additional risks of private data being misused, and simply won't work, the usual authoritarians on both the Right and Left of politics are still calling for it to be brought in. But prompted by the leaks about GCHQ's activities, "sources" have been revealing to The Guardian some interesting facts beyond Snowden's information that have a direct bearing on the Snooper's Charter:

Last year, the government was mired in difficulty when it tried to pass a communications bill that became known as the "snoopers' charter", and would have allowed the bulk interception and storage of UK voice calls and internet traffic. The source says this debate was treated with some scepticism inside the intelligence community - "We're sitting there, watching them debate the snoopers' charter, thinking: 'Well, GCHQ have been doing this for years'."

In other words, the UK government has been playing us for mugs - pretending that it desperately needs all that private information because "terrorism", when in fact it already has access to it all, but under a shadowy programme that clearly stretches legality to breaking point. What those in power want in fact is not a capability that they already have, simply a legal framework for it.

But there's another interesting statement in The Guardian story quoted above:

The UK source challenges the official justification for the programme; that it is necessary for the fight against terrorism and serious crime: "This is not scoring very high against those targets, because they are wise to the monitoring of their communications. If the terrorists are wise to it, why are we increasing the capability?

This is crucially important: the source, presumably within GCHQ or at least with deep knowledge of what is going on there, admits that even with this totality of knowledge, the law enforcement agencies are "not scoring very high" against the traditional targets - terrorism and serious crime. That's because, as I and many others have pointed out, the bad people know how to get around this stuff, which means that it only affects the law-abiding. And if the Snooper's Charter is finally pushed through, and the current activities are put on a legal footing, that situation will not change one jot: in other words, the justification for snooping on all of us, all of the time, will be as weak and insubstantial then as now.

Having all this information will not allow the police to combat terrorism or serious crime any better. Indeed, I suspect it will hinder them, because increasing the size of the haystack does not help find the needles. Far better for the outrageous sums that will be necessary to fund the implementation of the Snooper's Charter to be spent where they are needed: on bolstering conventional policing and intelligence work, not on chasing this insane dream of "total surveillance".

Moreover, as well as grossly exaggerating the supposed benefits of snooping on us all, the government of course minimises the very real risks. Again, it is worth reading the comment from someone who has knowledge of what is being done at GCHQ, published by The Guardian:

Beyond the detail of the operation of the programme, there is a larger, long-term anxiety, clearly expressed by the UK source: "If there was the wrong political change, it could be very dangerous. All you need is to have the wrong government in place. It is capable of abuse because there is no independent scrutiny."

This is why the "nothing to hide, nothing to fear" brigade are so naïve. With the wrong government in place, the immense power that total surveillance brings could and would be abused to ensure that it stayed in power, and even those with "nothing to hide, nothing to fear" would pay the price in terms of lost liberty. The fact that GCHQ has been able to set up a system that spies on all incoming and outgoing communications carried by fibre optics, without any real oversight, means that things are already out of control. Thanks to Snowden and The Guardian, at least we know this fact; if we don't now stop the Snooper's Charter once and for all and also bring in a system of real oversight and control for the GCHQ's activities, we have only ourselves to blame for what might one day happen.

Follow me @glynmoody on Twitter or identi.ca, and on Google+


Forrester Wave: Public cloud platforms -- the winner is...

type="html" xml:lang="en" xml:base="http://blogs.computerworlduk.com/infrastructure-and-operations/"> First off, we didn’t take what might be construed as the typical approach, which would be to look either at infrastructure as a service (IaaS) or platform as a service (PaaS) offerings.

We combined the two, as the line between these categories is blurring, and historical category leaders have added either infrastructure or platform services that place them where they now straddle these lines.

Further, many people have assumed that all developers will be best served by PaaS products and ill-served by IaaS products. Our research has shown for some time that isn't so:

Many developers get value from IaaS because it is so flexible, while PaaS products are generally too constraining. The -aaS labels overlook the actual capabilities of the services available to developers. All PaaS products are not the same; all IaaS products are not the same. Not all developers are the same. Devs will use the services (plural) with the best fit to their skills, needs, and goals.

The reality we find with enterprises is commonly a mixing of the two classes. Those who prefer PaaS often desire the freedom to drop down to the infrastructure layer when they feel the need for stronger configuration control. The mixing of the two is also highly common in the form of modern applications that mix virtualised workloads with abstracted PaaS executables.

And there isn’t just a single developer audience being served here. Our analysis looked at the market of cloud platform leaders from the point of view of four potential customers:

Rapid devs value graphical, automated tools for creating applications and see public cloud platforms as a fresh break from more-limiting business process modelling tools, with the potential to yield massive gains in the quantity, velocity and quality of application delivery. They rarely desire - and often lack the skills necessary - to write complex code, control virtual infrastructure or configure middleware.

Coders want to program, not manage infrastructure. They want to concentrate on building complex applications and will mostly work in an abstracted environment. They often need to make configuration decisions to get the performance and capabilities they seek, so they want access to the IaaS layer, but rarely do they want to take on management of the infrastructure configuration.

DevOps pros are expert programmers who want control over the configuration, the platform, the application server, database and virtual infrastructure. They don’t like graphical tools and other abstractions that impede access to all of the platform’s “tuning knobs”.

Enterprise development managers employ all three developer types, who increasingly use a variety of languages and frameworks. Thus for many enterprises the best choice of a public cloud platform will be a service (or a portfolio of platforms) that best addresses a mix of the above developer types working together on cloud projects.

Through these separate lenses, the landscape of public cloud choices takes on vastly different hues and yields different rankings for the vendors. Thankfully, the Forrester Wave tool offers clients the flexibility to adjust the criteria weightings in our analysis to reflect your environment and needs, and to reflow the rankings. So we leveraged this to present Wave findings for each group.
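
As a toy illustration of how reweighting changes the rankings (a hypothetical sketch with invented criteria, weights and scores; it is not the actual Wave model or data), the same vendor scores can produce different leaders once the weights reflect a different audience:

    # Invented vendor scores per criterion (0-5 scale).
    scores = {
        "Vendor A": {"abstraction": 5, "infra_control": 2, "middleware": 4},
        "Vendor B": {"abstraction": 2, "infra_control": 5, "middleware": 5},
    }

    # Different audiences weight the same criteria differently.
    weights = {
        "rapid_dev":  {"abstraction": 0.6, "infra_control": 0.1, "middleware": 0.3},
        "devops_pro": {"abstraction": 0.1, "infra_control": 0.6, "middleware": 0.3},
    }

    def rank(audience):
        w = weights[audience]
        totals = {v: sum(s[c] * w[c] for c in w) for v, s in scores.items()}
        return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

    print(rank("rapid_dev"))    # Vendor A leads for the abstracted audience
    print(rank("devops_pro"))   # Vendor B leads once infrastructure control dominates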

As you might expect, the vendor landscape for the Rapid Devs is the most askew from the rest, due to the more abstracted needs of this audience. They really aren’t candidates to leverage a pure or mostly IaaS solution. For this audience, Microsoft Windows Azure came out ahead for its breadth of capabilities, broad applicability and robustness.

Through the lenses of the coder, the DevOps buyer and the enterprise application development & delivery (AD&D) manager, Amazon Web Services’ relentless rollouts of middleware, infrastructure and managed services have provided a wealth of value to these buyers.

But this is far from a two-horse race. The cloud platforms from CloudBees, Cordys, EngineYard, IBM, Mendix, Miosoft, Rackspace and Salesforce all proved to be strong choices for the different audiences mirrored in our analysis.

Forrester clients can now access the full Wave report from our web site, including the modifiable Wave tool used to customise the criteria rankings. We highly encourage you to do this so you can examine the market from the point of view of your own organisation. Forrester Leadership Board members can go one step deeper into our analysis, as videos of each vendor’s Wave demonstrations will soon be available to you exclusively in the FLB Community.

But where’s the analysis from the infrastructure & operations (I&O) buyer’s point of view? There isn’t one - because you aren’t the customer of these solutions. You may ultimately become the buyer, and you certainly will play a role in the operations of your company’s public cloud tenancy and applications, but your analysis of the public cloud should start with understanding how well these platforms serve the needs of your internal customers - the developers. Use the adoption of cloud as a means of fitting yourself into the DevOps movement.

Your care-abouts were not forgotten in this analysis, however. Many of the criteria used in the Wave reflect I&O needs, such as operational transparency and certifications, security architecture and features, administrative tools, and role-based access. There are even criteria looking at whether these solutions can be brought into your private cloud or offer non-cloud hybrid options such as traditional hosting, managed services and colocation.

For a more in-depth preview of this report, please join Forrester VP and Principal Analyst John Rymer and me for a client webinar on this Wave on Tuesday, June 18th. You can register for this call here.

Posted by James Staten

The Forrester Wave™: Enterprise Public Cloud Platforms, Q2 2013




View the original article here

Clear Thinking Needed in a Cloudy World

type="html" xml:lang="en" xml:base="http://blogs.computerworlduk.com/open-enterprise/"> Last week I wrote about the perils of using proprietary software, where companies regularly hand over zero-day vulnerabilities to the US authorities who then go on to use them to break into foreign systems (and maybe domestic ones, too, but they're not owning up to that, yet....). Of course, cloud-based solutions are even worse, as we've known for some time. There, you are handing over all your data to the keeping of a company that may be on the receiving end of a secret US government order to pass it on to them - perhaps with necessary encryption keys too.

Against that background, this looks curious:

Eighteen months on and the Houses of Parliament is now in the process of moving a number of applications to the public cloud as part of plans to create a ‘digital parliament’, while making budgetary savings of 23 percent over four years. This includes a deal to migrate to Microsoft Office 365.

Er, that wouldn't be the same Microsoft as this lot, would it?

“The big outstanding element was data sovereignty,” said Miller. “We needed to know what was happening to that data in the cloud, and that anything that happened to that data was in our control.”

She continued: “We have been looking in a lot of detail at the workings of the Patriot Act in particular, and have had a lot of help from Microsoft in looking at how the Patriot Act in America might involve any services that we put into a cloud.”

Oh, look, there's Microsoft again, offering completely objective advice about how it would never ever hand over UK customer data to the NSA. Except when it is told to, of course...

Fortunately, the Houses of Parliament IT people do seem to have been reading the news recently:

In addition, reports of the unofficial access to servers through the US National Security Agency's Prism scheme were taken into consideration. However, it was found that there was no reason to reassess plans to move data into the cloud, and overall the security benefits of using the cloud were clear.

“We were thinking we have to go back and check our work [following the Prism reports], and make sure that what we have done to measure the risk is adequate to deal with the knowledge that is public and not so public about the American government’s use of data,” Miller said. “In fact, we are reassured that everything we thought about is still covered in the work we have already done.”

So why might that be?

According to Miller, much of the data held by the Houses of Parliament is actually relatively low risk. She explained that, other than in certain circumstances, the majority of the data is already destined for the public domain.

This is a crucial point. If you host anything in a cloud run by US companies, you're effectively sending a copy straight to the US government. You should therefore treat it as if it were in the public domain. As the above indicates, the material that the Houses of Parliament plan to put in the cloud is, indeed, destined for the public domain, so using US systems like Microsoft Office 365 is really just giving the US government a sneak preview.

If you're happy with that, by all means continue using US-based clouds and US proprietary software. If, on the other hand, you are placing sensitive or even business-critical material in either of those, now would be a good time to start drafting that letter explaining to your soon-to-be ex-boss why you have been passing your company's business secrets to the US government, and thence to any US firms that compete with you. Good luck.

Follow me @glynmoody on Twitter or identi.ca, and on Google+


View the original article here

A new service architecture for business innovation

type="html" xml:lang="en" xml:base="http://blogs.computerworlduk.com/sourcing-and-vendor-management/"> The IT services industry is being challenged on two opposite fronts. At one end, IT organizations need efficient, reliable operations; at the other, business stakeholders increasingly demand new, innovative systems of engagement that enable better customer and partner interactions.

My colleagues Andy Bartels and Craig Le Clair recently published thought-provoking reports on an emerging class of software — smart process apps — that enable systems of engagement. In his report, Craig explains that “Smart process apps will package enterprise social platforms, mobility, and dynamic case management (DCM) to serve goals of innovation, collaboration, and workforce productivity.” In other words, smart process apps play a critical role in filling gaping process holes between traditional systems of record and systems of engagement.

While these reports focus on software vendors, I also see service providers like Accenture (with Accenture CAS), IBM (with Emptoris), and Infosys (with Infosys BrandEdge) acquiring and/or developing their own IP-based solutions to help organizations fill the gaps between current and future application requirements. Another interesting player in this space is HP Enterprise Services. I recently wrote about how HP composed a dealer management solution (DMS) leveraging Microsoft Dynamics as a platform on which it developed its own IP. Below I’ve listed a few of the design principles that caught my interest in this particular solution, which I believe will become commonplace among all service providers aiming to deliver business innovation to their clients:

A scalable, reusable, multitenant architecture enables adaptable business processes. The processes that clients go through to buy a car can also apply to other products and services, such as new credit cards or loan applications. These are all interactions that organizations need to manage efficiently and effectively in order to save time for the customer and either increase their satisfaction level or just reduce their frustration. Systems of engagement enable these interactions and relationships. But relationships change, and enabling solutions need to be flexible and adaptable to rapidly embrace these changes. The approach that HP has taken is interesting because it leverages a platform and architecture that it can reuse across clients and repurpose across industries (a rough sketch of the idea appears after this list).

Consulting, BPM, and analytics bring incremental business value to clients. In the DMS offering, HP’s automotive industry specialists work with the client to understand the business challenges and configure a solution aimed at delivering increased business value (higher client satisfaction, for instance). A library of reusable business processes accelerates the deployment of the solution across the different clients and client instances. Finally, analytical tools identify potential improvement areas in terms of business process performance, thus optimizing the solution to deliver the right business outcomes.
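As a rough illustration of what a reusable, multitenant process architecture can look like, here is a minimal sketch; the class names, tenants, industries and process steps are invented for this post and do not describe HP's actual DMS implementation or the Microsoft Dynamics platform.

from dataclasses import dataclass, field

@dataclass
class ProcessTemplate:
    """A reusable business process, e.g. 'application intake and approval'."""
    name: str
    steps: list[str]

@dataclass
class TenantConfig:
    """Per-client configuration layered on top of the shared template."""
    tenant: str
    industry: str
    overrides: dict[str, str] = field(default_factory=dict)

def instantiate(template: ProcessTemplate, config: TenantConfig) -> list[str]:
    """Apply tenant-specific step overrides to the shared process."""
    return [config.overrides.get(step, step) for step in template.steps]

# The same intake/approval template serves a car dealer and a credit card issuer.
intake = ProcessTemplate("intake_and_approval",
                         ["capture_application", "credit_check", "approve", "notify_customer"])

dealer = TenantConfig("DealerCo", "automotive",
                      overrides={"capture_application": "capture_vehicle_order"})
bank = TenantConfig("CardCo", "financial_services",
                    overrides={"capture_application": "capture_card_application"})

print(instantiate(intake, dealer))
print(instantiate(intake, bank))

The point of the sketch is that the shared template carries the common process while each tenant supplies only the configuration that differs, which is what allows the same solution to be repurposed from car dealers to card issuers or loan providers.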

[Figure: business innovation services stack]
What does this mean? Service providers will no doubt play a big role as integrators of smart process apps. More importantly, I expect that this new service architecture, which combines assets (including BPM, mobile, social and analytics) with strong domain expertise and as-a-service delivery models, will allow traditional service providers like HP ES, Accenture, IBM, and Infosys to reinvent the way they deliver value to their clients and help them create business innovation. I welcome your thoughts, as always.

Posted by Fred Giron


View the original article here

Please Help Overturn EU Data Retention Directive

type="html" xml:lang="en" xml:base="http://blogs.computerworlduk.com/open-enterprise/"> The last couple of weeks have been full of the revelations about NSA spying on a massive scale. What has been slightly disconcerting is that the agency and its defenders have essentially tried to argue that the spying doesn't matter because it's only aimed at "foreigners". But that's us: which means that we are the target of this spying, even if others get caught up in it too.

I'll be coming back to the implications of that in another post, but here I just want to point out something else: that it's important to remember that we are already being spied upon on a routine basis by our own governments, thanks to the EU's Data Retention Directive:

The Data Retention Directive requires operators to retain certain categories of data (for identifying users and details of phone calls made and emails sent, excluding the content of those communications) for a period between six months and two years and to make them available, on request, to law enforcement authorities for the purposes of investigating, detecting and prosecuting serious crime and terrorism.

27 EU States have notified the Commission about the transposition of the Directive into their national law. However, of these, Germany and Belgium have only transposed the legislation partially.

That's from the official EU page on the subject, which continues with the following claim:

Law enforcement authorities in most EU States have reported that retained data play a central role in their criminal investigations. These data have provided valuable leads and evidence that have resulted in convictions for criminal offences and in acquittals of innocent suspects in relation to crimes which, without an obligation to retain these data, might never have been solved.

This is, of course, exactly the argument the UK government is using for its even more intrusive Snooper's Charter. The vague, unsubstantiated claims made above sound plausible: that if you track everyone's communications all the time you'll be able to find out stuff that allows you to convict more people. The detailed reality turns out to be rather different, as the case of Denmark demonstrates:

According to the Danish law, all Internet traffic must be logged, registered and stored for one year. As mentioned above, this practice is called session logging. But a casual Internet user can, and usually does, generate an enormous amount of data in a single sitting of casual web surfing. As a result, the police and security services are drowning in a tsunami of user data that they cannot sort and therefore cannot use. According to the above-cited report compiled by the Danish Ministry of Justice, 90 percent of the data collected under the Data Retention Law is acquired via session logging — i.e., Internet surveillance. But the software used by the Danish police has proven inadequate for the task of handling and analyzing the majority of the data, rendering it useless — even as the privacy rights of ordinary citizens not suspected of any crime is routinely violated.

The Danish police themselves admit this:

The police, meanwhile, have concluded that requiring telecoms to store Internet subscriber data has not helped them track criminals, which was the ostensible purpose of the practice.

More data does not equal more information. Indeed, probably just the opposite: had police forces spent more time and resources using conventional, targeted tools, instead of trying to trawl through enormous and growing quantities of data, they might have had rather more luck.

Still, you might think there's not much to be done now. However, it turns out that a serious challenge is currently being made to the Data Retention Directive that could cause it to be overturned completely. Digital Rights Ireland has been mounting a slow-burning campaign against the Directive that began back in 2006:

These laws require telephone companies and internet service providers to spy on all customers, logging their movements, their telephone calls, their emails, and their internet access, and to store that information for up to three years. This information can then be accessed without any court order or other adequate safeguard. We believe that this is a breach of fundamental rights. We have written to the [Irish] Government raising our concerns but, as they have failed to take any action, we are now forced to start legal proceedings.

Accordingly, we have now launched a legal challenge to the Irish government’s power to pass these laws. We say that it is contrary to the Irish Constitution as well as Irish and European Data Protection laws.

We also challenge the claim that the European Commission and Parliament had the power to enact the Data Retention Directive. We say that this kind of mass surveillance is a breach of Human Rights, as recognised in the European Convention on Human Rights and the EU Charter on Fundamental Rights which all EU member states have endorsed.

If we are successful, the effect will be to undermine Data Retention laws in all EU states, not just Ireland, and to overturn the Data Retention Directive. A ruling from the European Court of Justice that Data Retention is contrary to Human Rights will be binding on all member states, their courts and the EU institutions.

Digital Rights Ireland Chairman T J McIntyre is also quoted as saying:

These mass surveillance laws are a direct, deliberate attack on our right to have a private life, without undue interference by the government. That right is underpinned in the laws of European countries and is also explicitly stated in Article 8 of the European Convention on Human Rights. The Article specifies that public authorities may only interfere with this right in narrowly defined circumstances.

The information will be collected and stored on everyone, regardless of whether you are a criminal, a policeman, a journalist, a judge, or an ordinary citizen. Once collected, this information is wide open to misappropriation and misuse. No evidence has been produced to suggest that data retention laws will do anything to stop terrorism or organized crime.

We accept, of course, that law-enforcement agencies should have access to some call data. But access must be proportionate. In particular, there should be clear evidence of a need to move beyond the six months of storage which is already used for billing purposes. Neither the European Commission nor the European police forces have made any case as to why they might require years of data to be retained.

That's spot on: nobody is suggesting the police should not have the tools they need, but as the Danish experience clearly shows, giving the police minutely-detailed information about what every one of us is doing is not only a devastating attack on our private life, but it is actually counter-productive for the purposes of law enforcement.

The good news is that seven years later, Digital Rights Ireland's case has finally reached the highest court in Europe:

The Court of Justice of the European Union has joined two cases on the validity of the data retention directive (2006/24/EC) for a hearing before the Grand Chamber on 9 July 2013. The references for a preliminary ruling, brought to the ECJ by the Irish High Court (C-293/12 Digital Rights Ireland) and by the Austrian Constitutional Court (C-594/12 Seitlinger and Others) question the compatibility of the data retention directive with Articles 7, 8 and 11 of the Charter of Fundamental Rights of the European Union, and the ECJ has indicated to the parties that the hearing will focus on Articles 7 and 8 of the Charter.

The rest of that post linked to above contains the gory legal details, but as Digital Rights Ireland explains, the key point remains this:

If we are successful, it will strike down these laws for all of Europe and will declare illegal this type of mass surveillance of the entire population.

That would be a truly massive win for privacy and liberty in Europe, and it's extraordinary that Digital Rights Ireland has almost single-handedly brought us to this point. If, like me, you are wondering what you could do to support this amazing move, the simple answer is: please make a donation, however small. It's extremely quick and easy to do - I've done it, and I urge you to do the same.

If it helps to overturn the disproportionate EU Data Retention Directive and its pernicious assumption that governments have a right to spy on our past communications, kept for the purpose in huge and thus dangerous databases, it could be the best few quid you've spent in a long time.

Follow me @glynmoody on Twitter or identi.ca, and on Google+


View the original article here

Thursday, June 27, 2013

Our keyboard-free computing future

type="html" xml:lang="en" xml:base="http://blogs.computerworlduk.com/infrastructure-and-operations/"> I recently spoke with Tim Tuttle, the CEO of Expect Labs, a company that operates at the vanguard of two computing categories: Voice recognition (a field populated by established vendors like Nuance Communications, Apple, and Google) and what we can call the Intelligent Assistant space (which is probably most popularly demonstrated by IBM’s “Jeopardy”-winning Watson).

In their own words, Expect Labs leverages “language understanding, speech analysis, and statistical search” technologies to create digital assistant solutions.

Expect Labs built the application MindMeld to make the conversations people have with one another "easier and more productive” by integrating voice recognition with an intelligent assistant on an intuitive tablet application. They have coined the term “Anticipatory Computing Engine” to describe their solution, which offers users a new kind of collaboration environment. (Expect Labs aims to provide an entire platform for this type of computing).

Here’s how MindMeld works: Imagine 5 colleagues across remote offices - all equipped with Apple iPads - are having a conference call on a particular topic or set of topics. Using the MindMeld application, these users join a collaborative workspace that updates in real-time during the call.

The MindMeld app “listens” to the conversation, surfacing themes and topics word-cloud style. It then leverages the Anticipatory Computing Engine to go out and find relevant content (say, from the web) that it surfaces on those topics. These pictures, videos, articles, and other content create a richer conversation - as well as a record of the collaborative experience - that should drive stronger, more effective, more data-supported collaboration.
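The paragraph above describes a loop of listening, surfacing topics, and fetching related content. Below is a minimal, hedged sketch of that loop in Python; the function names, the toy keyword counter and the placeholder search call are assumptions made for illustration, not Expect Labs' actual MindMeld or Anticipatory Computing Engine APIs.

from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "we", "is", "that", "should"}

def surface_topics(transcript: str, top_n: int = 5) -> list[str]:
    """Rank the most frequent non-trivial words in the running transcript."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w and w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(top_n)]

def fetch_related_content(topics: list[str]) -> list[str]:
    """Placeholder for a search call that returns articles, pictures or videos."""
    return [f"https://example.com/search?q={topic}" for topic in topics]

# As speech-to-text produces new text, re-rank topics and refresh the shared workspace.
transcript = "We should compare the Q3 launch plan against the Berlin pilot results"
topics = surface_topics(transcript)
print(topics)
print(fetch_related_content(topics))

In the real product, the keyword counter would be replaced by proper language understanding and speech analysis, and the placeholder search would draw on curated content sources, but the shape of the loop - transcribe, rank topics, retrieve - stays the same.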

In the future, you could imagine MindMeld tapping into proprietary big data sources (like CRM systems) to help inject insights from big data into streams of work within an enterprise - having its Intelligent Assistant act as a content curator in real time.


The MindMeld app reveals some interesting end user computing truths:

Some of the most innovative software experiences come first to tablets. Expect Labs developed MindMeld for Apple’s iPad first. The motivation came from the touchscreen environment (which creates a collaboration-oriented user interface); the screen real estate; and the market share of iPad among tablets. The company also plans an Android tablet experience. While the core technologies that drive MindMeld - voice recognition and intelligent assistants - aren’t bounded to tablets, the developers chose tablets as their form factor of choice for their user experience. This isn't completely surprising: According to Forrester's survey of over 2,000 software developers in Q1, 2013, tablets rival smartphones as a form factor that developers either support today or plan to support. The numbers are already close -- 54% target smartphones with the software they develop, while 49% target tablets -- even though smartphones outnumber tablets roughly 5:1 globally today.

What It Means: Tablets are in the driver's seat for empowering innovative computing experiences. They're often the place where you'll find developer interest.
Computing is evolving -- rapidly -- beyond keyboards. “In a world where keyboards aren’t tightly coupled with computing, easier interaction methods are required,” Tim Tuttle told me, describing Expect Labs’ focus on voice recognition. His observation is important as we think of all the scenarios in which keyboards aren’t present: in a car, in our living rooms, in some mobile- and tablet-computing scenarios, with Xbox Kinect, or in a variety of embedded computing scenarios (like wearables or home automation solutions) that Forrester calls Smart Body, Smart World.
What It Means: In addition to thinking “mobile first” for application development, think also in terms of “keyboard-free” device and application scenarios.
The technological limits to voice recognition will abate in the next five years. In a glimpse ahead, Tim Tuttle - who holds a PhD from the MIT Media Lab - discussed many of the technical challenges inhibiting successful implementation of voice recognition-based interactions. In particular, he noted that a variety of barriers have fallen in the past 18 months, and he predicted that all major performance problems will be overcome in the next five years.
What It Means: Jean-Luc Picard’s computer on Star Trek: The Next Generation is now fully in sight - tablets fulfilled the touchscreen aspect, while voice recognition is finally emerging as a productive user interface as well. (No predictions on the Holodeck, though).

Posted by JP Gownder


View the original article here