Comment by aub3bhat
9 years ago
"The features you mention are only fixes introduced to solve problems that were caused by the chosen approach in the first place."
The chosen approach is the only choice! There is a reason why smart people at thousands of companies use Hadoop. Fault tolerance and multi-user support are not mere byproducts of the chosen approach; they are fundamental to performing data science in any organization.
Before you comment further, I strongly encourage you to get some "real world" experience in data science by working at a large or even medium-sized company. You will realize that outside of trading engines, "faster" is typically the third or fourth most important concern. For data and computed results to be used across an organization, they need to be stored centrally; similarly, Hadoop lets you centralize not only data but also computation. Once you take this into account, it does not matter how "fast" command-line tools are on your own laptop, because your speed is now determined by the slowest link, which is data transfer over the network.
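To put rough numbers on that, here is a quick back-of-envelope sketch (the dataset size, link speed and scan rates are purely illustrative assumptions, not measurements):

    # Back-of-envelope: pulling a dataset to a laptop vs. computing where the data lives.
    # All figures are illustrative assumptions, not measurements.
    dataset_gb = 500                  # hypothetical extract size
    link_gbps = 1.0                   # hypothetical office network link
    laptop_scan_gbps = 2.0            # hypothetical local scan rate once the data has arrived
    cluster_scan_gbps = 10.0          # hypothetical aggregate scan rate of a shared cluster

    transfer_hours = dataset_gb * 8 / link_gbps / 3600            # GB -> Gb, seconds -> hours
    laptop_hours = transfer_hours + dataset_gb * 8 / laptop_scan_gbps / 3600
    cluster_hours = dataset_gb * 8 / cluster_scan_gbps / 3600

    print(f"copy to laptop:   {transfer_hours:.2f} h just to move the data")
    print(f"laptop total:     {laptop_hours:.2f} h (transfer + local scan)")
    print(f"in-place cluster: {cluster_hours:.2f} h (no bulk transfer)")

With those assumed numbers, the network copy alone dominates the laptop's total time, which is the point about the slowest link.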
"Gartner, Inc.'s 2015 Hadoop Adoption Study has found that investment remains tentative in the face of sizable challenges around business value and skills.
Despite considerable hype and reported successes for early adopters, 54 percent of survey respondents report no plans to invest at this time, while only 18 percent have plans to invest in Hadoop over the next two years. Furthermore, the early adopters don't appear to be championing for substantial Hadoop adoption over the next 24 months; in fact, there are fewer who plan to begin in the next two years than already have."
So lots of big businesses are doing just fine without Hadoop and have no plans for beginning to use it. This seems very much at odds with your statement that "The chosen approach is the only choice!"
In fact, I would hazard a guess that for businesses that aren't primarily driven by internet pages, big data is generally not a good value proposition, simply because their "big data sets" are very diverse, specialised, and mainly used by non-overlapping subgroups of the company. Take a car manufacturer, for instance. They will have really big data sets coming out of the engineers' CFD and FEA analyses, a lot of complex data for assembly-line documentation, other data sets from quality assurance and testing, and then data sets created by the sales people, others created by accountants, and so on. In all of these cases they will have bespoke data management and analysis tools, and the engineers won't want to look at the raw data from the sales team, etc.
My experience echoes the OP of this thread: having data in one place, backed by a compute engine that can be scaled, is a huge boon. Enterprise structures, challenges and opportunities change really fast now: we have mergers, new businesses and new products, and the pressure to produce evidence for the business conversations these generate is intense. A single data infrastructure cuts the time required for this kind of work from weeks to hours. I've had several engagements where the Hadoop team produced answers in an afternoon that were only "confirmed" by the proprietary data warehouses days or weeks later, after query testing and firewall hell.
For us, Hadoop "done right" is the only game in town for this use case, because it's dirt cheap per TB and has a mass of tooling. It's true that we've under-invested, mostly because we've been able to get away with it, but we are running thousands of enterprise jobs a day through it, and without it we would sink like a stone.
Or spend £50m.
Is there anything in your Hadoop that's not "business evidence" (financials, user acquisition, etc.)?
My point is that many, many business decisions are driven by the analysis of non-financial big data sets, and that analysis physically cannot be done with data crunched out in five hours. It may even require physical testing or new data collection to validate the results.
As I mentioned, anyone doing proper engineering (as in, with professional liability) will have the same level of confidence in a number coming out of your Hadoop system as in a number their colleague Charlie calculated on a napkin at the bar after two beers. The same goes for people in the pharma/biomolecular/chemical industries, oil and gas, mining, and so on.
Yeah, that sounds fun: dozens of undocumented data silos without supervision that some poor bastard will have to troubleshoot as soon as the inevitable showstopper bug crops up.
Most medium and large enterprises have a working data set of around 1-2 TB, which is enough to fit in memory on a single machine these days.
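For a working set like that, a plain single-machine pipeline often does the job without any cluster. A minimal sketch, assuming a hypothetical transactions.csv with "region" and "amount" columns:

    # Single-machine sketch: streaming aggregation over a large CSV, no cluster needed.
    # The file name and column names are hypothetical.
    import csv
    from collections import defaultdict

    totals = defaultdict(float)

    with open("transactions.csv", newline="") as f:
        reader = csv.DictReader(f)
        for row in reader:                      # streams the file; memory use stays small
            totals[row["region"]] += float(row["amount"])

    for region, total in sorted(totals.items()):
        print(region, round(total, 2))

Because it streams the file, this works even when the data is larger than RAM, and it parallelises trivially across files with ordinary shell tools.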