In case you missed it, here’s a quick recap of our webinar.
Using the cloud and Hadoop, we walked through an emergency scenario where data is pulled from public sources and shared with a team of responders.
We then showed the next steps for data reduction and analytics using more traditional tools such as SQL Server and Analysis Services.
We concluded with links to additional resources and a recommended reading list.
Presenter: Rowland Gosling
Click here to view the entire webinar.
Click here to view Rowland's blog.
Due to time constraints, we are not always able to answer all the questions we receive during our webinars. Below are answers to some of these extra questions.
Thank you to all who attended my online workshop this morning! There were two questions I'd like to address:
Q: Can you use the ODBC Hive solution to insert data into Hadoop?
A: Not with our current technology. Hadoop data accessed through the Hive ODBC driver is essentially read-only; there is no ODBC way to update a row, for instance. Data can be changed with tools like Pig, but what you're really doing there is replacing entries wholesale rather than updating them in place.
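To make the read-only point concrete, here is a minimal sketch of querying Hive over ODBC from Python with the `pyodbc` library. The DSN name `HiveDSN` is a hypothetical placeholder for whatever Hive ODBC data source you have configured; the import is done lazily so the sketch loads even without the driver installed.

```python
HIVE_DSN = "DSN=HiveDSN"  # hypothetical ODBC data source name -- use your own

def fetch_hive_rows(query, dsn=HIVE_DSN):
    """Run a read-only SELECT against Hive over ODBC and return all rows.

    SELECT statements work; INSERT/UPDATE through the same driver do not --
    to change Hadoop data you replace files (e.g., via Pig) instead.
    """
    import pyodbc  # lazy import: requires the Hive ODBC driver to be installed

    conn = pyodbc.connect(dsn, autocommit=True)
    try:
        cursor = conn.cursor()
        cursor.execute(query)
        return cursor.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    # Requires a live Hive ODBC data source, so it is commented out here:
    # rows = fetch_hive_rows("SELECT * FROM weblogs LIMIT 10")
    pass
```

Reads like the one above are the supported path; an `UPDATE` or `INSERT` sent through the same cursor would simply fail at the driver.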
Q: Can you use Cluster Shared Volumes (CSV)/Storage Spaces?
A: I did some checking, and the general answer is no. That said, why would you want to? Blob storage and plain old UNC file paths already cover the same need.
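For the UNC-path alternative mentioned above, here is a small sketch of addressing a shared file by UNC path from Python using the standard library. The server and share names (`fileserver`, `webinar-data`) are made-up placeholders; substitute your own file server.

```python
from pathlib import PureWindowsPath

# Hypothetical UNC path to a shared data file -- substitute your own server/share.
source = PureWindowsPath(r"\\fileserver\webinar-data\sensors.csv")

print(source.name)  # sensors.csv
print(source)       # \\fileserver\webinar-data\sensors.csv

# On a Windows host with access to the share, you could open it directly:
# with open(source, newline="") as f:
#     ...
```

No special cluster storage is needed: any machine that can resolve the share can read the file through the ordinary file APIs.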
Thanks everyone!