
The modern analytic platform must enable users throughout the organization to leverage their preferred analytic tools, engines, and functions at scale, across multiple data types, without creating costly and inefficient silos. To address these needs, the Teradata Analytics Platform brings together analytic functions and engines into an integrated environment.



The Teradata Analytics Platform addresses your business needs—today and in the future—and allows analytic users throughout your organization to use their preferred analytic tools and engines across data sources, at scale. Teradata Everywhere enables you to deploy, buy, and move your Teradata Analytics Platform anytime as your business needs change.


The Teradata Analytics Platform delivers the best analytic functions and engines, preferred tools and languages, and support for multiple data types.



In a Total Economic Impact™ (TEI) study of the Teradata IntelliCloud™ solution in a hybrid cloud environment, Forrester identified the cost savings achieved with a hybrid approach, as well as the business benefits that a current Teradata customer realized with Teradata Everywhere™.



The retail landscape is changing almost daily. The key to what customers want and need can be found using Predictive Analytics.





The notion of predictive data analytics as a game changer for retail is nothing new. We’ve heard this now for the past few years, with industry experts noting customer identification and personalization as the most obvious benefits.


It’s certainly a compelling prospect.


But when it comes to achieving that single view of customers – which is what would effectively drive that useful personalization – the rubber hasn’t really met the road. At least, not for most retailers that aren’t Amazon, not yet. As retailers look to implement the omnichannel experience, they’re still very much in the experimental phase, working to integrate online and offline information with the goal of really understanding their customers’ journey.


What we most urgently need, however, is a window into customer intent. What does the customer want? When are they likely to buy? What gets them into a store? Retailers have many questions but few answers. Offline data, specifically location data, gets us closer to resolving the most pressing of these conundrums primarily because it provides major contextual signals. Knowing, for instance, that a customer has visited a store is a significant signal indicating intent. If we can use this data, along with its online counterpart, we can accelerate our ability to deliver the right message at just the right time to the right person.


That’s what predictive analytics is really about: helping retailers forecast intent. If we can make smart, data-backed predictions on what customers are likely to do, and when, retail marketers can then deliver a truly curated shopping experience.


Some predictive models are trained using visit data from trillions of events across millions of visits from thousands of locations. This kind of model makes predictions based on key data points about visitors including number of visits, days since the last visit, visit duration, number of locations visited, and more. Historical visit and behavioral data then helps the model refine its accuracy and deliver better predictions of future visits. This class of methodology delivers an actionable, analytical, and novel approach for retail marketers to identify and classify visitors, including understanding how often different groups of customers return.
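
To make this concrete, here is a minimal sketch of such a visit-prediction model, assuming a simple per-visitor feature table and a scikit-learn gradient boosting classifier; the column names, toy data, and 30-day label are illustrative assumptions, not any vendor's actual pipeline.

```python
# Hypothetical sketch: predict whether a visitor returns within 30 days
# from aggregated visit features. Columns and values are illustrative.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

visits = pd.DataFrame({
    "num_visits":        [12, 1, 5, 30, 2],
    "days_since_last":   [3, 90, 21, 1, 45],
    "avg_duration_min":  [34, 8, 15, 40, 12],
    "locations_visited": [4, 1, 2, 6, 1],
    "returned_30d":      [1, 0, 1, 1, 0],   # label derived from historical visits
})

X = visits.drop(columns="returned_30d")
y = visits["returned_30d"]
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Score a new visitor: estimated probability of a return visit in the window
new_visitor = pd.DataFrame([{"num_visits": 4, "days_since_last": 14,
                             "avg_duration_min": 20, "locations_visited": 2}])
print(model.predict_proba(new_visitor)[:, 1])
```

In practice the label and features would be built from the historical visit and behavioral data described above, and the model retrained as new visits arrive.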


Online marketers have used predictive analytics to predict where and why users click, but clicks and anonymous visitors don’t translate well into solid predictions on precisely who will visit a store and when. Predictive analytics that rely on location data, as well as online and other offline data, can really move the needle on this. If we know who customers are, and we leverage a large data footprint, we can help retailers specifically predict which shoppers will be visiting their commercial locations within a specific time period.


These data-backed insights can then be taken one step further. Marketers can use them to tailor and personalize their outreach to specific customer groups and even individual customers. It’s a much more effective use of marketing spend.


Imagine, for instance, you’re a marketer at a large national retailer. Predictive metrics will tell you which customers are least likely to visit in the next 30 days. You then target them with a higher-than-average coupon that expires in 14 days, a generous nudge that gets them in the door. Or imagine you’re a marketer at a fast food chain, and predictive analysis of loyalty-program data has revealed a set of customers with a medium likelihood of visiting in the next 30 days. The analysis inspires you to offer a free coffee that month when these customers come for breakfast instead of lunch, and it works.
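
As a toy illustration of how such predictions might drive offers, the sketch below buckets customers by predicted return probability and maps each bucket to an incentive; the thresholds and offers are invented for the example, not a recommended policy.

```python
# Hypothetical offer targeting based on a predicted 30-day return probability.
# Thresholds and offer text are illustrative only.
def choose_offer(p_return: float) -> str:
    if p_return < 0.3:      # unlikely to visit: strong, time-limited nudge
        return "25% coupon, expires in 14 days"
    elif p_return < 0.7:    # on the fence: small incentive to shift behavior
        return "free coffee with breakfast this month"
    else:                   # likely to visit anyway: no discount needed
        return "no offer"

for customer_id, p in [("C001", 0.12), ("C002", 0.55), ("C003", 0.91)]:
    print(customer_id, "->", choose_offer(p))
```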


Predictive analysis enables customization and much better targeting. Indiscriminate coupon blasts waste effort and money with no upside in customer acquisition or retention. But predictive analysis is more akin to a surgical strike, enabling retailers to provide differentiated engagement campaigns that can measurably increase loyalty and attract new customers.


The retail industry needs to develop tools, similar to those that e-commerce deploys, in order to thrive. Although location data is – in many ways – nascent, it’s a powerful enabler of predictive analytics in the physical world. And as online and offline signals improve, and machine learning improves with them, retailers will be able to deliver the highly tailored experiences that save customers time, money and effort, while improving their brand reputation and revenue in the process.




In this special guest feature, Rachel Wolan, Vice President of Product at Euclid, discusses what predictive analytics is really about: helping retailers forecast intent. If we can make smart, data-backed predictions on what customers are likely to do, and when, retail marketers can then deliver a truly curated shopping experience. Rachel is a seasoned product executive, bringing over 15 years of experience in B2B SaaS product, engineering, and analytics. She is responsible for the Product and Design team at Euclid. She received a BS from Northwestern University and an MBA from the University of California, Berkeley.


To read the original article, visit Predictive Analytics Solve Retail's Trickiest Problem: Customer Intent


A ThoughtPoint by Dr. Barry Devlin, 9sight Consulting


Decision-making and action-taking demand tight integration of very different types of data and function: from collection and analysis, through predicting future states, to taking immediate, operational action. The Production Analytic Platform, based on relational database technology, offers the most appropriate and effective solution.







Dr. Barry Devlin is among the foremost authorities on business insight and one of the founders of data warehousing, having published the first architectural paper on the topic in 1988. With over 30 years of IT experience, including 20 years with IBM as a Distinguished Engineer, he is a widely respected analyst, consultant, lecturer and author of the seminal book, “Data Warehouse—from Architecture to Implementation” and numerous White Papers. His book, “Business unIntelligence—Insight and Innovation Beyond Analytics and Big Data,” was published in October 2013.

A ThoughtPoint by Dr. Barry Devlin, 9sight Consulting


As time-to-decision decreases, reliability, maintainability, and other qualities of the Production Analytic Platform become increasingly important in breaking down silos and improving analytics across the whole business. 











A ThoughtPoint by Dr. Barry Devlin, 9sight Consulting


Learn how Teradata Database embeds extensive temporal and time-series storage and analytic function to handle the complexities of analyzing time-dependent data, especially from the Internet of Things.











Where will machine learning, deep learning, and AI take us in 2018? IT leaders and industry experts spoke with CIO about what to expect in the coming year.



This article was written by Thor Olavsrud (CIO (US)) and originally appeared in CIO.


2017 saw an explosion of machine learning in production use, with even deep learning and artificial intelligence (AI) being leveraged for practical applications.


"Basic analytics are out; machine learning (and beyond) are in," says Kenneth Sanford, U.S. lead analytics architect for collaborative data science platform Dataiku, as he looks back on 2017.


Sanford says practical applications of machine learning, deep learning, and AI are "everywhere and out in the open these days," pointing to the "super billboards" in London's Piccadilly Circus that leverage hidden cameras gathering data on foot and road traffic (including the make and model of passing cars) to deliver targeted advertisements.


So where will these frameworks and tools take us in 2018? We spoke with a number of IT leaders and industry experts about what to expect in the coming year.


Enterprises will operationalize AI


AI is already here, whether we recognize it or not.


"Many organizations are using AI already, but they may not refer to it as 'AI,'" says Scott Gnau, CTO of Hortonworks. "For example, any organization using a chatbot feature to engage with customers is using artificial intelligence."

But many of the deployments leveraging AI technologies and tools have been small-scale. Expect organizations to ramp up in a big way in 2018.


"Enterprises have spent the past few years educating themselves on various AI frameworks and tools," says Nima Negahban, CTO and co-founder of Kinetica, a specialist in GPU-accelerated databases for high-performance analytics. "But as AI goes mainstream, it will move beyond small-scale experiments to being automated and operationalized. As enterprises move forward with operationalizing AI, they will look for products and tools to automate, manage, and streamline the entire machine learning and deep learning life cycle."


Negahban predicts 2018 will see an increase in investments in AI life cycle management, and technologies that house the data and supervise the process will mature.  


AI reality will lag the hype once again


Ramon Chen, chief product officer of master data management specialist Reltio, is less sanguine. Chen says there have been repeated predictions for several years that tout potential breakthroughs in the use of AI and machine learning, but the reality is that most enterprises have yet to see quantifiable benefits from their investments in these areas.


Chen says the hype to date has been overblown, and most enterprises are reluctant to get started due to a combination of skepticism, lack of expertise, and most important of all, a lack of confidence in the reliability of their data sets.


"In fact, while the headlines will be mostly about AI, most enterprises will need to first focus on IA (information augmentation): getting their data organized in a manner that ensures it can be reconciled, refined, and related, to uncover relevant insights that support efficient business execution across all departments, while addressing the burden of regulatory compliance," Chen says.


Chad Meley, vice president of marketing at Teradata, agrees that 2018 will see a backlash against AI hype, but believes that a more balanced approach, applying both deep learning and shallow learning to business opportunities, will emerge as a result.


While there may be a backlash against the hype, it won't stop large enterprises from investing in AI and related technologies.


"AI is the new big data: Companies race to do it whether they know they need it or not," says Monte Zweben, CEO of Splice Machine.


Meley points to Teradata's recently released 2017 State of Artificial Intelligence for Enterprises report, which identified a lack of IT infrastructure as the greatest barrier to realizing benefits from AI, surpassing issues like access to talent, lack of budget, and weak or unknown business cases.


"Companies will respond in 2018 with enterprise-grade AI product and supporting offerings that overcome the growing pains associated with AI adoption," Meley says.


Bias in training data sets will continue to trouble AI


Reltio's Chen isn't alone in his conviction that enterprises need to get their data in order. Tomer Shiran, CEO and co-founder of analytics startup Dremio, a driving force behind the open source Apache Arrow project, believes a debate about data sets will take center stage in 2018.


"Everywhere you turn, companies are adding AI to their products to make them smarter, more efficient, and even autonomous," Shiran says. "In 2017, we heard competing arguments for whether AI would create jobs or eliminate them, with some even proposing the end of the human race. What has started to emerge as a key part of the conversation is how training data sets shape the behavior of these models."


It turns out, Shiran says, that models are only as good as the training data they use, and developing a representative, effective training data set is very challenging.


"As a trivial example, consider the example tweeted by a Facebook engineer of a soap dispenser that works for white people but not those with darker skin," Shiran says. "Humans are hopelessly biased, and the question for AI will become whether we can do better in terms of bias or will we do worse. This debate will center around data ownership — what data we own about ourselves, and the companies like Google, Facebook, Amazon, Uber, etc. — who have amassed enormous data sets that will feed our models."


AI must solve the ‘black box’ problem with audit trails


One of the big barriers to the adoption of AI, particularly in regulated industries, is the difficulty in showing exactly how an AI reached a decision. Kinetica's Negahban says creating AI audit trails will be essential.


"AI is increasingly getting used for applications like drug discovery or the connected car, and these applications can have a detrimental impact on human life if an incorrect decision is made," Negahban says. "Detecting exactly what caused the final incorrect decision leading to a serious problem is something enterprises will start to look at in 2018. Auditing and tracking every input and every score that a framework produces will help with detecting the human-written code that ultimately caused the problem."


Cloud adoption will accelerate to support AI innovation


Horia Margarit, principal data scientist for big-data-as-a-service provider Qubole, agrees that enterprises in 2018 will seek to improve their infrastructure and processes for supporting their machine learning and AI efforts.


"As companies look to innovate and improve with machine learning and artificial intelligence, more specialized tooling and infrastructure will be adopted in the cloud to support specific use cases, like solutions for merging multi-modal sensory inputs for human interaction (think sound, touch, and vision) or solutions for merging satellite imagery with financial data to catapult algorithmic trading capabilities," Margarit says.


"We expect to see an explosion in cloud-based solutions that accelerate the current pace of data collection and further demonstrate the need for frictionless, on-demand compute and storage from managed cloud providers," he adds.



To read the original article, visit 5 artificial intelligence trends that will dominate 2018 - CIO

Larry H. Miller Sports & Entertainment integrated data from diverse businesses—NBA basketball, minor league baseball, a sports arena, a bike race, sports retail stores, and megaplex theatres—and took advantage of the Teradata IntelliCloud™ to turn analytics into dollars.




A ThoughtPoint by Dr. Barry Devlin, 9sight Consulting


Read how Teradata Database embeds analytic function that helps your data warehouse evolve into a Production Analytic Platform that supports operational implementation of predictive analytic models developed in the data lake and elsewhere.












Official EPIC Press release written by Chris Rashilla-Barton.


In an awards ceremony held during the Teradata PARTNERS Conference, Teradata honored 33 high-performing customers and partnerships with Teradata EPIC Awards for their innovative use of data and analytics in shaping positive business outcomes.


See who walked the Orange carpet...


Teradata EPIC Awards Finalists


To all 2017 EPIC winners and finalists, spread the word of your big win! Get your brag on here.









Chris Rashilla-Barton is an experienced marketing professional with expertise in global program management, from development and planning through implementation, employee engagement, and results analysis. Accomplished in developing and managing teaming relationships with internal clients, partners, agencies, and vendors. A strategic and creative thinker with effective communication and writing skills.

From self-driving cars to photo recognition, AI is becoming an increasing presence not just in our headlines, but also in our lives. Yet depending on the business problem and data involved, there are challenges to applying AI in an enterprise context.

Enterprise AI is a different game with different rules. Some of the differences, which I’ll cover below, are based on both the kinds of data available in the enterprise and the complexity of the operations in which AI will be used. Here are three implications to consider for using AI in an enterprise context.


Implication 1: Consider domains where AI has been proven  

AI has had some spectacular successes across a broad range of domains: image recognition, object detection, diagnostic image analysis, autonomous driving, machine translation, sentiment analysis, speech recognition, robotics control, and, of course, Go and chess. Notably, all of these breakthroughs are in domains that humans are quite good at.  This makes sense: deep learning networks are inspired by the architecture of the human brain and, in the case of computer vision, by specific structures within the visual cortex. All of these examples represent problems with a hierarchical structure that is amenable to increasingly abstract representation and understanding of the domain. These domains are also associated with extensive publicly available research, code, and, in many cases, pre-trained models.

On the other hand, the application of AI to domains outside of those listed above is less well developed. Think of recommender systems, fraud detection, or preventative maintenance models. AI has been applied successfully to each of these domains, but the results are more incremental and the research is much less publicly available. In part this reflects the fact that these domains involve closely guarded enterprise data that cannot readily be shared with the broader community, and in part it reflects the nature of the data itself.

Now, the good news is that many enterprises have problems that involve vision, language or robotics control. Whether it’s computer vision on the factory floor or on inventory management systems, or natural language processing (NLP) for compliance reporting or sentiment analysis, companies can directly leverage an enormous body of research and experience.  For other domains, those lacking established research, pre-trained models, published papers, or notable public success stories, AI should be viewed as part of a continuum with other machine learning and analytical techniques.

Implication 2: AI isn’t magic

Viewing AI as an extension of traditional analytics and machine learning for domains with unproven track records will help organizations avoid ascribing a kind of magic to AI: just feed in enough data and you will get good results. If you have this kind of magical thinking about AI, then drop a rubber duck in a stream and try to get AI to predict where it’s going to end up. You can train that model for the next thousand years and you won’t get good results. Without modelling the individual molecules that make up the stream, the process is fundamentally stochastic; there is nothing AI can do.   

AI isn’t a blanket solution for all of the problems enterprises want to use it for. Just because you’re able to classify images doesn’t mean you’re going to be able to perfectly forecast the amount of soda consumed in the Northwestern US in November.

In assessing the best use cases for AI in your business, look closely at the problem spaces you have available. Do you have any problems in areas for which there is available research?  Do you have problems where you have already been applying machine learning?  These are good candidates for applying AI.

You have probably also heard that AI is very data hungry.  The AI breakthroughs discussed above involved truly massive data sets: millions of images in the case of computer vision models.  It’s impossible to predict exactly how much data you’ll need to make AI successful, but typically the smaller the data set, the more likely you are to be better served with more traditional analytical techniques. Similarly, you have probably also heard that AI and deep learning cut down on the need for manual feature engineering. This is certainly true in the breakthrough domains: computer vision models just look at pixels, NLP models just look at words (or sometimes just characters).  The case with enterprise data is less clear. The data certainly needs to be clean and integrated, categorical features need to be encoded, and time series data needs to be dealt with (sometimes requiring manual feature engineering).  Overall, take the feature engineering claims of AI with a grain of salt for unproven domains and expect to have fairly complex data pipelines.
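
To illustrate what that preparation can look like, here is a small pandas sketch that one-hot encodes a categorical column and derives lag and rolling-window features from a time series; the column names and window size are assumptions made for the example.

```python
# Illustrative data-preparation step for enterprise tabular data:
# encode a categorical feature and derive simple time-series features.
import pandas as pd

df = pd.DataFrame({
    "store_id": ["A", "B", "A", "B", "A"],
    "region":   ["west", "east", "west", "east", "west"],
    "date":     pd.to_datetime(["2018-01-01", "2018-01-01", "2018-01-02",
                                "2018-01-02", "2018-01-03"]),
    "sales":    [120.0, 90.0, 135.0, 80.0, 150.0],
})

# One-hot encode the categorical column
df = pd.get_dummies(df, columns=["region"])

# Manual time-series features: previous value and rolling mean per store
df = df.sort_values(["store_id", "date"])
df["sales_lag_1"] = df.groupby("store_id")["sales"].shift(1)
df["sales_roll_mean_2"] = (
    df.groupby("store_id")["sales"].transform(lambda s: s.rolling(2).mean())
)
print(df)
```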

Implication 3: Try multiple AI experiments to find quick wins

Given the number of unknowns involved with assessing whether AI is right for your use cases or not, it’s better to start by casting a wide net. Apply AI to a larger number of problems — ten, for instance — and see which ones produce the best results. Such an approach means you’re not beating your head against the wall with one problem, forcing a square peg into a round hole, when AI isn’t the right approach for that particular issue. Additionally, with this strategy, you can ensure that you get some results relatively quickly. You’ll then have the patience to create the right AI-based models for use cases that might take longer (and the latitude to rule out AI where it isn’t the right solution to the problem at hand).  
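
A rough sketch of this wide-net approach, using public scikit-learn datasets as stand-ins for your own problems, runs the same quick experiment over each candidate and compares a model against a naive baseline; the datasets, models, and cut-off are placeholders, not a prescribed methodology.

```python
# Run the same quick experiment over several candidate problems and keep
# the ones where a model clearly beats a naive baseline.
from sklearn.datasets import load_breast_cancer, load_digits, load_wine
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

problems = {
    "problem_a": load_breast_cancer(return_X_y=True),
    "problem_b": load_wine(return_X_y=True),
    "problem_c": load_digits(return_X_y=True),
}

for name, (X, y) in problems.items():
    baseline = cross_val_score(DummyClassifier(strategy="most_frequent"), X, y, cv=3).mean()
    model = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=3).mean()
    verdict = "worth pursuing" if model - baseline > 0.10 else "revisit approach"
    print(f"{name}: baseline={baseline:.2f} model={model:.2f} -> {verdict}")
```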

Implementing AI involves many considerations. Many companies have never put machine learning models into production, and so jumping into the deep end with AI will mean they’ll soon find themselves in over their heads. It’s not that AI is more difficult than other machine learning, but deploying, monitoring, versioning, and tracking the performance of models is complicated, and if companies do not have experience with it, their AI implementation will not be smooth. A deliberate approach, where AI is applied gradually to a number of use cases, is a way to improve the transition.


Ben MacKenzie, based in Ottawa, Canada, has been a Principal Architect with Think Big Analytics, A Teradata Company, for the last 6 years, and has served as the Global Engineering Lead for the last two. Ben has been focused on building scalable open-source based analytics solutions across a variety of industries and, in his capacity as Engineering Lead, has helped align Think Big and customers around a complex technology landscape. Ben has an extensive background in AI and is excited to be part of the current deep learning inspired AI renaissance in his new role as Director of AI Engineering. In addition to strong engineering and analytical skills, Ben has a proven track record of employing cutting-edge research from the deep learning community to build customer solutions.

Written by Ryan Garrett

I recently read that to manage customer journeys, those journeys must be defined and accessible. Doesn’t this sound simple and awesome? The problem is that it does not reflect the reality in which most large organizations operate.

Journey management and journey analytics remind me of the chicken and the egg – which comes first? There are journeys you inherently need to manage – onboarding, upgrades, and complaint resolution, for example – and analytics are necessary to monitor and improve those. But there are also important customer journeys and paths that you only discover through analytics and exploration. Are transitions between specific support channels proving to be critical, for example? Or do I just ignore this question if the journey was not previously defined for me?
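
As a toy example of the kind of path discovery described here, the sketch below orders each customer's touchpoints by time, builds the channel sequence, and counts the most common paths; the event data and channel names are invented for illustration.

```python
# Illustrative journey path discovery: order touchpoints by time per customer,
# build the channel sequence, and count the most frequent paths.
from collections import Counter
import pandas as pd

events = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "channel":     ["web", "call_center", "store",
                    "web", "store",
                    "mobile_app", "call_center", "store"],
    "timestamp":   pd.to_datetime([
        "2018-03-01 09:00", "2018-03-02 14:30", "2018-03-05 11:00",
        "2018-03-01 10:00", "2018-03-03 16:00",
        "2018-03-02 08:00", "2018-03-02 12:00", "2018-03-04 18:00",
    ]),
})

paths = (
    events.sort_values("timestamp")
          .groupby("customer_id")["channel"]
          .apply(lambda s: " -> ".join(s))
)
print(Counter(paths).most_common(3))
```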

Hopefully, these few examples demonstrate that journey management and analytics must go hand-in-hand. One of the biggest challenges to date of enabling this has been getting your data into one place. In some cases, portions of data from a single channel might live in multiple stores. In other cases, data from a single channel may live in isolation in a single repository. And in some cases, bits and pieces of data from multiple channels are transformed or duplicated across multiple stores to the point where their lineage is almost impossible to ascertain.

Thankfully, Teradata has recognized this challenge as an opportunity for innovation.

With recent announcements around the Teradata Analytics Platform and Teradata Everywhere, companies have access to a multitude of analytics tools that can be deployed and used to analyze data in a variety of data stores, on-premises or in the most popular enterprise clouds.

Ultimately, organizations still must agree on which journeys they should actively manage. But the hurdle of getting data into a single repository is no longer there. And while enterprises can now manage and analyze critical journeys by dissecting data from a variety of channels and stores, they have the freedom and flexibility to explore new scenarios and new channels. As most journey pros will tell you, the job of managing customer journeys is continuous. There are always new interactions to investigate and new channels through which to operate.

While the job is never done, at least some critical limitations are being lifted. With Teradata, you can manage the journeys you understand, while continuing to evolve those journeys and define new journeys at the same time.

Are you interested in exploring your customer journeys – defined or not? If so, check out our Path Analysis Guided Analytics Interface, which runs on Teradata, Aster, and the Teradata Analytics Platform.

Ryan Garrett is senior business development manager for Think Big Analytics, a Teradata company. His goal is to help organizations derive value from data by making advanced analytics more accessible, repeatable and consumable. He has a decade of experience in big data at companies large and small, an MBA from Boston University and a bachelor’s degree in journalism from the University of Kentucky.