12/30/2023

In business, data science and artificial intelligence are usually geared towards powerful efficiencies and growth. Data science and AI teams focus constantly on methodology and accuracy. This is critical, ensuring algorithms deliver valuable insights and analytics, and support increased automation.

Nevertheless, most organizations face growing problems around users' trust in algorithms. On the one hand, the quality of automated analysis is not clearly understood; on the other, there is a perceived threat of machines making people's own expertise redundant. This can quickly morph into a major problem, particularly when AI is introduced to support strategic choices.

This has become a particular difficulty in a crucial area of AI: decision support. "The moment that models start guiding strategic decisions, there is a shift in requirements," explains René Traue, senior data scientist at the market intelligence and consultancy firm GfK. "Users must be able to deeply trust the applications. They have to find them indispensable when making major choices. If not, they can end up walking away from them."

To overcome this issue, the applications running AI algorithms must be designed to build confidence in the outcomes. "Think of a decision support system as being like an assisted-driving car. That car might automatically brake if you get too close to the driver in front, or correct the steering if you drift out of your lane. However, many people would not be happy to go straight into trusting the automation to take control in this way: first they need to gain confidence in the quality of the support system," Traue explains.

Carmakers have acted by adding warnings when their cars are about to self-brake, or by ensuring drivers keep ultimate control through the steering wheel when any correction is being made. "Drivers can then increasingly trust the car to make the right decisions. They can stop instinctively 'fighting it' and allow the automation to work," Traue says.

There is an additional key requirement: company strategists expect to receive clear evidence from the system to back up any actions advised. "Decision support must be applied in a very transparent way, allowing the user to keep a key level of control at first, while the system proves itself to be consistently good and helpful."

GfK's own decision support system, gfknewron, informs decisions in contexts including forecasting sales, setting prices, making brand decisions, and scenario testing, to name just a few.

"We remain acutely aware of the importance of getting our solutions right, so we are completely focused on what works and what the limitations are," Traue explains. This includes ensuring any analytical conclusions are not only built on extensive data, but also run through a rigorous quality assurance process. GfK's system examines all results, and flags or even suppresses any that have possible quality problems – allowing GfK's human experts to review and accept or correct them, as necessary. This is a critical area of investment, to avoid any risk of sending out potentially misleading guidance.

"gfknewron is designed so that people can understand the rationale for the recommendations it gives them. From the outset, we constantly assess the algorithms, using not only our data scientists but also – and increasingly – our specialist MLOps analysts, who continuously monitor the validity and accuracy of our models," he says.
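The article does not disclose how GfK's flag-or-suppress quality gate is actually implemented. Purely as an illustration of the general pattern it describes, here is a minimal sketch in Python: results that fail one check are routed to a human review queue, while results that fail several are withheld entirely. All names, checks, and thresholds here are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Result:
    name: str
    value: float
    issues: list = field(default_factory=list)

def quality_gate(results, checks, suppress_threshold=2):
    """Flag results that fail any check; suppress those failing several.

    Flagged results wait for expert sign-off; suppressed results are
    withheld from clients entirely until a human corrects them.
    """
    released, review_queue, suppressed = [], [], []
    for r in results:
        r.issues = [name for name, check in checks.items() if not check(r)]
        if len(r.issues) >= suppress_threshold:
            suppressed.append(r)    # too risky to publish at all
        elif r.issues:
            review_queue.append(r)  # published only after expert review
        else:
            released.append(r)      # clean: goes out automatically
    return released, review_queue, suppressed

# Hypothetical checks: a sales forecast should be non-negative
# and of a plausible magnitude.
checks = {
    "non_negative": lambda r: r.value >= 0,
    "plausible_magnitude": lambda r: abs(r.value) < 1e7,
}

results = [Result("forecast_q1", 1200.0),
           Result("forecast_q2", -50.0),
           Result("forecast_q3", -2e9)]
released, review_queue, suppressed = quality_gate(results, checks)
# forecast_q1 is released, forecast_q2 is queued for review
# (one failed check), forecast_q3 is suppressed (failed both).
```

The design choice worth noting is that suppression and flagging are separate outcomes: a single doubtful signal still reaches a human, while multiple failures block publication outright, matching the article's point that misleading guidance must never go out unreviewed.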